2026-03-09T14:51:03.686 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T14:51:03.694 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T14:51:03.716 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/514
branch: squid
description: orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}}
email: null
first_in_suite: false
flavor: default
job_id: '514'
last_in_suite: false
machine_type: vps
meta:
- desc: 'setup ceph/v18.2.0 '
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    cluster-conf:
      mgr:
        client mount timeout: 30
        debug client: 20
        debug mgr: 20
        debug ms: 1
        mon warn on pool no app: false
    conf:
      client:
        client mount timeout: 600
        debug client: 20
        debug ms: 1
        rados mon op timeout: 900
        rados osd op timeout: 900
      global:
        mon pg warn min per osd: 0
      mds:
        debug mds: 20
        debug mds balancer: 20
        debug ms: 1
        mds debug frag: true
        mds debug scatterstat: true
        mds op complaint time: 180
        mds verify scatter: true
        osd op complaint time: 180
        rados mon op timeout: 900
        rados osd op timeout: 900
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon down mkfs grace: 300
        mon op complaint time: 120
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore allocator: bitmap
        bluestore block size: 96636764160
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug ms: 1
        debug osd: 20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd mclock iops capacity threshold hdd: 49000
        osd objectstore: bluestore
        osd op complaint time: 180
    flavor: default
    fs: xfs
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - FS_DEGRADED
    - filesystem is degraded
    - FS_INLINE_DATA_DEPRECATED
    - FS_WITH_FAILED_MDS
    - MDS_ALL_DOWN
    - filesystem is offline
    - is offline because no MDS
    - MDS_DAMAGE
    - MDS_DEGRADED
    - MDS_FAILED
    - MDS_INSUFFICIENT_STANDBY
    - MDS_UP_LESS_THAN_MAX
    - online, but wants
    - filesystem is online with fewer MDS than max_mds
    - POOL_APP_NOT_ENABLED
    - do not have an application enabled
    - overall HEALTH_
    - Replacing daemon
    - deprecated feature inline_data
    - MGR_MODULE_ERROR
    - OSD_DOWN
    - osds down
    - overall HEALTH_
    - \(OSD_DOWN\)
    - \(OSD_
    - but it is still running
    - is not responding
    - MON_DOWN
    - PG_AVAILABILITY
    - PG_DEGRADED
    - Reduced data availability
    - Degraded data redundancy
    - pg .* is stuck inactive
    - pg .* is .*degraded
    - pg .* is stuck peering
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    bluestore: true
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore block size: 96636764160
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd objectstore: bluestore
    fs: xfs
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  kclient:
    syntax: v1
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  thrashosds:
    bdev_inject_crash: 2
    bdev_inject_crash_probability: 0.5
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - client.0
  - osd.0
  - osd.1
  - osd.2
- - host.b
  - client.1
  - osd.3
  - osd.4
  - osd.5
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm05.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO+JSz17PEM6eVjtEfBm5GF8puDfunU3HmiuL/UHRFG4zEogzpdTtzpsRcu2p3FDMzcNLzy+Ev8YOVxCaNWu7+Q=
  vm09.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCa+MKbSB6zIKmF6ts4m0KjB9hwg86zJNhbx9sYFLBqfCmagH0cS6W7o8Tj6EGKIMWOsR06dxRyJQKdTYJ12s9w=
tasks:
- install:
    exclude_packages:
    - ceph-volume
    tag: v18.2.0
- print: '**** done install task...'
- cephadm:
    compiled_cephadm_branch: reef
    conf:
      osd:
        osd_class_default_list: '*'
        osd_class_load_list: '*'
    image: quay.io/ceph/ceph:v18.2.0
    roleless: true
- print: '**** done end installing v18.2.0 cephadm ...'
- cephadm.shell:
    host.a:
    - ceph config set mgr mgr/cephadm/use_repo_digest true --force
- print: '**** done cephadm.shell ceph config set mgr...'
- cephadm.shell:
    host.a:
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
- cephadm.shell:
    host.a:
    - ceph fs volume create cephfs --placement=4
    - ceph fs dump
- cephadm.shell:
    host.a:
    - ceph fs set cephfs max_mds 1
- cephadm.shell:
    host.a:
    - ceph fs set cephfs allow_standby_replay true
- cephadm.shell:
    host.a:
    - ceph fs set cephfs inline_data true --yes-i-really-really-mean-it
- cephadm.shell:
    host.a:
    - ceph fs dump
    - ceph --format=json fs dump | jq -e ".filesystems | length == 1"
    - while ! ceph --format=json mds versions | jq -e ". | add == 4"; do sleep 1; done
- fs.pre_upgrade_save: null
- ceph-fuse: null
- print: '**** done client'
- parallel:
  - upgrade-tasks
  - workload-tasks
- cephadm.shell:
    host.a:
    - ceph fs dump
- fs.post_upgrade_checks: null
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge:
  - "local kernel = py_attrgetter(yaml).get('kernel')\nif kernel ~= nil then\n local\ \ branch = py_attrgetter(kernel).get('branch')\n if branch and not kernel.branch:find\ \ \"-all$\" then\n log.debug(\"removing default kernel specification: %s\"\ , kernel)\n py_attrgetter(kernel).pop('branch', nil)\n py_attrgetter(kernel).pop('deb',\ \ nil)\n py_attrgetter(kernel).pop('flavor', nil)\n py_attrgetter(kernel).pop('kdb',\ \ nil)\n py_attrgetter(kernel).pop('koji', nil)\n py_attrgetter(kernel).pop('koji_task',\ \ nil)\n py_attrgetter(kernel).pop('rpm', nil)\n py_attrgetter(kernel).pop('sha1',\ \ nil)\n py_attrgetter(kernel).pop('tag', nil)\n end\nend\n"
  variables:
    fail_fs: true
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
upgrade-tasks:
  sequential:
  - cephadm.shell:
      env:
      - sha1
      host.a:
      - ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force
      - ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force
      - ceph config set global log_to_journald false --force
      - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr
      - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
      - ceph versions | jq -e '.mgr | length == 1'
      - ceph versions | jq -e '.mgr | keys' | grep $sha1
      - ceph versions | jq -e '.overall | length == 2'
      - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 2'
      - ceph orch ps
  - cephadm.shell:
      env:
      - sha1
      host.a:
      - ceph config set mgr mgr/orchestrator/fail_fs true
  - cephadm.shell:
      env:
      - sha1
      host.a:
      - ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force
      - ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force
      - ceph config set global log_to_journald false --force
      - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1
  - cephadm.shell:
      env:
      - sha1
      host.a:
      - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph fs dump; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done
      - ceph orch ps
      - ceph orch upgrade status
      - ceph health detail
      - ceph versions
      - echo "wait for servicemap items w/ changing names to refresh"
      - sleep 60
      - ceph orch ps
      - ceph versions
      - ceph versions | jq -e '.overall | length == 1'
      - ceph versions | jq -e '.overall | keys' | grep $sha1
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
workload-tasks:
  sequential:
  - workunit:
      clients:
        all:
        - suites/fsstress.sh
2026-03-09T14:51:03.716 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T14:51:03.716 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T14:51:03.716 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T14:51:03.717 INFO:teuthology.task.internal:Checking packages...
2026-03-09T14:51:03.717 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T14:51:03.717 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T14:51:03.717 INFO:teuthology.packaging:ref: None
2026-03-09T14:51:03.717 INFO:teuthology.packaging:tag: None
2026-03-09T14:51:03.717 INFO:teuthology.packaging:branch: squid
2026-03-09T14:51:03.717 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:51:03.717 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-09T14:51:04.446 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-09T14:51:04.447 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T14:51:04.447 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T14:51:04.447 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T14:51:04.448 INFO:teuthology.task.internal:Saving configuration
2026-03-09T14:51:04.456 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T14:51:04.457 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-09T14:51:04.463 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm05.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/514', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 14:49:49.933726', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:05', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO+JSz17PEM6eVjtEfBm5GF8puDfunU3HmiuL/UHRFG4zEogzpdTtzpsRcu2p3FDMzcNLzy+Ev8YOVxCaNWu7+Q='}
2026-03-09T14:51:04.469 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm09.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/514', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 14:49:49.934237', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:09', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCa+MKbSB6zIKmF6ts4m0KjB9hwg86zJNhbx9sYFLBqfCmagH0cS6W7o8Tj6EGKIMWOsR06dxRyJQKdTYJ12s9w='}
2026-03-09T14:51:04.469 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T14:51:04.470 INFO:teuthology.task.internal:roles: ubuntu@vm05.local - ['host.a', 'client.0', 'osd.0', 'osd.1', 'osd.2']
2026-03-09T14:51:04.470 INFO:teuthology.task.internal:roles: ubuntu@vm09.local - ['host.b', 'client.1', 'osd.3', 'osd.4', 'osd.5']
2026-03-09T14:51:04.470 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T14:51:04.476 DEBUG:teuthology.task.console_log:vm05 does not support IPMI; excluding
2026-03-09T14:51:04.481 DEBUG:teuthology.task.console_log:vm09 does not support IPMI; excluding
2026-03-09T14:51:04.481 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7ff96b276170>, signals=[15])
2026-03-09T14:51:04.481 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T14:51:04.482 INFO:teuthology.task.internal:Opening connections...
2026-03-09T14:51:04.483 DEBUG:teuthology.task.internal:connecting to ubuntu@vm05.local
2026-03-09T14:51:04.483 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T14:51:04.542 DEBUG:teuthology.task.internal:connecting to ubuntu@vm09.local
2026-03-09T14:51:04.542 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T14:51:04.604 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T14:51:04.605 DEBUG:teuthology.orchestra.run.vm05:> uname -m
2026-03-09T14:51:04.659 INFO:teuthology.orchestra.run.vm05.stdout:x86_64
2026-03-09T14:51:04.660 DEBUG:teuthology.orchestra.run.vm05:> cat /etc/os-release
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:NAME="CentOS Stream"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:VERSION="9"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:ID="centos"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:ID_LIKE="rhel fedora"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_ID="9"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:PLATFORM_ID="platform:el9"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:ANSI_COLOR="0;31"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:LOGO="fedora-logo-icon"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:HOME_URL="https://centos.org/"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-09T14:51:04.716 INFO:teuthology.orchestra.run.vm05.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-09T14:51:04.717 INFO:teuthology.lock.ops:Updating vm05.local on lock server
2026-03-09T14:51:04.721 DEBUG:teuthology.orchestra.run.vm09:> uname -m
2026-03-09T14:51:04.737 INFO:teuthology.orchestra.run.vm09.stdout:x86_64
2026-03-09T14:51:04.737 DEBUG:teuthology.orchestra.run.vm09:> cat /etc/os-release
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:NAME="CentOS Stream"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:VERSION="9"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:ID="centos"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:ID_LIKE="rhel fedora"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_ID="9"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:PLATFORM_ID="platform:el9"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:ANSI_COLOR="0;31"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:LOGO="fedora-logo-icon"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:HOME_URL="https://centos.org/"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-09T14:51:04.794 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-09T14:51:04.794 INFO:teuthology.lock.ops:Updating vm09.local on lock server
2026-03-09T14:51:04.799 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T14:51:04.801 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T14:51:04.802 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T14:51:04.802 DEBUG:teuthology.orchestra.run.vm05:> test '!' -e /home/ubuntu/cephtest
2026-03-09T14:51:04.804 DEBUG:teuthology.orchestra.run.vm09:> test '!' -e /home/ubuntu/cephtest
2026-03-09T14:51:04.850 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T14:51:04.852 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-09T14:51:04.852 DEBUG:teuthology.orchestra.run.vm05:> test -z $(ls -A /var/lib/ceph)
2026-03-09T14:51:04.860 DEBUG:teuthology.orchestra.run.vm09:> test -z $(ls -A /var/lib/ceph)
2026-03-09T14:51:04.874 INFO:teuthology.orchestra.run.vm05.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T14:51:04.909 INFO:teuthology.orchestra.run.vm09.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T14:51:04.909 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T14:51:04.920 DEBUG:teuthology.orchestra.run.vm05:> test -e /ceph-qa-ready
2026-03-09T14:51:04.935 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:51:05.125 DEBUG:teuthology.orchestra.run.vm09:> test -e /ceph-qa-ready
2026-03-09T14:51:05.143 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:51:05.376 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T14:51:05.378 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T14:51:05.378 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T14:51:05.380 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T14:51:05.397 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T14:51:05.398 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T14:51:05.399 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T14:51:05.400 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T14:51:05.437 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T14:51:05.457 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T14:51:05.458 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-09T14:51:05.458 DEBUG:teuthology.orchestra.run.vm05:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T14:51:05.509 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:51:05.509 DEBUG:teuthology.orchestra.run.vm09:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T14:51:05.525 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:51:05.525 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T14:51:05.552 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T14:51:05.575 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T14:51:05.586 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T14:51:05.593 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T14:51:05.602 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T14:51:05.604 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T14:51:05.605 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T14:51:05.605 DEBUG:teuthology.orchestra.run.vm05:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T14:51:05.630 DEBUG:teuthology.orchestra.run.vm09:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T14:51:05.668 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T14:51:05.670 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-09T14:51:05.671 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T14:51:05.700 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T14:51:05.724 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T14:51:05.777 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T14:51:05.832 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:51:05.833 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T14:51:05.891 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T14:51:05.913 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T14:51:05.969 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T14:51:05.970 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T14:51:06.029 DEBUG:teuthology.orchestra.run.vm05:> sudo service rsyslog restart
2026-03-09T14:51:06.031 DEBUG:teuthology.orchestra.run.vm09:> sudo service rsyslog restart
2026-03-09T14:51:06.056 INFO:teuthology.orchestra.run.vm05.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T14:51:06.098 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T14:51:06.461 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T14:51:06.463 INFO:teuthology.task.internal:Starting timer...
2026-03-09T14:51:06.463 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T14:51:06.466 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T14:51:06.468 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-09T14:51:06.468 INFO:teuthology.task.selinux:Excluding vm05: VMs are not yet supported
2026-03-09T14:51:06.468 INFO:teuthology.task.selinux:Excluding vm09: VMs are not yet supported
2026-03-09T14:51:06.468 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T14:51:06.468 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T14:51:06.468 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T14:51:06.468 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-09T14:51:06.470 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T14:51:06.470 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-09T14:51:06.472 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-09T14:51:06.959 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T14:51:06.964 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T14:51:06.964 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryub5hvpx7 --limit vm05.local,vm09.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T14:53:13.091 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm05.local'), Remote(name='ubuntu@vm09.local')]
2026-03-09T14:53:13.092 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm05.local'
2026-03-09T14:53:13.092 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T14:53:13.159 DEBUG:teuthology.orchestra.run.vm05:> true
2026-03-09T14:53:13.239 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm05.local'
2026-03-09T14:53:13.239 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm09.local'
2026-03-09T14:53:13.239 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T14:53:13.308 DEBUG:teuthology.orchestra.run.vm09:> true
2026-03-09T14:53:13.386 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm09.local'
2026-03-09T14:53:13.387 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T14:53:13.389 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-09T14:53:13.389 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T14:53:13.389 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T14:53:13.391 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T14:53:13.391 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T14:53:13.436 INFO:teuthology.orchestra.run.vm05.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-09T14:53:13.458 INFO:teuthology.orchestra.run.vm05.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-09T14:53:13.475 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-09T14:53:13.493 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-09T14:53:13.500 INFO:teuthology.orchestra.run.vm05.stderr:sudo: ntpd: command not found
2026-03-09T14:53:13.515 INFO:teuthology.orchestra.run.vm09.stderr:sudo: ntpd: command not found
2026-03-09T14:53:13.517 INFO:teuthology.orchestra.run.vm05.stdout:506 Cannot talk to daemon
2026-03-09T14:53:13.529 INFO:teuthology.orchestra.run.vm09.stdout:506 Cannot talk to daemon
2026-03-09T14:53:13.537 INFO:teuthology.orchestra.run.vm05.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-09T14:53:13.541 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-09T14:53:13.554 INFO:teuthology.orchestra.run.vm05.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-09T14:53:13.556 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-09T14:53:13.605 INFO:teuthology.orchestra.run.vm05.stderr:bash: line 1: ntpq: command not found
2026-03-09T14:53:13.610 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-09T14:53:13.654 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-09T14:53:13.654 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-09T14:53:13.654 INFO:teuthology.orchestra.run.vm09.stdout:^? static.179.181.75.5.clie> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T14:53:13.654 INFO:teuthology.orchestra.run.vm09.stdout:^? 172-104-149-161.ip.linod> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T14:53:13.654 INFO:teuthology.orchestra.run.vm09.stdout:^? time.cloudflare.com 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T14:53:13.654 INFO:teuthology.orchestra.run.vm09.stdout:^? time2.sebhosting.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T14:53:13.654 INFO:teuthology.orchestra.run.vm05.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-09T14:53:13.654 INFO:teuthology.orchestra.run.vm05.stdout:===============================================================================
2026-03-09T14:53:13.654 INFO:teuthology.orchestra.run.vm05.stdout:^? static.179.181.75.5.clie> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T14:53:13.654 INFO:teuthology.orchestra.run.vm05.stdout:^? 172-104-149-161.ip.linod> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T14:53:13.654 INFO:teuthology.orchestra.run.vm05.stdout:^? time.cloudflare.com 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T14:53:13.654 INFO:teuthology.orchestra.run.vm05.stdout:^? time2.sebhosting.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-09T14:53:13.655 INFO:teuthology.run_tasks:Running task install...
2026-03-09T14:53:13.656 DEBUG:teuthology.task.install:project ceph
2026-03-09T14:53:13.657 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T14:53:13.657 DEBUG:teuthology.task.install:config {'exclude_packages': ['ceph-volume'], 'tag': 'v18.2.0', 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T14:53:13.657 INFO:teuthology.task.install:Using flavor: default
2026-03-09T14:53:13.659 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-09T14:53:13.659 INFO:teuthology.task.install:extra packages: []
2026-03-09T14:53:13.659 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': ['ceph-volume'], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': 'v18.2.0', 'wait_for_package': False}
2026-03-09T14:53:13.659 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using tag
2026-03-09T14:53:13.659 INFO:teuthology.packaging:ref: None
2026-03-09T14:53:13.659 INFO:teuthology.packaging:tag: v18.2.0
2026-03-09T14:53:13.659 INFO:teuthology.packaging:branch: None
2026-03-09T14:53:13.659 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:53:14.292 DEBUG:teuthology.repo_utils:git ls-remote https://github.com/ceph/ceph v18.2.0^{} -> 5dd24139a1eada541a3bc16b6941c5dde975e26d
2026-03-09T14:53:14.292 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=5dd24139a1eada541a3bc16b6941c5dde975e26d
2026-03-09T14:53:14.293 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': ['ceph-volume'], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': 'v18.2.0', 'wait_for_package': False}
2026-03-09T14:53:14.293 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using tag
2026-03-09T14:53:14.293 INFO:teuthology.packaging:ref: None
2026-03-09T14:53:14.293 INFO:teuthology.packaging:tag: v18.2.0
2026-03-09T14:53:14.293 INFO:teuthology.packaging:branch: None
2026-03-09T14:53:14.293 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:53:14.293 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=5dd24139a1eada541a3bc16b6941c5dde975e26d
2026-03-09T14:53:14.901 INFO:teuthology.task.install.rpm:Pulling from https://chacra.ceph.com/r/ceph/reef/5dd24139a1eada541a3bc16b6941c5dde975e26d/centos/9/flavors/default/
2026-03-09T14:53:14.901 INFO:teuthology.task.install.rpm:Package version is 18.2.0-0
2026-03-09T14:53:14.986 INFO:teuthology.task.install.rpm:Pulling from https://chacra.ceph.com/r/ceph/reef/5dd24139a1eada541a3bc16b6941c5dde975e26d/centos/9/flavors/default/
2026-03-09T14:53:14.986 INFO:teuthology.task.install.rpm:Package version is 18.2.0-0
2026-03-09T14:53:15.258 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://chacra.ceph.com/r/ceph/reef/5dd24139a1eada541a3bc16b6941c5dde975e26d/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch] name=ceph noarch packages baseurl=https://chacra.ceph.com/r/ceph/reef/5dd24139a1eada541a3bc16b6941c5dde975e26d/centos/9/flavors/default/noarch enabled=1 gpgcheck=0 type=rpm-md [ceph-source] name=ceph source packages baseurl=https://chacra.ceph.com/r/ceph/reef/5dd24139a1eada541a3bc16b6941c5dde975e26d/centos/9/flavors/default/SRPMS enabled=1 gpgcheck=0 type=rpm-md 2026-03-09T14:53:15.258 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T14:53:15.258 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-03-09T14:53:15.294 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64 2026-03-09T14:53:15.294 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using tag 2026-03-09T14:53:15.294 INFO:teuthology.packaging:ref: None 2026-03-09T14:53:15.294 INFO:teuthology.packaging:tag: v18.2.0 2026-03-09T14:53:15.294 INFO:teuthology.packaging:branch: None 2026-03-09T14:53:15.294 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T14:53:15.295 DEBUG:teuthology.orchestra.run.vm05:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;ref/v18.2.0/;g' /etc/yum.repos.d/ceph.repo ; fi 2026-03-09T14:53:15.351 INFO:teuthology.packaging:Writing yum repo: [ceph] name=ceph packages for $basearch baseurl=https://chacra.ceph.com/r/ceph/reef/5dd24139a1eada541a3bc16b6941c5dde975e26d/centos/9/flavors/default/$basearch enabled=1 gpgcheck=0 type=rpm-md [ceph-noarch] name=ceph noarch 
packages baseurl=https://chacra.ceph.com/r/ceph/reef/5dd24139a1eada541a3bc16b6941c5dde975e26d/centos/9/flavors/default/noarch enabled=1 gpgcheck=0 type=rpm-md [ceph-source] name=ceph source packages baseurl=https://chacra.ceph.com/r/ceph/reef/5dd24139a1eada541a3bc16b6941c5dde975e26d/centos/9/flavors/default/SRPMS enabled=1 gpgcheck=0 type=rpm-md 2026-03-09T14:53:15.352 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T14:53:15.352 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-03-09T14:53:15.368 DEBUG:teuthology.orchestra.run.vm05:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-09T14:53:15.390 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64 2026-03-09T14:53:15.390 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using tag 2026-03-09T14:53:15.390 INFO:teuthology.packaging:ref: None 2026-03-09T14:53:15.390 INFO:teuthology.packaging:tag: v18.2.0 2026-03-09T14:53:15.390 INFO:teuthology.packaging:branch: None 2026-03-09T14:53:15.390 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T14:53:15.390 DEBUG:teuthology.orchestra.run.vm09:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;ref/v18.2.0/;g' /etc/yum.repos.d/ceph.repo ; fi 2026-03-09T14:53:15.453 DEBUG:teuthology.orchestra.run.vm05:> grep 
check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-09T14:53:15.462 DEBUG:teuthology.orchestra.run.vm09:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-09T14:53:15.524 INFO:teuthology.orchestra.run.vm05.stdout:check_obsoletes = 1 2026-03-09T14:53:15.525 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean all 2026-03-09T14:53:15.541 DEBUG:teuthology.orchestra.run.vm09:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-09T14:53:15.611 INFO:teuthology.orchestra.run.vm09.stdout:check_obsoletes = 1 2026-03-09T14:53:15.613 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all 2026-03-09T14:53:15.745 INFO:teuthology.orchestra.run.vm05.stdout:41 files removed 2026-03-09T14:53:15.779 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-09T14:53:15.797 INFO:teuthology.orchestra.run.vm09.stdout:41 files removed 2026-03-09T14:53:15.823 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm 
ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-09T14:53:16.805 INFO:teuthology.orchestra.run.vm05.stdout:ceph packages for x86_64 93 kB/s | 76 kB 00:00 2026-03-09T14:53:16.809 INFO:teuthology.orchestra.run.vm09.stdout:ceph packages for x86_64 94 kB/s | 76 kB 00:00 2026-03-09T14:53:17.445 INFO:teuthology.orchestra.run.vm09.stdout:ceph noarch packages 15 kB/s | 9.3 kB 00:00 2026-03-09T14:53:17.465 INFO:teuthology.orchestra.run.vm05.stdout:ceph noarch packages 15 kB/s | 9.3 kB 00:00 2026-03-09T14:53:18.089 INFO:teuthology.orchestra.run.vm09.stdout:ceph source packages 3.5 kB/s | 2.2 kB 00:00 2026-03-09T14:53:18.101 INFO:teuthology.orchestra.run.vm05.stdout:ceph source packages 3.5 kB/s | 2.2 kB 00:00 2026-03-09T14:53:19.372 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - BaseOS 7.0 MB/s | 8.9 MB 00:01 2026-03-09T14:53:19.560 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - BaseOS 6.2 MB/s | 8.9 MB 00:01 2026-03-09T14:53:21.352 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - AppStream 27 MB/s | 27 MB 00:00 2026-03-09T14:53:21.485 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - AppStream 19 MB/s | 27 MB 00:01 2026-03-09T14:53:25.317 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - CRB 7.1 MB/s | 8.0 MB 00:01 2026-03-09T14:53:25.731 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - CRB 8.6 MB/s | 8.0 MB 00:00 2026-03-09T14:53:26.793 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - Extras packages 35 kB/s | 20 kB 00:00 2026-03-09T14:53:27.667 INFO:teuthology.orchestra.run.vm09.stdout:Extra Packages for Enterprise Linux 26 MB/s | 20 MB 00:00 2026-03-09T14:53:32.404 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - Extras packages 3.5 kB/s | 20 kB 00:05 2026-03-09T14:53:32.488 
INFO:teuthology.orchestra.run.vm09.stdout:lab-extras 64 kB/s | 50 kB 00:00 2026-03-09T14:53:33.203 INFO:teuthology.orchestra.run.vm05.stdout:Extra Packages for Enterprise Linux 29 MB/s | 20 MB 00:00 2026-03-09T14:53:33.925 INFO:teuthology.orchestra.run.vm09.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T14:53:33.925 INFO:teuthology.orchestra.run.vm09.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T14:53:33.929 INFO:teuthology.orchestra.run.vm09.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-09T14:53:33.930 INFO:teuthology.orchestra.run.vm09.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-09T14:53:33.958 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T14:53:33.961 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout:Installing: 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:18.2.0-0.el9 ceph 6.4 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:18.2.0-0.el9 ceph 5.2 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:18.2.0-0.el9 ceph 835 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:18.2.0-0.el9 ceph 142 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:18.2.0-0.el9 ceph 1.4 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:18.2.0-0.el9 ceph-noarch 127 k 2026-03-09T14:53:33.962 
INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:18.2.0-0.el9 ceph-noarch 1.7 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:18.2.0-0.el9 ceph-noarch 7.4 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:18.2.0-0.el9 ceph-noarch 47 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:18.2.0-0.el9 ceph 7.6 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:18.2.0-0.el9 ceph 40 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:18.2.0-0.el9 ceph-noarch 209 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:18.2.0-0.el9 ceph 30 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:18.2.0-0.el9 ceph 653 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:18.2.0-0.el9 ceph 126 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:18.2.0-0.el9 ceph 155 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:18.2.0-0.el9 ceph 321 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:18.2.0-0.el9 ceph 297 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:18.2.0-0.el9 ceph 99 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:18.2.0-0.el9 ceph 86 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:18.2.0-0.el9 ceph 3.0 M 2026-03-09T14:53:33.962 
INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:18.2.0-0.el9 ceph 169 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout:Upgrading: 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:18.2.0-0.el9 ceph 3.3 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:18.2.0-0.el9 ceph 3.0 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout:Installing dependencies: 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:18.2.0-0.el9 ceph 18 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:18.2.0-0.el9 ceph-noarch 23 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:18.2.0-0.el9 ceph 2.1 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:18.2.0-0.el9 ceph-noarch 240 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:18.2.0-0.el9 ceph 4.4 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:18.2.0-0.el9 ceph 18 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:18.2.0-0.el9 ceph-noarch 15 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 2:18.2.0-0.el9 ceph 24 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: fmt x86_64 8.1.1-5.el9 epel 111 k 
2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:18.2.0-0.el9 ceph 161 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:18.2.0-0.el9 ceph 474 k 2026-03-09T14:53:33.962 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:18.2.0-0.el9 ceph 4.4 M 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 
appstream 292 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:18.2.0-0.el9 ceph 45 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:18.2.0-0.el9 ceph 119 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 
2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-jwt noarch 2.4.0-1.el9 epel 41 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 
2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-09T14:53:33.963 
INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-09T14:53:33.963 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-09T14:53:33.964 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-09T14:53:33.964 INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-09T14:53:33.964 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-09T14:53:33.964 INFO:teuthology.orchestra.run.vm09.stdout:Installing weak dependencies: 2026-03-09T14:53:33.964 INFO:teuthology.orchestra.run.vm09.stdout: python3-jwt+crypto noarch 2.4.0-1.el9 epel 9.0 k 2026-03-09T14:53:33.964 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:53:33.964 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T14:53:33.964 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T14:53:33.964 INFO:teuthology.orchestra.run.vm09.stdout:Install 117 Packages 2026-03-09T14:53:33.964 INFO:teuthology.orchestra.run.vm09.stdout:Upgrade 2 Packages 2026-03-09T14:53:33.964 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:53:33.964 INFO:teuthology.orchestra.run.vm09.stdout:Total download size: 182 M 2026-03-09T14:53:33.964 
INFO:teuthology.orchestra.run.vm09.stdout:Downloading Packages: 2026-03-09T14:53:35.312 INFO:teuthology.orchestra.run.vm09.stdout:(1/119): ceph-18.2.0-0.el9.x86_64.rpm 21 kB/s | 6.4 kB 00:00 2026-03-09T14:53:35.908 INFO:teuthology.orchestra.run.vm09.stdout:(2/119): ceph-fuse-18.2.0-0.el9.x86_64.rpm 1.4 MB/s | 835 kB 00:00 2026-03-09T14:53:36.013 INFO:teuthology.orchestra.run.vm09.stdout:(3/119): ceph-immutable-object-cache-18.2.0-0.e 1.3 MB/s | 142 kB 00:00 2026-03-09T14:53:36.320 INFO:teuthology.orchestra.run.vm09.stdout:(4/119): ceph-mds-18.2.0-0.el9.x86_64.rpm 6.8 MB/s | 2.1 MB 00:00 2026-03-09T14:53:36.615 INFO:teuthology.orchestra.run.vm09.stdout:(5/119): ceph-base-18.2.0-0.el9.x86_64.rpm 3.2 MB/s | 5.2 MB 00:01 2026-03-09T14:53:36.622 INFO:teuthology.orchestra.run.vm09.stdout:(6/119): ceph-mgr-18.2.0-0.el9.x86_64.rpm 4.8 MB/s | 1.4 MB 00:00 2026-03-09T14:53:37.227 INFO:teuthology.orchestra.run.vm09.stdout:(7/119): ceph-common-18.2.0-0.el9.x86_64.rpm 8.2 MB/s | 18 MB 00:02 2026-03-09T14:53:37.923 INFO:teuthology.orchestra.run.vm05.stdout:lab-extras 65 kB/s | 50 kB 00:00 2026-03-09T14:53:38.141 INFO:teuthology.orchestra.run.vm09.stdout:(8/119): ceph-mon-18.2.0-0.el9.x86_64.rpm 2.9 MB/s | 4.4 MB 00:01 2026-03-09T14:53:38.263 INFO:teuthology.orchestra.run.vm09.stdout:(9/119): ceph-radosgw-18.2.0-0.el9.x86_64.rpm 7.4 MB/s | 7.6 MB 00:01 2026-03-09T14:53:38.264 INFO:teuthology.orchestra.run.vm09.stdout:(10/119): ceph-selinux-18.2.0-0.el9.x86_64.rpm 196 kB/s | 24 kB 00:00 2026-03-09T14:53:38.371 INFO:teuthology.orchestra.run.vm09.stdout:(11/119): libcephfs-devel-18.2.0-0.el9.x86_64.r 285 kB/s | 30 kB 00:00 2026-03-09T14:53:38.573 INFO:teuthology.orchestra.run.vm09.stdout:(12/119): libcephfs2-18.2.0-0.el9.x86_64.rpm 3.2 MB/s | 653 kB 00:00 2026-03-09T14:53:38.675 INFO:teuthology.orchestra.run.vm09.stdout:(13/119): libcephsqlite-18.2.0-0.el9.x86_64.rpm 1.6 MB/s | 161 kB 00:00 2026-03-09T14:53:38.776 INFO:teuthology.orchestra.run.vm09.stdout:(14/119): 
librados-devel-18.2.0-0.el9.x86_64.rp 1.2 MB/s | 126 kB 00:00
2026-03-09T14:53:38.976 INFO:teuthology.orchestra.run.vm09.stdout:(15/119): libradosstriper1-18.2.0-0.el9.x86_64. 2.3 MB/s | 474 kB 00:00
2026-03-09T14:53:39.296 INFO:teuthology.orchestra.run.vm05.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-09T14:53:39.297 INFO:teuthology.orchestra.run.vm05.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-09T14:53:39.302 INFO:teuthology.orchestra.run.vm05.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-09T14:53:39.303 INFO:teuthology.orchestra.run.vm05.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-09T14:53:39.331 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout:Installing:
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: ceph x86_64 2:18.2.0-0.el9 ceph 6.4 k
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base x86_64 2:18.2.0-0.el9 ceph 5.2 M
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse x86_64 2:18.2.0-0.el9 ceph 835 k
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache x86_64 2:18.2.0-0.el9 ceph 142 k
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr x86_64 2:18.2.0-0.el9 ceph 1.4 M
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm noarch 2:18.2.0-0.el9 ceph-noarch 127 k
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard noarch 2:18.2.0-0.el9 ceph-noarch 1.7 M
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local noarch 2:18.2.0-0.el9 ceph-noarch 7.4 M
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook noarch 2:18.2.0-0.el9 ceph-noarch 47 k
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw x86_64 2:18.2.0-0.el9 ceph 7.6 M
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test x86_64 2:18.2.0-0.el9 ceph 40 M
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: cephadm noarch 2:18.2.0-0.el9 ceph-noarch 209 k
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel x86_64 2:18.2.0-0.el9 ceph 30 k
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2 x86_64 2:18.2.0-0.el9 ceph 653 k
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel x86_64 2:18.2.0-0.el9 ceph 126 k
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs x86_64 2:18.2.0-0.el9 ceph 155 k
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-09T14:53:39.335 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados x86_64 2:18.2.0-0.el9 ceph 321 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd x86_64 2:18.2.0-0.el9 ceph 297 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw x86_64 2:18.2.0-0.el9 ceph 99 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse x86_64 2:18.2.0-0.el9 ceph 86 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror x86_64 2:18.2.0-0.el9 ceph 3.0 M
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd x86_64 2:18.2.0-0.el9 ceph 169 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout:Upgrading:
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: librados2 x86_64 2:18.2.0-0.el9 ceph 3.3 M
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: librbd1 x86_64 2:18.2.0-0.el9 ceph 3.0 M
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout:Installing dependencies:
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common x86_64 2:18.2.0-0.el9 ceph 18 M
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards noarch 2:18.2.0-0.el9 ceph-noarch 23 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds x86_64 2:18.2.0-0.el9 ceph 2.1 M
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core noarch 2:18.2.0-0.el9 ceph-noarch 240 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon x86_64 2:18.2.0-0.el9 ceph 4.4 M
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd x86_64 2:18.2.0-0.el9 ceph 18 M
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts noarch 2:18.2.0-0.el9 ceph-noarch 15 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux x86_64 2:18.2.0-0.el9 ceph 24 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: fmt x86_64 8.1.1-5.el9 epel 111 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite x86_64 2:18.2.0-0.el9 ceph 161 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 x86_64 2:18.2.0-0.el9 ceph 474 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: librgw2 x86_64 2:18.2.0-0.el9 ceph 4.4 M
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse x86_64 2:18.2.0-0.el9 ceph 45 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common x86_64 2:18.2.0-0.el9 ceph 119 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-09T14:53:39.336 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt noarch 2.4.0-1.el9 epel 41 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout:Installing weak dependencies:
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt+crypto noarch 2.4.0-1.el9 epel 9.0 k
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout:Install 117 Packages
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout:Upgrade 2 Packages
2026-03-09T14:53:39.337 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:53:39.338 INFO:teuthology.orchestra.run.vm05.stdout:Total download size: 182 M
2026-03-09T14:53:39.338 INFO:teuthology.orchestra.run.vm05.stdout:Downloading Packages:
2026-03-09T14:53:39.842 INFO:teuthology.orchestra.run.vm09.stdout:(16/119): ceph-osd-18.2.0-0.el9.x86_64.rpm 5.5 MB/s | 18 MB 00:03
2026-03-09T14:53:39.947 INFO:teuthology.orchestra.run.vm09.stdout:(17/119): python3-ceph-argparse-18.2.0-0.el9.x8 431 kB/s | 45 kB 00:00
2026-03-09T14:53:40.049 INFO:teuthology.orchestra.run.vm09.stdout:(18/119): python3-ceph-common-18.2.0-0.el9.x86_ 1.1 MB/s | 119 kB 00:00
2026-03-09T14:53:40.150 INFO:teuthology.orchestra.run.vm09.stdout:(19/119): python3-cephfs-18.2.0-0.el9.x86_64.rp 1.5 MB/s | 155 kB 00:00
2026-03-09T14:53:40.177 INFO:teuthology.orchestra.run.vm09.stdout:(20/119): librgw2-18.2.0-0.el9.x86_64.rpm 3.7 MB/s | 4.4 MB 00:01
2026-03-09T14:53:40.252 INFO:teuthology.orchestra.run.vm09.stdout:(21/119): python3-rados-18.2.0-0.el9.x86_64.rpm 3.1 MB/s | 321 kB 00:00
2026-03-09T14:53:40.279 INFO:teuthology.orchestra.run.vm09.stdout:(22/119): python3-rbd-18.2.0-0.el9.x86_64.rpm 2.8 MB/s | 297 kB 00:00
2026-03-09T14:53:40.352 INFO:teuthology.orchestra.run.vm09.stdout:(23/119): python3-rgw-18.2.0-0.el9.x86_64.rpm 992 kB/s | 99 kB 00:00
2026-03-09T14:53:40.379 INFO:teuthology.orchestra.run.vm09.stdout:(24/119): rbd-fuse-18.2.0-0.el9.x86_64.rpm 855 kB/s | 86 kB 00:00
2026-03-09T14:53:40.481 INFO:teuthology.orchestra.run.vm09.stdout:(25/119): rbd-nbd-18.2.0-0.el9.x86_64.rpm 1.6 MB/s | 169 kB 00:00
2026-03-09T14:53:40.581 INFO:teuthology.orchestra.run.vm09.stdout:(26/119): ceph-grafana-dashboards-18.2.0-0.el9. 232 kB/s | 23 kB 00:00
2026-03-09T14:53:40.683 INFO:teuthology.orchestra.run.vm09.stdout:(27/119): ceph-mgr-cephadm-18.2.0-0.el9.noarch. 1.2 MB/s | 127 kB 00:00
2026-03-09T14:53:40.885 INFO:teuthology.orchestra.run.vm05.stdout:(1/119): ceph-18.2.0-0.el9.x86_64.rpm 21 kB/s | 6.4 kB 00:00
2026-03-09T14:53:41.184 INFO:teuthology.orchestra.run.vm09.stdout:(28/119): ceph-mgr-dashboard-18.2.0-0.el9.noarc 3.4 MB/s | 1.7 MB 00:00
2026-03-09T14:53:41.451 INFO:teuthology.orchestra.run.vm09.stdout:(29/119): rbd-mirror-18.2.0-0.el9.x86_64.rpm 2.7 MB/s | 3.0 MB 00:01
2026-03-09T14:53:41.482 INFO:teuthology.orchestra.run.vm05.stdout:(2/119): ceph-fuse-18.2.0-0.el9.x86_64.rpm 1.4 MB/s | 835 kB 00:00
2026-03-09T14:53:41.553 INFO:teuthology.orchestra.run.vm09.stdout:(30/119): ceph-mgr-modules-core-18.2.0-0.el9.no 2.3 MB/s | 240 kB 00:00
2026-03-09T14:53:41.583 INFO:teuthology.orchestra.run.vm05.stdout:(3/119): ceph-immutable-object-cache-18.2.0-0.e 1.4 MB/s | 142 kB 00:00
2026-03-09T14:53:41.653 INFO:teuthology.orchestra.run.vm09.stdout:(31/119): ceph-mgr-rook-18.2.0-0.el9.noarch.rpm 476 kB/s | 47 kB 00:00
2026-03-09T14:53:41.753 INFO:teuthology.orchestra.run.vm09.stdout:(32/119): ceph-prometheus-alerts-18.2.0-0.el9.n 147 kB/s | 15 kB 00:00
2026-03-09T14:53:41.854 INFO:teuthology.orchestra.run.vm09.stdout:(33/119): cephadm-18.2.0-0.el9.noarch.rpm 2.0 MB/s | 209 kB 00:00
2026-03-09T14:53:41.945 INFO:teuthology.orchestra.run.vm09.stdout:(34/119): ledmon-libs-1.1.0-3.el9.x86_64.rpm 449 kB/s | 40 kB 00:00
2026-03-09T14:53:41.987 INFO:teuthology.orchestra.run.vm05.stdout:(4/119): ceph-mds-18.2.0-0.el9.x86_64.rpm 5.2 MB/s | 2.1 MB 00:00
2026-03-09T14:53:42.007 INFO:teuthology.orchestra.run.vm09.stdout:(35/119): libconfig-1.7.2-9.el9.x86_64.rpm 1.1 MB/s | 72 kB 00:00
2026-03-09T14:53:42.288 INFO:teuthology.orchestra.run.vm05.stdout:(5/119): ceph-mgr-18.2.0-0.el9.x86_64.rpm 4.8 MB/s | 1.4 MB 00:00
2026-03-09T14:53:42.301 INFO:teuthology.orchestra.run.vm09.stdout:(36/119): libgfortran-11.5.0-14.el9.x86_64.rpm 2.6 MB/s | 794 kB 00:00
2026-03-09T14:53:42.434 INFO:teuthology.orchestra.run.vm09.stdout:(37/119): libquadmath-11.5.0-14.el9.x86_64.rpm 1.4 MB/s | 184 kB 00:00
2026-03-09T14:53:42.461 INFO:teuthology.orchestra.run.vm09.stdout:(38/119): mailcap-2.1.49-5.el9.noarch.rpm 1.2 MB/s | 33 kB 00:00
2026-03-09T14:53:42.537 INFO:teuthology.orchestra.run.vm09.stdout:(39/119): python3-cffi-1.14.5-5.el9.x86_64.rpm 3.3 MB/s | 253 kB 00:00
2026-03-09T14:53:42.607 INFO:teuthology.orchestra.run.vm05.stdout:(6/119): ceph-base-18.2.0-0.el9.x86_64.rpm 2.6 MB/s | 5.2 MB 00:02
2026-03-09T14:53:42.608 INFO:teuthology.orchestra.run.vm09.stdout:(40/119): python3-cryptography-36.0.1-5.el9.x86 18 MB/s | 1.2 MB 00:00
2026-03-09T14:53:42.635 INFO:teuthology.orchestra.run.vm09.stdout:(41/119): python3-ply-3.11-14.el9.noarch.rpm 3.8 MB/s | 106 kB 00:00
2026-03-09T14:53:42.664 INFO:teuthology.orchestra.run.vm09.stdout:(42/119): python3-pycparser-2.20-6.el9.noarch.r 4.6 MB/s | 135 kB 00:00
2026-03-09T14:53:42.692 INFO:teuthology.orchestra.run.vm09.stdout:(43/119): python3-requests-2.25.1-10.el9.noarch 4.4 MB/s | 126 kB 00:00
2026-03-09T14:53:42.742 INFO:teuthology.orchestra.run.vm09.stdout:(44/119): python3-urllib3-1.26.5-7.el9.noarch.r 4.3 MB/s | 218 kB 00:00
2026-03-09T14:53:43.022 INFO:teuthology.orchestra.run.vm09.stdout:(45/119): boost-program-options-1.75.0-13.el9.x 372 kB/s | 104 kB 00:00
2026-03-09T14:53:43.086 INFO:teuthology.orchestra.run.vm09.stdout:(46/119): ceph-mgr-diskprediction-local-18.2.0- 3.9 MB/s | 7.4 MB 00:01
2026-03-09T14:53:43.106 INFO:teuthology.orchestra.run.vm09.stdout:(47/119): flexiblas-3.0.4-9.el9.x86_64.rpm 355 kB/s | 30 kB 00:00
2026-03-09T14:53:43.196 INFO:teuthology.orchestra.run.vm09.stdout:(48/119): flexiblas-openblas-openmp-3.0.4-9.el9 165 kB/s | 15 kB 00:00
2026-03-09T14:53:43.197 INFO:teuthology.orchestra.run.vm05.stdout:(7/119): ceph-mon-18.2.0-0.el9.x86_64.rpm 4.8 MB/s | 4.4 MB 00:00
2026-03-09T14:53:43.381 INFO:teuthology.orchestra.run.vm09.stdout:(49/119): libpmemobj-1.12.1-1.el9.x86_64.rpm 871 kB/s | 160 kB 00:00
2026-03-09T14:53:43.469 INFO:teuthology.orchestra.run.vm09.stdout:(50/119): librabbitmq-0.11.0-7.el9.x86_64.rpm 512 kB/s | 45 kB 00:00
2026-03-09T14:53:43.622 INFO:teuthology.orchestra.run.vm09.stdout:(51/119): flexiblas-netlib-3.0.4-9.el9.x86_64.r 5.6 MB/s | 3.0 MB 00:00
2026-03-09T14:53:43.723 INFO:teuthology.orchestra.run.vm09.stdout:(52/119): libstoragemgmt-1.10.1-1.el9.x86_64.rp 2.4 MB/s | 246 kB 00:00
2026-03-09T14:53:43.748 INFO:teuthology.orchestra.run.vm09.stdout:(53/119): librdkafka-1.6.1-102.el9.x86_64.rpm 2.3 MB/s | 662 kB 00:00
2026-03-09T14:53:43.890 INFO:teuthology.orchestra.run.vm09.stdout:(54/119): ceph-test-18.2.0-0.el9.x86_64.rpm 7.0 MB/s | 40 MB 00:05
2026-03-09T14:53:43.893 INFO:teuthology.orchestra.run.vm09.stdout:(55/119): libxslt-1.1.34-12.el9.x86_64.rpm 1.3 MB/s | 233 kB 00:00
2026-03-09T14:53:43.938 INFO:teuthology.orchestra.run.vm09.stdout:(56/119): lttng-ust-2.12.0-6.el9.x86_64.rpm 1.5 MB/s | 292 kB 00:00
2026-03-09T14:53:44.094 INFO:teuthology.orchestra.run.vm09.stdout:(57/119): openblas-0.3.29-1.el9.x86_64.rpm 207 kB/s | 42 kB 00:00
2026-03-09T14:53:44.292 INFO:teuthology.orchestra.run.vm09.stdout:(58/119): python3-devel-3.9.25-3.el9.x86_64.rpm 1.2 MB/s | 244 kB 00:00
2026-03-09T14:53:44.417 INFO:teuthology.orchestra.run.vm09.stdout:(59/119): python3-jinja2-2.11.3-8.el9.noarch.rp 1.9 MB/s | 249 kB 00:00
2026-03-09T14:53:44.504 INFO:teuthology.orchestra.run.vm09.stdout:(60/119): openblas-openmp-0.3.29-1.el9.x86_64.r 8.7 MB/s | 5.3 MB 00:00
2026-03-09T14:53:44.505 INFO:teuthology.orchestra.run.vm09.stdout:(61/119): python3-jmespath-1.0.1-1.el9.noarch.r 546 kB/s | 48 kB 00:00
2026-03-09T14:53:44.595 INFO:teuthology.orchestra.run.vm09.stdout:(62/119): python3-mako-1.1.4-6.el9.noarch.rpm 1.9 MB/s | 172 kB 00:00
2026-03-09T14:53:44.606 INFO:teuthology.orchestra.run.vm09.stdout:(63/119): python3-libstoragemgmt-1.10.1-1.el9.x 1.7 MB/s | 177 kB 00:00
2026-03-09T14:53:44.679 INFO:teuthology.orchestra.run.vm09.stdout:(64/119): python3-markupsafe-1.1.1-12.el9.x86_6 416 kB/s | 35 kB 00:00
2026-03-09T14:53:44.873 INFO:teuthology.orchestra.run.vm09.stdout:(65/119): python3-numpy-f2py-1.23.5-2.el9.x86_6 2.2 MB/s | 442 kB 00:00
2026-03-09T14:53:44.894 INFO:teuthology.orchestra.run.vm09.stdout:(66/119): python3-numpy-1.23.5-2.el9.x86_64.rpm 21 MB/s | 6.1 MB 00:00
2026-03-09T14:53:44.972 INFO:teuthology.orchestra.run.vm09.stdout:(67/119): python3-pyasn1-0.4.8-7.el9.noarch.rpm 1.6 MB/s | 157 kB 00:00
2026-03-09T14:53:44.985 INFO:teuthology.orchestra.run.vm09.stdout:(68/119): python3-pyasn1-modules-0.4.8-7.el9.no 3.0 MB/s | 277 kB 00:00
2026-03-09T14:53:45.052 INFO:teuthology.orchestra.run.vm09.stdout:(69/119): python3-requests-oauthlib-1.3.0-12.el 676 kB/s | 54 kB 00:00
2026-03-09T14:53:45.130 INFO:teuthology.orchestra.run.vm09.stdout:(70/119): python3-toml-0.10.2-6.el9.noarch.rpm 534 kB/s | 42 kB 00:00
2026-03-09T14:53:45.222 INFO:teuthology.orchestra.run.vm09.stdout:(71/119): socat-1.7.4.1-8.el9.x86_64.rpm 3.2 MB/s | 303 kB 00:00
2026-03-09T14:53:45.299 INFO:teuthology.orchestra.run.vm09.stdout:(72/119): xmlstarlet-1.6.1-20.el9.x86_64.rpm 825 kB/s | 64 kB 00:00
2026-03-09T14:53:45.309 INFO:teuthology.orchestra.run.vm09.stdout:(73/119): fmt-8.1.1-5.el9.x86_64.rpm 11 MB/s | 111 kB 00:00
2026-03-09T14:53:45.324 INFO:teuthology.orchestra.run.vm09.stdout:(74/119): gperftools-libs-2.9.1-3.el9.x86_64.rp 21 MB/s | 308 kB 00:00
2026-03-09T14:53:45.390 INFO:teuthology.orchestra.run.vm05.stdout:(8/119): ceph-radosgw-18.2.0-0.el9.x86_64.rpm 3.5 MB/s | 7.6 MB 00:02
2026-03-09T14:53:45.416 INFO:teuthology.orchestra.run.vm09.stdout:(75/119): libarrow-9.0.0-15.el9.x86_64.rpm 48 MB/s | 4.4 MB 00:00
2026-03-09T14:53:45.419 INFO:teuthology.orchestra.run.vm09.stdout:(76/119): libarrow-doc-9.0.0-15.el9.noarch.rpm 9.9 MB/s | 25 kB 00:00
2026-03-09T14:53:45.421 INFO:teuthology.orchestra.run.vm09.stdout:(77/119): liboath-2.6.12-1.el9.x86_64.rpm 19 MB/s | 49 kB 00:00
2026-03-09T14:53:45.425 INFO:teuthology.orchestra.run.vm09.stdout:(78/119): libunwind-1.6.2-1.el9.x86_64.rpm 20 MB/s | 67 kB 00:00
2026-03-09T14:53:45.442 INFO:teuthology.orchestra.run.vm09.stdout:(79/119): parquet-libs-9.0.0-15.el9.x86_64.rpm 47 MB/s | 838 kB 00:00
2026-03-09T14:53:45.452 INFO:teuthology.orchestra.run.vm09.stdout:(80/119): python3-asyncssh-2.13.2-5.el9.noarch. 59 MB/s | 548 kB 00:00
2026-03-09T14:53:45.454 INFO:teuthology.orchestra.run.vm09.stdout:(81/119): python3-autocommand-2.2.2-8.el9.noarc 13 MB/s | 29 kB 00:00
2026-03-09T14:53:45.457 INFO:teuthology.orchestra.run.vm09.stdout:(82/119): python3-backports-tarfile-1.2.0-1.el9 16 MB/s | 60 kB 00:00
2026-03-09T14:53:45.461 INFO:teuthology.orchestra.run.vm09.stdout:(83/119): python3-bcrypt-3.2.2-1.el9.x86_64.rpm 15 MB/s | 43 kB 00:00
2026-03-09T14:53:45.463 INFO:teuthology.orchestra.run.vm09.stdout:(84/119): python3-cachetools-4.2.4-1.el9.noarch 14 MB/s | 32 kB 00:00
2026-03-09T14:53:45.465 INFO:teuthology.orchestra.run.vm09.stdout:(85/119): python3-certifi-2023.05.07-4.el9.noar 6.4 MB/s | 14 kB 00:00
2026-03-09T14:53:45.469 INFO:teuthology.orchestra.run.vm09.stdout:(86/119): python3-cheroot-10.0.1-4.el9.noarch.r 42 MB/s | 173 kB 00:00
2026-03-09T14:53:45.476 INFO:teuthology.orchestra.run.vm09.stdout:(87/119): python3-cherrypy-18.6.1-2.el9.noarch. 57 MB/s | 358 kB 00:00
2026-03-09T14:53:45.481 INFO:teuthology.orchestra.run.vm09.stdout:(88/119): python3-google-auth-2.45.0-1.el9.noar 50 MB/s | 254 kB 00:00
2026-03-09T14:53:45.483 INFO:teuthology.orchestra.run.vm09.stdout:(89/119): python3-jaraco-8.2.1-3.el9.noarch.rpm 4.9 MB/s | 11 kB 00:00
2026-03-09T14:53:45.486 INFO:teuthology.orchestra.run.vm09.stdout:(90/119): python3-jaraco-classes-3.2.1-5.el9.no 7.9 MB/s | 18 kB 00:00
2026-03-09T14:53:45.488 INFO:teuthology.orchestra.run.vm09.stdout:(91/119): python3-jaraco-collections-3.0.0-8.el 8.7 MB/s | 23 kB 00:00
2026-03-09T14:53:45.491 INFO:teuthology.orchestra.run.vm05.stdout:(9/119): ceph-selinux-18.2.0-0.el9.x86_64.rpm 238 kB/s | 24 kB 00:00
2026-03-09T14:53:45.491 INFO:teuthology.orchestra.run.vm09.stdout:(92/119): python3-jaraco-context-6.0.1-3.el9.no 6.9 MB/s | 20 kB 00:00
2026-03-09T14:53:45.494 INFO:teuthology.orchestra.run.vm09.stdout:(93/119): python3-jaraco-functools-3.5.0-2.el9. 6.2 MB/s | 19 kB 00:00
2026-03-09T14:53:45.497 INFO:teuthology.orchestra.run.vm09.stdout:(94/119): python3-jaraco-text-4.0.0-2.el9.noarc 12 MB/s | 26 kB 00:00
2026-03-09T14:53:45.499 INFO:teuthology.orchestra.run.vm09.stdout:(95/119): python3-jwt+crypto-2.4.0-1.el9.noarch 4.6 MB/s | 9.0 kB 00:00
2026-03-09T14:53:45.502 INFO:teuthology.orchestra.run.vm09.stdout:(96/119): python3-jwt-2.4.0-1.el9.noarch.rpm 16 MB/s | 41 kB 00:00
2026-03-09T14:53:45.520 INFO:teuthology.orchestra.run.vm09.stdout:(97/119): python3-kubernetes-26.1.0-3.el9.noarc 57 MB/s | 1.0 MB 00:00
2026-03-09T14:53:45.523 INFO:teuthology.orchestra.run.vm09.stdout:(98/119): python3-logutils-0.3.5-21.el9.noarch. 15 MB/s | 46 kB 00:00
2026-03-09T14:53:45.526 INFO:teuthology.orchestra.run.vm09.stdout:(99/119): python3-more-itertools-8.12.0-2.el9.n 27 MB/s | 79 kB 00:00
2026-03-09T14:53:45.529 INFO:teuthology.orchestra.run.vm09.stdout:(100/119): python3-natsort-7.1.1-5.el9.noarch.r 19 MB/s | 58 kB 00:00
2026-03-09T14:53:45.535 INFO:teuthology.orchestra.run.vm09.stdout:(101/119): python3-pecan-1.4.2-3.el9.noarch.rpm 50 MB/s | 272 kB 00:00
2026-03-09T14:53:45.537 INFO:teuthology.orchestra.run.vm09.stdout:(102/119): python3-portend-3.1.0-2.el9.noarch.r 7.3 MB/s | 16 kB 00:00
2026-03-09T14:53:45.541 INFO:teuthology.orchestra.run.vm09.stdout:(103/119): python3-pyOpenSSL-21.0.0-1.el9.noarc 24 MB/s | 90 kB 00:00
2026-03-09T14:53:45.545 INFO:teuthology.orchestra.run.vm09.stdout:(104/119): python3-repoze-lru-0.7-16.el9.noarch 7.5 MB/s | 31 kB 00:00
2026-03-09T14:53:45.552 INFO:teuthology.orchestra.run.vm09.stdout:(105/119): python3-routes-2.5.1-5.el9.noarch.rp 26 MB/s | 188 kB 00:00
2026-03-09T14:53:45.556 INFO:teuthology.orchestra.run.vm09.stdout:(106/119): python3-rsa-4.9-2.el9.noarch.rpm 15 MB/s | 59 kB 00:00
2026-03-09T14:53:45.559 INFO:teuthology.orchestra.run.vm09.stdout:(107/119): python3-tempora-5.0.0-2.el9.noarch.r 13 MB/s | 36 kB 00:00
2026-03-09T14:53:45.563 INFO:teuthology.orchestra.run.vm09.stdout:(108/119): python3-typing-extensions-4.15.0-1.e 20 MB/s | 86 kB 00:00
2026-03-09T14:53:45.571 INFO:teuthology.orchestra.run.vm09.stdout:(109/119): python3-webob-1.8.8-2.el9.noarch.rpm 30 MB/s | 230 kB 00:00
2026-03-09T14:53:45.575 INFO:teuthology.orchestra.run.vm09.stdout:(110/119): python3-websocket-client-1.2.3-2.el9 22 MB/s | 90 kB 00:00
2026-03-09T14:53:45.586 INFO:teuthology.orchestra.run.vm09.stdout:(111/119): python3-werkzeug-2.0.3-3.el9.1.noarc 41 MB/s | 427 kB 00:00
2026-03-09T14:53:45.589 INFO:teuthology.orchestra.run.vm09.stdout:(112/119): python3-xmltodict-0.12.0-15.el9.noar 6.5 MB/s | 22 kB 00:00
2026-03-09T14:53:45.595 INFO:teuthology.orchestra.run.vm09.stdout:(113/119): python3-zc-lockfile-2.0-10.el9.noarc 3.3 MB/s | 20 kB 00:00
2026-03-09T14:53:45.601 INFO:teuthology.orchestra.run.vm09.stdout:(114/119): re2-20211101-20.el9.x86_64.rpm 34 MB/s | 191 kB 00:00
2026-03-09T14:53:45.630 INFO:teuthology.orchestra.run.vm09.stdout:(115/119): thrift-0.15.0-4.el9.x86_64.rpm 56 MB/s | 1.6 MB 00:00
2026-03-09T14:53:45.713 INFO:teuthology.orchestra.run.vm09.stdout:(116/119): python3-scipy-1.9.3-2.el9.x86_64.rpm 26 MB/s | 19 MB 00:00
2026-03-09T14:53:46.421 INFO:teuthology.orchestra.run.vm09.stdout:(117/119): python3-babel-2.9.1-2.el9.noarch.rpm 2.4 MB/s | 6.0 MB 00:02
2026-03-09T14:53:46.827 INFO:teuthology.orchestra.run.vm09.stdout:(118/119): librados2-18.2.0-0.el9.x86_64.rpm 2.7 MB/s | 3.3 MB 00:01
2026-03-09T14:53:46.892 INFO:teuthology.orchestra.run.vm05.stdout:(10/119): ceph-common-18.2.0-0.el9.x86_64.rpm 2.9 MB/s | 18 MB 00:06
2026-03-09T14:53:46.904 INFO:teuthology.orchestra.run.vm09.stdout:(119/119): librbd1-18.2.0-0.el9.x86_64.rpm 2.5 MB/s | 3.0 MB 00:01
2026-03-09T14:53:46.906 INFO:teuthology.orchestra.run.vm09.stdout:--------------------------------------------------------------------------------
2026-03-09T14:53:46.906 INFO:teuthology.orchestra.run.vm09.stdout:Total 14 MB/s | 182 MB 00:12
2026-03-09T14:53:46.996 INFO:teuthology.orchestra.run.vm05.stdout:(11/119): libcephfs-devel-18.2.0-0.el9.x86_64.r 295 kB/s | 30 kB 00:00
2026-03-09T14:53:47.292 INFO:teuthology.orchestra.run.vm05.stdout:(12/119): libcephfs2-18.2.0-0.el9.x86_64.rpm 2.2 MB/s | 653 kB 00:00
2026-03-09T14:53:47.393 INFO:teuthology.orchestra.run.vm05.stdout:(13/119): libcephsqlite-18.2.0-0.el9.x86_64.rpm 1.6 MB/s | 161 kB 00:00
2026-03-09T14:53:47.393 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-09T14:53:47.444 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-09T14:53:47.444 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-09T14:53:47.493 INFO:teuthology.orchestra.run.vm05.stdout:(14/119): librados-devel-18.2.0-0.el9.x86_64.rp 1.2 MB/s | 126 kB 00:00
2026-03-09T14:53:47.692 INFO:teuthology.orchestra.run.vm05.stdout:(15/119): libradosstriper1-18.2.0-0.el9.x86_64. 2.3 MB/s | 474 kB 00:00
2026-03-09T14:53:48.137 INFO:teuthology.orchestra.run.vm05.stdout:(16/119): ceph-osd-18.2.0-0.el9.x86_64.rpm 3.2 MB/s | 18 MB 00:05
2026-03-09T14:53:48.172 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-09T14:53:48.172 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-09T14:53:48.238 INFO:teuthology.orchestra.run.vm05.stdout:(17/119): python3-ceph-argparse-18.2.0-0.el9.x8 447 kB/s | 45 kB 00:00
2026-03-09T14:53:48.339 INFO:teuthology.orchestra.run.vm05.stdout:(18/119): python3-ceph-common-18.2.0-0.el9.x86_ 1.2 MB/s | 119 kB 00:00
2026-03-09T14:53:48.442 INFO:teuthology.orchestra.run.vm05.stdout:(19/119): python3-cephfs-18.2.0-0.el9.x86_64.rp 1.5 MB/s | 155 kB 00:00
2026-03-09T14:53:48.641 INFO:teuthology.orchestra.run.vm05.stdout:(20/119): python3-rados-18.2.0-0.el9.x86_64.rpm 1.6 MB/s | 321 kB 00:00
2026-03-09T14:53:48.744 INFO:teuthology.orchestra.run.vm05.stdout:(21/119): python3-rbd-18.2.0-0.el9.x86_64.rpm 2.8 MB/s | 297 kB 00:00
2026-03-09T14:53:48.845 INFO:teuthology.orchestra.run.vm05.stdout:(22/119): python3-rgw-18.2.0-0.el9.x86_64.rpm 993 kB/s | 99 kB 00:00
2026-03-09T14:53:48.945 INFO:teuthology.orchestra.run.vm05.stdout:(23/119): rbd-fuse-18.2.0-0.el9.x86_64.rpm 857 kB/s | 86 kB 00:00
2026-03-09T14:53:48.984 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-09T14:53:48.993 INFO:teuthology.orchestra.run.vm09.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/121
2026-03-09T14:53:49.007 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/121
2026-03-09T14:53:49.082 INFO:teuthology.orchestra.run.vm05.stdout:(24/119): librgw2-18.2.0-0.el9.x86_64.rpm 3.2 MB/s | 4.4 MB 00:01
2026-03-09T14:53:49.173 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/121
2026-03-09T14:53:49.175 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librados2-2:18.2.0-0.el9.x86_64 4/121
2026-03-09T14:53:49.184 INFO:teuthology.orchestra.run.vm05.stdout:(25/119): rbd-nbd-18.2.0-0.el9.x86_64.rpm 1.6 MB/s | 169 kB 00:00
2026-03-09T14:53:49.222 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:18.2.0-0.el9.x86_64 4/121
2026-03-09T14:53:49.224 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs2-2:18.2.0-0.el9.x86_64 5/121
2026-03-09T14:53:49.254 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs2-2:18.2.0-0.el9.x86_64 5/121
2026-03-09T14:53:49.265 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rados-2:18.2.0-0.el9.x86_64 6/121
2026-03-09T14:53:49.270 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/121
2026-03-09T14:53:49.272 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/121
2026-03-09T14:53:49.282 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/121
2026-03-09T14:53:49.283 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephsqlite-2:18.2.0-0.el9.x86_64 10/121
2026-03-09T14:53:49.284 INFO:teuthology.orchestra.run.vm05.stdout:(26/119): ceph-grafana-dashboards-18.2.0-0.el9. 233 kB/s | 23 kB 00:00
2026-03-09T14:53:49.319 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:18.2.0-0.el9.x86_64 10/121
2026-03-09T14:53:49.321 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libradosstriper1-2:18.2.0-0.el9.x86_64 11/121
2026-03-09T14:53:49.370 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:18.2.0-0.el9.x86_64 11/121
2026-03-09T14:53:49.376 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 12/121
2026-03-09T14:53:49.385 INFO:teuthology.orchestra.run.vm05.stdout:(27/119): ceph-mgr-cephadm-18.2.0-0.el9.noarch. 1.2 MB/s | 127 kB 00:00
2026-03-09T14:53:49.402 INFO:teuthology.orchestra.run.vm09.stdout: Installing : liboath-2.6.12-1.el9.x86_64 13/121
2026-03-09T14:53:49.411 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 14/121
2026-03-09T14:53:49.415 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 15/121
2026-03-09T14:53:49.442 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 16/121
2026-03-09T14:53:49.459 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 17/121
2026-03-09T14:53:49.464 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 18/121
2026-03-09T14:53:49.472 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 19/121
2026-03-09T14:53:49.475 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 20/121
2026-03-09T14:53:49.480 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 21/121
2026-03-09T14:53:49.491 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-argparse-2:18.2.0-0.el9.x86_64 22/121
2026-03-09T14:53:49.505 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cephfs-2:18.2.0-0.el9.x86_64 23/121
2026-03-09T14:53:49.534 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 24/121
2026-03-09T14:53:49.596 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 25/121
2026-03-09T14:53:49.613 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 26/121
2026-03-09T14:53:49.621 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rsa-4.9-2.el9.noarch 27/121
2026-03-09T14:53:49.631 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 28/121
2026-03-09T14:53:49.636 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librados-devel-2:18.2.0-0.el9.x86_64 29/121
2026-03-09T14:53:49.671 INFO:teuthology.orchestra.run.vm09.stdout: Installing : re2-1:20211101-20.el9.x86_64 30/121
2026-03-09T14:53:49.678 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 31/121
2026-03-09T14:53:49.697 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 32/121
2026-03-09T14:53:49.724 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 33/121
2026-03-09T14:53:49.732 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 34/121
2026-03-09T14:53:49.738 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 35/121
2026-03-09T14:53:49.754 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 36/121
2026-03-09T14:53:49.766 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 37/121
2026-03-09T14:53:49.780 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 38/121
2026-03-09T14:53:49.847 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 39/121
2026-03-09T14:53:49.856 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 40/121
2026-03-09T14:53:49.867 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 41/121
2026-03-09T14:53:49.915 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 42/121
2026-03-09T14:53:49.944 INFO:teuthology.orchestra.run.vm05.stdout:(28/119): rbd-mirror-18.2.0-0.el9.x86_64.rpm 3.0 MB/s | 3.0 MB 00:00
2026-03-09T14:53:49.982 INFO:teuthology.orchestra.run.vm05.stdout:(29/119): ceph-mgr-dashboard-18.2.0-0.el9.noarc 2.8 MB/s | 1.7 MB 00:00
2026-03-09T14:53:50.085 INFO:teuthology.orchestra.run.vm05.stdout:(30/119): ceph-mgr-modules-core-18.2.0-0.el9.no 2.3 MB/s | 240 kB 00:00
2026-03-09T14:53:50.185 INFO:teuthology.orchestra.run.vm05.stdout:(31/119): ceph-mgr-rook-18.2.0-0.el9.noarch.rpm 475 kB/s | 47 kB 00:00
2026-03-09T14:53:50.284 INFO:teuthology.orchestra.run.vm05.stdout:(32/119): ceph-prometheus-alerts-18.2.0-0.el9.n 147 kB/s | 15 kB 00:00
2026-03-09T14:53:50.324 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 43/121
2026-03-09T14:53:50.342 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 44/121
2026-03-09T14:53:50.348 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 45/121
2026-03-09T14:53:50.357 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 46/121
2026-03-09T14:53:50.362 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 47/121
2026-03-09T14:53:50.371 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 48/121
2026-03-09T14:53:50.374 INFO:teuthology.orchestra.run.vm09.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 49/121
2026-03-09T14:53:50.378 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 50/121
2026-03-09T14:53:50.386 INFO:teuthology.orchestra.run.vm05.stdout:(33/119): cephadm-18.2.0-0.el9.noarch.rpm 2.0 MB/s | 209 kB 00:00
2026-03-09T14:53:50.389 INFO:teuthology.orchestra.run.vm09.stdout: Installing : fmt-8.1.1-5.el9.x86_64 51/121
2026-03-09T14:53:50.398 INFO:teuthology.orchestra.run.vm09.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 52/121
2026-03-09T14:53:50.403 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 53/121
2026-03-09T14:53:50.412 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 54/121
2026-03-09T14:53:50.418 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 55/121
2026-03-09T14:53:50.427 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 56/121
2026-03-09T14:53:50.434 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 57/121
2026-03-09T14:53:50.480 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 58/121
2026-03-09T14:53:50.489 INFO:teuthology.orchestra.run.vm05.stdout:(34/119): ledmon-libs-1.1.0-3.el9.x86_64.rpm 395 kB/s | 40 kB 00:00
2026-03-09T14:53:50.556 INFO:teuthology.orchestra.run.vm05.stdout:(35/119): libconfig-1.7.2-9.el9.x86_64.rpm 1.1 MB/s | 72 kB 00:00
2026-03-09T14:53:50.687 INFO:teuthology.orchestra.run.vm05.stdout:(36/119): libgfortran-11.5.0-14.el9.x86_64.rpm 5.9 MB/s | 794 kB 00:00
2026-03-09T14:53:50.723 INFO:teuthology.orchestra.run.vm05.stdout:(37/119): libquadmath-11.5.0-14.el9.x86_64.rpm 5.1 MB/s | 184 kB 00:00
2026-03-09T14:53:50.757 INFO:teuthology.orchestra.run.vm05.stdout:(38/119): mailcap-2.1.49-5.el9.noarch.rpm 977 kB/s | 33 kB 00:00
2026-03-09T14:53:50.782 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 59/121
2026-03-09T14:53:50.799 INFO:teuthology.orchestra.run.vm05.stdout:(39/119): python3-cffi-1.14.5-5.el9.x86_64.rpm 6.0 MB/s | 253 kB 00:00
2026-03-09T14:53:50.815 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 60/121
2026-03-09T14:53:50.821 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 61/121
2026-03-09T14:53:50.870 INFO:teuthology.orchestra.run.vm05.stdout:(40/119): python3-cryptography-36.0.1-5.el9.x86 18 MB/s | 1.2 MB 00:00
2026-03-09T14:53:50.887 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-0.3.29-1.el9.x86_64 62/121
2026-03-09T14:53:50.891 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 63/121
2026-03-09T14:53:50.904 INFO:teuthology.orchestra.run.vm05.stdout:(41/119): python3-ply-3.11-14.el9.noarch.rpm 3.1 MB/s | 106 kB 00:00
2026-03-09T14:53:50.915 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 64/121
2026-03-09T14:53:50.940 INFO:teuthology.orchestra.run.vm05.stdout:(42/119): python3-pycparser-2.20-6.el9.noarch.r 3.6 MB/s | 135 kB 00:00
2026-03-09T14:53:50.980 INFO:teuthology.orchestra.run.vm05.stdout:(43/119): python3-requests-2.25.1-10.el9.noarch 3.1 MB/s | 126 kB 00:00
2026-03-09T14:53:51.017 INFO:teuthology.orchestra.run.vm05.stdout:(44/119): python3-urllib3-1.26.5-7.el9.noarch.r 5.8 MB/s | 218 kB 00:00
2026-03-09T14:53:51.159 INFO:teuthology.orchestra.run.vm05.stdout:(45/119): boost-program-options-1.75.0-13.el9.x 732 kB/s | 104 kB 00:00
2026-03-09T14:53:51.198 INFO:teuthology.orchestra.run.vm05.stdout:(46/119): flexiblas-3.0.4-9.el9.x86_64.rpm 767 kB/s | 30 kB 00:00
2026-03-09T14:53:51.340 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 65/121
2026-03-09T14:53:51.366 INFO:teuthology.orchestra.run.vm05.stdout:(47/119): flexiblas-netlib-3.0.4-9.el9.x86_64.r 18 MB/s | 3.0 MB 00:00
2026-03-09T14:53:51.413 INFO:teuthology.orchestra.run.vm05.stdout:(48/119): flexiblas-openblas-openmp-3.0.4-9.el9 317 kB/s | 15 kB 00:00
2026-03-09T14:53:51.437 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 66/121
2026-03-09T14:53:51.461 INFO:teuthology.orchestra.run.vm05.stdout:(49/119): libpmemobj-1.12.1-1.el9.x86_64.rpm 3.2 MB/s | 160 kB 00:00
2026-03-09T14:53:51.502 INFO:teuthology.orchestra.run.vm05.stdout:(50/119): librabbitmq-0.11.0-7.el9.x86_64.rpm 1.1 MB/s | 45 kB 00:00
2026-03-09T14:53:51.562 INFO:teuthology.orchestra.run.vm05.stdout:(51/119): librdkafka-1.6.1-102.el9.x86_64.rpm 11 MB/s | 662 kB 00:00
2026-03-09T14:53:51.610 INFO:teuthology.orchestra.run.vm05.stdout:(52/119): libstoragemgmt-1.10.1-1.el9.x86_64.rp 5.0 MB/s | 246 kB 00:00
2026-03-09T14:53:51.639 INFO:teuthology.orchestra.run.vm05.stdout:(53/119): libxslt-1.1.34-12.el9.x86_64.rpm 8.1 MB/s | 233 kB 00:00
2026-03-09T14:53:51.681 INFO:teuthology.orchestra.run.vm05.stdout:(54/119): lttng-ust-2.12.0-6.el9.x86_64.rpm 6.9 MB/s | 292 kB 00:00
2026-03-09T14:53:51.711 INFO:teuthology.orchestra.run.vm05.stdout:(55/119): openblas-0.3.29-1.el9.x86_64.rpm 1.3 MB/s | 42 kB 00:00
2026-03-09T14:53:51.857 INFO:teuthology.orchestra.run.vm05.stdout:(56/119): openblas-openmp-0.3.29-1.el9.x86_64.r 36 MB/s | 5.3 MB 00:00
2026-03-09T14:53:52.007 INFO:teuthology.orchestra.run.vm05.stdout:(57/119): python3-babel-2.9.1-2.el9.noarch.rpm 40 MB/s | 6.0 MB 00:00
2026-03-09T14:53:52.052 INFO:teuthology.orchestra.run.vm05.stdout:(58/119): python3-devel-3.9.25-3.el9.x86_64.rpm 5.4 MB/s | 244 kB 00:00
2026-03-09T14:53:52.118 INFO:teuthology.orchestra.run.vm05.stdout:(59/119): python3-jinja2-2.11.3-8.el9.noarch.rp 3.7 MB/s | 249 kB 00:00
2026-03-09T14:53:52.159 INFO:teuthology.orchestra.run.vm05.stdout:(60/119): python3-jmespath-1.0.1-1.el9.noarch.r 1.1 MB/s | 48 kB 00:00
2026-03-09T14:53:52.202 INFO:teuthology.orchestra.run.vm05.stdout:(61/119): python3-libstoragemgmt-1.10.1-1.el9.x 4.0 MB/s | 177 kB 00:00
2026-03-09T14:53:52.235 INFO:teuthology.orchestra.run.vm05.stdout:(62/119): python3-mako-1.1.4-6.el9.noarch.rpm 5.2 MB/s | 172 kB 00:00
2026-03-09T14:53:52.279 INFO:teuthology.orchestra.run.vm05.stdout:(63/119): python3-markupsafe-1.1.1-12.el9.x86_6 808 kB/s | 35 kB 00:00
2026-03-09T14:53:52.310 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 67/121
2026-03-09T14:53:52.341 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 68/121
2026-03-09T14:53:52.342 INFO:teuthology.orchestra.run.vm05.stdout:(64/119): ceph-mgr-diskprediction-local-18.2.0- 3.1 MB/s | 7.4 MB 00:02
2026-03-09T14:53:52.349 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 69/121
2026-03-09T14:53:52.354 INFO:teuthology.orchestra.run.vm09.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 70/121
2026-03-09T14:53:52.470 INFO:teuthology.orchestra.run.vm05.stdout:(65/119): python3-numpy-1.23.5-2.el9.x86_64.rpm 32 MB/s | 6.1 MB 00:00
2026-03-09T14:53:52.509 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 71/121
2026-03-09T14:53:52.511 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librbd1-2:18.2.0-0.el9.x86_64 72/121
2026-03-09T14:53:52.517 INFO:teuthology.orchestra.run.vm05.stdout:(66/119): python3-pyasn1-0.4.8-7.el9.noarch.rpm 3.3 MB/s | 157 kB 00:00
2026-03-09T14:53:52.542 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:18.2.0-0.el9.x86_64 72/121
2026-03-09T14:53:52.545 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rbd-2:18.2.0-0.el9.x86_64 73/121
2026-03-09T14:53:52.549 INFO:teuthology.orchestra.run.vm05.stdout:(67/119): python3-numpy-f2py-1.23.5-2.el9.x86_6 2.1 MB/s | 442 kB 00:00
2026-03-09T14:53:52.553 INFO:teuthology.orchestra.run.vm09.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 74/121
2026-03-09T14:53:52.559 INFO:teuthology.orchestra.run.vm05.stdout:(68/119): python3-pyasn1-modules-0.4.8-7.el9.no 6.5 MB/s | 277 kB 00:00
2026-03-09T14:53:52.586 INFO:teuthology.orchestra.run.vm05.stdout:(69/119): python3-requests-oauthlib-1.3.0-12.el 1.4 MB/s | 54 kB 00:00
2026-03-09T14:53:52.717 INFO:teuthology.orchestra.run.vm05.stdout:(70/119): python3-toml-0.10.2-6.el9.noarch.rpm 319 kB/s | 42 kB 00:00
2026-03-09T14:53:52.762 INFO:teuthology.orchestra.run.vm05.stdout:(71/119): socat-1.7.4.1-8.el9.x86_64.rpm 6.7 MB/s | 303 kB 00:00
2026-03-09T14:53:52.764 INFO:teuthology.orchestra.run.vm09.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 75/121
2026-03-09T14:53:52.766 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librgw2-2:18.2.0-0.el9.x86_64 76/121
2026-03-09T14:53:52.786 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:18.2.0-0.el9.x86_64 76/121
2026-03-09T14:53:52.795 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rgw-2:18.2.0-0.el9.x86_64 77/121
2026-03-09T14:53:52.801 INFO:teuthology.orchestra.run.vm05.stdout:(72/119): xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.6 MB/s | 64 kB 00:00
2026-03-09T14:53:52.812 INFO:teuthology.orchestra.run.vm05.stdout:(73/119): fmt-8.1.1-5.el9.x86_64.rpm 10 MB/s | 111 kB 00:00
2026-03-09T14:53:52.815 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ply-3.11-14.el9.noarch 78/121
2026-03-09T14:53:52.825 INFO:teuthology.orchestra.run.vm05.stdout:(74/119): gperftools-libs-2.9.1-3.el9.x86_64.rp 23 MB/s | 308 kB 00:00
2026-03-09T14:53:52.837 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 79/121
2026-03-09T14:53:52.937 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 80/121
2026-03-09T14:53:52.948 INFO:teuthology.orchestra.run.vm05.stdout:(75/119): libarrow-9.0.0-15.el9.x86_64.rpm 36 MB/s | 4.4 MB 00:00
2026-03-09T14:53:52.951 INFO:teuthology.orchestra.run.vm05.stdout:(76/119): libarrow-doc-9.0.0-15.el9.noarch.rpm 11 MB/s | 25 kB 00:00
2026-03-09T14:53:52.954 INFO:teuthology.orchestra.run.vm05.stdout:(77/119): liboath-2.6.12-1.el9.x86_64.rpm 17 MB/s | 49 kB 00:00
2026-03-09T14:53:52.957 INFO:teuthology.orchestra.run.vm05.stdout:(78/119): libunwind-1.6.2-1.el9.x86_64.rpm 25 MB/s | 67 kB 00:00
2026-03-09T14:53:52.958 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 81/121
2026-03-09T14:53:52.973 INFO:teuthology.orchestra.run.vm05.stdout:(79/119): parquet-libs-9.0.0-15.el9.x86_64.rpm 53 MB/s | 838 kB 00:00
2026-03-09T14:53:52.988 INFO:teuthology.orchestra.run.vm05.stdout:(80/119): python3-asyncssh-2.13.2-5.el9.noarch. 38 MB/s | 548 kB 00:00
2026-03-09T14:53:52.990 INFO:teuthology.orchestra.run.vm05.stdout:(81/119): python3-autocommand-2.2.2-8.el9.noarc 12 MB/s | 29 kB 00:00
2026-03-09T14:53:52.993 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 82/121
2026-03-09T14:53:52.993 INFO:teuthology.orchestra.run.vm05.stdout:(82/119): python3-backports-tarfile-1.2.0-1.el9 24 MB/s | 60 kB 00:00
2026-03-09T14:53:52.996 INFO:teuthology.orchestra.run.vm05.stdout:(83/119): python3-bcrypt-3.2.2-1.el9.x86_64.rpm 17 MB/s | 43 kB 00:00
2026-03-09T14:53:52.999 INFO:teuthology.orchestra.run.vm05.stdout:(84/119): python3-cachetools-4.2.4-1.el9.noarch 11 MB/s | 32 kB 00:00
2026-03-09T14:53:53.001 INFO:teuthology.orchestra.run.vm05.stdout:(85/119): python3-certifi-2023.05.07-4.el9.noar 6.9 MB/s | 14 kB 00:00
2026-03-09T14:53:53.006 INFO:teuthology.orchestra.run.vm05.stdout:(86/119): python3-cheroot-10.0.1-4.el9.noarch.r 38 MB/s | 173 kB 00:00
2026-03-09T14:53:53.017 INFO:teuthology.orchestra.run.vm05.stdout:(87/119): python3-cherrypy-18.6.1-2.el9.noarch. 32 MB/s | 358 kB 00:00
2026-03-09T14:53:53.034 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 83/121
2026-03-09T14:53:53.063 INFO:teuthology.orchestra.run.vm05.stdout:(88/119): python3-scipy-1.9.3-2.el9.x86_64.rpm 38 MB/s | 19 MB 00:00
2026-03-09T14:53:53.064 INFO:teuthology.orchestra.run.vm05.stdout:(89/119): python3-google-auth-2.45.0-1.el9.noar 5.3 MB/s | 254 kB 00:00
2026-03-09T14:53:53.067 INFO:teuthology.orchestra.run.vm05.stdout:(90/119): python3-jaraco-classes-3.2.1-5.el9.no 8.3 MB/s | 18 kB 00:00
2026-03-09T14:53:53.067 INFO:teuthology.orchestra.run.vm05.stdout:(91/119): python3-jaraco-8.2.1-3.el9.noarch.rpm 2.5 MB/s | 11 kB 00:00
2026-03-09T14:53:53.069 INFO:teuthology.orchestra.run.vm05.stdout:(92/119): python3-jaraco-collections-3.0.0-8.el 8.9 MB/s | 23 kB 00:00
2026-03-09T14:53:53.070 INFO:teuthology.orchestra.run.vm05.stdout:(93/119): python3-jaraco-context-6.0.1-3.el9.no 6.8 MB/s | 20 kB 00:00
2026-03-09T14:53:53.072 INFO:teuthology.orchestra.run.vm05.stdout:(94/119): python3-jaraco-functools-3.5.0-2.el9. 9.2 MB/s | 19 kB 00:00
2026-03-09T14:53:53.073 INFO:teuthology.orchestra.run.vm05.stdout:(95/119): python3-jaraco-text-4.0.0-2.el9.noarc 11 MB/s | 26 kB 00:00
2026-03-09T14:53:53.074 INFO:teuthology.orchestra.run.vm05.stdout:(96/119): python3-jwt+crypto-2.4.0-1.el9.noarch 4.4 MB/s | 9.0 kB 00:00
2026-03-09T14:53:53.075 INFO:teuthology.orchestra.run.vm05.stdout:(97/119): python3-jwt-2.4.0-1.el9.noarch.rpm 18 MB/s | 41 kB 00:00
2026-03-09T14:53:53.079 INFO:teuthology.orchestra.run.vm05.stdout:(98/119): python3-logutils-0.3.5-21.el9.noarch. 11 MB/s | 46 kB 00:00
2026-03-09T14:53:53.084 INFO:teuthology.orchestra.run.vm05.stdout:(99/119): python3-more-itertools-8.12.0-2.el9.n 17 MB/s | 79 kB 00:00
2026-03-09T14:53:53.088 INFO:teuthology.orchestra.run.vm05.stdout:(100/119): python3-natsort-7.1.1-5.el9.noarch.r 15 MB/s | 58 kB 00:00
2026-03-09T14:53:53.093 INFO:teuthology.orchestra.run.vm05.stdout:(101/119): python3-kubernetes-26.1.0-3.el9.noar 54 MB/s | 1.0 MB 00:00
2026-03-09T14:53:53.096 INFO:teuthology.orchestra.run.vm05.stdout:(102/119): python3-portend-3.1.0-2.el9.noarch.r 6.8 MB/s | 16 kB 00:00
2026-03-09T14:53:53.097 INFO:teuthology.orchestra.run.vm05.stdout:(103/119): python3-pecan-1.4.2-3.el9.noarch.rpm 29 MB/s | 272 kB 00:00
2026-03-09T14:53:53.099 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 84/121
2026-03-09T14:53:53.099 INFO:teuthology.orchestra.run.vm05.stdout:(104/119): python3-pyOpenSSL-21.0.0-1.el9.noarc 32 MB/s | 90 kB 00:00
2026-03-09T14:53:53.100 INFO:teuthology.orchestra.run.vm05.stdout:(105/119): python3-repoze-lru-0.7-16.el9.noarch 14 MB/s | 31 kB 00:00
2026-03-09T14:53:53.104 INFO:teuthology.orchestra.run.vm05.stdout:(106/119): python3-rsa-4.9-2.el9.noarch.rpm 14 MB/s | 59 kB 00:00
2026-03-09T14:53:53.105 INFO:teuthology.orchestra.run.vm05.stdout:(107/119): python3-routes-2.5.1-5.el9.noarch.rp 29 MB/s | 188 kB 00:00
2026-03-09T14:53:53.106 INFO:teuthology.orchestra.run.vm05.stdout:(108/119): python3-tempora-5.0.0-2.el9.noarch.r 16 MB/s | 36 kB 00:00
2026-03-09T14:53:53.108 INFO:teuthology.orchestra.run.vm05.stdout:(109/119): python3-typing-extensions-4.15.0-1.e 30 MB/s | 86 kB 00:00
2026-03-09T14:53:53.113 INFO:teuthology.orchestra.run.vm05.stdout:(110/119): python3-websocket-client-1.2.3-2.el9 20 MB/s | 90 kB 00:00
2026-03-09T14:53:53.114 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 85/121
2026-03-09T14:53:53.114 INFO:teuthology.orchestra.run.vm05.stdout:(111/119): python3-webob-1.8.8-2.el9.noarch.rpm 29 MB/s | 230 kB 00:00
2026-03-09T14:53:53.117 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jwt-2.4.0-1.el9.noarch 86/121
2026-03-09T14:53:53.117 INFO:teuthology.orchestra.run.vm05.stdout:(112/119): python3-xmltodict-0.12.0-15.el9.noar 7.3 MB/s | 22 kB 00:00
2026-03-09T14:53:53.120 INFO:teuthology.orchestra.run.vm05.stdout:(113/119): python3-werkzeug-2.0.3-3.el9.1.noarc 57 MB/s | 427 kB 00:00
2026-03-09T14:53:53.121 INFO:teuthology.orchestra.run.vm05.stdout:(114/119): python3-zc-lockfile-2.0-10.el9.noarc 4.7 MB/s | 20 kB 00:00
2026-03-09T14:53:53.124 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jwt+crypto-2.4.0-1.el9.noarch 87/121
2026-03-09T14:53:53.125 INFO:teuthology.orchestra.run.vm05.stdout:(115/119): re2-20211101-20.el9.x86_64.rpm 48 MB/s | 191 kB 00:00
2026-03-09T14:53:53.130 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 88/121
2026-03-09T14:53:53.135 INFO:teuthology.orchestra.run.vm09.stdout: Installing : mailcap-2.1.49-5.el9.noarch 89/121
2026-03-09T14:53:53.138 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 90/121
2026-03-09T14:53:53.147 INFO:teuthology.orchestra.run.vm05.stdout:(116/119): thrift-0.15.0-4.el9.x86_64.rpm 64 MB/s | 1.6 MB 00:00
2026-03-09T14:53:53.158 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 91/121
2026-03-09T14:53:53.158 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'libstoragemgmt' with GID 994.
2026-03-09T14:53:53.158 INFO:teuthology.orchestra.run.vm09.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 994 and GID 994.
2026-03-09T14:53:53.158 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T14:53:53.171 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 91/121
2026-03-09T14:53:53.204 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 91/121
2026-03-09T14:53:53.204 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-09T14:53:53.204 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T14:53:53.223 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 92/121
2026-03-09T14:53:53.285 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:18.2.0-0.el9.noarch 93/121
2026-03-09T14:53:53.288 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cephadm-2:18.2.0-0.el9.noarch 93/121
2026-03-09T14:53:53.293 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-prometheus-alerts-2:18.2.0-0.el9.noarch 94/121
2026-03-09T14:53:53.323 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-grafana-dashboards-2:18.2.0-0.el9.noarch 95/121
2026-03-09T14:53:53.327 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-common-2:18.2.0-0.el9.x86_64 96/121
2026-03-09T14:53:54.299 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:18.2.0-0.el9.x86_64 97/121
2026-03-09T14:53:54.305 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-common-2:18.2.0-0.el9.x86_64 97/121
2026-03-09T14:53:54.422 INFO:teuthology.orchestra.run.vm05.stdout:(117/119): librados2-18.2.0-0.el9.x86_64.rpm 2.5 MB/s | 3.3 MB 00:01
2026-03-09T14:53:54.438 INFO:teuthology.orchestra.run.vm05.stdout:(118/119): librbd1-18.2.0-0.el9.x86_64.rpm 2.3 MB/s | 3.0 MB 00:01
2026-03-09T14:53:54.623 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:18.2.0-0.el9.x86_64 97/121
2026-03-09T14:53:54.631 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-base-2:18.2.0-0.el9.x86_64 98/121
2026-03-09T14:53:54.671 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:18.2.0-0.el9.x86_64 98/121
2026-03-09T14:53:54.671 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-09T14:53:54.671 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-09T14:53:54.671 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T14:53:54.725 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-selinux-2:18.2.0-0.el9.x86_64 99/121
2026-03-09T14:53:54.978 INFO:teuthology.orchestra.run.vm05.stdout:(119/119): ceph-test-18.2.0-0.el9.x86_64.rpm 4.2 MB/s | 40 MB 00:09
2026-03-09T14:53:54.980 INFO:teuthology.orchestra.run.vm05.stdout:--------------------------------------------------------------------------------
2026-03-09T14:53:54.980 INFO:teuthology.orchestra.run.vm05.stdout:Total 12 MB/s | 182 MB 00:15
2026-03-09T14:53:55.515 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-09T14:53:55.569 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-09T14:53:55.569 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-09T14:53:56.356 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-09T14:53:56.356 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-09T14:53:57.197 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-09T14:53:57.206 INFO:teuthology.orchestra.run.vm05.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/121
2026-03-09T14:53:57.219 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/121
2026-03-09T14:53:57.393 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/121
2026-03-09T14:53:57.395 INFO:teuthology.orchestra.run.vm05.stdout: Upgrading : librados2-2:18.2.0-0.el9.x86_64 4/121
2026-03-09T14:53:57.442 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:18.2.0-0.el9.x86_64 4/121
2026-03-09T14:53:57.444 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs2-2:18.2.0-0.el9.x86_64 5/121
2026-03-09T14:53:57.475 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephfs2-2:18.2.0-0.el9.x86_64 5/121
2026-03-09T14:53:57.486 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rados-2:18.2.0-0.el9.x86_64 6/121
2026-03-09T14:53:57.491 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/121
2026-03-09T14:53:57.494 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/121
2026-03-09T14:53:57.565 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/121
2026-03-09T14:53:57.566 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephsqlite-2:18.2.0-0.el9.x86_64 10/121
2026-03-09T14:53:57.603 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephsqlite-2:18.2.0-0.el9.x86_64 10/121
2026-03-09T14:53:57.606 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libradosstriper1-2:18.2.0-0.el9.x86_64 11/121
2026-03-09T14:53:57.656 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libradosstriper1-2:18.2.0-0.el9.x86_64 11/121
2026-03-09T14:53:57.662 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 12/121
2026-03-09T14:53:57.690 INFO:teuthology.orchestra.run.vm05.stdout: Installing : liboath-2.6.12-1.el9.x86_64 13/121
2026-03-09T14:53:57.701 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 14/121
2026-03-09T14:53:57.705 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 15/121
2026-03-09T14:53:57.736 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 16/121
2026-03-09T14:53:57.755 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 17/121
2026-03-09T14:53:57.761 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 18/121
2026-03-09T14:53:57.771 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 19/121
2026-03-09T14:53:57.774 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 20/121
2026-03-09T14:53:57.781 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 21/121
2026-03-09T14:53:57.792 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ceph-argparse-2:18.2.0-0.el9.x86_64 22/121
2026-03-09T14:53:57.807 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cephfs-2:18.2.0-0.el9.x86_64 23/121
2026-03-09T14:53:57.843 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 24/121
2026-03-09T14:53:57.917 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 25/121
2026-03-09T14:53:57.943 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 26/121
2026-03-09T14:53:57.952 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rsa-4.9-2.el9.noarch 27/121
2026-03-09T14:53:57.963 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 28/121
2026-03-09T14:53:57.968 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librados-devel-2:18.2.0-0.el9.x86_64 29/121
2026-03-09T14:53:58.004 INFO:teuthology.orchestra.run.vm05.stdout: Installing : re2-1:20211101-20.el9.x86_64 30/121
2026-03-09T14:53:58.011 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 31/121
2026-03-09T14:53:58.032 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 32/121
2026-03-09T14:53:58.061 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 33/121
2026-03-09T14:53:58.068 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 34/121
2026-03-09T14:53:58.075 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 35/121
2026-03-09T14:53:58.090 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 36/121
2026-03-09T14:53:58.101 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 37/121
2026-03-09T14:53:58.113 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 38/121
2026-03-09T14:53:58.189 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 39/121
2026-03-09T14:53:58.197 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 40/121
2026-03-09T14:53:58.209 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 41/121
2026-03-09T14:53:58.263 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 42/121
2026-03-09T14:53:58.703 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 43/121
2026-03-09T14:53:58.721 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 44/121
2026-03-09T14:53:58.728 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 45/121
2026-03-09T14:53:58.735 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 46/121
2026-03-09T14:53:58.741 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 47/121
2026-03-09T14:53:58.749 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 48/121
2026-03-09T14:53:58.752 INFO:teuthology.orchestra.run.vm05.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 49/121
2026-03-09T14:53:58.756 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 50/121
2026-03-09T14:53:58.767 INFO:teuthology.orchestra.run.vm05.stdout: Installing : fmt-8.1.1-5.el9.x86_64 51/121
2026-03-09T14:53:58.776 INFO:teuthology.orchestra.run.vm05.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 52/121
2026-03-09T14:53:58.781 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 53/121
2026-03-09T14:53:58.789 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 54/121
2026-03-09T14:53:58.798 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 55/121
2026-03-09T14:53:58.807 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 56/121
2026-03-09T14:53:58.813 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 57/121
2026-03-09T14:53:58.858 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 58/121
2026-03-09T14:53:59.146 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 59/121
2026-03-09T14:53:59.178 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 60/121
2026-03-09T14:53:59.185 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 61/121
2026-03-09T14:53:59.251 INFO:teuthology.orchestra.run.vm05.stdout: Installing : openblas-0.3.29-1.el9.x86_64 62/121
2026-03-09T14:53:59.254 INFO:teuthology.orchestra.run.vm05.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 63/121
2026-03-09T14:53:59.279 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 64/121
2026-03-09T14:53:59.679 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 65/121
2026-03-09T14:53:59.774 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 66/121
2026-03-09T14:54:00.664 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 67/121
2026-03-09T14:54:00.997 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 68/121
2026-03-09T14:54:01.006 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 69/121
2026-03-09T14:54:01.011 INFO:teuthology.orchestra.run.vm05.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 70/121
2026-03-09T14:54:01.173 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 71/121
2026-03-09T14:54:01.176 INFO:teuthology.orchestra.run.vm05.stdout: Upgrading : librbd1-2:18.2.0-0.el9.x86_64 72/121
2026-03-09T14:54:01.208 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:18.2.0-0.el9.x86_64 72/121
2026-03-09T14:54:01.212 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rbd-2:18.2.0-0.el9.x86_64 73/121
2026-03-09T14:54:01.220 INFO:teuthology.orchestra.run.vm05.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 74/121
2026-03-09T14:54:01.439 INFO:teuthology.orchestra.run.vm05.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 75/121
2026-03-09T14:54:01.441 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librgw2-2:18.2.0-0.el9.x86_64 76/121
2026-03-09T14:54:01.461 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librgw2-2:18.2.0-0.el9.x86_64 76/121
2026-03-09T14:54:01.471 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rgw-2:18.2.0-0.el9.x86_64 77/121
2026-03-09T14:54:01.489 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ply-3.11-14.el9.noarch 78/121
2026-03-09T14:54:01.509 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 79/121
2026-03-09T14:54:01.533 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:18.2.0-0.el9.x86_64 99/121
2026-03-09T14:54:01.533 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys
2026-03-09T14:54:01.533 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc
2026-03-09T14:54:01.533 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt
2026-03-09T14:54:01.533 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp
2026-03-09T14:54:01.533 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home
2026-03-09T14:54:01.533 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root
2026-03-09T14:54:01.533 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp
2026-03-09T14:54:01.533 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T14:54:01.565 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 100/121
2026-03-09T14:54:01.599 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 80/121
2026-03-09T14:54:01.612 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 81/121
2026-03-09T14:54:01.640 INFO:teuthology.orchestra.run.vm05.stdout: Installing :
python3-pyOpenSSL-21.0.0-1.el9.noarch 82/121 2026-03-09T14:54:01.679 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 83/121 2026-03-09T14:54:01.693 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 100/121 2026-03-09T14:54:01.699 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 101/121 2026-03-09T14:54:01.743 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 84/121 2026-03-09T14:54:01.756 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 85/121 2026-03-09T14:54:01.759 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jwt-2.4.0-1.el9.noarch 86/121 2026-03-09T14:54:01.767 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jwt+crypto-2.4.0-1.el9.noarch 87/121 2026-03-09T14:54:01.772 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 88/121 2026-03-09T14:54:01.777 INFO:teuthology.orchestra.run.vm05.stdout: Installing : mailcap-2.1.49-5.el9.noarch 89/121 2026-03-09T14:54:01.781 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 90/121 2026-03-09T14:54:01.803 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 91/121 2026-03-09T14:54:01.803 INFO:teuthology.orchestra.run.vm05.stdout:Creating group 'libstoragemgmt' with GID 994. 2026-03-09T14:54:01.803 INFO:teuthology.orchestra.run.vm05.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 994 and GID 994. 
2026-03-09T14:54:01.803 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:01.816 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 91/121 2026-03-09T14:54:01.845 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 91/121 2026-03-09T14:54:01.845 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-09T14:54:01.845 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:01.866 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 92/121 2026-03-09T14:54:01.927 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: cephadm-2:18.2.0-0.el9.noarch 93/121 2026-03-09T14:54:01.930 INFO:teuthology.orchestra.run.vm05.stdout: Installing : cephadm-2:18.2.0-0.el9.noarch 93/121 2026-03-09T14:54:01.935 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-prometheus-alerts-2:18.2.0-0.el9.noarch 94/121 2026-03-09T14:54:01.964 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-grafana-dashboards-2:18.2.0-0.el9.noarch 95/121 2026-03-09T14:54:01.967 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ceph-common-2:18.2.0-0.el9.x86_64 96/121 2026-03-09T14:54:02.238 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 101/121 2026-03-09T14:54:02.239 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noa 102/121 2026-03-09T14:54:02.299 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noa 102/121 2026-03-09T14:54:02.373 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-modules-core-2:18.2.0-0.el9.noarch 103/121 2026-03-09T14:54:02.375 INFO:teuthology.orchestra.run.vm09.stdout: Installing : 
ceph-mgr-2:18.2.0-0.el9.x86_64 104/121 2026-03-09T14:54:02.398 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:18.2.0-0.el9.x86_64 104/121 2026-03-09T14:54:02.398 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:02.398 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T14:54:02.398 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-09T14:54:02.398 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-09T14:54:02.398 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:54:02.410 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-rook-2:18.2.0-0.el9.noarch 105/121 2026-03-09T14:54:02.515 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:18.2.0-0.el9.noarch 105/121 2026-03-09T14:54:02.518 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mds-2:18.2.0-0.el9.x86_64 106/121 2026-03-09T14:54:02.539 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:18.2.0-0.el9.x86_64 106/121 2026-03-09T14:54:02.539 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:02.539 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-09T14:54:02.539 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 
2026-03-09T14:54:02.539 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-09T14:54:02.539 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:54:02.765 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mon-2:18.2.0-0.el9.x86_64 107/121 2026-03-09T14:54:02.788 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:18.2.0-0.el9.x86_64 107/121 2026-03-09T14:54:02.788 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:02.788 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-09T14:54:02.788 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-09T14:54:02.789 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-09T14:54:02.789 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:54:02.982 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:18.2.0-0.el9.x86_64 97/121 2026-03-09T14:54:02.988 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-common-2:18.2.0-0.el9.x86_64 97/121 2026-03-09T14:54:03.317 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:18.2.0-0.el9.x86_64 97/121 2026-03-09T14:54:03.325 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-base-2:18.2.0-0.el9.x86_64 98/121 2026-03-09T14:54:03.363 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:18.2.0-0.el9.x86_64 98/121 2026-03-09T14:54:03.364 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 
2026-03-09T14:54:03.364 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 2026-03-09T14:54:03.364 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:03.368 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-selinux-2:18.2.0-0.el9.x86_64 99/121 2026-03-09T14:54:03.627 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-osd-2:18.2.0-0.el9.x86_64 108/121 2026-03-09T14:54:03.651 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:18.2.0-0.el9.x86_64 108/121 2026-03-09T14:54:03.651 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:03.651 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-09T14:54:03.651 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-09T14:54:03.651 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-09T14:54:03.651 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:54:04.012 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-2:18.2.0-0.el9.x86_64 109/121 2026-03-09T14:54:04.016 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-radosgw-2:18.2.0-0.el9.x86_64 110/121 2026-03-09T14:54:04.037 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:18.2.0-0.el9.x86_64 110/121 2026-03-09T14:54:04.037 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:04.037 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 
2026-03-09T14:54:04.037 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-09T14:54:04.037 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-09T14:54:04.037 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:54:04.048 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-immutable-object-cache-2:18.2.0-0.el9.x86_6 111/121 2026-03-09T14:54:04.069 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.0-0.el9.x86_6 111/121 2026-03-09T14:54:04.069 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:04.069 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-09T14:54:04.069 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:54:04.214 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-mirror-2:18.2.0-0.el9.x86_64 112/121 2026-03-09T14:54:04.235 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:18.2.0-0.el9.x86_64 112/121 2026-03-09T14:54:04.235 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:04.235 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-09T14:54:04.235 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-09T14:54:04.235 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-09T14:54:04.235 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:54:06.314 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-test-2:18.2.0-0.el9.x86_64 113/121 2026-03-09T14:54:06.326 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-fuse-2:18.2.0-0.el9.x86_64 114/121 2026-03-09T14:54:06.332 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-nbd-2:18.2.0-0.el9.x86_64 115/121 2026-03-09T14:54:06.373 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs-devel-2:18.2.0-0.el9.x86_64 116/121 2026-03-09T14:54:06.380 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-fuse-2:18.2.0-0.el9.x86_64 117/121 2026-03-09T14:54:06.390 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 118/121 2026-03-09T14:54:06.394 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 119/121 2026-03-09T14:54:06.394 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 120/121 2026-03-09T14:54:06.413 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 120/121 2026-03-09T14:54:06.413 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 121/121 2026-03-09T14:54:07.470 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 121/121 2026-03-09T14:54:07.470 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:18.2.0-0.el9.x86_64 1/121 2026-03-09T14:54:07.470 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-base-2:18.2.0-0.el9.x86_64 2/121 2026-03-09T14:54:07.470 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:18.2.0-0.el9.x86_64 3/121 2026-03-09T14:54:07.470 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:18.2.0-0.el9.x86_64 4/121 2026-03-09T14:54:07.470 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:18.2.0-0.el9.x86_6 5/121 2026-03-09T14:54:07.470 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:18.2.0-0.el9.x86_64 6/121 2026-03-09T14:54:07.470 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:18.2.0-0.el9.x86_64 7/121 2026-03-09T14:54:07.470 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:18.2.0-0.el9.x86_64 8/121 2026-03-09T14:54:07.470 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-osd-2:18.2.0-0.el9.x86_64 9/121 2026-03-09T14:54:07.470 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:18.2.0-0.el9.x86_64 10/121 2026-03-09T14:54:07.470 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:18.2.0-0.el9.x86_64 11/121 2026-03-09T14:54:07.470 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:18.2.0-0.el9.x86_64 12/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:18.2.0-0.el9.x86_64 13/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:18.2.0-0.el9.x86_64 14/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:18.2.0-0.el9.x86_64 15/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados-devel-2:18.2.0-0.el9.x86_64 16/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libradosstriper1-2:18.2.0-0.el9.x86_64 17/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:18.2.0-0.el9.x86_64 18/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:18.2.0-0.el9.x86_64 19/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
python3-ceph-common-2:18.2.0-0.el9.x86_64 20/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:18.2.0-0.el9.x86_64 21/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:18.2.0-0.el9.x86_64 22/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:18.2.0-0.el9.x86_64 23/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:18.2.0-0.el9.x86_64 24/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-fuse-2:18.2.0-0.el9.x86_64 25/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:18.2.0-0.el9.x86_64 26/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:18.2.0-0.el9.x86_64 27/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:18.2.0-0.el9.noarch 28/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 29/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 30/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noa 31/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:18.2.0-0.el9.noarch 32/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:18.2.0-0.el9.noarch 33/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:18.2.0-0.el9.noarch 34/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:18.2.0-0.el9.noarch 35/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
ledmon-libs-1.1.0-3.el9.x86_64 36/121 2026-03-09T14:54:07.471 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 37/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 38/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 39/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 40/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 41/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 42/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 43/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 44/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 45/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 46/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 47/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 48/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 49/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 50/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 51/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 52/121 2026-03-09T14:54:07.473 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 53/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 54/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 55/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 56/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 57/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 58/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 59/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 60/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 61/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 62/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 63/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 64/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 65/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 66/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 67/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 68/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
python3-pyasn1-modules-0.4.8-7.el9.noarch 69/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 70/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 71/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 72/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 73/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 74/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : fmt-8.1.1-5.el9.x86_64 75/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 76/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 77/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 78/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 79/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 80/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 81/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 82/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 83/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 84/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 85/121 2026-03-09T14:54:07.473 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 86/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 87/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 88/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 89/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 90/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 91/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 92/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 93/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 94/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 95/121 2026-03-09T14:54:07.473 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 96/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jwt+crypto-2.4.0-1.el9.noarch 97/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jwt-2.4.0-1.el9.noarch 98/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 99/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 100/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 101/121 
2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 102/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 103/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 104/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 105/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 106/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 107/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 108/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 109/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 110/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 111/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 112/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 113/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 114/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 115/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 116/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 117/121 2026-03-09T14:54:07.474 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:18.2.0-0.el9.x86_64 118/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 119/121 2026-03-09T14:54:07.474 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:18.2.0-0.el9.x86_64 120/121 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 121/121 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout:Upgraded: 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:18.2.0-0.el9.x86_64 librbd1-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout:Installed: 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:18.2.0-0.el9.noarch 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 2026-03-09T14:54:07.573 
INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noarch 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:18.2.0-0.el9.noarch 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:18.2.0-0.el9.noarch 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:18.2.0-0.el9.noarch 2026-03-09T14:54:07.573 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:18.2.0-0.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: fmt-8.1.1-5.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: 
libcephfs2-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T14:54:07.574 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T14:54:07.574 
INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-jwt-2.4.0-1.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-jwt+crypto-2.4.0-1.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T14:54:07.575 
INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T14:54:07.575 
INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:54:07.575 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-09T14:54:07.670 DEBUG:teuthology.parallel:result is None 2026-03-09T14:54:09.974 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-selinux-2:18.2.0-0.el9.x86_64 99/121 2026-03-09T14:54:09.975 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /sys 2026-03-09T14:54:09.975 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /proc 2026-03-09T14:54:09.975 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /mnt 2026-03-09T14:54:09.975 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /var/tmp 2026-03-09T14:54:09.975 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /home 2026-03-09T14:54:09.975 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /root 2026-03-09T14:54:09.975 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /tmp 2026-03-09T14:54:09.975 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:10.006 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 100/121 2026-03-09T14:54:10.132 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 100/121 2026-03-09T14:54:10.137 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 101/121 2026-03-09T14:54:10.658 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 101/121 2026-03-09T14:54:10.660 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noa 102/121 2026-03-09T14:54:10.722 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noa 102/121 2026-03-09T14:54:10.798 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-modules-core-2:18.2.0-0.el9.noarch 103/121 2026-03-09T14:54:10.801 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-2:18.2.0-0.el9.x86_64 104/121 
2026-03-09T14:54:10.826 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:18.2.0-0.el9.x86_64 104/121 2026-03-09T14:54:10.826 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:10.826 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T14:54:10.826 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-09T14:54:10.826 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-09T14:54:10.826 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:10.840 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-rook-2:18.2.0-0.el9.noarch 105/121 2026-03-09T14:54:10.962 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-rook-2:18.2.0-0.el9.noarch 105/121 2026-03-09T14:54:10.965 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mds-2:18.2.0-0.el9.x86_64 106/121 2026-03-09T14:54:10.988 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:18.2.0-0.el9.x86_64 106/121 2026-03-09T14:54:10.988 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:10.988 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-09T14:54:10.988 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-09T14:54:10.988 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 
2026-03-09T14:54:10.988 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:11.228 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mon-2:18.2.0-0.el9.x86_64 107/121 2026-03-09T14:54:11.253 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:18.2.0-0.el9.x86_64 107/121 2026-03-09T14:54:11.253 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:11.253 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-09T14:54:11.253 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-09T14:54:11.253 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-09T14:54:11.253 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:12.093 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-osd-2:18.2.0-0.el9.x86_64 108/121 2026-03-09T14:54:12.122 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:18.2.0-0.el9.x86_64 108/121 2026-03-09T14:54:12.123 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:12.123 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-09T14:54:12.123 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-09T14:54:12.123 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 
2026-03-09T14:54:12.123 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:12.513 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-2:18.2.0-0.el9.x86_64 109/121 2026-03-09T14:54:12.517 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-radosgw-2:18.2.0-0.el9.x86_64 110/121 2026-03-09T14:54:12.542 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:18.2.0-0.el9.x86_64 110/121 2026-03-09T14:54:12.542 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:12.542 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-09T14:54:12.542 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-09T14:54:12.543 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-09T14:54:12.543 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:12.556 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-immutable-object-cache-2:18.2.0-0.el9.x86_6 111/121 2026-03-09T14:54:12.580 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.0-0.el9.x86_6 111/121 2026-03-09T14:54:12.580 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:12.580 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-03-09T14:54:12.580 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:12.742 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-mirror-2:18.2.0-0.el9.x86_64 112/121 2026-03-09T14:54:12.767 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:18.2.0-0.el9.x86_64 112/121 2026-03-09T14:54:12.767 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:54:12.767 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-09T14:54:12.767 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-09T14:54:12.767 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-09T14:54:12.767 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:14.911 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-test-2:18.2.0-0.el9.x86_64 113/121 2026-03-09T14:54:14.923 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-fuse-2:18.2.0-0.el9.x86_64 114/121 2026-03-09T14:54:14.928 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-nbd-2:18.2.0-0.el9.x86_64 115/121 2026-03-09T14:54:14.970 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs-devel-2:18.2.0-0.el9.x86_64 116/121 2026-03-09T14:54:14.977 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-fuse-2:18.2.0-0.el9.x86_64 117/121 2026-03-09T14:54:14.986 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 118/121 2026-03-09T14:54:14.990 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 119/121 2026-03-09T14:54:14.990 INFO:teuthology.orchestra.run.vm05.stdout: Cleanup : 
librbd1-2:16.2.4-5.el9.x86_64 120/121 2026-03-09T14:54:15.007 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 120/121 2026-03-09T14:54:15.007 INFO:teuthology.orchestra.run.vm05.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 121/121 2026-03-09T14:54:16.117 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 121/121 2026-03-09T14:54:16.117 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-2:18.2.0-0.el9.x86_64 1/121 2026-03-09T14:54:16.117 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-base-2:18.2.0-0.el9.x86_64 2/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-common-2:18.2.0-0.el9.x86_64 3/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-fuse-2:18.2.0-0.el9.x86_64 4/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-immutable-object-cache-2:18.2.0-0.el9.x86_6 5/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mds-2:18.2.0-0.el9.x86_64 6/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-2:18.2.0-0.el9.x86_64 7/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mon-2:18.2.0-0.el9.x86_64 8/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-osd-2:18.2.0-0.el9.x86_64 9/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-radosgw-2:18.2.0-0.el9.x86_64 10/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-selinux-2:18.2.0-0.el9.x86_64 11/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-test-2:18.2.0-0.el9.x86_64 12/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs-devel-2:18.2.0-0.el9.x86_64 13/121 2026-03-09T14:54:16.118 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs2-2:18.2.0-0.el9.x86_64 14/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephsqlite-2:18.2.0-0.el9.x86_64 15/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados-devel-2:18.2.0-0.el9.x86_64 16/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libradosstriper1-2:18.2.0-0.el9.x86_64 17/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librgw2-2:18.2.0-0.el9.x86_64 18/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-argparse-2:18.2.0-0.el9.x86_64 19/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-common-2:18.2.0-0.el9.x86_64 20/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cephfs-2:18.2.0-0.el9.x86_64 21/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rados-2:18.2.0-0.el9.x86_64 22/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rbd-2:18.2.0-0.el9.x86_64 23/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rgw-2:18.2.0-0.el9.x86_64 24/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-fuse-2:18.2.0-0.el9.x86_64 25/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-mirror-2:18.2.0-0.el9.x86_64 26/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-nbd-2:18.2.0-0.el9.x86_64 27/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-grafana-dashboards-2:18.2.0-0.el9.noarch 28/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 29/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: 
Verifying : ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 30/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noa 31/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-modules-core-2:18.2.0-0.el9.noarch 32/121 2026-03-09T14:54:16.118 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-rook-2:18.2.0-0.el9.noarch 33/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-prometheus-alerts-2:18.2.0-0.el9.noarch 34/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cephadm-2:18.2.0-0.el9.noarch 35/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 36/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 37/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 38/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 39/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 40/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 41/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 42/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ply-3.11-14.el9.noarch 43/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 44/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 45/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-urllib3-1.26.5-7.el9.noarch 46/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 47/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 48/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 49/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 50/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 51/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 52/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 53/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 54/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 55/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 56/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 57/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 58/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 59/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 60/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 61/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 62/121 2026-03-09T14:54:16.120 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 63/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 64/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 65/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 66/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 67/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 68/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 69/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 70/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 71/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 72/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 73/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 74/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : fmt-8.1.1-5.el9.x86_64 75/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 76/121 2026-03-09T14:54:16.120 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 77/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 78/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
liboath-2.6.12-1.el9.x86_64 79/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 80/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 81/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 82/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 83/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 84/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 85/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 86/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 87/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 88/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 89/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 90/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 91/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 92/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 93/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 94/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-jaraco-functools-3.5.0-2.el9.noarch 95/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 96/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jwt+crypto-2.4.0-1.el9.noarch 97/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jwt-2.4.0-1.el9.noarch 98/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 99/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 100/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 101/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 102/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 103/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 104/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 105/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 106/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 107/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 108/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 109/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 110/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-webob-1.8.8-2.el9.noarch 111/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 112/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 113/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 114/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 115/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : re2-1:20211101-20.el9.x86_64 116/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 117/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:18.2.0-0.el9.x86_64 118/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 119/121 2026-03-09T14:54:16.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:18.2.0-0.el9.x86_64 120/121 2026-03-09T14:54:16.227 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 121/121 2026-03-09T14:54:16.227 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:16.227 INFO:teuthology.orchestra.run.vm05.stdout:Upgraded: 2026-03-09T14:54:16.227 INFO:teuthology.orchestra.run.vm05.stdout: librados2-2:18.2.0-0.el9.x86_64 librbd1-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.227 INFO:teuthology.orchestra.run.vm05.stdout:Installed: 2026-03-09T14:54:16.227 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T14:54:16.227 INFO:teuthology.orchestra.run.vm05.stdout: ceph-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: 
ceph-common-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards-2:18.2.0-0.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core-2:18.2.0-0.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook-2:18.2.0-0.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts-2:18.2.0-0.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: cephadm-2:18.2.0-0.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: 
flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: fmt-8.1.1-5.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: librgw2-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 
INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: 
python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T14:54:16.228 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt-2.4.0-1.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt+crypto-2.4.0-1.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T14:54:16.229 
INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T14:54:16.229 
INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd-2:18.2.0-0.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:54:16.229 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 
2026-03-09T14:54:16.323 DEBUG:teuthology.parallel:result is None
2026-03-09T14:54:16.323 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using tag
2026-03-09T14:54:16.323 INFO:teuthology.packaging:ref: None
2026-03-09T14:54:16.323 INFO:teuthology.packaging:tag: v18.2.0
2026-03-09T14:54:16.323 INFO:teuthology.packaging:branch: None
2026-03-09T14:54:16.323 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:54:16.323 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=5dd24139a1eada541a3bc16b6941c5dde975e26d
2026-03-09T14:54:16.937 DEBUG:teuthology.orchestra.run.vm05:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-09T14:54:16.958 INFO:teuthology.orchestra.run.vm05.stdout:18.2.0-0.el9
2026-03-09T14:54:16.958 INFO:teuthology.packaging:The installed version of ceph is 18.2.0-0.el9
2026-03-09T14:54:16.958 INFO:teuthology.task.install:The correct ceph version 18.2.0-0 is installed.
2026-03-09T14:54:16.959 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using tag
2026-03-09T14:54:16.959 INFO:teuthology.packaging:ref: None
2026-03-09T14:54:16.959 INFO:teuthology.packaging:tag: v18.2.0
2026-03-09T14:54:16.959 INFO:teuthology.packaging:branch: None
2026-03-09T14:54:16.959 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:54:16.959 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=5dd24139a1eada541a3bc16b6941c5dde975e26d
2026-03-09T14:54:17.670 DEBUG:teuthology.orchestra.run.vm09:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-09T14:54:17.693 INFO:teuthology.orchestra.run.vm09.stdout:18.2.0-0.el9
2026-03-09T14:54:17.693 INFO:teuthology.packaging:The installed version of ceph is 18.2.0-0.el9
2026-03-09T14:54:17.693 INFO:teuthology.task.install:The correct ceph version 18.2.0-0 is installed.
2026-03-09T14:54:17.694 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-09T14:54:17.694 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:54:17.694 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T14:54:17.725 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T14:54:17.725 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T14:54:17.764 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-09T14:54:17.764 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:54:17.765 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T14:54:17.789 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T14:54:17.855 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T14:54:17.855 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T14:54:17.882 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T14:54:17.949 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-09T14:54:17.949 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:54:17.949 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T14:54:17.975 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T14:54:18.042 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T14:54:18.042 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T14:54:18.071 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T14:54:18.136 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-09T14:54:18.136 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:54:18.136 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T14:54:18.163 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T14:54:18.230 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T14:54:18.231 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T14:54:18.254 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T14:54:18.320 INFO:teuthology.run_tasks:Running task print...
2026-03-09T14:54:18.322 INFO:teuthology.task.print:**** done install task...
2026-03-09T14:54:18.322 INFO:teuthology.run_tasks:Running task cephadm... 2026-03-09T14:54:18.371 INFO:tasks.cephadm:Config: {'compiled_cephadm_branch': 'reef', 'conf': {'osd': {'osd_class_default_list': '*', 'osd_class_load_list': '*', 'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'bitmap', 'bluestore block size': 96636764160, 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd op complaint time': 180}, 'client': {'client mount timeout': 600, 'debug client': 20, 'debug ms': 1, 'rados mon op timeout': 900, 'rados osd op timeout': 900}, 'global': {'mon pg warn min per osd': 0}, 'mds': {'debug mds': 20, 'debug mds balancer': 20, 'debug ms': 1, 'mds debug frag': True, 'mds debug scatterstat': True, 'mds op complaint time': 180, 'mds verify scatter': True, 'osd op complaint time': 180, 'rados mon op timeout': 900, 'rados osd op timeout': 900}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20, 'mon down mkfs grace': 300, 'mon op complaint time': 120}}, 'image': 'quay.io/ceph/ceph:v18.2.0', 'roleless': True, 'cluster-conf': {'mgr': {'client mount timeout': 30, 'debug client': 20, 'debug mgr': 20, 'debug ms': 1, 'mon warn on pool no app': False}}, 'flavor': 'default', 'fs': 'xfs', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'FS_DEGRADED', 'filesystem is degraded', 'FS_INLINE_DATA_DEPRECATED', 'FS_WITH_FAILED_MDS', 'MDS_ALL_DOWN', 'filesystem is offline', 'is offline because no MDS', 'MDS_DAMAGE', 'MDS_DEGRADED', 'MDS_FAILED', 'MDS_INSUFFICIENT_STANDBY', 'MDS_UP_LESS_THAN_MAX', 'online, but wants', 'filesystem is online with fewer MDS than max_mds', 
'POOL_APP_NOT_ENABLED', 'do not have an application enabled', 'overall HEALTH_', 'Replacing daemon', 'deprecated feature inline_data', 'MGR_MODULE_ERROR', 'OSD_DOWN', 'osds down', 'overall HEALTH_', '\\(OSD_DOWN\\)', '\\(OSD_', 'but it is still running', 'is not responding', 'MON_DOWN', 'PG_AVAILABILITY', 'PG_DEGRADED', 'Reduced data availability', 'Degraded data redundancy', 'pg .* is stuck inactive', 'pg .* is .*degraded', 'pg .* is stuck peering'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-09T14:54:18.371 INFO:tasks.cephadm:Cluster image is quay.io/ceph/ceph:v18.2.0
2026-03-09T14:54:18.371 INFO:tasks.cephadm:Cluster fsid is d952ca1a-1bc7-11f1-a184-f9dcb7ee7000
2026-03-09T14:54:18.371 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-09T14:54:18.372 INFO:tasks.cephadm:No mon roles; fabricating mons
2026-03-09T14:54:18.372 INFO:tasks.cephadm:Monitor IPs: {'mon.vm05': '192.168.123.105', 'mon.vm09': '192.168.123.109'}
2026-03-09T14:54:18.372 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-09T14:54:18.372 DEBUG:teuthology.orchestra.run.vm05:> sudo hostname $(hostname -s)
2026-03-09T14:54:18.402 DEBUG:teuthology.orchestra.run.vm09:> sudo hostname $(hostname -s)
2026-03-09T14:54:18.429 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra for reef
2026-03-09T14:54:18.429 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:54:19.006 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-09T14:54:19.809 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref reef, sha1 ab47f43c099b2cbae6e21342fe673ce251da54d6 from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&ref=reef
2026-03-09T14:54:19.810 INFO:tasks.cephadm:Discovered cachra url: https://3.chacra.ceph.com/binaries/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/x86_64/flavors/default/cephadm
2026-03-09T14:54:19.810 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/x86_64/flavors/default/cephadm
2026-03-09T14:54:19.810 DEBUG:teuthology.orchestra.run.vm05:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T14:54:21.276 INFO:teuthology.orchestra.run.vm05.stdout:-rw-r--r--. 1 ubuntu ubuntu 217462 Mar 9 14:54 /home/ubuntu/cephtest/cephadm
2026-03-09T14:54:21.276 DEBUG:teuthology.orchestra.run.vm09:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/reef/ab47f43c099b2cbae6e21342fe673ce251da54d6/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T14:54:22.600 INFO:teuthology.orchestra.run.vm09.stdout:-rw-r--r--. 1 ubuntu ubuntu 217462 Mar 9 14:54 /home/ubuntu/cephtest/cephadm
2026-03-09T14:54:22.600 DEBUG:teuthology.orchestra.run.vm05:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T14:54:22.618 DEBUG:teuthology.orchestra.run.vm09:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T14:54:22.642 INFO:tasks.cephadm:Pulling image quay.io/ceph/ceph:v18.2.0 on all hosts...
2026-03-09T14:54:22.642 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 pull
2026-03-09T14:54:22.660 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 pull
2026-03-09T14:54:22.797 INFO:teuthology.orchestra.run.vm05.stderr:Pulling container image quay.io/ceph/ceph:v18.2.0...
2026-03-09T14:54:22.814 INFO:teuthology.orchestra.run.vm09.stderr:Pulling container image quay.io/ceph/ceph:v18.2.0...
2026-03-09T14:55:06.710 INFO:teuthology.orchestra.run.vm09.stdout:{
2026-03-09T14:55:06.711 INFO:teuthology.orchestra.run.vm09.stdout: "ceph_version": "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)",
2026-03-09T14:55:06.711 INFO:teuthology.orchestra.run.vm09.stdout: "image_id": "dc2bc1663786ad5862ef6efcff18243ab7dfeda0bcb911ddf757a0d20feca946",
2026-03-09T14:55:06.711 INFO:teuthology.orchestra.run.vm09.stdout: "repo_digests": [
2026-03-09T14:55:06.711 INFO:teuthology.orchestra.run.vm09.stdout: "quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007",
2026-03-09T14:55:06.711 INFO:teuthology.orchestra.run.vm09.stdout: "quay.io/ceph/ceph@sha256:d58cf65589d0abf9d5261cd46fb62be3bfb29098febc78c0fcdf116a15274d27"
2026-03-09T14:55:06.711 INFO:teuthology.orchestra.run.vm09.stdout: ]
2026-03-09T14:55:06.711 INFO:teuthology.orchestra.run.vm09.stdout:}
2026-03-09T14:55:06.879 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-09T14:55:06.879 INFO:teuthology.orchestra.run.vm05.stdout: "ceph_version": "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)",
2026-03-09T14:55:06.879 INFO:teuthology.orchestra.run.vm05.stdout: "image_id": "dc2bc1663786ad5862ef6efcff18243ab7dfeda0bcb911ddf757a0d20feca946",
2026-03-09T14:55:06.879 INFO:teuthology.orchestra.run.vm05.stdout: "repo_digests": [
2026-03-09T14:55:06.879 INFO:teuthology.orchestra.run.vm05.stdout: "quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007",
2026-03-09T14:55:06.879 INFO:teuthology.orchestra.run.vm05.stdout: "quay.io/ceph/ceph@sha256:d58cf65589d0abf9d5261cd46fb62be3bfb29098febc78c0fcdf116a15274d27"
2026-03-09T14:55:06.879 INFO:teuthology.orchestra.run.vm05.stdout: ]
2026-03-09T14:55:06.879 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-09T14:55:06.891 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /etc/ceph
2026-03-09T14:55:06.919 DEBUG:teuthology.orchestra.run.vm09:> sudo mkdir -p /etc/ceph
2026-03-09T14:55:06.949 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 777 /etc/ceph
2026-03-09T14:55:06.987 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 777 /etc/ceph
2026-03-09T14:55:07.015 INFO:tasks.cephadm:Writing seed config...
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] osd_class_default_list = *
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] osd_class_load_list = *
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] bdev async discard = True
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] bdev enable discard = True
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] bluestore allocator = bitmap
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] bluestore block size = 96636764160
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] bluestore fsck on mount = True
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] debug bluefs = 1/20
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] debug bluestore = 1/20
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] debug rocksdb = 4/10
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] mon osd backfillfull_ratio = 0.85
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] mon osd full ratio = 0.9
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] mon osd nearfull ratio = 0.8
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] osd failsafe full ratio = 0.95
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] osd objectstore = bluestore
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [osd] osd op complaint time = 180
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [client] client mount timeout = 600
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [client] debug client = 20
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [client] debug ms = 1
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [client] rados mon op timeout = 900
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [client] rados osd op timeout = 900
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [global] mon pg warn min per osd = 0
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [mds] debug mds = 20
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [mds] debug mds balancer = 20
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [mds] debug ms = 1
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [mds] mds debug frag = True
2026-03-09T14:55:07.015 INFO:tasks.cephadm: override: [mds] mds debug scatterstat = True
2026-03-09T14:55:07.016 INFO:tasks.cephadm: override: [mds] mds op complaint time = 180
2026-03-09T14:55:07.016 INFO:tasks.cephadm: override: [mds] mds verify scatter = True
2026-03-09T14:55:07.016 INFO:tasks.cephadm: override: [mds] osd op complaint time = 180
2026-03-09T14:55:07.016 INFO:tasks.cephadm: override: [mds] rados mon op timeout = 900
2026-03-09T14:55:07.016 INFO:tasks.cephadm: override: [mds] rados osd op timeout = 900
2026-03-09T14:55:07.016 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-09T14:55:07.016 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-09T14:55:07.016 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-09T14:55:07.016 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-09T14:55:07.016 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-09T14:55:07.016 INFO:tasks.cephadm: override: [mon] mon down mkfs grace = 300
2026-03-09T14:55:07.016 INFO:tasks.cephadm: override: [mon] mon op complaint time = 120
2026-03-09T14:55:07.016 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T14:55:07.016 DEBUG:teuthology.orchestra.run.vm05:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-09T14:55:07.043 DEBUG:tasks.cephadm:Final config: [global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = d952ca1a-1bc7-11f1-a184-f9dcb7ee7000
mon pg warn min per osd = 0

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
osd_class_default_list = *
osd_class_load_list = *
bdev async discard = True
bdev enable discard = True
bluestore allocator = bitmap
bluestore block size = 96636764160
bluestore fsck on mount = True
debug bluefs = 1/20
debug bluestore = 1/20
debug ms = 1
debug osd = 20
debug rocksdb = 4/10
mon osd backfillfull_ratio = 0.85
mon osd full ratio = 0.9
mon osd nearfull ratio = 0.8
osd failsafe full ratio = 0.95
osd mclock iops capacity threshold hdd = 49000
osd objectstore = bluestore
osd op complaint time = 180

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
mon down mkfs grace = 300
mon op complaint time = 120

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true

[client]
client mount timeout = 600
debug client = 20
debug ms = 1
rados mon op timeout = 900
rados osd op timeout = 900

[mds]
debug mds = 20
debug mds balancer = 20
debug ms = 1
mds debug frag = True
mds debug scatterstat = True
mds op complaint time = 180
mds verify scatter = True
osd op complaint time = 180
rados mon op timeout = 900
rados osd op timeout = 900
2026-03-09T14:55:07.044 DEBUG:teuthology.orchestra.run.vm05:mon.vm05> sudo journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm05.service
2026-03-09T14:55:07.085 INFO:tasks.cephadm:Bootstrapping...
2026-03-09T14:55:07.085 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 -v bootstrap --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 192.168.123.105 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-09T14:55:07.202 INFO:teuthology.orchestra.run.vm05.stdout:--------------------------------------------------------------------------------
2026-03-09T14:55:07.202 INFO:teuthology.orchestra.run.vm05.stdout:cephadm ['--image', 'quay.io/ceph/ceph:v18.2.0', '-v', 'bootstrap', '--fsid', 'd952ca1a-1bc7-11f1-a184-f9dcb7ee7000', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-ip', '192.168.123.105', '--skip-admin-label']
2026-03-09T14:55:07.220 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stdout 5.8.0
2026-03-09T14:55:07.220 INFO:teuthology.orchestra.run.vm05.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-09T14:55:07.221 INFO:teuthology.orchestra.run.vm05.stdout:Verifying podman|docker is present...
2026-03-09T14:55:07.239 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stdout 5.8.0
2026-03-09T14:55:07.240 INFO:teuthology.orchestra.run.vm05.stdout:Verifying lvm2 is present...
2026-03-09T14:55:07.240 INFO:teuthology.orchestra.run.vm05.stdout:Verifying time synchronization is in place...
2026-03-09T14:55:07.247 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-09T14:55:07.247 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T14:55:07.252 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-09T14:55:07.252 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive
2026-03-09T14:55:07.258 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout enabled
2026-03-09T14:55:07.265 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout active
2026-03-09T14:55:07.265 INFO:teuthology.orchestra.run.vm05.stdout:Unit chronyd.service is enabled and running
2026-03-09T14:55:07.265 INFO:teuthology.orchestra.run.vm05.stdout:Repeating the final host check...
2026-03-09T14:55:07.283 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stdout 5.8.0
2026-03-09T14:55:07.283 INFO:teuthology.orchestra.run.vm05.stdout:podman (/bin/podman) version 5.8.0 is present
2026-03-09T14:55:07.283 INFO:teuthology.orchestra.run.vm05.stdout:systemctl is present
2026-03-09T14:55:07.283 INFO:teuthology.orchestra.run.vm05.stdout:lvcreate is present
2026-03-09T14:55:07.289 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-09T14:55:07.289 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T14:55:07.295 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-09T14:55:07.295 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive
2026-03-09T14:55:07.301 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout enabled
2026-03-09T14:55:07.308 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout active
2026-03-09T14:55:07.308 INFO:teuthology.orchestra.run.vm05.stdout:Unit chronyd.service is enabled and running
2026-03-09T14:55:07.308 INFO:teuthology.orchestra.run.vm05.stdout:Host looks OK
2026-03-09T14:55:07.308 INFO:teuthology.orchestra.run.vm05.stdout:Cluster fsid: d952ca1a-1bc7-11f1-a184-f9dcb7ee7000
2026-03-09T14:55:07.309 INFO:teuthology.orchestra.run.vm05.stdout:Acquiring lock 140384465089632 on /run/cephadm/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000.lock
2026-03-09T14:55:07.309 INFO:teuthology.orchestra.run.vm05.stdout:Lock 140384465089632 acquired on /run/cephadm/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000.lock
2026-03-09T14:55:07.309 INFO:teuthology.orchestra.run.vm05.stdout:Verifying IP 192.168.123.105 port 3300 ...
2026-03-09T14:55:07.309 INFO:teuthology.orchestra.run.vm05.stdout:Verifying IP 192.168.123.105 port 6789 ...
2026-03-09T14:55:07.309 INFO:teuthology.orchestra.run.vm05.stdout:Base mon IP(s) is [192.168.123.105:3300, 192.168.123.105:6789], mon addrv is [v2:192.168.123.105:3300,v1:192.168.123.105:6789]
2026-03-09T14:55:07.313 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.105 metric 100
2026-03-09T14:55:07.313 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.105 metric 100
2026-03-09T14:55:07.317 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-09T14:55:07.317 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium
2026-03-09T14:55:07.319 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-09T14:55:07.319 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-09T14:55:07.319 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-09T14:55:07.319 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000
2026-03-09T14:55:07.319 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:5/64 scope link noprefixroute
2026-03-09T14:55:07.319 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-09T14:55:07.319 INFO:teuthology.orchestra.run.vm05.stdout:Mon IP `192.168.123.105` is in CIDR network `192.168.123.0/24`
2026-03-09T14:55:07.320 INFO:teuthology.orchestra.run.vm05.stdout:Mon IP `192.168.123.105` is in CIDR network `192.168.123.0/24`
2026-03-09T14:55:07.320 INFO:teuthology.orchestra.run.vm05.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24']
2026-03-09T14:55:07.320 INFO:teuthology.orchestra.run.vm05.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-09T14:55:07.321 INFO:teuthology.orchestra.run.vm05.stdout:Pulling container image quay.io/ceph/ceph:v18.2.0...
2026-03-09T14:55:08.774 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stdout dc2bc1663786ad5862ef6efcff18243ab7dfeda0bcb911ddf757a0d20feca946
2026-03-09T14:55:08.774 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Trying to pull quay.io/ceph/ceph:v18.2.0...
2026-03-09T14:55:08.774 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Getting image source signatures
2026-03-09T14:55:08.774 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Copying blob sha256:3bd20aeff60302f668275dc2005d10679ae56492967a3a5a54fd3dde85333aec
2026-03-09T14:55:08.774 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Copying blob sha256:46af8f5390d4e94fc57efb422ccb97bb53dfe5b948546bfc191b46557eb2dbd9
2026-03-09T14:55:08.774 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Copying config sha256:dc2bc1663786ad5862ef6efcff18243ab7dfeda0bcb911ddf757a0d20feca946
2026-03-09T14:55:08.774 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Writing manifest to image destination
2026-03-09T14:55:08.933 INFO:teuthology.orchestra.run.vm05.stdout:ceph: stdout ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
2026-03-09T14:55:08.933 INFO:teuthology.orchestra.run.vm05.stdout:Ceph version: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
2026-03-09T14:55:08.933 INFO:teuthology.orchestra.run.vm05.stdout:Extracting ceph user uid/gid from container image...
2026-03-09T14:55:09.050 INFO:teuthology.orchestra.run.vm05.stdout:stat: stdout 167 167
2026-03-09T14:55:09.050 INFO:teuthology.orchestra.run.vm05.stdout:Creating initial keys...
2026-03-09T14:55:09.170 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-authtool: stdout AQBN365pI5uHBxAAVfzGZee5fSWK1z9TXxK/ew==
2026-03-09T14:55:09.266 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-authtool: stdout AQBN365p1SnbDhAAnPyOpEw4cx6sf8UjgWvTGg==
2026-03-09T14:55:09.367 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-authtool: stdout AQBN365pekxiFBAAnNKSFo4uA3sjjcHt5O+ANQ==
2026-03-09T14:55:09.367 INFO:teuthology.orchestra.run.vm05.stdout:Creating initial monmap...
2026-03-09T14:55:09.469 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T14:55:09.469 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = pacific
2026-03-09T14:55:09.469 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to d952ca1a-1bc7-11f1-a184-f9dcb7ee7000
2026-03-09T14:55:09.469 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T14:55:09.469 INFO:teuthology.orchestra.run.vm05.stdout:monmaptool for vm05 [v2:192.168.123.105:3300,v1:192.168.123.105:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T14:55:09.469 INFO:teuthology.orchestra.run.vm05.stdout:setting min_mon_release = pacific
2026-03-09T14:55:09.469 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: set fsid to d952ca1a-1bc7-11f1-a184-f9dcb7ee7000
2026-03-09T14:55:09.469 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T14:55:09.469 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:55:09.469 INFO:teuthology.orchestra.run.vm05.stdout:Creating mon...
2026-03-09T14:55:09.611 INFO:teuthology.orchestra.run.vm05.stdout:create mon.vm05 on
2026-03-09T14:55:09.768 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target".
2026-03-09T14:55:09.890 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-09T14:55:10.021 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000.target → /etc/systemd/system/ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000.target.
2026-03-09T14:55:10.021 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000.target → /etc/systemd/system/ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000.target.
2026-03-09T14:55:10.171 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm05
2026-03-09T14:55:10.171 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to reset failed state of unit ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm05.service: Unit ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm05.service not loaded.
2026-03-09T14:55:10.311 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000.target.wants/ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm05.service → /etc/systemd/system/ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@.service.
2026-03-09T14:55:10.507 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present
2026-03-09T14:55:10.507 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to enable service . firewalld.service is not available
2026-03-09T14:55:10.507 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mon to start...
2026-03-09T14:55:10.507 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mon...
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout cluster:
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout id: d952ca1a-1bc7-11f1-a184-f9dcb7ee7000
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout services:
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum vm05 (age 0.190022s)
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout data:
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout pgs:
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.669+0000 7f41cc71e700 1 Processor -- start
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.671+0000 7f41cc71e700 1 -- start start
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.671+0000 7f41cc71e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4107f40 0x7f41c4108350 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.671+0000 7f41cc71e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f41c4108890 con 0x7f41c4107f40
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.672+0000 7f41ca4ba700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4107f40 0x7f41c4108350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.672+0000 7f41ca4ba700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4107f40 0x7f41c4108350 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48088/0 (socket says 192.168.123.105:48088)
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.672+0000 7f41ca4ba700 1 -- 192.168.123.105:0/2860147666 learned_addr learned my addr 192.168.123.105:0/2860147666 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.672+0000 7f41ca4ba700 1 -- 192.168.123.105:0/2860147666 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f41c41089d0 con 0x7f41c4107f40
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.673+0000 7f41ca4ba700 1 --2- 192.168.123.105:0/2860147666 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4107f40 0x7f41c4108350 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f41c0009cf0 tx=0x7f41c000b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b2893836d3712939 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.673+0000 7f41c94b8700 1 -- 192.168.123.105:0/2860147666 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f41c0004030 con 0x7f41c4107f40
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.673+0000 7f41c94b8700 1 -- 192.168.123.105:0/2860147666 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (secure 0 0 0) 0x7f41c0004190 con 0x7f41c4107f40
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.673+0000 7f41c94b8700 1 -- 192.168.123.105:0/2860147666 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f41c0004320 con 0x7f41c4107f40
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.674+0000 7f41cc71e700 1 -- 192.168.123.105:0/2860147666 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4107f40 msgr2=0x7f41c4108350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.674+0000 7f41cc71e700 1 --2- 192.168.123.105:0/2860147666 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4107f40 0x7f41c4108350 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f41c0009cf0 tx=0x7f41c000b0e0 comp rx=0 tx=0).stop
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.674+0000 7f41cc71e700 1 -- 192.168.123.105:0/2860147666 shutdown_connections
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.674+0000 7f41cc71e700 1 --2- 192.168.123.105:0/2860147666 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4107f40 0x7f41c4108350 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.674+0000 7f41cc71e700 1 -- 192.168.123.105:0/2860147666 >> 192.168.123.105:0/2860147666 conn(0x7f41c4103770 msgr2=0x7f41c4105b50 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.674+0000 7f41cc71e700 1 -- 192.168.123.105:0/2860147666 shutdown_connections
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.674+0000 7f41cc71e700 1 -- 192.168.123.105:0/2860147666 wait complete.
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.675+0000 7f41cc71e700 1 Processor -- start
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.675+0000 7f41cc71e700 1 -- start start
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.675+0000 7f41cc71e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4198820 0x7f41c4196eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.675+0000 7f41cc71e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f41c41973f0 con 0x7f41c4198820
2026-03-09T14:55:10.777 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.676+0000 7f41ca4ba700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4198820 0x7f41c4196eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.676+0000 7f41ca4ba700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4198820 0x7f41c4196eb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48102/0 (socket says 192.168.123.105:48102)
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.676+0000 7f41ca4ba700 1 -- 192.168.123.105:0/2967057335 learned_addr learned my addr 192.168.123.105:0/2967057335 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.676+0000 7f41ca4ba700 1 -- 192.168.123.105:0/2967057335 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f41c0009740 con 0x7f41c4198820
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.676+0000 7f41ca4ba700 1 --2- 192.168.123.105:0/2967057335 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4198820 0x7f41c4196eb0 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f41c0009130 tx=0x7f41c0004750 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.677+0000 7f41bb7fe700 1 -- 192.168.123.105:0/2967057335 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f41c0004030 con 0x7f41c4198820
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.677+0000 7f41bb7fe700 1 -- 192.168.123.105:0/2967057335 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (secure 0 0 0) 0x7f41c00036a0 con 0x7f41c4198820
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.677+0000 7f41bb7fe700 1 -- 192.168.123.105:0/2967057335 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f41c0003810 con 0x7f41c4198820
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.677+0000 7f41cc71e700 1 -- 192.168.123.105:0/2967057335 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f41c41975f0 con 0x7f41c4198820
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.677+0000 7f41cc71e700 1 -- 192.168.123.105:0/2967057335 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f41c4197a90 con 0x7f41c4198820
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.678+0000 7f41bb7fe700 1 -- 192.168.123.105:0/2967057335 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7f41c0022020 con 0x7f41c4198820
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.678+0000 7f41bb7fe700 1 -- 192.168.123.105:0/2967057335 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f41c001ba60 con 0x7f41c4198820
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.678+0000 7f41cc71e700 1 -- 192.168.123.105:0/2967057335 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f41c404f9e0 con 0x7f41c4198820
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.680+0000 7f41bb7fe700 1 -- 192.168.123.105:0/2967057335 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72422 (secure 0 0 0) 0x7f41c0044b00 con 0x7f41c4198820
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.721+0000 7f41cc71e700 1 -- 192.168.123.105:0/2967057335 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "status"} v 0) v1 -- 0x7f41c4062380 con 0x7f41c4198820
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.722+0000 7f41bb7fe700 1 -- 192.168.123.105:0/2967057335 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "status"}]=0 v0) v1 ==== 54+0+320 (secure 0 0 0) 0x7f41c0033030 con 0x7f41c4198820
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.724+0000 7f41cc71e700 1 -- 192.168.123.105:0/2967057335 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4198820 msgr2=0x7f41c4196eb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.724+0000 7f41cc71e700 1 --2- 192.168.123.105:0/2967057335 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4198820 0x7f41c4196eb0 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f41c0009130 tx=0x7f41c0004750 comp rx=0 tx=0).stop
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.724+0000 7f41cc71e700 1 -- 192.168.123.105:0/2967057335 shutdown_connections
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.724+0000 7f41cc71e700 1 --2- 192.168.123.105:0/2967057335 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41c4198820 0x7f41c4196eb0 secure :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f41c0009130 tx=0x7f41c0004750 comp rx=0 tx=0).stop
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.724+0000 7f41cc71e700 1 -- 192.168.123.105:0/2967057335 >> 192.168.123.105:0/2967057335 conn(0x7f41c4103770 msgr2=0x7f41c418f9a0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.725+0000 7f41cc71e700 1 -- 192.168.123.105:0/2967057335 shutdown_connections
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.725+0000 7f41cc71e700 1 -- 192.168.123.105:0/2967057335 wait complete.
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:mon is available
2026-03-09T14:55:10.778 INFO:teuthology.orchestra.run.vm05.stdout:Assimilating anything we can from ceph.conf...
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [global]
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout fsid = d952ca1a-1bc7-11f1-a184-f9dcb7ee7000
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.105:3300,v1:192.168.123.105:6789]
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [osd]
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_backfillfull_ratio = 0.85
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_full_ratio = 0.9
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_nearfull_ratio = 0.8
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.910+0000 7fe364d49700 1 Processor -- start
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.911+0000 7fe364d49700 1 -- start start
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.911+0000 7fe364d49700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 0x7fe360108330 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.911+0000 7fe364d49700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe360108870 con 0x7fe360107f20
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.912+0000 7fe35e59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 0x7fe360108330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.912+0000 7fe35e59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 0x7fe360108330 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48108/0 (socket says 192.168.123.105:48108)
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.912+0000 7fe35e59c700 1 -- 192.168.123.105:0/3494299790 learned_addr learned my addr 192.168.123.105:0/3494299790 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T14:55:11.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.913+0000 7fe35e59c700 1 -- 192.168.123.105:0/3494299790 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe3601089b0 con 0x7fe360107f20
2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.913+0000 7fe35e59c700 1 --2- 192.168.123.105:0/3494299790 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 0x7fe360108330 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fe348009cf0 tx=0x7fe34800b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=445b82c4165f655f server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.913+0000 7fe35d59a700 1 -- 192.168.123.105:0/3494299790 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe348004030 con 0x7fe360107f20
2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.913+0000 7fe35d59a700 1 -- 192.168.123.105:0/3494299790 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (secure 0 0 0) 0x7fe348004190 con 0x7fe360107f20
2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.913+0000 7fe35d59a700 1 -- 192.168.123.105:0/3494299790 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe348004320 con 0x7fe360107f20
2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.914+0000 7fe364d49700 1 -- 192.168.123.105:0/3494299790 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 msgr2=0x7fe360108330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr
2026-03-09T14:55:10.914+0000 7fe364d49700 1 --2- 192.168.123.105:0/3494299790 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 0x7fe360108330 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fe348009cf0 tx=0x7fe34800b0e0 comp rx=0 tx=0).stop 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.914+0000 7fe364d49700 1 -- 192.168.123.105:0/3494299790 shutdown_connections 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.914+0000 7fe364d49700 1 --2- 192.168.123.105:0/3494299790 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 0x7fe360108330 unknown :-1 s=CLOSED pgs=3 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.914+0000 7fe364d49700 1 -- 192.168.123.105:0/3494299790 >> 192.168.123.105:0/3494299790 conn(0x7fe36007b4b0 msgr2=0x7fe36007b8d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.914+0000 7fe364d49700 1 -- 192.168.123.105:0/3494299790 shutdown_connections 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.915+0000 7fe364d49700 1 -- 192.168.123.105:0/3494299790 wait complete. 
2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.915+0000 7fe364d49700 1 Processor -- start 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.915+0000 7fe364d49700 1 -- start start 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.916+0000 7fe364d49700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 0x7fe36019b2f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.916+0000 7fe364d49700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe36019b830 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.916+0000 7fe35e59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 0x7fe36019b2f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.916+0000 7fe35e59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 0x7fe36019b2f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48122/0 (socket says 192.168.123.105:48122) 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.916+0000 7fe35e59c700 1 -- 192.168.123.105:0/2486305682 learned_addr learned my addr 192.168.123.105:0/2486305682 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:11.014 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.916+0000 7fe35e59c700 1 -- 192.168.123.105:0/2486305682 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe348009740 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.917+0000 7fe35e59c700 1 --2- 192.168.123.105:0/2486305682 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 0x7fe36019b2f0 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7fe360108c50 tx=0x7fe348004750 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.917+0000 7fe3577fe700 1 -- 192.168.123.105:0/2486305682 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe348004030 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.917+0000 7fe3577fe700 1 -- 192.168.123.105:0/2486305682 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (secure 0 0 0) 0x7fe3480036a0 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.917+0000 7fe364d49700 1 -- 192.168.123.105:0/2486305682 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe36019ba30 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.917+0000 7fe364d49700 1 -- 192.168.123.105:0/2486305682 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe36019bed0 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.918+0000 7fe3577fe700 1 -- 
192.168.123.105:0/2486305682 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe348004030 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.918+0000 7fe3577fe700 1 -- 192.168.123.105:0/2486305682 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7fe348003b10 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.918+0000 7fe3577fe700 1 -- 192.168.123.105:0/2486305682 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7fe348025440 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.918+0000 7fe364d49700 1 -- 192.168.123.105:0/2486305682 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe360194b00 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.920+0000 7fe3577fe700 1 -- 192.168.123.105:0/2486305682 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72422 (secure 0 0 0) 0x7fe348044b00 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.958+0000 7fe364d49700 1 -- 192.168.123.105:0/2486305682 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) v1 -- 0x7fe360062380 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.966+0000 7fe3577fe700 1 -- 192.168.123.105:0/2486305682 <== mon.0 v2:192.168.123.105:3300/0 7 ==== config(27 keys) v1 ==== 1037+0+0 (secure 0 0 0) 0x7fe3480036a0 con 
0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.966+0000 7fe3577fe700 1 -- 192.168.123.105:0/2486305682 <== mon.0 v2:192.168.123.105:3300/0 8 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v2) v1 ==== 70+0+435 (secure 0 0 0) 0x7fe348033030 con 0x7fe360107f20 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.968+0000 7fe364d49700 1 -- 192.168.123.105:0/2486305682 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 msgr2=0x7fe36019b2f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.968+0000 7fe364d49700 1 --2- 192.168.123.105:0/2486305682 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 0x7fe36019b2f0 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7fe360108c50 tx=0x7fe348004750 comp rx=0 tx=0).stop 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.968+0000 7fe364d49700 1 -- 192.168.123.105:0/2486305682 shutdown_connections 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.968+0000 7fe364d49700 1 --2- 192.168.123.105:0/2486305682 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe360107f20 0x7fe36019b2f0 unknown :-1 s=CLOSED pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.968+0000 7fe364d49700 1 -- 192.168.123.105:0/2486305682 >> 192.168.123.105:0/2486305682 conn(0x7fe36007b4b0 msgr2=0x7fe36018f980 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.968+0000 7fe364d49700 1 -- 192.168.123.105:0/2486305682 
shutdown_connections 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:10.968+0000 7fe364d49700 1 -- 192.168.123.105:0/2486305682 wait complete. 2026-03-09T14:55:11.014 INFO:teuthology.orchestra.run.vm05.stdout:Generating new minimal ceph.conf... 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.124+0000 7f03c8b4f700 1 Processor -- start 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.124+0000 7f03c8b4f700 1 -- start start 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.125+0000 7f03c8b4f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 0x7f03c4108330 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.125+0000 7f03c8b4f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f03c4108870 con 0x7f03c4107f20 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.125+0000 7f03c259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 0x7f03c4108330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.125+0000 7f03c259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 0x7f03c4108330 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48132/0 (socket says 192.168.123.105:48132) 2026-03-09T14:55:11.228 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.125+0000 7f03c259c700 1 -- 192.168.123.105:0/2385717235 learned_addr learned my addr 192.168.123.105:0/2385717235 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.125+0000 7f03c259c700 1 -- 192.168.123.105:0/2385717235 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f03c41089b0 con 0x7f03c4107f20 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.125+0000 7f03c259c700 1 --2- 192.168.123.105:0/2385717235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 0x7f03c4108330 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f03b4009a90 tx=0x7f03b4009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=1661939466b9ff09 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.126+0000 7f03c1d9b700 1 -- 192.168.123.105:0/2385717235 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f03b4004030 con 0x7f03c4107f20 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.126+0000 7f03c1d9b700 1 -- 192.168.123.105:0/2385717235 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(27 keys) v1 ==== 1037+0+0 (secure 0 0 0) 0x7f03b400b7e0 con 0x7f03c4107f20 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.126+0000 7f03c1d9b700 1 -- 192.168.123.105:0/2385717235 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f03b4003920 con 0x7f03c4107f20 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.126+0000 7f03c8b4f700 1 -- 
192.168.123.105:0/2385717235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 msgr2=0x7f03c4108330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.126+0000 7f03c8b4f700 1 --2- 192.168.123.105:0/2385717235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 0x7f03c4108330 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f03b4009a90 tx=0x7f03b4009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.127+0000 7f03c8b4f700 1 -- 192.168.123.105:0/2385717235 shutdown_connections 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.127+0000 7f03c8b4f700 1 --2- 192.168.123.105:0/2385717235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 0x7f03c4108330 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.127+0000 7f03c8b4f700 1 -- 192.168.123.105:0/2385717235 >> 192.168.123.105:0/2385717235 conn(0x7f03c407b4b0 msgr2=0x7f03c407b8d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.127+0000 7f03c8b4f700 1 -- 192.168.123.105:0/2385717235 shutdown_connections 2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.127+0000 7f03c8b4f700 1 -- 192.168.123.105:0/2385717235 wait complete. 
2026-03-09T14:55:11.228 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.128+0000 7f03c8b4f700 1 Processor -- start 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.128+0000 7f03c8b4f700 1 -- start start 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.128+0000 7f03c8b4f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 0x7f03c419ba80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.128+0000 7f03c8b4f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f03c419bfc0 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.128+0000 7f03c259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 0x7f03c419ba80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.128+0000 7f03c259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 0x7f03c419ba80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48134/0 (socket says 192.168.123.105:48134) 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.128+0000 7f03c259c700 1 -- 192.168.123.105:0/3092256820 learned_addr learned my addr 192.168.123.105:0/3092256820 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:11.229 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.128+0000 7f03c259c700 1 -- 192.168.123.105:0/3092256820 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f03b4009740 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.128+0000 7f03c259c700 1 --2- 192.168.123.105:0/3092256820 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 0x7f03c419ba80 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f03b400be80 tx=0x7f03b4003c40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.129+0000 7f03bbfff700 1 -- 192.168.123.105:0/3092256820 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f03b4004060 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.129+0000 7f03c8b4f700 1 -- 192.168.123.105:0/3092256820 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f03c419c1c0 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.129+0000 7f03c8b4f700 1 -- 192.168.123.105:0/3092256820 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f03c419c660 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.129+0000 7f03bbfff700 1 -- 192.168.123.105:0/3092256820 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(27 keys) v1 ==== 1037+0+0 (secure 0 0 0) 0x7f03b402b430 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.129+0000 7f03bbfff700 1 
-- 192.168.123.105:0/3092256820 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f03b401a430 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.129+0000 7f03bbfff700 1 -- 192.168.123.105:0/3092256820 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7f03b401a970 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.129+0000 7f03bbfff700 1 -- 192.168.123.105:0/3092256820 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f03b40193c0 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.131+0000 7f03c8b4f700 1 -- 192.168.123.105:0/3092256820 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f03a4005320 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.132+0000 7f03bbfff700 1 -- 192.168.123.105:0/3092256820 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72422 (secure 0 0 0) 0x7f03b402b9f0 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.171+0000 7f03c8b4f700 1 -- 192.168.123.105:0/3092256820 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) v1 -- 0x7f03a4005190 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.171+0000 7f03bbfff700 1 -- 192.168.123.105:0/3092256820 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v2) 
v1 ==== 76+0+181 (secure 0 0 0) 0x7f03b402b5a0 con 0x7f03c4107f20 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.172+0000 7f03c8b4f700 1 -- 192.168.123.105:0/3092256820 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 msgr2=0x7f03c419ba80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.172+0000 7f03c8b4f700 1 --2- 192.168.123.105:0/3092256820 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 0x7f03c419ba80 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f03b400be80 tx=0x7f03b4003c40 comp rx=0 tx=0).stop 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.173+0000 7f03c8b4f700 1 -- 192.168.123.105:0/3092256820 shutdown_connections 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.173+0000 7f03c8b4f700 1 --2- 192.168.123.105:0/3092256820 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03c4107f20 0x7f03c419ba80 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.173+0000 7f03c8b4f700 1 -- 192.168.123.105:0/3092256820 >> 192.168.123.105:0/3092256820 conn(0x7f03c407b4b0 msgr2=0x7f03c4105620 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.173+0000 7f03c8b4f700 1 -- 192.168.123.105:0/3092256820 shutdown_connections 2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.173+0000 7f03c8b4f700 1 -- 192.168.123.105:0/3092256820 wait complete. 
2026-03-09T14:55:11.229 INFO:teuthology.orchestra.run.vm05.stdout:Restarting the monitor... 2026-03-09T14:55:11.507 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 bash[50531]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05 2026-03-09T14:55:11.647 INFO:teuthology.orchestra.run.vm05.stdout:Setting public_network to 192.168.123.0/24 in global config section 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm05.service: Deactivated successfully. 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 systemd[1]: Stopped Ceph mon.vm05 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 systemd[1]: Starting Ceph mon.vm05 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 podman[50597]: 2026-03-09 14:55:11.606601549 +0000 UTC m=+0.015032258 container create c83e96b622518bee42ad8f809a026a817b70dbacd70f6f3ad1494d52d8c535e1 (image=quay.io/ceph/ceph:v18.2.0, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05, org.label-schema.build-date=20231212, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GIT_BRANCH=HEAD, CEPH_POINT_RELEASE=-18.2.0, GIT_CLEAN=True, org.label-schema.name=CentOS Stream 8 Base Image, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, io.buildah.version=1.29.1, ceph=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=HEAD, org.label-schema.license=GPLv2) 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 podman[50597]: 2026-03-09 14:55:11.638718357 +0000 UTC m=+0.047149066 container init c83e96b622518bee42ad8f809a026a817b70dbacd70f6f3ad1494d52d8c535e1 (image=quay.io/ceph/ceph:v18.2.0, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05, 
org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20231212, org.label-schema.schema-version=1.0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.label-schema.vendor=CentOS, GIT_BRANCH=HEAD, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=-18.2.0, org.label-schema.name=CentOS Stream 8 Base Image, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, RELEASE=HEAD, io.buildah.version=1.29.1) 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 podman[50597]: 2026-03-09 14:55:11.641277901 +0000 UTC m=+0.049708610 container start c83e96b622518bee42ad8f809a026a817b70dbacd70f6f3ad1494d52d8c535e1 (image=quay.io/ceph/ceph:v18.2.0, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05, GIT_CLEAN=True, org.label-schema.name=CentOS Stream 8 Base Image, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, RELEASE=HEAD, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=-18.2.0, io.buildah.version=1.29.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20231212, GIT_BRANCH=HEAD, GIT_REPO=https://github.com/ceph/ceph-container.git) 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 bash[50597]: c83e96b622518bee42ad8f809a026a817b70dbacd70f6f3ad1494d52d8c535e1 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 podman[50597]: 2026-03-09 14:55:11.600287386 +0000 UTC m=+0.008718105 image pull dc2bc1663786ad5862ef6efcff18243ab7dfeda0bcb911ddf757a0d20feca946 quay.io/ceph/ceph:v18.2.0 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 systemd[1]: Started Ceph mon.vm05 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 
2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: set uid:gid to 167:167 (ceph:ceph) 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable), process ceph-mon, pid 2 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: pidfile_write: ignore empty --pid-file 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: load: jerasure load: lrc 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: RocksDB version: 7.9.2 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Git sha 0 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Compile date 2023-08-03 19:21:13 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: DB SUMMARY 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: DB Session ID: CYTP72441SX4DY3WUT2U 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: CURRENT file: CURRENT 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: IDENTITY file: IDENTITY 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm05/store.db dir, Total Num: 1, files: 000008.sst 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
14:55:11 vm05 ceph-mon[50611]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm05/store.db: 000009.log size: 89048 ; 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.error_if_exists: 0 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.create_if_missing: 0 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.paranoid_checks: 1 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.env: 0x555591ca4720 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.fs: PosixFileSystem 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.info_log: 0x55559403d340 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_file_opening_threads: 16 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.statistics: (nil) 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.use_fsync: 0 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: 
Options.max_log_file_size: 0 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T14:55:11.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.keep_log_file_num: 1000 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.recycle_log_file_num: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.allow_fallocate: 1 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.allow_mmap_reads: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.allow_mmap_writes: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.use_direct_reads: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.create_missing_column_families: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.db_log_dir: 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.wal_dir: 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 
vm05 ceph-mon[50611]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.advise_random_on_open: 1 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.db_write_buffer_size: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.write_buffer_manager: 0x5555932cc5a0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.rate_limiter: (nil) 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: 
Options.wal_recovery_mode: 2 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.enable_thread_tracking: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.enable_pipelined_write: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.unordered_write: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.row_cache: None 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.wal_filter: None 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.allow_ingest_behind: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.two_write_queues: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.manual_wal_flush: 0 2026-03-09T14:55:11.777 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.wal_compression: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.atomic_flush: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.log_readahead_size: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.best_efforts_recovery: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.allow_data_in_errors: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.db_host_id: __hostname__ 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 
ceph-mon[50611]: rocksdb: Options.max_background_jobs: 2 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_background_compactions: -1 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_subcompactions: 1 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_total_wal_size: 0 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T14:55:11.777 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_open_files: -1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.bytes_per_sync: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: 
Options.wal_bytes_per_sync: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compaction_readahead_size: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_background_flushes: -1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Compression algorithms supported: 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: kZSTD supported: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: kXpressCompression supported: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: kLZ4HCCompression supported: 1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: kZlibCompression supported: 1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: kSnappyCompression supported: 1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: kLZ4Compression supported: 1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: kBZip2Compression supported: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
14:55:11 vm05 ceph-mon[50611]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm05/store.db/MANIFEST-000010 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.merge_operator: 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compaction_filter: None 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compaction_filter_factory: None 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.sst_partitioner_factory: None 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55559403d460) 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: cache_index_and_filter_blocks: 1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T14:55:11.778 
INFO:journalctl@ceph.mon.vm05.vm05.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: pin_top_level_index_and_filter: 1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: index_type: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: data_block_index_type: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: index_shortening: 1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: checksum: 4 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: no_block_cache: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_cache: 0x55559334f350 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_cache_name: BinnedLRUCache 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_cache_options: 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: capacity : 536870912 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: num_shard_bits : 4 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: strict_capacity_limit : 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: high_pri_pool_ratio: 0.000 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_cache_compressed: (nil) 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: persistent_cache: (nil) 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_size: 4096 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_size_deviation: 10 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_restart_interval: 16 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: index_block_restart_interval: 1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 
metadata_block_size: 4096 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: partition_filters: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: use_delta_encoding: 1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: filter_policy: bloomfilter 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: whole_key_filtering: 1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: verify_compression: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: read_amp_bytes_per_bit: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: format_version: 5 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: enable_index_compression: 1 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_align: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: max_auto_readahead_size: 262144 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: prepopulate_block_cache: 0 2026-03-09T14:55:11.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout: initial_auto_readahead_size: 8192 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout: num_file_reads_for_auto_readahead: 2 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.write_buffer_size: 33554432 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_write_buffer_number: 2 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compression: NoCompression 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.bottommost_compression: Disabled 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.prefix_extractor: nullptr 2026-03-09T14:55:11.779 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.num_levels: 7 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compression_opts.level: 32767 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compression_opts.strategy: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compression_opts.enabled: false 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.target_file_size_base: 67108864 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T14:55:11.779 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.arena_block_size: 1048576 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.disable_auto_compactions: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 
ceph-mon[50611]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.table_properties_collectors: 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.inplace_update_support: 0 2026-03-09T14:55:11.779 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 
ceph-mon[50611]: rocksdb: Options.bloom_locality: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.max_successive_merges: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.paranoid_file_checks: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.force_consistency_checks: 1 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.report_bg_io_stats: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.ttl: 2592000 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.enable_blob_files: false 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.min_blob_size: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.blob_file_size: 268435456 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T14:55:11.780 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.blob_file_starting_level: 0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm05/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4f82c324-c7ea-4fb7-862b-89fcdd638479 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773068111672435, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
14:55:11 vm05 ceph-mon[50611]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773068111673733, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 84711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 287, "table_properties": {"data_size": 82789, "index_size": 209, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 13288, "raw_average_key_size": 51, "raw_value_size": 75614, "raw_average_value_size": 293, "num_data_blocks": 9, "num_entries": 258, "num_filter_entries": 258, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773068111, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4f82c324-c7ea-4fb7-862b-89fcdd638479", "db_session_id": "CYTP72441SX4DY3WUT2U", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773068111673777, "job": 1, "event": "recovery_finished"} 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: 
[db/version_set.cc:5047] Creating manifest 15 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm05/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5555933ec000 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: DB pointer 0x5555933d8000 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ** DB Stats ** 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 
Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ** Compaction Stats [default] ** 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: L0 2/0 84.57 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 74.8 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Sum 2/0 84.57 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 74.8 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 74.8 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ** Compaction Stats [default] ** 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 74.8 0.00 
0.00 1 0.001 0 0 0.0 0.0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T14:55:11.780 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Cumulative compaction: 0.00 GB write, 13.36 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Interval compaction: 0.00 GB write, 13.36 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Block cache BinnedLRUCache@0x55559334f350#2 capacity: 512.00 MB usage: 1.30 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-06 secs_since: 0 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Block cache entry stats(count,size,portion): 
FilterBlock(2,0.89 KB,0.000169873%) IndexBlock(2,0.41 KB,7.7486e-05%) Misc(1,0.00 KB,0%) 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: starting mon.vm05 rank 0 at public addrs [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] at bind addrs [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon_data /var/lib/ceph/mon/ceph-vm05 fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: mon.vm05@-1(???) e1 preinit fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: mon.vm05@-1(???).mds e1 new map 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: mon.vm05@-1(???).mds e1 print_map 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: e1 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: legacy client fscid: -1 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout: No filesystems configured 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: mon.vm05@-1(???).osd e1 crush map has features 3314932999778484224, 
adjusting msgr requires 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: mon.vm05@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: mon.vm05@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: mon.vm05@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: mon.vm05@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: mon.vm05 is new leader, mons vm05 in quorum (ranks 0) 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: monmap e1: 1 mons at {vm05=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0]} removed_ranks: {} 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: fsmap 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: osdmap e1: 0 total, 0 up, 0 in 2026-03-09T14:55:11.781 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:11 vm05 ceph-mon[50611]: mgrmap e1: no daemons active 2026-03-09T14:55:11.864 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.786+0000 7f5faa235700 1 Processor -- start 2026-03-09T14:55:11.864 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.787+0000 7f5faa235700 1 -- start start 2026-03-09T14:55:11.864 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.787+0000 7f5faa235700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 0x7f5fa4108350 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:11.864 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.787+0000 7f5faa235700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5fa4108890 con 0x7f5fa4107f40 2026-03-09T14:55:11.864 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.788+0000 7f5fa37fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 0x7f5fa4108350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:11.865 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.788+0000 7f5fa37fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 0x7f5fa4108350 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48142/0 (socket says 192.168.123.105:48142) 2026-03-09T14:55:11.865 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.788+0000 7f5fa37fe700 1 -- 192.168.123.105:0/1846740336 learned_addr learned my addr 192.168.123.105:0/1846740336 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:11.865 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.788+0000 7f5fa37fe700 1 -- 192.168.123.105:0/1846740336 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5fa41089d0 con 0x7f5fa4107f40 2026-03-09T14:55:11.865 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.788+0000 7f5fa37fe700 1 --2- 192.168.123.105:0/1846740336 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 0x7f5fa4108350 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f5f8c009cf0 tx=0x7f5f8c00b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c72d829edd023a1c server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:11.865 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.789+0000 7f5fa27fc700 1 -- 192.168.123.105:0/1846740336 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5f8c004030 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.789+0000 7f5fa27fc700 1 -- 192.168.123.105:0/1846740336 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(27 keys) v1 ==== 1037+0+0 (secure 0 0 0) 0x7f5f8c00b810 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.789+0000 7f5fa27fc700 1 -- 192.168.123.105:0/1846740336 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5f8c0039c0 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.789+0000 7f5faa235700 1 -- 192.168.123.105:0/1846740336 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 msgr2=0x7f5fa4108350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.789+0000 7f5faa235700 1 --2- 192.168.123.105:0/1846740336 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 0x7f5fa4108350 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f5f8c009cf0 tx=0x7f5f8c00b0e0 comp rx=0 tx=0).stop 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.789+0000 7f5faa235700 1 -- 192.168.123.105:0/1846740336 
shutdown_connections 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.789+0000 7f5faa235700 1 --2- 192.168.123.105:0/1846740336 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 0x7f5fa4108350 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.789+0000 7f5faa235700 1 -- 192.168.123.105:0/1846740336 >> 192.168.123.105:0/1846740336 conn(0x7f5fa4103770 msgr2=0x7f5fa4105b50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.789+0000 7f5faa235700 1 -- 192.168.123.105:0/1846740336 shutdown_connections 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.790+0000 7f5faa235700 1 -- 192.168.123.105:0/1846740336 wait complete. 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.790+0000 7f5faa235700 1 Processor -- start 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.790+0000 7f5faa235700 1 -- start start 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.790+0000 7f5faa235700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 0x7f5fa419bae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.790+0000 7f5faa235700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5fa419c020 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.790+0000 7f5fa37fe700 1 
--2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 0x7f5fa419bae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.791+0000 7f5fa37fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 0x7f5fa419bae0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48148/0 (socket says 192.168.123.105:48148) 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.791+0000 7f5fa37fe700 1 -- 192.168.123.105:0/2789517899 learned_addr learned my addr 192.168.123.105:0/2789517899 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.791+0000 7f5fa37fe700 1 -- 192.168.123.105:0/2789517899 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5f8c009740 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.791+0000 7f5fa37fe700 1 --2- 192.168.123.105:0/2789517899 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 0x7f5fa419bae0 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f5f8c003e60 tx=0x7f5f8c003f40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.791+0000 7f5fa0ff9700 1 -- 192.168.123.105:0/2789517899 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5f8c004110 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.791+0000 7f5fa0ff9700 1 -- 192.168.123.105:0/2789517899 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(27 keys) v1 ==== 1037+0+0 (secure 0 0 0) 0x7f5f8c01a460 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.791+0000 7f5fa0ff9700 1 -- 192.168.123.105:0/2789517899 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5f8c011420 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.791+0000 7f5faa235700 1 -- 192.168.123.105:0/2789517899 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5fa419c220 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.791+0000 7f5faa235700 1 -- 192.168.123.105:0/2789517899 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5fa419c6c0 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.792+0000 7f5fa0ff9700 1 -- 192.168.123.105:0/2789517899 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7f5f8c011580 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.792+0000 7f5fa0ff9700 1 -- 192.168.123.105:0/2789517899 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f5f8c025700 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.792+0000 7f5faa235700 1 -- 192.168.123.105:0/2789517899 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 
0) v1 -- 0x7f5fa404f9e0 con 0x7f5fa4107f40 2026-03-09T14:55:11.866 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.794+0000 7f5fa0ff9700 1 -- 192.168.123.105:0/2789517899 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72422 (secure 0 0 0) 0x7f5f8c020020 con 0x7f5fa4107f40 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.830+0000 7f5faa235700 1 -- 192.168.123.105:0/2789517899 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command([{prefix=config set, name=public_network}] v 0) v1 -- 0x7f5fa4062380 con 0x7f5fa4107f40 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.832+0000 7f5fa0ff9700 1 -- 192.168.123.105:0/2789517899 <== mon.0 v2:192.168.123.105:3300/0 7 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f5f8c004570 con 0x7f5fa4107f40 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.832+0000 7f5fa0ff9700 1 -- 192.168.123.105:0/2789517899 <== mon.0 v2:192.168.123.105:3300/0 8 ==== mon_command_ack([{prefix=config set, name=public_network}]=0 v3) v1 ==== 130+0+0 (secure 0 0 0) 0x7f5f8c025c70 con 0x7f5fa4107f40 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.834+0000 7f5faa235700 1 -- 192.168.123.105:0/2789517899 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 msgr2=0x7f5fa419bae0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.834+0000 7f5faa235700 1 --2- 192.168.123.105:0/2789517899 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 0x7f5fa419bae0 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f5f8c003e60 tx=0x7f5f8c003f40
comp rx=0 tx=0).stop 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.834+0000 7f5faa235700 1 -- 192.168.123.105:0/2789517899 shutdown_connections 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.834+0000 7f5faa235700 1 --2- 192.168.123.105:0/2789517899 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5fa4107f40 0x7f5fa419bae0 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.834+0000 7f5faa235700 1 -- 192.168.123.105:0/2789517899 >> 192.168.123.105:0/2789517899 conn(0x7f5fa4103770 msgr2=0x7f5fa4105410 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.835+0000 7f5faa235700 1 -- 192.168.123.105:0/2789517899 shutdown_connections 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:11.835+0000 7f5faa235700 1 -- 192.168.123.105:0/2789517899 wait complete. 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:55:11.867 INFO:teuthology.orchestra.run.vm05.stdout:Creating mgr... 2026-03-09T14:55:11.868 INFO:teuthology.orchestra.run.vm05.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-09T14:55:11.868 INFO:teuthology.orchestra.run.vm05.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-09T14:55:11.868 INFO:teuthology.orchestra.run.vm05.stdout:Verifying port 0.0.0.0:8443 ... 
2026-03-09T14:55:12.031 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mgr.vm05.lhsexd 2026-03-09T14:55:12.031 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to reset failed state of unit ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mgr.vm05.lhsexd.service: Unit ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mgr.vm05.lhsexd.service not loaded. 2026-03-09T14:55:12.162 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000.target.wants/ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mgr.vm05.lhsexd.service → /etc/systemd/system/ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@.service. 2026-03-09T14:55:12.369 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present 2026-03-09T14:55:12.369 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T14:55:12.369 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present 2026-03-09T14:55:12.369 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to open ports <[9283, 8765, 8443]>. firewalld.service is not available 2026-03-09T14:55:12.369 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr to start... 2026-03-09T14:55:12.369 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr... 
2026-03-09T14:55:12.626 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "d952ca1a-1bc7-11f1-a184-f9dcb7ee7000", 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "vm05" 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "reef", 2026-03-09T14:55:12.627 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T14:55:12.627 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T14:55:12.627 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T14:55:10.534547+0000", 
2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.533+0000 7f84ba8e2700 1 Processor -- start 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.533+0000 7f84ba8e2700 1 -- start start 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.534+0000 7f84ba8e2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b410ccf0 0x7f84b410f0d0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.534+0000 7f84ba8e2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f84b4080500 con 0x7f84b410ccf0 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.534+0000 7f84b3fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b410ccf0 0x7f84b410f0d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.535+0000 7f84b3fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b410ccf0 0x7f84b410f0d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:48178/0 (socket says 192.168.123.105:48178) 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.535+0000 7f84b3fff700 1 -- 192.168.123.105:0/1168828326 learned_addr learned my addr 192.168.123.105:0/1168828326 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.535+0000 7f84b3fff700 1 -- 192.168.123.105:0/1168828326 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f84b410f610 con 0x7f84b410ccf0 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.535+0000 7f84b3fff700 1 --2- 192.168.123.105:0/1168828326 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b410ccf0 0x7f84b410f0d0 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f84a8009a90 tx=0x7f84a8009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=59b7d055febf9d7c server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.535+0000 7f84b2ffd700 1 -- 192.168.123.105:0/1168828326 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f84a8004030 con 0x7f84b410ccf0 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.535+0000 7f84b2ffd700 1 -- 192.168.123.105:0/1168828326 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f84a800b7e0 con 0x7f84b410ccf0 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.536+0000 7f84ba8e2700 1 -- 192.168.123.105:0/1168828326 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b410ccf0 msgr2=0x7f84b410f0d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:12.628 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.536+0000 7f84ba8e2700 1 --2- 192.168.123.105:0/1168828326 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b410ccf0 0x7f84b410f0d0 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f84a8009a90 tx=0x7f84a8009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.536+0000 7f84ba8e2700 1 -- 192.168.123.105:0/1168828326 shutdown_connections 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.536+0000 7f84ba8e2700 1 --2- 192.168.123.105:0/1168828326 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b410ccf0 0x7f84b410f0d0 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.536+0000 7f84ba8e2700 1 -- 192.168.123.105:0/1168828326 >> 192.168.123.105:0/1168828326 conn(0x7f84b407b4b0 msgr2=0x7f84b407d8e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.536+0000 7f84ba8e2700 1 -- 192.168.123.105:0/1168828326 shutdown_connections 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.536+0000 7f84ba8e2700 1 -- 192.168.123.105:0/1168828326 wait complete. 
2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.537+0000 7f84ba8e2700 1 Processor -- start 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.537+0000 7f84ba8e2700 1 -- start start 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.537+0000 7f84ba8e2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b41a89e0 0x7f84b41a8df0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.537+0000 7f84ba8e2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f84b4080500 con 0x7f84b41a89e0 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.537+0000 7f84b3fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b41a89e0 0x7f84b41a8df0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.537+0000 7f84b3fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b41a89e0 0x7f84b41a8df0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48184/0 (socket says 192.168.123.105:48184) 2026-03-09T14:55:12.628 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.537+0000 7f84b3fff700 1 -- 192.168.123.105:0/2367184137 learned_addr learned my addr 192.168.123.105:0/2367184137 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:12.629 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.538+0000 7f84b3fff700 1 -- 192.168.123.105:0/2367184137 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f84a8009740 con 0x7f84b41a89e0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.538+0000 7f84b3fff700 1 --2- 192.168.123.105:0/2367184137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b41a89e0 0x7f84b41a8df0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f84a800bd00 tx=0x7f84a800bde0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.540+0000 7f84b17fa700 1 -- 192.168.123.105:0/2367184137 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f84a801a670 con 0x7f84b41a89e0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.540+0000 7f84ba8e2700 1 -- 192.168.123.105:0/2367184137 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f84b41a9330 con 0x7f84b41a89e0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.540+0000 7f84ba8e2700 1 -- 192.168.123.105:0/2367184137 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f84b41abfc0 con 0x7f84b41a89e0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.540+0000 7f84b17fa700 1 -- 192.168.123.105:0/2367184137 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f84a801ac70 con 0x7f84b41a89e0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.540+0000 7f84b17fa700 1 
-- 192.168.123.105:0/2367184137 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f84a80044e0 con 0x7f84b41a89e0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.540+0000 7f84ba8e2700 1 -- 192.168.123.105:0/2367184137 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f84b404fa50 con 0x7f84b41a89e0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.541+0000 7f84b17fa700 1 -- 192.168.123.105:0/2367184137 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7f84a802c430 con 0x7f84b41a89e0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.542+0000 7f84b17fa700 1 -- 192.168.123.105:0/2367184137 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f84a8011760 con 0x7f84b41a89e0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.542+0000 7f84b17fa700 1 -- 192.168.123.105:0/2367184137 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72422 (secure 0 0 0) 0x7f84a801ade0 con 0x7f84b41a89e0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.586+0000 7f84ba8e2700 1 -- 192.168.123.105:0/2367184137 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1 -- 0x7f84b41ac310 con 0x7f84b41a89e0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.587+0000 7f84b17fa700 1 -- 192.168.123.105:0/2367184137 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 
v0) v1 ==== 79+0+1241 (secure 0 0 0) 0x7f84a8004640 con 0x7f84b41a89e0 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.590+0000 7f849affd700 1 -- 192.168.123.105:0/2367184137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b41a89e0 msgr2=0x7f84b41a8df0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.590+0000 7f849affd700 1 --2- 192.168.123.105:0/2367184137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b41a89e0 0x7f84b41a8df0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f84a800bd00 tx=0x7f84a800bde0 comp rx=0 tx=0).stop 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.590+0000 7f849affd700 1 -- 192.168.123.105:0/2367184137 shutdown_connections 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.590+0000 7f849affd700 1 --2- 192.168.123.105:0/2367184137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84b41a89e0 0x7f84b41a8df0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.590+0000 7f849affd700 1 -- 192.168.123.105:0/2367184137 >> 192.168.123.105:0/2367184137 conn(0x7f84b407b4b0 msgr2=0x7f84b407d160 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.590+0000 7f849affd700 1 -- 192.168.123.105:0/2367184137 shutdown_connections 2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:12.590+0000 7f849affd700 1 -- 192.168.123.105:0/2367184137 wait complete. 
2026-03-09T14:55:12.629 INFO:teuthology.orchestra.run.vm05.stdout:mgr not available, waiting (1/15)... 2026-03-09T14:55:13.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:12 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/2789517899' entity='client.admin' 2026-03-09T14:55:13.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:12 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/2367184137' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T14:55:14.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T14:55:14.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T14:55:14.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "d952ca1a-1bc7-11f1-a184-f9dcb7ee7000", 2026-03-09T14:55:14.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T14:55:14.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T14:55:14.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T14:55:14.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T14:55:14.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:14.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T14:55:14.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T14:55:14.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0 2026-03-09T14:55:14.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "vm05" 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: 
stdout ], 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "reef", 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T14:55:14.845 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 
2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T14:55:10.534547+0000", 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.757+0000 7f92adbb9700 1 Processor -- start 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.757+0000 7f92adbb9700 1 -- start start 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.758+0000 7f92adbb9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a8072b50 0x7f92a8071050 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.758+0000 7f92adbb9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f92a8071590 con 0x7f92a8072b50 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.758+0000 7f92a77fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a8072b50 0x7f92a8071050 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:14.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.758+0000 7f92a77fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a8072b50 0x7f92a8071050 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48188/0 (socket says 192.168.123.105:48188) 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.758+0000 7f92a77fe700 1 -- 192.168.123.105:0/284320459 learned_addr learned my addr 192.168.123.105:0/284320459 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.759+0000 7f92a77fe700 1 -- 192.168.123.105:0/284320459 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f92a80716d0 con 0x7f92a8072b50 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.759+0000 7f92a77fe700 1 --2- 192.168.123.105:0/284320459 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a8072b50 0x7f92a8071050 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f9298009a90 tx=0x7f9298009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=afe2f30579f8d1b2 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.760+0000 7f92a67fc700 1 -- 192.168.123.105:0/284320459 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9298004030 con 0x7f92a8072b50 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.760+0000 7f92a67fc700 1 -- 192.168.123.105:0/284320459 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 
keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f929800b7e0 con 0x7f92a8072b50 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.760+0000 7f92adbb9700 1 -- 192.168.123.105:0/284320459 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a8072b50 msgr2=0x7f92a8071050 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.760+0000 7f92adbb9700 1 --2- 192.168.123.105:0/284320459 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a8072b50 0x7f92a8071050 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f9298009a90 tx=0x7f9298009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.760+0000 7f92adbb9700 1 -- 192.168.123.105:0/284320459 shutdown_connections 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.760+0000 7f92adbb9700 1 --2- 192.168.123.105:0/284320459 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a8072b50 0x7f92a8071050 secure :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f9298009a90 tx=0x7f9298009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.760+0000 7f92adbb9700 1 -- 192.168.123.105:0/284320459 >> 192.168.123.105:0/284320459 conn(0x7f92a806c970 msgr2=0x7f92a806eda0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.762+0000 7f92adbb9700 1 -- 192.168.123.105:0/284320459 shutdown_connections 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.762+0000 7f92adbb9700 1 -- 192.168.123.105:0/284320459 wait complete. 
2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.762+0000 7f92adbb9700 1 Processor -- start 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.762+0000 7f92adbb9700 1 -- start start 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.763+0000 7f92adbb9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a811b110 0x7f92a811b520 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.763+0000 7f92adbb9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f92a8071590 con 0x7f92a811b110 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.763+0000 7f92a77fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a811b110 0x7f92a811b520 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.764+0000 7f92a77fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a811b110 0x7f92a811b520 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48198/0 (socket says 192.168.123.105:48198) 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.764+0000 7f92a77fe700 1 -- 192.168.123.105:0/2735313099 learned_addr learned my addr 192.168.123.105:0/2735313099 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:14.846 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.764+0000 7f92a77fe700 1 -- 192.168.123.105:0/2735313099 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9298009740 con 0x7f92a811b110 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.764+0000 7f92a77fe700 1 --2- 192.168.123.105:0/2735313099 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a811b110 0x7f92a811b520 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f929800bd00 tx=0x7f929800bde0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.767+0000 7f92a4ff9700 1 -- 192.168.123.105:0/2735313099 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f929801a670 con 0x7f92a811b110 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.767+0000 7f92adbb9700 1 -- 192.168.123.105:0/2735313099 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f92a811ba60 con 0x7f92a811b110 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.767+0000 7f92adbb9700 1 -- 192.168.123.105:0/2735313099 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f92a811bf00 con 0x7f92a811b110 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.768+0000 7f92adbb9700 1 -- 192.168.123.105:0/2735313099 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f92a8062380 con 0x7f92a811b110 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:14.768+0000 7f92a4ff9700 1 -- 192.168.123.105:0/2735313099 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f929801ac70 con 0x7f92a811b110 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.768+0000 7f92a4ff9700 1 -- 192.168.123.105:0/2735313099 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f92980044e0 con 0x7f92a811b110 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.768+0000 7f92a4ff9700 1 -- 192.168.123.105:0/2735313099 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7f929802c730 con 0x7f92a811b110 2026-03-09T14:55:14.846 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.768+0000 7f92a4ff9700 1 -- 192.168.123.105:0/2735313099 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f9298011a30 con 0x7f92a811b110 2026-03-09T14:55:14.847 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.771+0000 7f92a4ff9700 1 -- 192.168.123.105:0/2735313099 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72422 (secure 0 0 0) 0x7f929801ade0 con 0x7f92a811b110 2026-03-09T14:55:14.847 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.808+0000 7f92adbb9700 1 -- 192.168.123.105:0/2735313099 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1 -- 0x7f92a81a3910 con 0x7f92a811b110 2026-03-09T14:55:14.847 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.809+0000 7f92a4ff9700 1 -- 192.168.123.105:0/2735313099 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": 
"json-pretty"}]=0 v0) v1 ==== 79+0+1241 (secure 0 0 0) 0x7f9298011db0 con 0x7f92a811b110 2026-03-09T14:55:14.847 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.811+0000 7f928e7fc700 1 -- 192.168.123.105:0/2735313099 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a811b110 msgr2=0x7f92a811b520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:14.847 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.811+0000 7f928e7fc700 1 --2- 192.168.123.105:0/2735313099 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a811b110 0x7f92a811b520 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f929800bd00 tx=0x7f929800bde0 comp rx=0 tx=0).stop 2026-03-09T14:55:14.847 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.811+0000 7f928e7fc700 1 -- 192.168.123.105:0/2735313099 shutdown_connections 2026-03-09T14:55:14.847 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.811+0000 7f928e7fc700 1 --2- 192.168.123.105:0/2735313099 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f92a811b110 0x7f92a811b520 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:14.847 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.811+0000 7f928e7fc700 1 -- 192.168.123.105:0/2735313099 >> 192.168.123.105:0/2735313099 conn(0x7f92a806c970 msgr2=0x7f92a806d550 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:14.847 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.812+0000 7f928e7fc700 1 -- 192.168.123.105:0/2735313099 shutdown_connections 2026-03-09T14:55:14.847 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:14.812+0000 7f928e7fc700 1 -- 192.168.123.105:0/2735313099 wait complete. 
2026-03-09T14:55:14.847 INFO:teuthology.orchestra.run.vm05.stdout:mgr not available, waiting (2/15)... 2026-03-09T14:55:15.283 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:14 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/2735313099' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "d952ca1a-1bc7-11f1-a184-f9dcb7ee7000", 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "vm05" 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-09T14:55:17.106 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "reef", 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T14:55:17.106 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T14:55:17.106 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 
2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T14:55:10.534547+0000", 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T14:55:17.107 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.007+0000 7f88e1536700 1 Processor -- start 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.008+0000 7f88e1536700 1 -- start start 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.008+0000 7f88e1536700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc072b50 0x7f88dc071050 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.008+0000 7f88e1536700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f88dc071590 con 0x7f88dc072b50 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.008+0000 7f88daffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc072b50 0x7f88dc071050 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:17.108 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.008+0000 7f88daffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc072b50 0x7f88dc071050 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48208/0 (socket says 192.168.123.105:48208) 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.008+0000 7f88daffd700 1 -- 192.168.123.105:0/3425777397 learned_addr learned my addr 192.168.123.105:0/3425777397 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.009+0000 7f88daffd700 1 -- 192.168.123.105:0/3425777397 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f88dc0716d0 con 0x7f88dc072b50 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.009+0000 7f88daffd700 1 --2- 192.168.123.105:0/3425777397 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc072b50 0x7f88dc071050 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f88cc009a90 tx=0x7f88cc009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=a8d3c8ddad0422e4 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.010+0000 7f88d9ffb700 1 -- 192.168.123.105:0/3425777397 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f88cc004030 con 0x7f88dc072b50 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.010+0000 7f88d9ffb700 1 -- 192.168.123.105:0/3425777397 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f88cc00b7e0 con 0x7f88dc072b50 
2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.010+0000 7f88d9ffb700 1 -- 192.168.123.105:0/3425777397 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f88cc0039f0 con 0x7f88dc072b50 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.010+0000 7f88e1536700 1 -- 192.168.123.105:0/3425777397 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc072b50 msgr2=0x7f88dc071050 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.010+0000 7f88e1536700 1 --2- 192.168.123.105:0/3425777397 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc072b50 0x7f88dc071050 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f88cc009a90 tx=0x7f88cc009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.010+0000 7f88e1536700 1 -- 192.168.123.105:0/3425777397 shutdown_connections 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.010+0000 7f88e1536700 1 --2- 192.168.123.105:0/3425777397 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc072b50 0x7f88dc071050 secure :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f88cc009a90 tx=0x7f88cc009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.010+0000 7f88e1536700 1 -- 192.168.123.105:0/3425777397 >> 192.168.123.105:0/3425777397 conn(0x7f88dc06c970 msgr2=0x7f88dc06eda0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.013+0000 7f88e1536700 1 -- 192.168.123.105:0/3425777397 shutdown_connections 
2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.013+0000 7f88e1536700 1 -- 192.168.123.105:0/3425777397 wait complete. 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.013+0000 7f88e1536700 1 Processor -- start 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.013+0000 7f88e1536700 1 -- start start 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.013+0000 7f88e1536700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc1a8870 0x7f88dc1a8c80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.013+0000 7f88e1536700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f88dc1a91c0 con 0x7f88dc1a8870 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.014+0000 7f88daffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc1a8870 0x7f88dc1a8c80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.014+0000 7f88daffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc1a8870 0x7f88dc1a8c80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:48220/0 (socket says 192.168.123.105:48220) 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.014+0000 7f88daffd700 1 -- 
192.168.123.105:0/1110224812 learned_addr learned my addr 192.168.123.105:0/1110224812 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.014+0000 7f88daffd700 1 -- 192.168.123.105:0/1110224812 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f88cc009740 con 0x7f88dc1a8870 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.014+0000 7f88daffd700 1 --2- 192.168.123.105:0/1110224812 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc1a8870 0x7f88dc1a8c80 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f88cc009a90 tx=0x7f88cc003b40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.015+0000 7f88c3fff700 1 -- 192.168.123.105:0/1110224812 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f88cc003fa0 con 0x7f88dc1a8870 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.015+0000 7f88c3fff700 1 -- 192.168.123.105:0/1110224812 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f88cc0045a0 con 0x7f88dc1a8870 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.015+0000 7f88c3fff700 1 -- 192.168.123.105:0/1110224812 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f88cc01b440 con 0x7f88dc1a8870 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.015+0000 7f88e1536700 1 -- 192.168.123.105:0/1110224812 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f88dc1a93c0 
con 0x7f88dc1a8870 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.015+0000 7f88e1536700 1 -- 192.168.123.105:0/1110224812 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f88dc07b250 con 0x7f88dc1a8870 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.015+0000 7f88e1536700 1 -- 192.168.123.105:0/1110224812 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f88dc04efc0 con 0x7f88dc1a8870 2026-03-09T14:55:17.108 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.017+0000 7f88c3fff700 1 -- 192.168.123.105:0/1110224812 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 1) v1 ==== 660+0+0 (secure 0 0 0) 0x7f88cc02b030 con 0x7f88dc1a8870 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.017+0000 7f88c3fff700 1 -- 192.168.123.105:0/1110224812 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f88cc01b690 con 0x7f88dc1a8870 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.017+0000 7f88c3fff700 1 -- 192.168.123.105:0/1110224812 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+72422 (secure 0 0 0) 0x7f88cc01f070 con 0x7f88dc1a8870 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.053+0000 7f88c3fff700 1 -- 192.168.123.105:0/1110224812 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mgrmap(e 2) v1 ==== 44835+0+0 (secure 0 0 0) 0x7f88cc004100 con 0x7f88dc1a8870 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.060+0000 7f88e1536700 1 -- 
192.168.123.105:0/1110224812 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1 -- 0x7f88dc07b530 con 0x7f88dc1a8870 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.062+0000 7f88c3fff700 1 -- 192.168.123.105:0/1110224812 <== mon.0 v2:192.168.123.105:3300/0 8 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) v1 ==== 79+0+1241 (secure 0 0 0) 0x7f88cc01f070 con 0x7f88dc1a8870 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.065+0000 7f88c1ffb700 1 -- 192.168.123.105:0/1110224812 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc1a8870 msgr2=0x7f88dc1a8c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.065+0000 7f88c1ffb700 1 --2- 192.168.123.105:0/1110224812 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc1a8870 0x7f88dc1a8c80 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f88cc009a90 tx=0x7f88cc003b40 comp rx=0 tx=0).stop 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.065+0000 7f88c1ffb700 1 -- 192.168.123.105:0/1110224812 shutdown_connections 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.065+0000 7f88c1ffb700 1 --2- 192.168.123.105:0/1110224812 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f88dc1a8870 0x7f88dc1a8c80 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.065+0000 7f88c1ffb700 1 -- 192.168.123.105:0/1110224812 >> 192.168.123.105:0/1110224812 conn(0x7f88dc06c970 msgr2=0x7f88dc06dfa0 unknown :-1 
s=STATE_NONE l=0).mark_down 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.066+0000 7f88c1ffb700 1 -- 192.168.123.105:0/1110224812 shutdown_connections 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:17.066+0000 7f88c1ffb700 1 -- 192.168.123.105:0/1110224812 wait complete. 2026-03-09T14:55:17.109 INFO:teuthology.orchestra.run.vm05.stdout:mgr not available, waiting (3/15)... 2026-03-09T14:55:18.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: Activating manager daemon vm05.lhsexd 2026-03-09T14:55:18.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: mgrmap e2: vm05.lhsexd(active, starting, since 0.00366428s) 2026-03-09T14:55:18.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: from='mgr.14100 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:55:18.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: from='mgr.14100 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:55:18.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: from='mgr.14100 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:55:18.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: from='mgr.14100 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T14:55:18.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: from='mgr.14100 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm05.lhsexd", "id": "vm05.lhsexd"}]: dispatch 2026-03-09T14:55:18.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 
ceph-mon[50611]: from='client.? 192.168.123.105:0/1110224812' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T14:55:18.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: Manager daemon vm05.lhsexd is now available 2026-03-09T14:55:18.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: from='mgr.14100 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:55:18.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: from='mgr.14100 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/trash_purge_schedule"}]: dispatch 2026-03-09T14:55:18.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: from='mgr.14100 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:18.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: from='mgr.14100 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:18.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:17 vm05 ceph-mon[50611]: from='mgr.14100 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:19.398 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:19 vm05 ceph-mon[50611]: mgrmap e3: vm05.lhsexd(active, since 1.00772s) 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "d952ca1a-1bc7-11f1-a184-f9dcb7ee7000", 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 
"status": "HEALTH_OK", 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "vm05" 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 7, 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "reef", 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: 
stdout "num_up_osds": 0, 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T14:55:19.433 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0 
2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T14:55:19.436 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T14:55:10.534547+0000", 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.242+0000 7f9dd9ccd700 1 Processor -- start 2026-03-09T14:55:19.437 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.243+0000 7f9dd9ccd700 1 -- start start 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.243+0000 7f9dd9ccd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd407acf0 0x7f9dd40791f0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.243+0000 7f9dd9ccd700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9dd4079730 con 0x7f9dd407acf0 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.244+0000 7f9dd37fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd407acf0 0x7f9dd40791f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.244+0000 7f9dd37fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd407acf0 0x7f9dd40791f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44016/0 (socket says 192.168.123.105:44016) 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.244+0000 7f9dd37fe700 1 -- 192.168.123.105:0/294829691 learned_addr learned my addr 192.168.123.105:0/294829691 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.244+0000 7f9dd37fe700 1 -- 192.168.123.105:0/294829691 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9dd4079870 con 0x7f9dd407acf0 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.244+0000 7f9dd37fe700 1 --2- 192.168.123.105:0/294829691 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd407acf0 0x7f9dd40791f0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f9dbc009a90 tx=0x7f9dbc009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=d36742e3f3176b3e server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.244+0000 7f9dd27fc700 1 -- 192.168.123.105:0/294829691 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9dbc004030 con 0x7f9dd407acf0 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.245+0000 7f9dd27fc700 1 -- 192.168.123.105:0/294829691 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f9dbc00b7e0 con 0x7f9dd407acf0 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.245+0000 7f9dd27fc700 1 -- 192.168.123.105:0/294829691 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9dbc0039f0 con 0x7f9dd407acf0 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.245+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/294829691 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd407acf0 msgr2=0x7f9dd40791f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.245+0000 7f9dd9ccd700 1 --2- 192.168.123.105:0/294829691 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd407acf0 0x7f9dd40791f0 secure :-1 s=READY pgs=18 cs=0 l=1 
rev1=1 crypto rx=0x7f9dbc009a90 tx=0x7f9dbc009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.245+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/294829691 shutdown_connections 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.246+0000 7f9dd9ccd700 1 --2- 192.168.123.105:0/294829691 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd407acf0 0x7f9dd40791f0 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.246+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/294829691 >> 192.168.123.105:0/294829691 conn(0x7f9dd41013a0 msgr2=0x7f9dd41037b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.246+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/294829691 shutdown_connections 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.246+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/294829691 wait complete. 
2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.246+0000 7f9dd9ccd700 1 Processor -- start 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.247+0000 7f9dd9ccd700 1 -- start start 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.247+0000 7f9dd9ccd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd419be10 0x7f9dd419c220 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.247+0000 7f9dd9ccd700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9dd419c760 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.247+0000 7f9dd37fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd419be10 0x7f9dd419c220 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.247+0000 7f9dd37fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd419be10 0x7f9dd419c220 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44018/0 (socket says 192.168.123.105:44018) 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.247+0000 7f9dd37fe700 1 -- 192.168.123.105:0/176614592 learned_addr learned my addr 192.168.123.105:0/176614592 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:19.437 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.248+0000 7f9dd37fe700 1 -- 192.168.123.105:0/176614592 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9dbc009740 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.248+0000 7f9dd37fe700 1 --2- 192.168.123.105:0/176614592 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd419be10 0x7f9dd419c220 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f9dd407aa20 tx=0x7f9dbc003b40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.248+0000 7f9dd0ff9700 1 -- 192.168.123.105:0/176614592 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9dbc003fa0 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.248+0000 7f9dd0ff9700 1 -- 192.168.123.105:0/176614592 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f9dbc0045a0 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.248+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/176614592 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9dd419c960 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.248+0000 7f9dd0ff9700 1 -- 192.168.123.105:0/176614592 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9dbc01b440 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.249+0000 7f9dd9ccd700 1 -- 
192.168.123.105:0/176614592 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9dd419f5c0 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.250+0000 7f9dd0ff9700 1 -- 192.168.123.105:0/176614592 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 4) v1 ==== 45068+0+0 (secure 0 0 0) 0x7f9dbc01b5a0 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.250+0000 7f9dd0ff9700 1 --2- 192.168.123.105:0/176614592 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9dc0038470 0x7f9dc003a920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.250+0000 7f9dd0ff9700 1 -- 192.168.123.105:0/176614592 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f9dbc04d130 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.250+0000 7f9dd2ffd700 1 --2- 192.168.123.105:0/176614592 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9dc0038470 0x7f9dc003a920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.251+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/176614592 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9dd4062380 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.254+0000 7f9dd0ff9700 1 -- 192.168.123.105:0/176614592 <== mon.0 v2:192.168.123.105:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f9dbc01f030 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.254+0000 7f9dd2ffd700 1 --2- 192.168.123.105:0/176614592 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9dc0038470 0x7f9dc003a920 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f9dc4006fd0 tx=0x7f9dc4006e40 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.397+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/176614592 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1 -- 0x7f9dd419f7b0 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.400+0000 7f9dd0ff9700 1 -- 192.168.123.105:0/176614592 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) v1 ==== 79+0+1240 (secure 0 0 0) 0x7f9dbc04a030 con 0x7f9dd419be10 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.402+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/176614592 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9dc0038470 msgr2=0x7f9dc003a920 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.402+0000 7f9dd9ccd700 1 --2- 192.168.123.105:0/176614592 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9dc0038470 0x7f9dc003a920 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f9dc4006fd0 tx=0x7f9dc4006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: 
stderr 2026-03-09T14:55:19.402+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/176614592 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd419be10 msgr2=0x7f9dd419c220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.403+0000 7f9dd9ccd700 1 --2- 192.168.123.105:0/176614592 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd419be10 0x7f9dd419c220 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f9dd407aa20 tx=0x7f9dbc003b40 comp rx=0 tx=0).stop 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.403+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/176614592 shutdown_connections 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.403+0000 7f9dd9ccd700 1 --2- 192.168.123.105:0/176614592 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9dc0038470 0x7f9dc003a920 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.403+0000 7f9dd9ccd700 1 --2- 192.168.123.105:0/176614592 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9dd419be10 0x7f9dd419c220 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.403+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/176614592 >> 192.168.123.105:0/176614592 conn(0x7f9dd41013a0 msgr2=0x7f9dd4102080 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.403+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/176614592 shutdown_connections 2026-03-09T14:55:19.437 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.403+0000 7f9dd9ccd700 1 -- 192.168.123.105:0/176614592 wait complete. 2026-03-09T14:55:19.437 INFO:teuthology.orchestra.run.vm05.stdout:mgr is available 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout fsid = d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_backfillfull_ratio = 0.85 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_full_ratio = 0.9 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_nearfull_ratio = 0.8 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 
2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.573+0000 7fbda13b5700 1 Processor -- start 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.573+0000 7fbda13b5700 1 -- start start 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.573+0000 7fbda13b5700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 0x7fbd9c07bae0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.573+0000 7fbda13b5700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbd9c07c020 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.574+0000 7fbd9affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 0x7fbd9c07bae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.574+0000 7fbd9affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 0x7fbd9c07bae0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44022/0 (socket says 192.168.123.105:44022) 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.574+0000 7fbd9affd700 1 -- 192.168.123.105:0/2770079686 learned_addr learned my addr 192.168.123.105:0/2770079686 
(peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.574+0000 7fbd9affd700 1 -- 192.168.123.105:0/2770079686 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbd9c07c160 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.575+0000 7fbd9affd700 1 --2- 192.168.123.105:0/2770079686 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 0x7fbd9c07bae0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fbd8c009a90 tx=0x7fbd8c009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=858c0cdac511db30 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.575+0000 7fbd9a7fc700 1 -- 192.168.123.105:0/2770079686 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbd8c004030 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.575+0000 7fbd9a7fc700 1 -- 192.168.123.105:0/2770079686 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fbd8c00b7e0 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.575+0000 7fbda13b5700 1 -- 192.168.123.105:0/2770079686 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 msgr2=0x7fbd9c07bae0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.575+0000 7fbda13b5700 1 --2- 192.168.123.105:0/2770079686 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 0x7fbd9c07bae0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto 
rx=0x7fbd8c009a90 tx=0x7fbd8c009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.575+0000 7fbda13b5700 1 -- 192.168.123.105:0/2770079686 shutdown_connections 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.575+0000 7fbda13b5700 1 --2- 192.168.123.105:0/2770079686 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 0x7fbd9c07bae0 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.575+0000 7fbda13b5700 1 -- 192.168.123.105:0/2770079686 >> 192.168.123.105:0/2770079686 conn(0x7fbd9c103770 msgr2=0x7fbd9c105b50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.576+0000 7fbda13b5700 1 -- 192.168.123.105:0/2770079686 shutdown_connections 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.576+0000 7fbda13b5700 1 -- 192.168.123.105:0/2770079686 wait complete. 
2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.576+0000 7fbda13b5700 1 Processor -- start 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.576+0000 7fbda13b5700 1 -- start start 2026-03-09T14:55:19.749 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.576+0000 7fbda13b5700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 0x7fbd9c1a8960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.576+0000 7fbda13b5700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbd9c07c020 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.576+0000 7fbd9affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 0x7fbd9c1a8960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.577+0000 7fbd9affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 0x7fbd9c1a8960 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44034/0 (socket says 192.168.123.105:44034) 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.577+0000 7fbd9affd700 1 -- 192.168.123.105:0/193044296 learned_addr learned my addr 192.168.123.105:0/193044296 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:19.750 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.577+0000 7fbd9affd700 1 -- 192.168.123.105:0/193044296 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbd8c009740 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.577+0000 7fbd9affd700 1 --2- 192.168.123.105:0/193044296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 0x7fbd9c1a8960 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fbd8c003f70 tx=0x7fbd8c004050 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.577+0000 7fbd98ff9700 1 -- 192.168.123.105:0/193044296 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbd8c0043b0 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.577+0000 7fbd98ff9700 1 -- 192.168.123.105:0/193044296 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fbd8c004510 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.577+0000 7fbd98ff9700 1 -- 192.168.123.105:0/193044296 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbd8c0115e0 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.577+0000 7fbda13b5700 1 -- 192.168.123.105:0/193044296 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbd9c1a8ea0 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.578+0000 7fbda13b5700 1 -- 
192.168.123.105:0/193044296 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbd9c1a92c0 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.579+0000 7fbd98ff9700 1 -- 192.168.123.105:0/193044296 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 4) v1 ==== 45068+0+0 (secure 0 0 0) 0x7fbd8c011740 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.579+0000 7fbd98ff9700 1 --2- 192.168.123.105:0/193044296 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbd7c038460 0x7fbd7c03a910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.579+0000 7fbd98ff9700 1 -- 192.168.123.105:0/193044296 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7fbd8c04cf70 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.579+0000 7fbd93fff700 1 --2- 192.168.123.105:0/193044296 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbd7c038460 0x7fbd7c03a910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.580+0000 7fbda13b5700 1 -- 192.168.123.105:0/193044296 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbd9c10c1e0 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.580+0000 7fbd93fff700 1 --2- 192.168.123.105:0/193044296 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbd7c038460 0x7fbd7c03a910 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fbd84006fd0 tx=0x7fbd84006e40 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.584+0000 7fbd98ff9700 1 -- 192.168.123.105:0/193044296 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fbd8c0119f0 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.704+0000 7fbda13b5700 1 -- 192.168.123.105:0/193044296 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) v1 -- 0x7fbd9c1a9600 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.705+0000 7fbd98ff9700 1 -- 192.168.123.105:0/193044296 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v3) v1 ==== 70+0+373 (secure 0 0 0) 0x7fbd8c018b40 con 0x7fbd9c07b6d0 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.710+0000 7fbda13b5700 1 -- 192.168.123.105:0/193044296 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbd7c038460 msgr2=0x7fbd7c03a910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.710+0000 7fbda13b5700 1 --2- 192.168.123.105:0/193044296 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbd7c038460 0x7fbd7c03a910 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fbd84006fd0 tx=0x7fbd84006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:19.750 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.710+0000 7fbda13b5700 1 -- 192.168.123.105:0/193044296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 msgr2=0x7fbd9c1a8960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.710+0000 7fbda13b5700 1 --2- 192.168.123.105:0/193044296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 0x7fbd9c1a8960 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fbd8c003f70 tx=0x7fbd8c004050 comp rx=0 tx=0).stop 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.711+0000 7fbda13b5700 1 -- 192.168.123.105:0/193044296 shutdown_connections 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.711+0000 7fbda13b5700 1 --2- 192.168.123.105:0/193044296 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbd7c038460 0x7fbd7c03a910 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.711+0000 7fbda13b5700 1 --2- 192.168.123.105:0/193044296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd9c07b6d0 0x7fbd9c1a8960 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.711+0000 7fbda13b5700 1 -- 192.168.123.105:0/193044296 >> 192.168.123.105:0/193044296 conn(0x7fbd9c103770 msgr2=0x7fbd9c106930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:19.750 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.711+0000 7fbda13b5700 1 -- 192.168.123.105:0/193044296 shutdown_connections 
2026-03-09T14:55:19.751 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.712+0000 7fbda13b5700 1 -- 192.168.123.105:0/193044296 wait complete. 2026-03-09T14:55:19.751 INFO:teuthology.orchestra.run.vm05.stdout:Enabling cephadm module... 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.868+0000 7fd7eb59e700 1 Processor -- start 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.869+0000 7fd7eb59e700 1 -- start start 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.869+0000 7fd7eb59e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec107d50 0x7fd7ec108160 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.869+0000 7fd7eb59e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd7ec1086a0 con 0x7fd7ec107d50 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.869+0000 7fd7ea59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec107d50 0x7fd7ec108160 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.869+0000 7fd7ea59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec107d50 0x7fd7ec108160 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44046/0 (socket says 192.168.123.105:44046) 2026-03-09T14:55:20.114 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.869+0000 7fd7ea59c700 1 -- 192.168.123.105:0/347183021 learned_addr learned my addr 192.168.123.105:0/347183021 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.870+0000 7fd7ea59c700 1 -- 192.168.123.105:0/347183021 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd7ec1087e0 con 0x7fd7ec107d50 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.870+0000 7fd7ea59c700 1 --2- 192.168.123.105:0/347183021 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec107d50 0x7fd7ec108160 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7fd7d4009a90 tx=0x7fd7d4009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=5e7fef3448646a35 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.870+0000 7fd7e959a700 1 -- 192.168.123.105:0/347183021 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd7d4004030 con 0x7fd7ec107d50 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.870+0000 7fd7e959a700 1 -- 192.168.123.105:0/347183021 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fd7d400b7e0 con 0x7fd7ec107d50 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.870+0000 7fd7eb59e700 1 -- 192.168.123.105:0/347183021 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec107d50 msgr2=0x7fd7ec108160 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.870+0000 
7fd7eb59e700 1 --2- 192.168.123.105:0/347183021 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec107d50 0x7fd7ec108160 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7fd7d4009a90 tx=0x7fd7d4009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.871+0000 7fd7eb59e700 1 -- 192.168.123.105:0/347183021 shutdown_connections 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.871+0000 7fd7eb59e700 1 --2- 192.168.123.105:0/347183021 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec107d50 0x7fd7ec108160 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.871+0000 7fd7eb59e700 1 -- 192.168.123.105:0/347183021 >> 192.168.123.105:0/347183021 conn(0x7fd7ec1035a0 msgr2=0x7fd7ec105980 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.871+0000 7fd7eb59e700 1 -- 192.168.123.105:0/347183021 shutdown_connections 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.871+0000 7fd7eb59e700 1 -- 192.168.123.105:0/347183021 wait complete. 
2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.871+0000 7fd7eb59e700 1 Processor -- start 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.871+0000 7fd7eb59e700 1 -- start start 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.872+0000 7fd7eb59e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec19bb90 0x7fd7ec19bfa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.872+0000 7fd7eb59e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd7ec1086a0 con 0x7fd7ec19bb90 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.872+0000 7fd7ea59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec19bb90 0x7fd7ec19bfa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.872+0000 7fd7ea59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec19bb90 0x7fd7ec19bfa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44056/0 (socket says 192.168.123.105:44056) 2026-03-09T14:55:20.114 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.872+0000 7fd7ea59c700 1 -- 192.168.123.105:0/3624404075 learned_addr learned my addr 192.168.123.105:0/3624404075 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:20.115 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.872+0000 7fd7ea59c700 1 -- 192.168.123.105:0/3624404075 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd7d4009740 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.872+0000 7fd7ea59c700 1 --2- 192.168.123.105:0/3624404075 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec19bb90 0x7fd7ec19bfa0 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7fd7d400bdb0 tx=0x7fd7d400be90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.872+0000 7fd7e37fe700 1 -- 192.168.123.105:0/3624404075 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd7d4003ec0 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.872+0000 7fd7e37fe700 1 -- 192.168.123.105:0/3624404075 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fd7d40044c0 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.872+0000 7fd7e37fe700 1 -- 192.168.123.105:0/3624404075 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd7d401ace0 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.872+0000 7fd7eb59e700 1 -- 192.168.123.105:0/3624404075 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd7ec19c4e0 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.872+0000 7fd7eb59e700 1 
-- 192.168.123.105:0/3624404075 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd7ec19f170 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.874+0000 7fd7e37fe700 1 -- 192.168.123.105:0/3624404075 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 4) v1 ==== 45068+0+0 (secure 0 0 0) 0x7fd7d402c430 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.874+0000 7fd7e37fe700 1 --2- 192.168.123.105:0/3624404075 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7d8038460 0x7fd7d803a910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.874+0000 7fd7e37fe700 1 -- 192.168.123.105:0/3624404075 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7fd7d404ca40 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.874+0000 7fd7e9d9b700 1 --2- 192.168.123.105:0/3624404075 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7d8038460 0x7fd7d803a910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.875+0000 7fd7eb59e700 1 -- 192.168.123.105:0/3624404075 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd7ec04efc0 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.878+0000 7fd7e9d9b700 1 --2- 192.168.123.105:0/3624404075 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7d8038460 0x7fd7d803a910 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fd7dc009990 tx=0x7fd7dc006e30 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:19.878+0000 7fd7e37fe700 1 -- 192.168.123.105:0/3624404075 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fd7d401ae40 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.008+0000 7fd7eb59e700 1 -- 192.168.123.105:0/3624404075 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1 -- 0x7fd7ec062380 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.067+0000 7fd7e37fe700 1 -- 192.168.123.105:0/3624404075 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mgrmap(e 5) v1 ==== 45079+0+0 (secure 0 0 0) 0x7fd7d4011420 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.068+0000 7fd7e37fe700 1 -- 192.168.123.105:0/3624404075 <== mon.0 v2:192.168.123.105:3300/0 8 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "cephadm"}]=0 v5) v1 ==== 86+0+0 (secure 0 0 0) 0x7fd7d401ae40 con 0x7fd7ec19bb90 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.073+0000 7fd7eb59e700 1 -- 192.168.123.105:0/3624404075 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7d8038460 msgr2=0x7fd7d803a910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:20.073+0000 7fd7eb59e700 1 --2- 192.168.123.105:0/3624404075 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7d8038460 0x7fd7d803a910 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fd7dc009990 tx=0x7fd7dc006e30 comp rx=0 tx=0).stop 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.073+0000 7fd7eb59e700 1 -- 192.168.123.105:0/3624404075 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec19bb90 msgr2=0x7fd7ec19bfa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.073+0000 7fd7eb59e700 1 --2- 192.168.123.105:0/3624404075 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec19bb90 0x7fd7ec19bfa0 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7fd7d400bdb0 tx=0x7fd7d400be90 comp rx=0 tx=0).stop 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.075+0000 7fd7eb59e700 1 -- 192.168.123.105:0/3624404075 shutdown_connections 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.075+0000 7fd7eb59e700 1 --2- 192.168.123.105:0/3624404075 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7d8038460 0x7fd7d803a910 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.075+0000 7fd7eb59e700 1 --2- 192.168.123.105:0/3624404075 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7ec19bb90 0x7fd7ec19bfa0 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.075+0000 7fd7eb59e700 1 -- 192.168.123.105:0/3624404075 >> 
192.168.123.105:0/3624404075 conn(0x7fd7ec1035a0 msgr2=0x7fd7ec1050f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.075+0000 7fd7eb59e700 1 -- 192.168.123.105:0/3624404075 shutdown_connections 2026-03-09T14:55:20.115 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.075+0000 7fd7eb59e700 1 -- 192.168.123.105:0/3624404075 wait complete. 2026-03-09T14:55:20.262 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:20 vm05 ceph-mon[50611]: mgrmap e4: vm05.lhsexd(active, since 2s) 2026-03-09T14:55:20.262 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:20 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/176614592' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T14:55:20.262 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:20 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/193044296' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T14:55:20.262 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:20 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/3624404075' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T14:55:20.486 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T14:55:20.486 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-09T14:55:20.486 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T14:55:20.486 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "active_name": "vm05.lhsexd", 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.260+0000 7ffb37280700 1 Processor -- start 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.262+0000 7ffb37280700 1 -- start start 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.262+0000 7ffb37280700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb30071410 0x7ffb30071820 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.262+0000 7ffb37280700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ffb30071d60 con 0x7ffb30071410 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.263+0000 7ffb3627e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb30071410 0x7ffb30071820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:20.487 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.263+0000 7ffb3627e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb30071410 0x7ffb30071820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44082/0 (socket says 192.168.123.105:44082) 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.263+0000 7ffb3627e700 1 -- 192.168.123.105:0/2899097498 learned_addr learned my addr 192.168.123.105:0/2899097498 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.263+0000 7ffb3627e700 1 -- 192.168.123.105:0/2899097498 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ffb30071ea0 con 0x7ffb30071410 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.263+0000 7ffb3627e700 1 --2- 192.168.123.105:0/2899097498 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb30071410 0x7ffb30071820 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7ffb2c00ab30 tx=0x7ffb2c010730 comp rx=0 tx=0).ready entity=mon.0 client_cookie=5844e60645a13285 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.265+0000 7ffb3527c700 1 -- 192.168.123.105:0/2899097498 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ffb2c010e00 con 0x7ffb30071410 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.265+0000 7ffb3527c700 1 -- 192.168.123.105:0/2899097498 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7ffb2c0044d0 con 0x7ffb30071410 
2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.265+0000 7ffb37280700 1 -- 192.168.123.105:0/2899097498 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb30071410 msgr2=0x7ffb30071820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.265+0000 7ffb37280700 1 --2- 192.168.123.105:0/2899097498 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb30071410 0x7ffb30071820 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7ffb2c00ab30 tx=0x7ffb2c010730 comp rx=0 tx=0).stop 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.265+0000 7ffb37280700 1 -- 192.168.123.105:0/2899097498 shutdown_connections 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.265+0000 7ffb37280700 1 --2- 192.168.123.105:0/2899097498 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb30071410 0x7ffb30071820 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.265+0000 7ffb37280700 1 -- 192.168.123.105:0/2899097498 >> 192.168.123.105:0/2899097498 conn(0x7ffb3006c9d0 msgr2=0x7ffb3006ee00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.266+0000 7ffb37280700 1 -- 192.168.123.105:0/2899097498 shutdown_connections 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.266+0000 7ffb37280700 1 -- 192.168.123.105:0/2899097498 wait complete. 
2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.266+0000 7ffb37280700 1 Processor -- start 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.266+0000 7ffb37280700 1 -- start start 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.267+0000 7ffb37280700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb301a86b0 0x7ffb301a8ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.267+0000 7ffb37280700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ffb2c01a410 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.267+0000 7ffb3627e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb301a86b0 0x7ffb301a8ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.267+0000 7ffb3627e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb301a86b0 0x7ffb301a8ac0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44084/0 (socket says 192.168.123.105:44084) 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.267+0000 7ffb3627e700 1 -- 192.168.123.105:0/3783581523 learned_addr learned my addr 192.168.123.105:0/3783581523 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:20.487 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.267+0000 7ffb3627e700 1 -- 192.168.123.105:0/3783581523 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ffb2c00a7e0 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.267+0000 7ffb3627e700 1 --2- 192.168.123.105:0/3783581523 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb301a86b0 0x7ffb301a8ac0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7ffb2c00bbd0 tx=0x7ffb2c003980 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.269+0000 7ffb277fe700 1 -- 192.168.123.105:0/3783581523 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ffb2c003bd0 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.269+0000 7ffb37280700 1 -- 192.168.123.105:0/3783581523 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ffb301a9000 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.269+0000 7ffb37280700 1 -- 192.168.123.105:0/3783581523 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ffb301abd10 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.269+0000 7ffb277fe700 1 -- 192.168.123.105:0/3783581523 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7ffb2c00f070 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.270+0000 7ffb277fe700 1 
-- 192.168.123.105:0/3783581523 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ffb2c0229a0 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.271+0000 7ffb277fe700 1 -- 192.168.123.105:0/3783581523 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 5) v1 ==== 45079+0+0 (secure 0 0 0) 0x7ffb2c018070 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.271+0000 7ffb277fe700 1 --2- 192.168.123.105:0/3783581523 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ffb1c0384b0 0x7ffb1c03a960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.271+0000 7ffb35a7d700 1 -- 192.168.123.105:0/3783581523 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ffb1c0384b0 msgr2=0x7ffb1c03a960 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.271+0000 7ffb35a7d700 1 --2- 192.168.123.105:0/3783581523 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ffb1c0384b0 0x7ffb1c03a960 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.271+0000 7ffb277fe700 1 -- 192.168.123.105:0/3783581523 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7ffb2c04ad30 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.271+0000 7ffb37280700 1 -- 192.168.123.105:0/3783581523 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ffb14005320 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.274+0000 7ffb277fe700 1 -- 192.168.123.105:0/3783581523 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7ffb2c027070 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.411+0000 7ffb37280700 1 -- 192.168.123.105:0/3783581523 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) v1 -- 0x7ffb14006200 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.411+0000 7ffb277fe700 1 -- 192.168.123.105:0/3783581523 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v5) v1 ==== 56+0+98 (secure 0 0 0) 0x7ffb2c022b00 con 0x7ffb301a86b0 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.419+0000 7ffb257fa700 1 -- 192.168.123.105:0/3783581523 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ffb1c0384b0 msgr2=0x7ffb1c03a960 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.419+0000 7ffb257fa700 1 --2- 192.168.123.105:0/3783581523 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ffb1c0384b0 0x7ffb1c03a960 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.419+0000 7ffb257fa700 1 -- 192.168.123.105:0/3783581523 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7ffb301a86b0 msgr2=0x7ffb301a8ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.419+0000 7ffb257fa700 1 --2- 192.168.123.105:0/3783581523 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb301a86b0 0x7ffb301a8ac0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7ffb2c00bbd0 tx=0x7ffb2c003980 comp rx=0 tx=0).stop 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.419+0000 7ffb257fa700 1 -- 192.168.123.105:0/3783581523 shutdown_connections 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.419+0000 7ffb257fa700 1 --2- 192.168.123.105:0/3783581523 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ffb1c0384b0 0x7ffb1c03a960 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.419+0000 7ffb257fa700 1 --2- 192.168.123.105:0/3783581523 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ffb301a86b0 0x7ffb301a8ac0 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.420+0000 7ffb257fa700 1 -- 192.168.123.105:0/3783581523 >> 192.168.123.105:0/3783581523 conn(0x7ffb3006c9d0 msgr2=0x7ffb3006d470 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.420+0000 7ffb257fa700 1 -- 192.168.123.105:0/3783581523 shutdown_connections 2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.420+0000 7ffb257fa700 1 -- 192.168.123.105:0/3783581523 wait complete. 
2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for the mgr to restart...
2026-03-09T14:55:20.487 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr epoch 5...
2026-03-09T14:55:21.181 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:21 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/3624404075' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-09T14:55:21.181 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:21 vm05 ceph-mon[50611]: mgrmap e5: vm05.lhsexd(active, since 3s)
2026-03-09T14:55:21.181 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:21 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/3783581523' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T14:55:25.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: Active manager daemon vm05.lhsexd restarted
2026-03-09T14:55:25.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: Activating manager daemon vm05.lhsexd
2026-03-09T14:55:25.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: osdmap e2: 0 total, 0 up, 0 in
2026-03-09T14:55:25.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: mgrmap e6: vm05.lhsexd(active, starting, since 0.0043119s)
2026-03-09T14:55:25.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch
2026-03-09T14:55:25.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm05.lhsexd", "id": "vm05.lhsexd"}]: dispatch
2026-03-09T14:55:25.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T14:55:25.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T14:55:25.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T14:55:25.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: Manager daemon vm05.lhsexd is now available
2026-03-09T14:55:25.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:25.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:25.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:55:25.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:55:25.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/mirror_snapshot_schedule"}]: dispatch
2026-03-09T14:55:25.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:24 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7,
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.643+0000 7f719ca89700 1 Processor -- start
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.643+0000 7f719ca89700 1 -- start start
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.643+0000 7f719ca89700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 0x7f71981045e0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.643+0000 7f719ca89700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7198104b20 con 0x7f71981041d0
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.644+0000 7f71977fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 0x7f71981045e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.644+0000 7f71977fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 0x7f71981045e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44096/0 (socket says 192.168.123.105:44096)
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.644+0000 7f71977fe700 1 -- 192.168.123.105:0/2956306653 learned_addr learned my addr 192.168.123.105:0/2956306653 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.644+0000 7f71977fe700 1 -- 192.168.123.105:0/2956306653 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7198104c60 con 0x7f71981041d0
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.644+0000 7f71977fe700 1 --2- 192.168.123.105:0/2956306653 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 0x7f71981045e0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f718800b0d0 tx=0x7f718800b490 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c4ce4e8175d2504c server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.646+0000 7f71967fc700 1 -- 192.168.123.105:0/2956306653 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f718800e070 con 0x7f71981041d0
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.646+0000 7f71967fc700 1 -- 192.168.123.105:0/2956306653 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f7188003a20 con 0x7f71981041d0
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.646+0000 7f71967fc700 1 -- 192.168.123.105:0/2956306653 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7188004670 con 0x7f71981041d0
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.646+0000 7f719ca89700 1 -- 192.168.123.105:0/2956306653 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 msgr2=0x7f71981045e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.646+0000 7f719ca89700 1 --2- 192.168.123.105:0/2956306653 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 0x7f71981045e0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f718800b0d0 tx=0x7f718800b490 comp rx=0 tx=0).stop
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.648+0000 7f719ca89700 1 -- 192.168.123.105:0/2956306653 shutdown_connections
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.648+0000 7f719ca89700 1 --2- 192.168.123.105:0/2956306653 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 0x7f71981045e0 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.648+0000 7f719ca89700 1 -- 192.168.123.105:0/2956306653 >> 192.168.123.105:0/2956306653 conn(0x7f71980ff840 msgr2=0x7f7198101c50 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:25.719 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.648+0000 7f719ca89700 1 -- 192.168.123.105:0/2956306653 shutdown_connections
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.648+0000 7f719ca89700 1 -- 192.168.123.105:0/2956306653 wait complete.
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.648+0000 7f719ca89700 1 Processor -- start
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.649+0000 7f719ca89700 1 -- start start
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.649+0000 7f719ca89700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 0x7f71981a8610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.649+0000 7f719ca89700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f71981a8b50 con 0x7f71981041d0
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.649+0000 7f71977fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 0x7f71981a8610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.649+0000 7f71977fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 0x7f71981a8610 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44110/0 (socket says 192.168.123.105:44110)
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.649+0000 7f71977fe700 1 -- 192.168.123.105:0/1055127597 learned_addr learned my addr 192.168.123.105:0/1055127597 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.649+0000 7f71977fe700 1 -- 192.168.123.105:0/1055127597 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7188009d20 con 0x7f71981041d0
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.649+0000 7f71977fe700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 0x7f71981a8610 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f7188000f80 tx=0x7f718800bd60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.650+0000 7f7194ff9700 1 -- 192.168.123.105:0/1055127597 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f718800e070 con 0x7f71981041d0
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.650+0000 7f7194ff9700 1 -- 192.168.123.105:0/1055127597 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f71880092e0 con 0x7f71981041d0
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.650+0000 7f7194ff9700 1 -- 192.168.123.105:0/1055127597 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f71880127f0 con 0x7f71981041d0
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.650+0000 7f719ca89700 1 -- 192.168.123.105:0/1055127597 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f71981a8d50 con 0x7f71981041d0
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.650+0000 7f719ca89700 1 -- 192.168.123.105:0/1055127597 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f71981a9250 con 0x7f71981041d0
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.652+0000 7f7194ff9700 1 -- 192.168.123.105:0/1055127597 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 5) v1 ==== 45079+0+0 (secure 0 0 0) 0x7f7188019040 con 0x7f71981041d0
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.652+0000 7f7194ff9700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 0x7f718003a9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.652+0000 7f7194ff9700 1 -- 192.168.123.105:0/1055127597 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(1..1 src has 1..1) v4 ==== 725+0+0 (secure 0 0 0) 0x7f718804ba70 con 0x7f71981041d0
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.652+0000 7f7196ffd700 1 -- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 msgr2=0x7f718003a9b0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.652+0000 7f7196ffd700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 0x7f718003a9b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.652+0000 7f719ca89700 1 -- 192.168.123.105:0/1055127597 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- command(tid 0: {"prefix": "get_command_descriptions"}) v1 -- 0x7f7198062380 con 0x7f7180038500
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.853+0000 7f7196ffd700 1 -- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 msgr2=0x7f718003a9b0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:20.853+0000 7f7196ffd700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 0x7f718003a9b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:21.253+0000 7f7196ffd700 1 -- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 msgr2=0x7f718003a9b0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:21.253+0000 7f7196ffd700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 0x7f718003a9b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:22.054+0000 7f7196ffd700 1 -- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 msgr2=0x7f718003a9b0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:22.054+0000 7f7196ffd700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 0x7f718003a9b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000
2026-03-09T14:55:25.720 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:23.656+0000 7f7196ffd700 1 -- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 msgr2=0x7f718003a9b0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:23.656+0000 7f7196ffd700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 0x7f718003a9b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 3.200000
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:24.679+0000 7f7194ff9700 1 -- 192.168.123.105:0/1055127597 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mgrmap(e 6) v1 ==== 44846+0+0 (secure 0 0 0) 0x7f718801bbe0 con 0x7f71981041d0
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:24.679+0000 7f7194ff9700 1 -- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 msgr2=0x7f718003a9b0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:24.679+0000 7f7194ff9700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 0x7f718003a9b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.682+0000 7f7194ff9700 1 -- 192.168.123.105:0/1055127597 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mgrmap(e 7) v1 ==== 44973+0+0 (secure 0 0 0) 0x7f718804ce00 con 0x7f71981041d0
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.682+0000 7f7194ff9700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 0x7f718003a9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.682+0000 7f7194ff9700 1 -- 192.168.123.105:0/1055127597 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- command(tid 0: {"prefix": "get_command_descriptions"}) v1 -- 0x7f7198062380 con 0x7f7180038500
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.683+0000 7f7196ffd700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 0x7f718003a9b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.684+0000 7f7196ffd700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 0x7f718003a9b0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f718c003a10 tx=0x7f718c0092b0 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.686+0000 7f7194ff9700 1 -- 192.168.123.105:0/1055127597 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== command_reply(tid 0: 0 ) v1 ==== 8+0+6910 (secure 0 0 0) 0x7f7198062380 con 0x7f7180038500
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.689+0000 7f719ca89700 1 -- 192.168.123.105:0/1055127597 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- command(tid 1: {"prefix": "mgr_status"}) v1 -- 0x7f71981a9950 con 0x7f7180038500
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.689+0000 7f7194ff9700 1 -- 192.168.123.105:0/1055127597 <== mgr.14120 v2:192.168.123.105:6800/2 2 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+51 (secure 0 0 0) 0x7f71981a9950 con 0x7f7180038500
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.690+0000 7f719ca89700 1 -- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 msgr2=0x7f718003a9b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.690+0000 7f719ca89700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 0x7f718003a9b0 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f718c003a10 tx=0x7f718c0092b0 comp rx=0 tx=0).stop
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.690+0000 7f719ca89700 1 -- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 msgr2=0x7f71981a8610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.690+0000 7f719ca89700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 0x7f71981a8610 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f7188000f80 tx=0x7f718800bd60 comp rx=0 tx=0).stop
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.690+0000 7f719ca89700 1 -- 192.168.123.105:0/1055127597 shutdown_connections
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.690+0000 7f719ca89700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7180038500 0x7f718003a9b0 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.690+0000 7f719ca89700 1 --2- 192.168.123.105:0/1055127597 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71981041d0 0x7f71981a8610 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.690+0000 7f719ca89700 1 -- 192.168.123.105:0/1055127597 >> 192.168.123.105:0/1055127597 conn(0x7f71980ff840 msgr2=0x7f7198100170 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.690+0000 7f719ca89700 1 -- 192.168.123.105:0/1055127597 shutdown_connections
2026-03-09T14:55:25.721 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.690+0000 7f719ca89700 1 -- 192.168.123.105:0/1055127597 wait complete.
2026-03-09T14:55:25.722 INFO:teuthology.orchestra.run.vm05.stdout:mgr epoch 5 is available
2026-03-09T14:55:25.722 INFO:teuthology.orchestra.run.vm05.stdout:Setting orchestrator backend to cephadm...
2026-03-09T14:55:25.971 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:25 vm05 ceph-mon[50611]: Found migration_current of "None". Setting to last migration.
2026-03-09T14:55:25.971 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:25 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/trash_purge_schedule"}]: dispatch
2026-03-09T14:55:25.971 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:25 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:25.971 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:25 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:25.971 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:25 vm05 ceph-mon[50611]: mgrmap e7: vm05.lhsexd(active, since 1.00939s)
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.849+0000 7f15e021c700 1 Processor -- start
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.849+0000 7f15e021c700 1 -- start start
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.849+0000 7f15e021c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d8107f20 0x7f15d8108330 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.849+0000 7f15e021c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f15d8108870 con 0x7f15d8107f20
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.850+0000 7f15ddfb8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d8107f20 0x7f15d8108330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.850+0000 7f15ddfb8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d8107f20 0x7f15d8108330 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44198/0 (socket says 192.168.123.105:44198)
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.850+0000 7f15ddfb8700 1 -- 192.168.123.105:0/852477573 learned_addr learned my addr 192.168.123.105:0/852477573 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.850+0000 7f15ddfb8700 1 -- 192.168.123.105:0/852477573 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f15d81089b0 con 0x7f15d8107f20
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.850+0000 7f15ddfb8700 1 --2- 192.168.123.105:0/852477573 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d8107f20 0x7f15d8108330 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f15d4009a90 tx=0x7f15d4009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=2f6b6a65ac7f65c7 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.851+0000 7f15dcfb6700 1 -- 192.168.123.105:0/852477573 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f15d4004030 con 0x7f15d8107f20
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.851+0000 7f15dcfb6700 1 -- 192.168.123.105:0/852477573 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f15d400b7e0 con 0x7f15d8107f20
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.851+0000 7f15dcfb6700 1 -- 192.168.123.105:0/852477573 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f15d40039f0 con 0x7f15d8107f20
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.851+0000 7f15e021c700 1 -- 192.168.123.105:0/852477573 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d8107f20 msgr2=0x7f15d8108330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.851+0000 7f15e021c700 1 --2- 192.168.123.105:0/852477573 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d8107f20 0x7f15d8108330 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f15d4009a90 tx=0x7f15d4009da0 comp rx=0 tx=0).stop
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.851+0000 7f15e021c700 1 -- 192.168.123.105:0/852477573 shutdown_connections
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.851+0000 7f15e021c700 1 --2- 192.168.123.105:0/852477573 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d8107f20 0x7f15d8108330 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.851+0000 7f15e021c700 1 -- 192.168.123.105:0/852477573 >> 192.168.123.105:0/852477573 conn(0x7f15d807b4b0 msgr2=0x7f15d807b8d0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.852+0000 7f15e021c700 1 -- 192.168.123.105:0/852477573 shutdown_connections
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.852+0000 7f15e021c700 1 -- 192.168.123.105:0/852477573 wait complete.
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.852+0000 7f15e021c700 1 Processor -- start
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.852+0000 7f15e021c700 1 -- start start
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.853+0000 7f15e021c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d819bdd0 0x7f15d819c1e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.853+0000 7f15e021c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f15d819c720 con 0x7f15d819bdd0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.853+0000 7f15ddfb8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d819bdd0 0x7f15d819c1e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.853+0000 7f15ddfb8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d819bdd0 0x7f15d819c1e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44210/0 (socket says 192.168.123.105:44210)
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.853+0000 7f15ddfb8700 1 -- 192.168.123.105:0/3288940487 learned_addr learned my addr 192.168.123.105:0/3288940487 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.853+0000 7f15ddfb8700 1 -- 192.168.123.105:0/3288940487 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f15d4009740 con 0x7f15d819bdd0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.853+0000 7f15ddfb8700 1 --2- 192.168.123.105:0/3288940487 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d819bdd0 0x7f15d819c1e0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f15d40037e0 tx=0x7f15d4003b40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.854+0000 7f15caffd700 1 -- 192.168.123.105:0/3288940487 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f15d4003fd0 con 0x7f15d819bdd0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.854+0000 7f15caffd700 1 -- 192.168.123.105:0/3288940487 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f15d4024460 con 0x7f15d819bdd0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.854+0000 7f15caffd700 1 -- 192.168.123.105:0/3288940487 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f15d401b440 con 0x7f15d819bdd0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.854+0000 7f15e021c700 1 -- 192.168.123.105:0/3288940487 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f15d819c920 con 0x7f15d819bdd0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.854+0000 7f15e021c700 1 -- 192.168.123.105:0/3288940487 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f15d819f580 con 0x7f15d819bdd0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.855+0000 7f15caffd700 1 -- 192.168.123.105:0/3288940487 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 7) v1 ==== 44973+0+0 (secure 0 0 0) 0x7f15d401b5a0 con 0x7f15d819bdd0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.855+0000 7f15caffd700 1 --2- 192.168.123.105:0/3288940487 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f15c4038340 0x7f15c403a7f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.855+0000 7f15caffd700 1 -- 192.168.123.105:0/3288940487 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f15d404cfd0 con 0x7f15d819bdd0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.855+0000 7f15dd7b7700 1 --2- 192.168.123.105:0/3288940487 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f15c4038340 0x7f15c403a7f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.856+0000 7f15dd7b7700 1 --2- 192.168.123.105:0/3288940487 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f15c4038340 0x7f15c403a7f0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f15cc006fd0 tx=0x7f15cc006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.856+0000 7f15e021c700 1 -- 192.168.123.105:0/3288940487 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f15d804f9e0 con 0x7f15d819bdd0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.859+0000 7f15caffd700 1 -- 192.168.123.105:0/3288940487 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f15d401f030 con 0x7f15d819bdd0
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.969+0000 7f15e021c700 1 -- 192.168.123.105:0/3288940487 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}) v1 -- 0x7f15d8105df0 con 0x7f15c4038340
2026-03-09T14:55:26.029 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.978+0000 7f15caffd700 1 -- 192.168.123.105:0/3288940487 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+0 (secure 0 0 0) 0x7f15d8105df0 con 0x7f15c4038340
2026-03-09T14:55:26.030 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.981+0000 7f15e021c700 1 -- 192.168.123.105:0/3288940487 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f15c4038340 msgr2=0x7f15c403a7f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:26.030 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.981+0000 7f15e021c700 1 --2- 192.168.123.105:0/3288940487 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f15c4038340 0x7f15c403a7f0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f15cc006fd0 tx=0x7f15cc006e40 comp rx=0 tx=0).stop
2026-03-09T14:55:26.030 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.981+0000 7f15e021c700 1 -- 192.168.123.105:0/3288940487 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d819bdd0 msgr2=0x7f15d819c1e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:26.030 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.981+0000 7f15e021c700 1 --2- 192.168.123.105:0/3288940487 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d819bdd0 0x7f15d819c1e0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f15d40037e0 tx=0x7f15d4003b40 comp rx=0 tx=0).stop
2026-03-09T14:55:26.030 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.981+0000 7f15e021c700 1 -- 192.168.123.105:0/3288940487 shutdown_connections
2026-03-09T14:55:26.030 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.981+0000 7f15e021c700 1 --2- 192.168.123.105:0/3288940487 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f15c4038340 0x7f15c403a7f0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:26.030 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.982+0000 7f15e021c700 1 --2- 192.168.123.105:0/3288940487 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f15d819bdd0 0x7f15d819c1e0 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:26.030 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.982+0000 7f15e021c700 1 -- 192.168.123.105:0/3288940487 >> 192.168.123.105:0/3288940487 conn(0x7f15d807b4b0 msgr2=0x7f15d81056e0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:26.030 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr
2026-03-09T14:55:25.982+0000 7f15e021c700 1 -- 192.168.123.105:0/3288940487 shutdown_connections 2026-03-09T14:55:26.030 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:25.982+0000 7f15e021c700 1 -- 192.168.123.105:0/3288940487 wait complete. 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.145+0000 7fd4e7ce8700 1 Processor -- start 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.146+0000 7fd4e7ce8700 1 -- start start 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.146+0000 7fd4e7ce8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 0x7fd4e0108c10 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.146+0000 7fd4e7ce8700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd4e00745b0 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.146+0000 7fd4e5a84700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 0x7fd4e0108c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.146+0000 7fd4e5a84700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 0x7fd4e0108c10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:44226/0 (socket says 192.168.123.105:44226) 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.146+0000 7fd4e5a84700 1 -- 192.168.123.105:0/2075984082 learned_addr learned my addr 192.168.123.105:0/2075984082 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.147+0000 7fd4e5a84700 1 -- 192.168.123.105:0/2075984082 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd4e00746f0 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.147+0000 7fd4e5a84700 1 --2- 192.168.123.105:0/2075984082 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 0x7fd4e0108c10 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fd4d0009a90 tx=0x7fd4d0009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=11fbe6f69a013ada server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.147+0000 7fd4e4a82700 1 -- 192.168.123.105:0/2075984082 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd4d0004030 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.147+0000 7fd4e4a82700 1 -- 192.168.123.105:0/2075984082 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fd4d000b7e0 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.147+0000 7fd4e4a82700 1 -- 192.168.123.105:0/2075984082 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd4d0003a40 con 0x7fd4e0106830 2026-03-09T14:55:26.310 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.148+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/2075984082 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 msgr2=0x7fd4e0108c10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.148+0000 7fd4e7ce8700 1 --2- 192.168.123.105:0/2075984082 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 0x7fd4e0108c10 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fd4d0009a90 tx=0x7fd4d0009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.148+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/2075984082 shutdown_connections 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.148+0000 7fd4e7ce8700 1 --2- 192.168.123.105:0/2075984082 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 0x7fd4e0108c10 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.148+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/2075984082 >> 192.168.123.105:0/2075984082 conn(0x7fd4e0100270 msgr2=0x7fd4e01026a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.148+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/2075984082 shutdown_connections 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.148+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/2075984082 wait complete. 
2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.149+0000 7fd4e7ce8700 1 Processor -- start 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.149+0000 7fd4e7ce8700 1 -- start start 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.149+0000 7fd4e7ce8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 0x7fd4e0197770 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.149+0000 7fd4e7ce8700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd4e0197cb0 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.150+0000 7fd4e5a84700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 0x7fd4e0197770 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.150+0000 7fd4e5a84700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 0x7fd4e0197770 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44240/0 (socket says 192.168.123.105:44240) 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.150+0000 7fd4e5a84700 1 -- 192.168.123.105:0/1755117871 learned_addr learned my addr 192.168.123.105:0/1755117871 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:26.310 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.150+0000 7fd4e5a84700 1 -- 192.168.123.105:0/1755117871 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd4d0009740 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.150+0000 7fd4e5a84700 1 --2- 192.168.123.105:0/1755117871 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 0x7fd4e0197770 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fd4d000bef0 tx=0x7fd4d0003b40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.151+0000 7fd4d6ffd700 1 -- 192.168.123.105:0/1755117871 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd4d0004140 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.151+0000 7fd4d6ffd700 1 -- 192.168.123.105:0/1755117871 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fd4d00042a0 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.151+0000 7fd4d6ffd700 1 -- 192.168.123.105:0/1755117871 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd4d00114c0 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.151+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/1755117871 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd4e0197eb0 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.151+0000 7fd4e7ce8700 1 
-- 192.168.123.105:0/1755117871 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd4e0198350 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.152+0000 7fd4d6ffd700 1 -- 192.168.123.105:0/1755117871 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 7) v1 ==== 44973+0+0 (secure 0 0 0) 0x7fd4d001a430 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.152+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/1755117871 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd4e0191670 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.152+0000 7fd4d6ffd700 1 --2- 192.168.123.105:0/1755117871 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd4cc038320 0x7fd4cc03a7d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.152+0000 7fd4d6ffd700 1 -- 192.168.123.105:0/1755117871 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fd4d004bec0 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.155+0000 7fd4e5283700 1 --2- 192.168.123.105:0/1755117871 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd4cc038320 0x7fd4cc03a7d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.155+0000 7fd4e5283700 1 --2- 192.168.123.105:0/1755117871 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd4cc038320 0x7fd4cc03a7d0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fd4dc006fd0 tx=0x7fd4dc006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.155+0000 7fd4d6ffd700 1 -- 192.168.123.105:0/1755117871 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fd4d0021b50 con 0x7fd4e0106830 2026-03-09T14:55:26.310 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.258+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/1755117871 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}) v1 -- 0x7fd4e0061190 con 0x7fd4cc038320 2026-03-09T14:55:26.311 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.259+0000 7fd4d6ffd700 1 -- 192.168.123.105:0/1755117871 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+16 (secure 0 0 0) 0x7fd4e0061190 con 0x7fd4cc038320 2026-03-09T14:55:26.311 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.262+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/1755117871 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd4cc038320 msgr2=0x7fd4cc03a7d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:26.311 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.262+0000 7fd4e7ce8700 1 --2- 192.168.123.105:0/1755117871 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd4cc038320 0x7fd4cc03a7d0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fd4dc006fd0 tx=0x7fd4dc006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:26.311 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.262+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/1755117871 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 msgr2=0x7fd4e0197770 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:26.311 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.262+0000 7fd4e7ce8700 1 --2- 192.168.123.105:0/1755117871 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 0x7fd4e0197770 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fd4d000bef0 tx=0x7fd4d0003b40 comp rx=0 tx=0).stop 2026-03-09T14:55:26.311 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.263+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/1755117871 shutdown_connections 2026-03-09T14:55:26.311 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.263+0000 7fd4e7ce8700 1 --2- 192.168.123.105:0/1755117871 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd4cc038320 0x7fd4cc03a7d0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:26.311 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.263+0000 7fd4e7ce8700 1 --2- 192.168.123.105:0/1755117871 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4e0106830 0x7fd4e0197770 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:26.311 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.263+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/1755117871 >> 192.168.123.105:0/1755117871 conn(0x7fd4e0100270 msgr2=0x7fd4e0100f20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:26.311 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.263+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/1755117871 shutdown_connections 
2026-03-09T14:55:26.311 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.263+0000 7fd4e7ce8700 1 -- 192.168.123.105:0/1755117871 wait complete. 2026-03-09T14:55:26.311 INFO:teuthology.orchestra.run.vm05.stdout:Generating ssh key... 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.437+0000 7f4d2f3d7700 1 Processor -- start 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.437+0000 7f4d2f3d7700 1 -- start start 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.438+0000 7f4d2f3d7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 0x7f4d28108330 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.438+0000 7f4d2f3d7700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4d28108870 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.438+0000 7f4d2d173700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 0x7f4d28108330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.438+0000 7f4d2d173700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 0x7f4d28108330 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44254/0 (socket says 192.168.123.105:44254) 2026-03-09T14:55:26.715 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.438+0000 7f4d2d173700 1 -- 192.168.123.105:0/2369849389 learned_addr learned my addr 192.168.123.105:0/2369849389 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.439+0000 7f4d2d173700 1 -- 192.168.123.105:0/2369849389 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4d281089b0 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.439+0000 7f4d2d173700 1 --2- 192.168.123.105:0/2369849389 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 0x7f4d28108330 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f4d18009a90 tx=0x7f4d18009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=102413c6a08c9d16 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.439+0000 7f4d1ffff700 1 -- 192.168.123.105:0/2369849389 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f4d18004030 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.439+0000 7f4d1ffff700 1 -- 192.168.123.105:0/2369849389 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f4d1800b7e0 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.439+0000 7f4d1ffff700 1 -- 192.168.123.105:0/2369849389 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f4d18003a40 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.440+0000 7f4d2f3d7700 1 -- 
192.168.123.105:0/2369849389 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 msgr2=0x7f4d28108330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.440+0000 7f4d2f3d7700 1 --2- 192.168.123.105:0/2369849389 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 0x7f4d28108330 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f4d18009a90 tx=0x7f4d18009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.440+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/2369849389 shutdown_connections 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.440+0000 7f4d2f3d7700 1 --2- 192.168.123.105:0/2369849389 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 0x7f4d28108330 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.440+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/2369849389 >> 192.168.123.105:0/2369849389 conn(0x7f4d2807b4b0 msgr2=0x7f4d2807b8d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.440+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/2369849389 shutdown_connections 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.441+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/2369849389 wait complete. 
2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.441+0000 7f4d2f3d7700 1 Processor -- start 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.441+0000 7f4d2f3d7700 1 -- start start 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.442+0000 7f4d2f3d7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 0x7f4d2819bba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.442+0000 7f4d2f3d7700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4d2819c0e0 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.442+0000 7f4d2d173700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 0x7f4d2819bba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.442+0000 7f4d2d173700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 0x7f4d2819bba0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44268/0 (socket says 192.168.123.105:44268) 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.442+0000 7f4d2d173700 1 -- 192.168.123.105:0/1803479053 learned_addr learned my addr 192.168.123.105:0/1803479053 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:26.715 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.442+0000 7f4d2d173700 1 -- 192.168.123.105:0/1803479053 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4d18009740 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.443+0000 7f4d2d173700 1 --2- 192.168.123.105:0/1803479053 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 0x7f4d2819bba0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f4d18003710 tx=0x7f4d18003b40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.443+0000 7f4d1e7fc700 1 -- 192.168.123.105:0/1803479053 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f4d18004110 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.443+0000 7f4d1e7fc700 1 -- 192.168.123.105:0/1803479053 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f4d1801a430 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.443+0000 7f4d1e7fc700 1 -- 192.168.123.105:0/1803479053 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f4d18011460 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.443+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/1803479053 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4d2819c2e0 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.443+0000 7f4d2f3d7700 1 
-- 192.168.123.105:0/1803479053 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4d2819c780 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.444+0000 7f4d1e7fc700 1 -- 192.168.123.105:0/1803479053 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 7) v1 ==== 44973+0+0 (secure 0 0 0) 0x7f4d180115c0 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.444+0000 7f4d1e7fc700 1 --2- 192.168.123.105:0/1803479053 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4d14038340 0x7f4d1403a7f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.445+0000 7f4d2c972700 1 --2- 192.168.123.105:0/1803479053 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4d14038340 0x7f4d1403a7f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.445+0000 7f4d1e7fc700 1 -- 192.168.123.105:0/1803479053 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f4d1804cc50 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.445+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/1803479053 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4d28062380 con 0x7f4d28107f20 2026-03-09T14:55:26.715 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.446+0000 7f4d2c972700 1 --2- 192.168.123.105:0/1803479053 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4d14038340 0x7f4d1403a7f0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f4d24006fd0 tx=0x7f4d24006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:26.716 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.448+0000 7f4d1e7fc700 1 -- 192.168.123.105:0/1803479053 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f4d1804ce60 con 0x7f4d28107f20 2026-03-09T14:55:26.716 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.553+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/1803479053 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}) v1 -- 0x7f4d28105ce0 con 0x7f4d14038340 2026-03-09T14:55:26.716 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.669+0000 7f4d1e7fc700 1 -- 192.168.123.105:0/1803479053 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+0 (secure 0 0 0) 0x7f4d28105ce0 con 0x7f4d14038340 2026-03-09T14:55:26.716 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.672+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/1803479053 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4d14038340 msgr2=0x7f4d1403a7f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:26.716 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.672+0000 7f4d2f3d7700 1 --2- 192.168.123.105:0/1803479053 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4d14038340 0x7f4d1403a7f0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f4d24006fd0 tx=0x7f4d24006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:26.716 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.672+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/1803479053 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 msgr2=0x7f4d2819bba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:26.716 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.672+0000 7f4d2f3d7700 1 --2- 192.168.123.105:0/1803479053 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 0x7f4d2819bba0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f4d18003710 tx=0x7f4d18003b40 comp rx=0 tx=0).stop 2026-03-09T14:55:26.716 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.672+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/1803479053 shutdown_connections 2026-03-09T14:55:26.716 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.672+0000 7f4d2f3d7700 1 --2- 192.168.123.105:0/1803479053 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4d14038340 0x7f4d1403a7f0 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:26.716 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.672+0000 7f4d2f3d7700 1 --2- 192.168.123.105:0/1803479053 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4d28107f20 0x7f4d2819bba0 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:26.716 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.672+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/1803479053 >> 192.168.123.105:0/1803479053 conn(0x7f4d2807b4b0 msgr2=0x7f4d281055d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:26.716 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.672+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/1803479053 shutdown_connections 
2026-03-09T14:55:26.716 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.672+0000 7f4d2f3d7700 1 -- 192.168.123.105:0/1803479053 wait complete. 2026-03-09T14:55:26.832 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:26 vm05 ceph-mon[50611]: [09/Mar/2026:14:55:25] ENGINE Bus STARTING 2026-03-09T14:55:26.832 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:26 vm05 ceph-mon[50611]: [09/Mar/2026:14:55:25] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T14:55:26.832 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:26 vm05 ceph-mon[50611]: from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T14:55:26.832 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:26 vm05 ceph-mon[50611]: from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T14:55:26.832 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:26 vm05 ceph-mon[50611]: [09/Mar/2026:14:55:25] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T14:55:26.832 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:26 vm05 ceph-mon[50611]: [09/Mar/2026:14:55:25] ENGINE Bus STARTED 2026-03-09T14:55:26.832 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:26 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:55:26.832 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:26 vm05 ceph-mon[50611]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:26.832 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:26 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:26.832 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:26 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' 
entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:55:26.832 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:26 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:26.832 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:26 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyOqi4+YN0cFEuNE+epK3rFx/DtF3RriFSg+xXPyigspQYGcYModV+a15nnuQOq+70+6SZcl8zIoMKlVKwRvKe24eW+xGf34oQgEkNWDgCxR6ChKdLruaB2Tl18lA4iiSF4Y0heN2xcDiLE8lpMEQFevtwrLXRaY/eoKwhgMjlOAKAmiVt1barDGwFDxhUSewyKLlpIy5B73IVcjQiWVhZNWADGwKPYQHuI/gTp1voLSEdZBr0LjP9sA7p+fT2i9N+yoRxNYyaYqwOdjeZJGJJdEROgddNoGGj7Nsu7pp8zS6Bd2RTw6DjKZtvyHkZTu+/ujq1rtLRwXDF+9dBZMniSzGqWATBDnH3tGxG/tiLpaqYjHFRpXmT71RyU5QwJJN7omUPr1L/o1u469LmDN9FIe45FLy+BBBTOJgzf3K0zCvdfPMhIRqvYYUiizGzy8MPJd7ttmelaXxPB4XBeeTk7hgKpmR2GrVmMPEq8BwDF9NGSusH0RC45p1W6+O9alU= ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.846+0000 7f7b48e4b700 1 Processor -- start 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.847+0000 7f7b48e4b700 1 -- start start 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.847+0000 7f7b48e4b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 0x7f7b44108350 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.847+0000 7f7b48e4b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7b44108890 con 0x7f7b44107f40 2026-03-09T14:55:27.014 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.847+0000 7f7b4259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 0x7f7b44108350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.847+0000 7f7b4259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 0x7f7b44108350 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44274/0 (socket says 192.168.123.105:44274) 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.847+0000 7f7b4259c700 1 -- 192.168.123.105:0/1462110538 learned_addr learned my addr 192.168.123.105:0/1462110538 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.848+0000 7f7b4259c700 1 -- 192.168.123.105:0/1462110538 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7b441089d0 con 0x7f7b44107f40 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.848+0000 7f7b4259c700 1 --2- 192.168.123.105:0/1462110538 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 0x7f7b44108350 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f7b34009cf0 tx=0x7f7b3400b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=ee1e6c4f663bc3de server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.848+0000 7f7b41d9b700 1 -- 192.168.123.105:0/1462110538 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map 
magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7b34004030 con 0x7f7b44107f40 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.848+0000 7f7b41d9b700 1 -- 192.168.123.105:0/1462110538 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f7b3400b810 con 0x7f7b44107f40 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.848+0000 7f7b41d9b700 1 -- 192.168.123.105:0/1462110538 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7b34003a90 con 0x7f7b44107f40 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.848+0000 7f7b48e4b700 1 -- 192.168.123.105:0/1462110538 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 msgr2=0x7f7b44108350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.848+0000 7f7b48e4b700 1 --2- 192.168.123.105:0/1462110538 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 0x7f7b44108350 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f7b34009cf0 tx=0x7f7b3400b0e0 comp rx=0 tx=0).stop 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.849+0000 7f7b48e4b700 1 -- 192.168.123.105:0/1462110538 shutdown_connections 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.849+0000 7f7b48e4b700 1 --2- 192.168.123.105:0/1462110538 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 0x7f7b44108350 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.849+0000 7f7b48e4b700 1 -- 
192.168.123.105:0/1462110538 >> 192.168.123.105:0/1462110538 conn(0x7f7b4407b4b0 msgr2=0x7f7b4407b8d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.849+0000 7f7b48e4b700 1 -- 192.168.123.105:0/1462110538 shutdown_connections 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.849+0000 7f7b48e4b700 1 -- 192.168.123.105:0/1462110538 wait complete. 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.850+0000 7f7b48e4b700 1 Processor -- start 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.850+0000 7f7b48e4b700 1 -- start start 2026-03-09T14:55:27.014 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.850+0000 7f7b48e4b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 0x7f7b4419bb00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.850+0000 7f7b48e4b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7b4419c040 con 0x7f7b44107f40 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.850+0000 7f7b4259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 0x7f7b4419bb00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.850+0000 7f7b4259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 0x7f7b4419bb00 unknown :-1 s=HELLO_CONNECTING 
pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44288/0 (socket says 192.168.123.105:44288) 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.850+0000 7f7b4259c700 1 -- 192.168.123.105:0/3423497286 learned_addr learned my addr 192.168.123.105:0/3423497286 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.850+0000 7f7b4259c700 1 -- 192.168.123.105:0/3423497286 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7b34009740 con 0x7f7b44107f40 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.850+0000 7f7b4259c700 1 --2- 192.168.123.105:0/3423497286 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 0x7f7b4419bb00 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f7b34000c00 tx=0x7f7b34011870 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.851+0000 7f7b3b7fe700 1 -- 192.168.123.105:0/3423497286 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7b34011b10 con 0x7f7b44107f40 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.851+0000 7f7b3b7fe700 1 -- 192.168.123.105:0/3423497286 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f7b34011c70 con 0x7f7b44107f40 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.851+0000 7f7b3b7fe700 1 -- 192.168.123.105:0/3423497286 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 
0x7f7b3401a5b0 con 0x7f7b44107f40 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.851+0000 7f7b48e4b700 1 -- 192.168.123.105:0/3423497286 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7b4419c240 con 0x7f7b44107f40 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.851+0000 7f7b48e4b700 1 -- 192.168.123.105:0/3423497286 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7b4419c6e0 con 0x7f7b44107f40 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.852+0000 7f7b3b7fe700 1 -- 192.168.123.105:0/3423497286 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 7) v1 ==== 44973+0+0 (secure 0 0 0) 0x7f7b3401a710 con 0x7f7b44107f40 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.852+0000 7f7b3b7fe700 1 --2- 192.168.123.105:0/3423497286 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7b24038400 0x7f7b2403a8b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.852+0000 7f7b3b7fe700 1 -- 192.168.123.105:0/3423497286 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f7b3401e070 con 0x7f7b44107f40 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.853+0000 7f7b3bfff700 1 --2- 192.168.123.105:0/3423497286 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7b24038400 0x7f7b2403a8b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:27.015 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.853+0000 7f7b48e4b700 1 -- 192.168.123.105:0/3423497286 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7b4404f9e0 con 0x7f7b44107f40 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.856+0000 7f7b3bfff700 1 --2- 192.168.123.105:0/3423497286 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7b24038400 0x7f7b2403a8b0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f7b2c006fd0 tx=0x7f7b2c006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.856+0000 7f7b3b7fe700 1 -- 192.168.123.105:0/3423497286 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f7b3401a9c0 con 0x7f7b44107f40 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.960+0000 7f7b48e4b700 1 -- 192.168.123.105:0/3423497286 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}) v1 -- 0x7f7b44105be0 con 0x7f7b24038400 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.961+0000 7f7b3b7fe700 1 -- 192.168.123.105:0/3423497286 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+595 (secure 0 0 0) 0x7f7b44105be0 con 0x7f7b24038400 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.963+0000 7f7b48e4b700 1 -- 192.168.123.105:0/3423497286 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7b24038400 msgr2=0x7f7b2403a8b0 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.963+0000 7f7b48e4b700 1 --2- 192.168.123.105:0/3423497286 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7b24038400 0x7f7b2403a8b0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f7b2c006fd0 tx=0x7f7b2c006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.963+0000 7f7b48e4b700 1 -- 192.168.123.105:0/3423497286 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 msgr2=0x7f7b4419bb00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.963+0000 7f7b48e4b700 1 --2- 192.168.123.105:0/3423497286 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 0x7f7b4419bb00 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f7b34000c00 tx=0x7f7b34011870 comp rx=0 tx=0).stop 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.963+0000 7f7b48e4b700 1 -- 192.168.123.105:0/3423497286 shutdown_connections 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.963+0000 7f7b48e4b700 1 --2- 192.168.123.105:0/3423497286 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7b24038400 0x7f7b2403a8b0 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.963+0000 7f7b48e4b700 1 --2- 192.168.123.105:0/3423497286 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7b44107f40 0x7f7b4419bb00 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:27.015 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.963+0000 7f7b48e4b700 1 -- 192.168.123.105:0/3423497286 >> 192.168.123.105:0/3423497286 conn(0x7f7b4407b4b0 msgr2=0x7f7b441054d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.964+0000 7f7b48e4b700 1 -- 192.168.123.105:0/3423497286 shutdown_connections 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:26.964+0000 7f7b48e4b700 1 -- 192.168.123.105:0/3423497286 wait complete. 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:Adding key to root@localhost authorized_keys... 2026-03-09T14:55:27.015 INFO:teuthology.orchestra.run.vm05.stdout:Adding host vm05... 2026-03-09T14:55:27.901 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:27 vm05 ceph-mon[50611]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:27.901 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:27 vm05 ceph-mon[50611]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:27.901 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:27 vm05 ceph-mon[50611]: Generating ssh key... 
2026-03-09T14:55:27.901 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:27 vm05 ceph-mon[50611]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:27.901 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:27 vm05 ceph-mon[50611]: mgrmap e8: vm05.lhsexd(active, since 2s) 2026-03-09T14:55:28.991 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:28 vm05 ceph-mon[50611]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm05", "addr": "192.168.123.105", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:28.991 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:28 vm05 ceph-mon[50611]: Deploying cephadm binary to vm05 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Added host 'vm05' with addr '192.168.123.105' 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.141+0000 7fb33e8a3700 1 Processor -- start 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.142+0000 7fb33e8a3700 1 -- start start 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.142+0000 7fb33e8a3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 0x7fb338108350 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.142+0000 7fb33e8a3700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb338108890 con 0x7fb338107f40 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.142+0000 7fb337fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 
0x7fb338108350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.142+0000 7fb337fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 0x7fb338108350 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44296/0 (socket says 192.168.123.105:44296) 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.142+0000 7fb337fff700 1 -- 192.168.123.105:0/4162906943 learned_addr learned my addr 192.168.123.105:0/4162906943 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.143+0000 7fb337fff700 1 -- 192.168.123.105:0/4162906943 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb3381089d0 con 0x7fb338107f40 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.143+0000 7fb337fff700 1 --2- 192.168.123.105:0/4162906943 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 0x7fb338108350 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fb320009a90 tx=0x7fb320009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=9104c15dda555f78 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.143+0000 7fb336ffd700 1 -- 192.168.123.105:0/4162906943 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb320004030 con 0x7fb338107f40 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:27.143+0000 7fb336ffd700 1 -- 192.168.123.105:0/4162906943 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fb32000b7e0 con 0x7fb338107f40 2026-03-09T14:55:29.048 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.143+0000 7fb336ffd700 1 -- 192.168.123.105:0/4162906943 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb320003a40 con 0x7fb338107f40 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.143+0000 7fb33e8a3700 1 -- 192.168.123.105:0/4162906943 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 msgr2=0x7fb338108350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.143+0000 7fb33e8a3700 1 --2- 192.168.123.105:0/4162906943 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 0x7fb338108350 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fb320009a90 tx=0x7fb320009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.144+0000 7fb33e8a3700 1 -- 192.168.123.105:0/4162906943 shutdown_connections 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.144+0000 7fb33e8a3700 1 --2- 192.168.123.105:0/4162906943 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 0x7fb338108350 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.144+0000 7fb33e8a3700 1 -- 192.168.123.105:0/4162906943 >> 192.168.123.105:0/4162906943 conn(0x7fb338103770 msgr2=0x7fb338105b50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:29.049 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.144+0000 7fb33e8a3700 1 -- 192.168.123.105:0/4162906943 shutdown_connections 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.144+0000 7fb33e8a3700 1 -- 192.168.123.105:0/4162906943 wait complete. 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.144+0000 7fb33e8a3700 1 Processor -- start 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.144+0000 7fb33e8a3700 1 -- start start 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.145+0000 7fb33e8a3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 0x7fb33819bbe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.145+0000 7fb33e8a3700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb33819c120 con 0x7fb338107f40 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.145+0000 7fb337fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 0x7fb33819bbe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.145+0000 7fb337fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 0x7fb33819bbe0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44308/0 (socket says 
192.168.123.105:44308) 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.145+0000 7fb337fff700 1 -- 192.168.123.105:0/3482783140 learned_addr learned my addr 192.168.123.105:0/3482783140 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.145+0000 7fb337fff700 1 -- 192.168.123.105:0/3482783140 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb320009740 con 0x7fb338107f40 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.145+0000 7fb337fff700 1 --2- 192.168.123.105:0/3482783140 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 0x7fb33819bbe0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fb320003710 tx=0x7fb320003b40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.145+0000 7fb3357fa700 1 -- 192.168.123.105:0/3482783140 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb320004110 con 0x7fb338107f40 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.145+0000 7fb3357fa700 1 -- 192.168.123.105:0/3482783140 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7fb32001a430 con 0x7fb338107f40 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.146+0000 7fb3357fa700 1 -- 192.168.123.105:0/3482783140 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fb320011460 con 0x7fb338107f40 2026-03-09T14:55:29.049 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:27.146+0000 7fb33e8a3700 1 -- 192.168.123.105:0/3482783140 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb33819c320 con 0x7fb338107f40 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.146+0000 7fb33e8a3700 1 -- 192.168.123.105:0/3482783140 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb33819c7c0 con 0x7fb338107f40 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.147+0000 7fb3357fa700 1 -- 192.168.123.105:0/3482783140 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 7) v1 ==== 44973+0+0 (secure 0 0 0) 0x7fb32001a5a0 con 0x7fb338107f40 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.147+0000 7fb33e8a3700 1 -- 192.168.123.105:0/3482783140 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb338062380 con 0x7fb338107f40 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.147+0000 7fb3357fa700 1 --2- 192.168.123.105:0/3482783140 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb324038340 0x7fb32403a7f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.147+0000 7fb3357fa700 1 -- 192.168.123.105:0/3482783140 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fb320027020 con 0x7fb338107f40 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.147+0000 7fb3377fe700 1 --2- 192.168.123.105:0/3482783140 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb324038340 0x7fb32403a7f0 
unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.148+0000 7fb3377fe700 1 --2- 192.168.123.105:0/3482783140 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb324038340 0x7fb32403a7f0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fb328006fd0 tx=0x7fb328006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.150+0000 7fb3357fa700 1 -- 192.168.123.105:0/3482783140 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fb32001aa60 con 0x7fb338107f40 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.253+0000 7fb33e8a3700 1 -- 192.168.123.105:0/3482783140 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm05", "addr": "192.168.123.105", "target": ["mon-mgr", ""]}) v1 -- 0x7fb338105aa0 con 0x7fb324038340 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:27.671+0000 7fb3357fa700 1 -- 192.168.123.105:0/3482783140 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mgrmap(e 8) v1 ==== 45079+0+0 (secure 0 0 0) 0x7fb3200115c0 con 0x7fb338107f40 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:28.987+0000 7fb3357fa700 1 -- 192.168.123.105:0/3482783140 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+46 (secure 0 0 0) 0x7fb338105aa0 con 0x7fb324038340 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:28.990+0000 7fb33e8a3700 1 -- 192.168.123.105:0/3482783140 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb324038340 msgr2=0x7fb32403a7f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:28.990+0000 7fb33e8a3700 1 --2- 192.168.123.105:0/3482783140 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb324038340 0x7fb32403a7f0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fb328006fd0 tx=0x7fb328006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:28.990+0000 7fb33e8a3700 1 -- 192.168.123.105:0/3482783140 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 msgr2=0x7fb33819bbe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:28.990+0000 7fb33e8a3700 1 --2- 192.168.123.105:0/3482783140 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 0x7fb33819bbe0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fb320003710 tx=0x7fb320003b40 comp rx=0 tx=0).stop 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:28.990+0000 7fb33e8a3700 1 -- 192.168.123.105:0/3482783140 shutdown_connections 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:28.990+0000 7fb33e8a3700 1 --2- 192.168.123.105:0/3482783140 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb324038340 0x7fb32403a7f0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:28.990+0000 7fb33e8a3700 1 --2- 192.168.123.105:0/3482783140 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb338107f40 0x7fb33819bbe0 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:28.990+0000 7fb33e8a3700 1 -- 192.168.123.105:0/3482783140 >> 192.168.123.105:0/3482783140 conn(0x7fb338103770 msgr2=0x7fb338105390 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:28.990+0000 7fb33e8a3700 1 -- 192.168.123.105:0/3482783140 shutdown_connections 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:28.990+0000 7fb33e8a3700 1 -- 192.168.123.105:0/3482783140 wait complete. 2026-03-09T14:55:29.050 INFO:teuthology.orchestra.run.vm05.stdout:Deploying mon service with default placement... 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled mon update... 
2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.191+0000 7f53d728f700 1 Processor -- start 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.191+0000 7f53d728f700 1 -- start start 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.191+0000 7f53d728f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d0104a80 0x7f53d0104e90 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.191+0000 7f53d728f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f53d01053d0 con 0x7f53d0104a80 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.191+0000 7f53d502b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d0104a80 0x7f53d0104e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.191+0000 7f53d502b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d0104a80 0x7f53d0104e90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35040/0 (socket says 192.168.123.105:35040) 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.191+0000 7f53d502b700 1 -- 192.168.123.105:0/721597713 learned_addr learned my addr 192.168.123.105:0/721597713 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:29.364 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.192+0000 7f53d502b700 1 -- 192.168.123.105:0/721597713 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f53d0105510 con 0x7f53d0104a80 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.192+0000 7f53d502b700 1 --2- 192.168.123.105:0/721597713 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d0104a80 0x7f53d0104e90 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f53cc009a90 tx=0x7f53cc009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=574d2919b8578faa server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.192+0000 7f53c7fff700 1 -- 192.168.123.105:0/721597713 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f53cc004030 con 0x7f53d0104a80 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.192+0000 7f53c7fff700 1 -- 192.168.123.105:0/721597713 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f53cc00b7e0 con 0x7f53d0104a80 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.193+0000 7f53d728f700 1 -- 192.168.123.105:0/721597713 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d0104a80 msgr2=0x7f53d0104e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.193+0000 7f53d728f700 1 --2- 192.168.123.105:0/721597713 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d0104a80 0x7f53d0104e90 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f53cc009a90 tx=0x7f53cc009da0 comp rx=0 tx=0).stop 
2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.193+0000 7f53d728f700 1 -- 192.168.123.105:0/721597713 shutdown_connections 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.193+0000 7f53d728f700 1 --2- 192.168.123.105:0/721597713 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d0104a80 0x7f53d0104e90 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.193+0000 7f53d728f700 1 -- 192.168.123.105:0/721597713 >> 192.168.123.105:0/721597713 conn(0x7f53d01000f0 msgr2=0x7f53d0102500 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.193+0000 7f53d728f700 1 -- 192.168.123.105:0/721597713 shutdown_connections 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.193+0000 7f53d728f700 1 -- 192.168.123.105:0/721597713 wait complete. 
2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.193+0000 7f53d728f700 1 Processor -- start 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.193+0000 7f53d728f700 1 -- start start 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.193+0000 7f53d728f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d01a0000 0x7f53d01a0410 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.193+0000 7f53d728f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f53d01053d0 con 0x7f53d01a0000 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.194+0000 7f53d502b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d01a0000 0x7f53d01a0410 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.194+0000 7f53d502b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d01a0000 0x7f53d01a0410 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35056/0 (socket says 192.168.123.105:35056) 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.194+0000 7f53d502b700 1 -- 192.168.123.105:0/1296424996 learned_addr learned my addr 192.168.123.105:0/1296424996 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:29.364 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.194+0000 7f53d502b700 1 -- 192.168.123.105:0/1296424996 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f53cc009740 con 0x7f53d01a0000 2026-03-09T14:55:29.364 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.194+0000 7f53d502b700 1 --2- 192.168.123.105:0/1296424996 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d01a0000 0x7f53d01a0410 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f53cc009130 tx=0x7f53cc00be80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.194+0000 7f53c67fc700 1 -- 192.168.123.105:0/1296424996 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f53cc01a670 con 0x7f53d01a0000 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.195+0000 7f53d728f700 1 -- 192.168.123.105:0/1296424996 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f53d01a0950 con 0x7f53d01a0000 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.195+0000 7f53d728f700 1 -- 192.168.123.105:0/1296424996 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f53d01a35e0 con 0x7f53d01a0000 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.195+0000 7f53c67fc700 1 -- 192.168.123.105:0/1296424996 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f53cc01ac70 con 0x7f53d01a0000 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.195+0000 7f53c67fc700 1 
-- 192.168.123.105:0/1296424996 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f53cc0044b0 con 0x7f53d01a0000 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.196+0000 7f53c67fc700 1 -- 192.168.123.105:0/1296424996 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 8) v1 ==== 45079+0+0 (secure 0 0 0) 0x7f53cc003c60 con 0x7f53d01a0000 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.196+0000 7f53c67fc700 1 --2- 192.168.123.105:0/1296424996 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53bc0384b0 0x7f53bc03a960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.196+0000 7f53d482a700 1 --2- 192.168.123.105:0/1296424996 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53bc0384b0 0x7f53bc03a960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.196+0000 7f53d482a700 1 --2- 192.168.123.105:0/1296424996 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53bc0384b0 0x7f53bc03a960 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f53c0006fd0 tx=0x7f53c0006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.196+0000 7f53c67fc700 1 -- 192.168.123.105:0/1296424996 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f53cc04b5f0 con 0x7f53d01a0000 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:29.197+0000 7f53d728f700 1 -- 192.168.123.105:0/1296424996 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f53b4005320 con 0x7f53d01a0000 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.200+0000 7f53c67fc700 1 -- 192.168.123.105:0/1296424996 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f53cc01a7d0 con 0x7f53d01a0000 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.322+0000 7f53d728f700 1 -- 192.168.123.105:0/1296424996 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}) v1 -- 0x7f53b4000bf0 con 0x7f53bc0384b0 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.327+0000 7f53c67fc700 1 -- 192.168.123.105:0/1296424996 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+24 (secure 0 0 0) 0x7f53b4000bf0 con 0x7f53bc0384b0 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.330+0000 7f53bbfff700 1 -- 192.168.123.105:0/1296424996 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53bc0384b0 msgr2=0x7f53bc03a960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.330+0000 7f53bbfff700 1 --2- 192.168.123.105:0/1296424996 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53bc0384b0 0x7f53bc03a960 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f53c0006fd0 tx=0x7f53c0006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: 
stderr 2026-03-09T14:55:29.330+0000 7f53bbfff700 1 -- 192.168.123.105:0/1296424996 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d01a0000 msgr2=0x7f53d01a0410 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.330+0000 7f53bbfff700 1 --2- 192.168.123.105:0/1296424996 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d01a0000 0x7f53d01a0410 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f53cc009130 tx=0x7f53cc00be80 comp rx=0 tx=0).stop 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.330+0000 7f53bbfff700 1 -- 192.168.123.105:0/1296424996 shutdown_connections 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.330+0000 7f53bbfff700 1 --2- 192.168.123.105:0/1296424996 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53bc0384b0 0x7f53bc03a960 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.330+0000 7f53bbfff700 1 --2- 192.168.123.105:0/1296424996 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53d01a0000 0x7f53d01a0410 secure :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f53cc009130 tx=0x7f53cc00be80 comp rx=0 tx=0).stop 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.330+0000 7f53bbfff700 1 -- 192.168.123.105:0/1296424996 >> 192.168.123.105:0/1296424996 conn(0x7f53d01000f0 msgr2=0x7f53d0100c60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.332+0000 7f53bbfff700 1 -- 192.168.123.105:0/1296424996 shutdown_connections 2026-03-09T14:55:29.365 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.332+0000 7f53bbfff700 1 -- 192.168.123.105:0/1296424996 wait complete. 2026-03-09T14:55:29.365 INFO:teuthology.orchestra.run.vm05.stdout:Deploying mgr service with default placement... 2026-03-09T14:55:29.711 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-09T14:55:29.712 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.513+0000 7f5837efe700 1 Processor -- start 2026-03-09T14:55:29.712 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.514+0000 7f5837efe700 1 -- start start 2026-03-09T14:55:29.712 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.514+0000 7f5837efe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 0x7f5830108350 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:29.712 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.514+0000 7f5837efe700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5830108890 con 0x7f5830107f40 2026-03-09T14:55:29.712 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.514+0000 7f5835c9a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 0x7f5830108350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:29.712 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.514+0000 7f5835c9a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 0x7f5830108350 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:35064/0 (socket says 192.168.123.105:35064) 2026-03-09T14:55:29.712 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.514+0000 7f5835c9a700 1 -- 192.168.123.105:0/1524906711 learned_addr learned my addr 192.168.123.105:0/1524906711 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:29.712 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.515+0000 7f5835c9a700 1 -- 192.168.123.105:0/1524906711 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f58301089d0 con 0x7f5830107f40 2026-03-09T14:55:29.712 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.515+0000 7f5835c9a700 1 --2- 192.168.123.105:0/1524906711 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 0x7f5830108350 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f5820009a90 tx=0x7f5820009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=750ec7d7668acb1c server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.515+0000 7f5834c98700 1 -- 192.168.123.105:0/1524906711 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5820004030 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.515+0000 7f5834c98700 1 -- 192.168.123.105:0/1524906711 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f582000b7e0 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.515+0000 7f5834c98700 1 -- 192.168.123.105:0/1524906711 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5820003a40 con 0x7f5830107f40 2026-03-09T14:55:29.713 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.516+0000 7f5837efe700 1 -- 192.168.123.105:0/1524906711 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 msgr2=0x7f5830108350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.516+0000 7f5837efe700 1 --2- 192.168.123.105:0/1524906711 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 0x7f5830108350 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f5820009a90 tx=0x7f5820009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.516+0000 7f5837efe700 1 -- 192.168.123.105:0/1524906711 shutdown_connections 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.516+0000 7f5837efe700 1 --2- 192.168.123.105:0/1524906711 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 0x7f5830108350 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.516+0000 7f5837efe700 1 -- 192.168.123.105:0/1524906711 >> 192.168.123.105:0/1524906711 conn(0x7f5830103770 msgr2=0x7f5830105b50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.516+0000 7f5837efe700 1 -- 192.168.123.105:0/1524906711 shutdown_connections 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.516+0000 7f5837efe700 1 -- 192.168.123.105:0/1524906711 wait complete. 
2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.517+0000 7f5837efe700 1 Processor -- start 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.517+0000 7f5837efe700 1 -- start start 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.517+0000 7f5837efe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 0x7f583007eb70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.517+0000 7f5837efe700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f583007f0b0 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.517+0000 7f5835c9a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 0x7f583007eb70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.517+0000 7f5835c9a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 0x7f583007eb70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35068/0 (socket says 192.168.123.105:35068) 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.517+0000 7f5835c9a700 1 -- 192.168.123.105:0/223443330 learned_addr learned my addr 192.168.123.105:0/223443330 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:29.713 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.517+0000 7f5835c9a700 1 -- 192.168.123.105:0/223443330 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5820009740 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.517+0000 7f5835c9a700 1 --2- 192.168.123.105:0/223443330 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 0x7f583007eb70 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f582000bf20 tx=0x7f5820003bf0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.518+0000 7f5826ffd700 1 -- 192.168.123.105:0/223443330 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f58200041f0 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.518+0000 7f5826ffd700 1 -- 192.168.123.105:0/223443330 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1075+0+0 (secure 0 0 0) 0x7f5820004350 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.518+0000 7f5826ffd700 1 -- 192.168.123.105:0/223443330 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f58200114a0 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.518+0000 7f5837efe700 1 -- 192.168.123.105:0/223443330 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f583007f2b0 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.519+0000 7f5837efe700 1 -- 
192.168.123.105:0/223443330 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f583007b820 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.519+0000 7f5826ffd700 1 -- 192.168.123.105:0/223443330 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 8) v1 ==== 45079+0+0 (secure 0 0 0) 0x7f582001a480 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.519+0000 7f5826ffd700 1 --2- 192.168.123.105:0/223443330 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f581c037ff0 0x7f581c03a4a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.519+0000 7f5826ffd700 1 -- 192.168.123.105:0/223443330 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f5820020350 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.520+0000 7f5835499700 1 --2- 192.168.123.105:0/223443330 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f581c037ff0 0x7f581c03a4a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.520+0000 7f5835499700 1 --2- 192.168.123.105:0/223443330 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f581c037ff0 0x7f581c03a4a0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f582c006fd0 tx=0x7f582c006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:29.520+0000 7f5837efe700 1 -- 192.168.123.105:0/223443330 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f583010c3d0 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.523+0000 7f5826ffd700 1 -- 192.168.123.105:0/223443330 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f58200044c0 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.530+0000 7f5826ffd700 1 -- 192.168.123.105:0/223443330 <== mon.0 v2:192.168.123.105:3300/0 7 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f58200044c0 con 0x7f5830107f40 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.659+0000 7f5837efe700 1 -- 192.168.123.105:0/223443330 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}) v1 -- 0x7f583007bee0 con 0x7f581c037ff0 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.665+0000 7f5826ffd700 1 -- 192.168.123.105:0/223443330 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+24 (secure 0 0 0) 0x7f583007bee0 con 0x7f581c037ff0 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.667+0000 7f5837efe700 1 -- 192.168.123.105:0/223443330 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f581c037ff0 msgr2=0x7f581c03a4a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.667+0000 7f5837efe700 1 --2- 
192.168.123.105:0/223443330 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f581c037ff0 0x7f581c03a4a0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f582c006fd0 tx=0x7f582c006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.668+0000 7f5837efe700 1 -- 192.168.123.105:0/223443330 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 msgr2=0x7f583007eb70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.668+0000 7f5837efe700 1 --2- 192.168.123.105:0/223443330 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 0x7f583007eb70 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f582000bf20 tx=0x7f5820003bf0 comp rx=0 tx=0).stop 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.668+0000 7f5837efe700 1 -- 192.168.123.105:0/223443330 shutdown_connections 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.668+0000 7f5837efe700 1 --2- 192.168.123.105:0/223443330 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f581c037ff0 0x7f581c03a4a0 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.668+0000 7f5837efe700 1 --2- 192.168.123.105:0/223443330 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5830107f40 0x7f583007eb70 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.668+0000 7f5837efe700 1 -- 192.168.123.105:0/223443330 >> 192.168.123.105:0/223443330 conn(0x7f5830103770 
msgr2=0x7f5830105390 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.669+0000 7f5837efe700 1 -- 192.168.123.105:0/223443330 shutdown_connections 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.669+0000 7f5837efe700 1 -- 192.168.123.105:0/223443330 wait complete. 2026-03-09T14:55:29.713 INFO:teuthology.orchestra.run.vm05.stdout:Deploying crash service with default placement... 2026-03-09T14:55:30.012 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled crash update... 2026-03-09T14:55:30.012 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.852+0000 7f49af59e700 1 Processor -- start 2026-03-09T14:55:30.012 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.852+0000 7f49af59e700 1 -- start start 2026-03-09T14:55:30.012 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.852+0000 7f49af59e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 0x7f49b0071060 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:30.012 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.852+0000 7f49af59e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f49b00715a0 con 0x7f49b0072a40 2026-03-09T14:55:30.012 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.853+0000 7f49ae59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 0x7f49b0071060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:30.012 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:29.853+0000 7f49ae59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 0x7f49b0071060 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35074/0 (socket says 192.168.123.105:35074) 2026-03-09T14:55:30.012 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.853+0000 7f49ae59c700 1 -- 192.168.123.105:0/3565377467 learned_addr learned my addr 192.168.123.105:0/3565377467 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:30.012 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.853+0000 7f49ae59c700 1 -- 192.168.123.105:0/3565377467 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f49b00716e0 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.853+0000 7f49ae59c700 1 --2- 192.168.123.105:0/3565377467 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 0x7f49b0071060 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f49a0009cf0 tx=0x7f49a000b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=d3432371ec8efe25 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.853+0000 7f49ad59a700 1 -- 192.168.123.105:0/3565377467 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f49a0004030 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.853+0000 7f49ad59a700 1 -- 192.168.123.105:0/3565377467 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f49a000b810 con 0x7f49b0072a40 2026-03-09T14:55:30.013 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.854+0000 7f49af59e700 1 -- 192.168.123.105:0/3565377467 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 msgr2=0x7f49b0071060 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.854+0000 7f49af59e700 1 --2- 192.168.123.105:0/3565377467 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 0x7f49b0071060 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f49a0009cf0 tx=0x7f49a000b0e0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.854+0000 7f49af59e700 1 -- 192.168.123.105:0/3565377467 shutdown_connections 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.854+0000 7f49af59e700 1 --2- 192.168.123.105:0/3565377467 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 0x7f49b0071060 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.854+0000 7f49af59e700 1 -- 192.168.123.105:0/3565377467 >> 192.168.123.105:0/3565377467 conn(0x7f49b006c9d0 msgr2=0x7f49b006ee00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.854+0000 7f49af59e700 1 -- 192.168.123.105:0/3565377467 shutdown_connections 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.854+0000 7f49af59e700 1 -- 192.168.123.105:0/3565377467 wait complete. 
2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.854+0000 7f49af59e700 1 Processor -- start 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.854+0000 7f49af59e700 1 -- start start 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.854+0000 7f49af59e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 0x7f49b011b060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.854+0000 7f49af59e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f49b00715a0 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.855+0000 7f49ae59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 0x7f49b011b060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.855+0000 7f49ae59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 0x7f49b011b060 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35084/0 (socket says 192.168.123.105:35084) 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.855+0000 7f49ae59c700 1 -- 192.168.123.105:0/3028401182 learned_addr learned my addr 192.168.123.105:0/3028401182 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:30.013 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.855+0000 7f49ae59c700 1 -- 192.168.123.105:0/3028401182 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f49a0009740 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.855+0000 7f49ae59c700 1 --2- 192.168.123.105:0/3028401182 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 0x7f49b011b060 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f49a0003f60 tx=0x7f49a0004040 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.855+0000 7f499f7fe700 1 -- 192.168.123.105:0/3028401182 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f49a0004290 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.855+0000 7f49af59e700 1 -- 192.168.123.105:0/3028401182 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f49b011ca30 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.855+0000 7f49af59e700 1 -- 192.168.123.105:0/3028401182 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f49b011b8b0 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.856+0000 7f499f7fe700 1 -- 192.168.123.105:0/3028401182 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f49a00043f0 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.856+0000 7f499f7fe700 1 
-- 192.168.123.105:0/3028401182 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f49a001a5b0 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.857+0000 7f499f7fe700 1 -- 192.168.123.105:0/3028401182 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 8) v1 ==== 45079+0+0 (secure 0 0 0) 0x7f49a001a710 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.857+0000 7f499f7fe700 1 --2- 192.168.123.105:0/3028401182 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f49980384b0 0x7f499803a960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.857+0000 7f49af59e700 1 -- 192.168.123.105:0/3028401182 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f49b0062380 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.857+0000 7f49add9b700 1 --2- 192.168.123.105:0/3028401182 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f49980384b0 0x7f499803a960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.858+0000 7f49add9b700 1 --2- 192.168.123.105:0/3028401182 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f49980384b0 0x7f499803a960 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f49a4009990 tx=0x7f49a4006e30 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:30.013 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.858+0000 7f499f7fe700 1 -- 192.168.123.105:0/3028401182 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f49a004d110 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.860+0000 7f499f7fe700 1 -- 192.168.123.105:0/3028401182 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f49a007f720 con 0x7f49b0072a40 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.971+0000 7f49af59e700 1 -- 192.168.123.105:0/3028401182 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}) v1 -- 0x7f49b006ec90 con 0x7f49980384b0 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.977+0000 7f499f7fe700 1 -- 192.168.123.105:0/3028401182 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+26 (secure 0 0 0) 0x7f49b006ec90 con 0x7f49980384b0 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.979+0000 7f49af59e700 1 -- 192.168.123.105:0/3028401182 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f49980384b0 msgr2=0x7f499803a960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.979+0000 7f49af59e700 1 --2- 192.168.123.105:0/3028401182 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f49980384b0 0x7f499803a960 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f49a4009990 tx=0x7f49a4006e30 comp rx=0 tx=0).stop 2026-03-09T14:55:30.013 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.979+0000 7f49af59e700 1 -- 192.168.123.105:0/3028401182 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 msgr2=0x7f49b011b060 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.979+0000 7f49af59e700 1 --2- 192.168.123.105:0/3028401182 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 0x7f49b011b060 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f49a0003f60 tx=0x7f49a0004040 comp rx=0 tx=0).stop 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.981+0000 7f49af59e700 1 -- 192.168.123.105:0/3028401182 shutdown_connections 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.981+0000 7f49af59e700 1 --2- 192.168.123.105:0/3028401182 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f49980384b0 0x7f499803a960 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.981+0000 7f49af59e700 1 --2- 192.168.123.105:0/3028401182 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49b0072a40 0x7f49b011b060 secure :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f49a0003f60 tx=0x7f49a0004040 comp rx=0 tx=0).stop 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.981+0000 7f49af59e700 1 -- 192.168.123.105:0/3028401182 >> 192.168.123.105:0/3028401182 conn(0x7f49b006c9d0 msgr2=0x7f49b006e070 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.983+0000 7f49af59e700 1 -- 192.168.123.105:0/3028401182 
shutdown_connections 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:29.983+0000 7f49af59e700 1 -- 192.168.123.105:0/3028401182 wait complete. 2026-03-09T14:55:30.013 INFO:teuthology.orchestra.run.vm05.stdout:Deploying ceph-exporter service with default placement... 2026-03-09T14:55:30.297 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:29 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:30.297 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:29 vm05 ceph-mon[50611]: Added host vm05 2026-03-09T14:55:30.297 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:29 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:55:30.297 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:29 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:30.297 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:29 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:30.297 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:29 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:30.297 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:29 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:30.297 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:29 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled ceph-exporter update... 
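The `mgr_command` entries above (`"prefix": "orch apply"` with `service_type` of `mgr`, `crash`, and later `ceph-exporter`) all share one JSON shape. A minimal sketch that reproduces that payload shape — field names are taken from the log lines themselves, not from any Ceph API reference:

```python
import json

# Sketch only: mirrors the mgr_command payloads visible in this log, e.g.
# {"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}.
# The keys and the "mon-mgr" target value are copied from the log, and this
# is illustrative of the wire payload, not a supported client API.
def orch_apply_payload(service_type: str) -> str:
    return json.dumps({
        "prefix": "orch apply",
        "service_type": service_type,
        "target": ["mon-mgr", ""],
    })

payload = orch_apply_payload("crash")
```

Each "Deploying ... service with default placement" / "Scheduled ... update" pair in the transcript corresponds to one such payload being sent to the active mgr.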
2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.158+0000 7fd05ad8d700 1 Processor -- start 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.158+0000 7fd05ad8d700 1 -- start start 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.158+0000 7fd05ad8d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd054071410 0x7fd054071820 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.158+0000 7fd05ad8d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd054071d60 con 0x7fd054071410 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.159+0000 7fd059d8b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd054071410 0x7fd054071820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.159+0000 7fd059d8b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd054071410 0x7fd054071820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35100/0 (socket says 192.168.123.105:35100) 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.159+0000 7fd059d8b700 1 -- 192.168.123.105:0/3342271064 learned_addr learned my addr 192.168.123.105:0/3342271064 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:30.352 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.159+0000 7fd059d8b700 1 -- 192.168.123.105:0/3342271064 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd054071ea0 con 0x7fd054071410 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.159+0000 7fd059d8b700 1 --2- 192.168.123.105:0/3342271064 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd054071410 0x7fd054071820 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7fd05000ab30 tx=0x7fd050010730 comp rx=0 tx=0).ready entity=mon.0 client_cookie=ea158aa3b02f8002 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.159+0000 7fd058d89700 1 -- 192.168.123.105:0/3342271064 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd050010e00 con 0x7fd054071410 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.160+0000 7fd058d89700 1 -- 192.168.123.105:0/3342271064 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd050004510 con 0x7fd054071410 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.160+0000 7fd05ad8d700 1 -- 192.168.123.105:0/3342271064 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd054071410 msgr2=0x7fd054071820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.160+0000 7fd05ad8d700 1 --2- 192.168.123.105:0/3342271064 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd054071410 0x7fd054071820 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7fd05000ab30 tx=0x7fd050010730 comp rx=0 tx=0).stop 
2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.160+0000 7fd05ad8d700 1 -- 192.168.123.105:0/3342271064 shutdown_connections 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.160+0000 7fd05ad8d700 1 --2- 192.168.123.105:0/3342271064 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd054071410 0x7fd054071820 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.160+0000 7fd05ad8d700 1 -- 192.168.123.105:0/3342271064 >> 192.168.123.105:0/3342271064 conn(0x7fd05406c9d0 msgr2=0x7fd05406ee00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.161+0000 7fd05ad8d700 1 -- 192.168.123.105:0/3342271064 shutdown_connections 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.161+0000 7fd05ad8d700 1 -- 192.168.123.105:0/3342271064 wait complete. 
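Every messenger line in this transcript follows a fixed shape: UTC timestamp, thread id, debug level, then a `--` (msgr) or `--2-` (msgr2) marker. A minimal sketch for pulling those fields out when triaging such logs — the regex is derived only from the lines above, not from any Ceph log-parsing library:

```python
import re

# Assumption: this pattern is reverse-engineered from the log entries in this
# transcript; Ceph's actual log format may vary across versions and daemons.
MSGR_LINE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+\+0000)\s+"
    r"(?P<tid>7f[0-9a-f]+)\s+"   # pthread id as it appears in these lines
    r"(?P<level>\d+)\s+"          # debug level (1 throughout this excerpt)
    r"(?P<msgr>--2-|--)"          # msgr2 marker "--2-" or plain msgr "--"
)

def parse_msgr_line(line: str):
    """Return the timestamp/thread/level/marker fields, or None if absent."""
    m = MSGR_LINE.search(line)
    return m.groupdict() if m else None

sample = ("2026-03-09T14:55:30.161+0000 7fd05ad8d700 1 -- "
          "192.168.123.105:0/3342271064 wait complete.")
parsed = parse_msgr_line(sample)
```

Grouping by `tid` is a quick way to separate the interleaved per-command `ceph` CLI invocations that this excerpt stitches together.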
2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.161+0000 7fd05ad8d700 1 Processor -- start 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.161+0000 7fd05ad8d700 1 -- start start 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.162+0000 7fd05ad8d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd0541a86e0 0x7fd0541a8af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.162+0000 7fd05ad8d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd05001a4a0 con 0x7fd0541a86e0 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.162+0000 7fd059d8b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd0541a86e0 0x7fd0541a8af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:30.352 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.162+0000 7fd059d8b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd0541a86e0 0x7fd0541a8af0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35112/0 (socket says 192.168.123.105:35112) 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.162+0000 7fd059d8b700 1 -- 192.168.123.105:0/3694833161 learned_addr learned my addr 192.168.123.105:0/3694833161 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:30.353 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.162+0000 7fd059d8b700 1 -- 192.168.123.105:0/3694833161 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd05000a7e0 con 0x7fd0541a86e0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.162+0000 7fd059d8b700 1 --2- 192.168.123.105:0/3694833161 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd0541a86e0 0x7fd0541a8af0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7fd050006b20 tx=0x7fd050004650 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.163+0000 7fd04affd700 1 -- 192.168.123.105:0/3694833161 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd05001ae00 con 0x7fd0541a86e0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.163+0000 7fd05ad8d700 1 -- 192.168.123.105:0/3694833161 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd0541a9030 con 0x7fd0541a86e0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.163+0000 7fd05ad8d700 1 -- 192.168.123.105:0/3694833161 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd0541abd40 con 0x7fd0541a86e0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.163+0000 7fd04affd700 1 -- 192.168.123.105:0/3694833161 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd05000f070 con 0x7fd0541a86e0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.164+0000 7fd04affd700 1 
-- 192.168.123.105:0/3694833161 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd050022a50 con 0x7fd0541a86e0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.164+0000 7fd04affd700 1 -- 192.168.123.105:0/3694833161 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 8) v1 ==== 45079+0+0 (secure 0 0 0) 0x7fd050018070 con 0x7fd0541a86e0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.164+0000 7fd04affd700 1 --2- 192.168.123.105:0/3694833161 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd040038460 0x7fd04003a910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.165+0000 7fd04affd700 1 -- 192.168.123.105:0/3694833161 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fd05004c2f0 con 0x7fd0541a86e0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.165+0000 7fd05958a700 1 --2- 192.168.123.105:0/3694833161 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd040038460 0x7fd04003a910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.165+0000 7fd05958a700 1 --2- 192.168.123.105:0/3694833161 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd040038460 0x7fd04003a910 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7fd04c00ad30 tx=0x7fd04c0093f0 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:30.165+0000 7fd05ad8d700 1 -- 192.168.123.105:0/3694833161 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd038005320 con 0x7fd0541a86e0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.170+0000 7fd04affd700 1 -- 192.168.123.105:0/3694833161 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fd050027070 con 0x7fd0541a86e0 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.295+0000 7fd05ad8d700 1 -- 192.168.123.105:0/3694833161 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}) v1 -- 0x7fd038000bf0 con 0x7fd040038460 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.303+0000 7fd04affd700 1 -- 192.168.123.105:0/3694833161 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+34 (secure 0 0 0) 0x7fd038000bf0 con 0x7fd040038460 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.309+0000 7fd048ff9700 1 -- 192.168.123.105:0/3694833161 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd040038460 msgr2=0x7fd04003a910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.309+0000 7fd048ff9700 1 --2- 192.168.123.105:0/3694833161 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd040038460 0x7fd04003a910 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7fd04c00ad30 tx=0x7fd04c0093f0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.353 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.310+0000 7fd048ff9700 1 -- 192.168.123.105:0/3694833161 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd0541a86e0 msgr2=0x7fd0541a8af0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.310+0000 7fd048ff9700 1 --2- 192.168.123.105:0/3694833161 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd0541a86e0 0x7fd0541a8af0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7fd050006b20 tx=0x7fd050004650 comp rx=0 tx=0).stop 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.310+0000 7fd048ff9700 1 -- 192.168.123.105:0/3694833161 shutdown_connections 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.310+0000 7fd048ff9700 1 --2- 192.168.123.105:0/3694833161 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd040038460 0x7fd04003a910 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.310+0000 7fd048ff9700 1 --2- 192.168.123.105:0/3694833161 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd0541a86e0 0x7fd0541a8af0 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.310+0000 7fd048ff9700 1 -- 192.168.123.105:0/3694833161 >> 192.168.123.105:0/3694833161 conn(0x7fd05406c9d0 msgr2=0x7fd05406d430 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.310+0000 7fd048ff9700 1 -- 192.168.123.105:0/3694833161 shutdown_connections 
2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.310+0000 7fd048ff9700 1 -- 192.168.123.105:0/3694833161 wait complete. 2026-03-09T14:55:30.353 INFO:teuthology.orchestra.run.vm05.stdout:Deploying prometheus service with default placement... 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled prometheus update... 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.511+0000 7f038733c700 1 Processor -- start 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.512+0000 7f038733c700 1 -- start start 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.512+0000 7f038733c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0380107170 0x7f0380107580 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.512+0000 7f038733c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0380107ac0 con 0x7f0380107170 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.512+0000 7f038633a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0380107170 0x7f0380107580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.512+0000 7f038633a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0380107170 0x7f0380107580 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer 
v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35116/0 (socket says 192.168.123.105:35116) 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.512+0000 7f038633a700 1 -- 192.168.123.105:0/1612907470 learned_addr learned my addr 192.168.123.105:0/1612907470 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.512+0000 7f038633a700 1 -- 192.168.123.105:0/1612907470 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0380107c00 con 0x7f0380107170 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.513+0000 7f038633a700 1 --2- 192.168.123.105:0/1612907470 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0380107170 0x7f0380107580 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f0370009a90 tx=0x7f0370009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=8fef5fb5192da881 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.513+0000 7f0385338700 1 -- 192.168.123.105:0/1612907470 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0370004030 con 0x7f0380107170 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.513+0000 7f0385338700 1 -- 192.168.123.105:0/1612907470 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f037000b7e0 con 0x7f0380107170 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.513+0000 7f0385338700 1 -- 192.168.123.105:0/1612907470 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f0370003b30 con 0x7f0380107170 2026-03-09T14:55:30.690 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.513+0000 7f038733c700 1 -- 192.168.123.105:0/1612907470 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0380107170 msgr2=0x7f0380107580 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:30.690 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.513+0000 7f038733c700 1 --2- 192.168.123.105:0/1612907470 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0380107170 0x7f0380107580 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f0370009a90 tx=0x7f0370009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.514+0000 7f038733c700 1 -- 192.168.123.105:0/1612907470 shutdown_connections 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.514+0000 7f038733c700 1 --2- 192.168.123.105:0/1612907470 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0380107170 0x7f0380107580 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.514+0000 7f038733c700 1 -- 192.168.123.105:0/1612907470 >> 192.168.123.105:0/1612907470 conn(0x7f03800ff840 msgr2=0x7f0380101c50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.514+0000 7f038733c700 1 -- 192.168.123.105:0/1612907470 shutdown_connections 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.514+0000 7f038733c700 1 -- 192.168.123.105:0/1612907470 wait complete. 
2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.514+0000 7f038733c700 1 Processor -- start 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.515+0000 7f038733c700 1 -- start start 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.515+0000 7f038733c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03801999b0 0x7f0380199dc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.515+0000 7f038733c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f038019a300 con 0x7f03801999b0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.515+0000 7f038633a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03801999b0 0x7f0380199dc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.515+0000 7f038633a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03801999b0 0x7f0380199dc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35126/0 (socket says 192.168.123.105:35126) 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.515+0000 7f038633a700 1 -- 192.168.123.105:0/4260763619 learned_addr learned my addr 192.168.123.105:0/4260763619 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:30.691 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.515+0000 7f038633a700 1 -- 192.168.123.105:0/4260763619 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0370009740 con 0x7f03801999b0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.515+0000 7f038633a700 1 --2- 192.168.123.105:0/4260763619 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03801999b0 0x7f0380199dc0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f037000b440 tx=0x7f037000bfa0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.516+0000 7f03777fe700 1 -- 192.168.123.105:0/4260763619 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f037001eab0 con 0x7f03801999b0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.516+0000 7f03777fe700 1 -- 192.168.123.105:0/4260763619 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f037001ec10 con 0x7f03801999b0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.516+0000 7f03777fe700 1 -- 192.168.123.105:0/4260763619 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f03700116f0 con 0x7f03801999b0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.516+0000 7f038733c700 1 -- 192.168.123.105:0/4260763619 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f038019a500 con 0x7f03801999b0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.516+0000 7f038733c700 1 
-- 192.168.123.105:0/4260763619 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0380104f50 con 0x7f03801999b0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.517+0000 7f03777fe700 1 -- 192.168.123.105:0/4260763619 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 8) v1 ==== 45079+0+0 (secure 0 0 0) 0x7f0370011850 con 0x7f03801999b0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.517+0000 7f03777fe700 1 --2- 192.168.123.105:0/4260763619 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f036c038490 0x7f036c03a940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.517+0000 7f03777fe700 1 -- 192.168.123.105:0/4260763619 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f0370050f80 con 0x7f03801999b0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.517+0000 7f0385b39700 1 --2- 192.168.123.105:0/4260763619 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f036c038490 0x7f036c03a940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.518+0000 7f0385b39700 1 --2- 192.168.123.105:0/4260763619 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f036c038490 0x7f036c03a940 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f037c006fd0 tx=0x7f037c006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: 
stderr 2026-03-09T14:55:30.518+0000 7f038733c700 1 -- 192.168.123.105:0/4260763619 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f038019a690 con 0x7f03801999b0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.521+0000 7f03777fe700 1 -- 192.168.123.105:0/4260763619 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f037002b030 con 0x7f03801999b0 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.641+0000 7f038733c700 1 -- 192.168.123.105:0/4260763619 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}) v1 -- 0x7f0380100ba0 con 0x7f036c038490 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.645+0000 7f03777fe700 1 -- 192.168.123.105:0/4260763619 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+31 (secure 0 0 0) 0x7f0380100ba0 con 0x7f036c038490 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.647+0000 7f038733c700 1 -- 192.168.123.105:0/4260763619 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f036c038490 msgr2=0x7f036c03a940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.647+0000 7f038733c700 1 --2- 192.168.123.105:0/4260763619 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f036c038490 0x7f036c03a940 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f037c006fd0 tx=0x7f037c006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:30.691 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.647+0000 7f038733c700 1 -- 192.168.123.105:0/4260763619 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03801999b0 msgr2=0x7f0380199dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.647+0000 7f038733c700 1 --2- 192.168.123.105:0/4260763619 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03801999b0 0x7f0380199dc0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f037000b440 tx=0x7f037000bfa0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.648+0000 7f038733c700 1 -- 192.168.123.105:0/4260763619 shutdown_connections 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.648+0000 7f038733c700 1 --2- 192.168.123.105:0/4260763619 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f036c038490 0x7f036c03a940 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.648+0000 7f038733c700 1 --2- 192.168.123.105:0/4260763619 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f03801999b0 0x7f0380199dc0 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.648+0000 7f038733c700 1 -- 192.168.123.105:0/4260763619 >> 192.168.123.105:0/4260763619 conn(0x7f03800ff840 msgr2=0x7f0380100490 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.649+0000 7f038733c700 1 -- 192.168.123.105:0/4260763619 shutdown_connections 
2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.649+0000 7f038733c700 1 -- 192.168.123.105:0/4260763619 wait complete. 2026-03-09T14:55:30.691 INFO:teuthology.orchestra.run.vm05.stdout:Deploying grafana service with default placement... 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled grafana update... 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.829+0000 7f13be3b3700 1 Processor -- start 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.830+0000 7f13be3b3700 1 -- start start 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.830+0000 7f13be3b3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 0x7f13b8108350 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.830+0000 7f13be3b3700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f13b8108890 con 0x7f13b8107f40 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.831+0000 7f13b7fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 0x7f13b8108350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.831+0000 7f13b7fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 0x7f13b8108350 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer 
v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35130/0 (socket says 192.168.123.105:35130) 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.831+0000 7f13b7fff700 1 -- 192.168.123.105:0/3255294310 learned_addr learned my addr 192.168.123.105:0/3255294310 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.831+0000 7f13b7fff700 1 -- 192.168.123.105:0/3255294310 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f13b81089d0 con 0x7f13b8107f40 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.832+0000 7f13b7fff700 1 --2- 192.168.123.105:0/3255294310 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 0x7f13b8108350 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f13a0009cf0 tx=0x7f13a000b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=1e58488e72a64578 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.832+0000 7f13b6ffd700 1 -- 192.168.123.105:0/3255294310 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f13a0004030 con 0x7f13b8107f40 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.832+0000 7f13b6ffd700 1 -- 192.168.123.105:0/3255294310 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f13a000b810 con 0x7f13b8107f40 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.832+0000 7f13b6ffd700 1 -- 192.168.123.105:0/3255294310 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f13a0003b10 con 0x7f13b8107f40 2026-03-09T14:55:30.986 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.832+0000 7f13be3b3700 1 -- 192.168.123.105:0/3255294310 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 msgr2=0x7f13b8108350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.832+0000 7f13be3b3700 1 --2- 192.168.123.105:0/3255294310 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 0x7f13b8108350 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f13a0009cf0 tx=0x7f13a000b0e0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.833+0000 7f13be3b3700 1 -- 192.168.123.105:0/3255294310 shutdown_connections 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.833+0000 7f13be3b3700 1 --2- 192.168.123.105:0/3255294310 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 0x7f13b8108350 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.833+0000 7f13be3b3700 1 -- 192.168.123.105:0/3255294310 >> 192.168.123.105:0/3255294310 conn(0x7f13b8103770 msgr2=0x7f13b8105b50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.833+0000 7f13be3b3700 1 -- 192.168.123.105:0/3255294310 shutdown_connections 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.833+0000 7f13be3b3700 1 -- 192.168.123.105:0/3255294310 wait complete. 
2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.833+0000 7f13be3b3700 1 Processor -- start 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.834+0000 7f13be3b3700 1 -- start start 2026-03-09T14:55:30.986 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.834+0000 7f13be3b3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 0x7f13b819bce0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:30.987 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.834+0000 7f13be3b3700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f13b819c220 con 0x7f13b8107f40 2026-03-09T14:55:30.987 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.834+0000 7f13b7fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 0x7f13b819bce0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:30.987 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.834+0000 7f13b7fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 0x7f13b819bce0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35142/0 (socket says 192.168.123.105:35142) 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.835+0000 7f13b7fff700 1 -- 192.168.123.105:0/3035632303 learned_addr learned my addr 192.168.123.105:0/3035632303 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:30.989 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.835+0000 7f13b7fff700 1 -- 192.168.123.105:0/3035632303 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f13a0009740 con 0x7f13b8107f40 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.835+0000 7f13b7fff700 1 --2- 192.168.123.105:0/3035632303 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 0x7f13b819bce0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f13a0000c00 tx=0x7f13a0011890 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.835+0000 7f13b57fa700 1 -- 192.168.123.105:0/3035632303 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f13a0011bc0 con 0x7f13b8107f40 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.835+0000 7f13b57fa700 1 -- 192.168.123.105:0/3035632303 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f13a0011d20 con 0x7f13b8107f40 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.835+0000 7f13b57fa700 1 -- 192.168.123.105:0/3035632303 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f13a001a590 con 0x7f13b8107f40 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.835+0000 7f13be3b3700 1 -- 192.168.123.105:0/3035632303 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f13b819c420 con 0x7f13b8107f40 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.835+0000 7f13be3b3700 1 
-- 192.168.123.105:0/3035632303 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f13b819c8c0 con 0x7f13b8107f40 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.836+0000 7f13b57fa700 1 -- 192.168.123.105:0/3035632303 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 8) v1 ==== 45079+0+0 (secure 0 0 0) 0x7f13a001a6f0 con 0x7f13b8107f40 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.836+0000 7f13b57fa700 1 --2- 192.168.123.105:0/3035632303 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f13a4038470 0x7f13a403a920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.837+0000 7f13b77fe700 1 --2- 192.168.123.105:0/3035632303 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f13a4038470 0x7f13a403a920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.837+0000 7f13b57fa700 1 -- 192.168.123.105:0/3035632303 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f13a004d1a0 con 0x7f13b8107f40 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.837+0000 7f13be3b3700 1 -- 192.168.123.105:0/3035632303 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f13b804f9e0 con 0x7f13b8107f40 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.839+0000 7f13b77fe700 1 --2- 192.168.123.105:0/3035632303 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f13a4038470 0x7f13a403a920 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f13a8006fd0 tx=0x7f13a8006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.840+0000 7f13b57fa700 1 -- 192.168.123.105:0/3035632303 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f13a001f080 con 0x7f13b8107f40 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.950+0000 7f13be3b3700 1 -- 192.168.123.105:0/3035632303 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}) v1 -- 0x7f13b8105af0 con 0x7f13a4038470 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.955+0000 7f13b57fa700 1 -- 192.168.123.105:0/3035632303 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+28 (secure 0 0 0) 0x7f13b8105af0 con 0x7f13a4038470 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.957+0000 7f13be3b3700 1 -- 192.168.123.105:0/3035632303 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f13a4038470 msgr2=0x7f13a403a920 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.957+0000 7f13be3b3700 1 --2- 192.168.123.105:0/3035632303 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f13a4038470 0x7f13a403a920 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f13a8006fd0 tx=0x7f13a8006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:30.989 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.957+0000 7f13be3b3700 1 -- 192.168.123.105:0/3035632303 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 msgr2=0x7f13b819bce0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.957+0000 7f13be3b3700 1 --2- 192.168.123.105:0/3035632303 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 0x7f13b819bce0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f13a0000c00 tx=0x7f13a0011890 comp rx=0 tx=0).stop 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.958+0000 7f13be3b3700 1 -- 192.168.123.105:0/3035632303 shutdown_connections 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.958+0000 7f13be3b3700 1 --2- 192.168.123.105:0/3035632303 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f13a4038470 0x7f13a403a920 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.958+0000 7f13be3b3700 1 --2- 192.168.123.105:0/3035632303 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13b8107f40 0x7f13b819bce0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.958+0000 7f13be3b3700 1 -- 192.168.123.105:0/3035632303 >> 192.168.123.105:0/3035632303 conn(0x7f13b8103770 msgr2=0x7f13b81053e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.958+0000 7f13be3b3700 1 -- 192.168.123.105:0/3035632303 shutdown_connections 
2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:30.958+0000 7f13be3b3700 1 -- 192.168.123.105:0/3035632303 wait complete. 2026-03-09T14:55:30.989 INFO:teuthology.orchestra.run.vm05.stdout:Deploying node-exporter service with default placement... 2026-03-09T14:55:31.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:30 vm05 ceph-mon[50611]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:31.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:30 vm05 ceph-mon[50611]: Saving service mon spec with placement count:5 2026-03-09T14:55:31.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:30 vm05 ceph-mon[50611]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:31.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:30 vm05 ceph-mon[50611]: Saving service mgr spec with placement count:2 2026-03-09T14:55:31.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:30 vm05 ceph-mon[50611]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:31.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:30 vm05 ceph-mon[50611]: Saving service crash spec with placement * 2026-03-09T14:55:31.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:30 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:31.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:30 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:31.293 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:30 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:31.293 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:30 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:32.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled node-exporter update... 2026-03-09T14:55:32.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.128+0000 7f40b4f52700 1 Processor -- start 2026-03-09T14:55:32.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.128+0000 7f40b4f52700 1 -- start start 2026-03-09T14:55:32.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.128+0000 7f40b4f52700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b0071410 0x7f40b0071820 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:32.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.128+0000 7f40b4f52700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f40b0071d60 con 0x7f40b0071410 2026-03-09T14:55:32.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.128+0000 7f40af7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b0071410 0x7f40b0071820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.128+0000 7f40af7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b0071410 0x7f40b0071820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35146/0 (socket says 192.168.123.105:35146) 2026-03-09T14:55:32.097 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.128+0000 7f40af7fe700 1 -- 192.168.123.105:0/846012252 learned_addr learned my addr 192.168.123.105:0/846012252 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.129+0000 7f40af7fe700 1 -- 192.168.123.105:0/846012252 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f40b0071ea0 con 0x7f40b0071410 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.130+0000 7f40af7fe700 1 --2- 192.168.123.105:0/846012252 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b0071410 0x7f40b0071820 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f40a000d180 tx=0x7f40a000d490 comp rx=0 tx=0).ready entity=mon.0 client_cookie=fc53f293e569afbe server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.130+0000 7f40ae7fc700 1 -- 192.168.123.105:0/846012252 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f40a0010070 con 0x7f40b0071410 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.130+0000 7f40ae7fc700 1 -- 192.168.123.105:0/846012252 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f40a0004510 con 0x7f40b0071410 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.131+0000 7f40b4f52700 1 -- 192.168.123.105:0/846012252 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b0071410 msgr2=0x7f40b0071820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.131+0000 
7f40b4f52700 1 --2- 192.168.123.105:0/846012252 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b0071410 0x7f40b0071820 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f40a000d180 tx=0x7f40a000d490 comp rx=0 tx=0).stop 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.131+0000 7f40b4f52700 1 -- 192.168.123.105:0/846012252 shutdown_connections 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.131+0000 7f40b4f52700 1 --2- 192.168.123.105:0/846012252 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b0071410 0x7f40b0071820 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.131+0000 7f40b4f52700 1 -- 192.168.123.105:0/846012252 >> 192.168.123.105:0/846012252 conn(0x7f40b006c9d0 msgr2=0x7f40b006ee00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.131+0000 7f40b4f52700 1 -- 192.168.123.105:0/846012252 shutdown_connections 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.131+0000 7f40b4f52700 1 -- 192.168.123.105:0/846012252 wait complete. 
2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.132+0000 7f40b4f52700 1 Processor -- start 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.132+0000 7f40b4f52700 1 -- start start 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.132+0000 7f40b4f52700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b01a0400 0x7f40b01a0810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.132+0000 7f40b4f52700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f40a0003c20 con 0x7f40b01a0400 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.132+0000 7f40af7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b01a0400 0x7f40b01a0810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.132+0000 7f40af7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b01a0400 0x7f40b01a0810 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35160/0 (socket says 192.168.123.105:35160) 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.132+0000 7f40af7fe700 1 -- 192.168.123.105:0/1593851630 learned_addr learned my addr 192.168.123.105:0/1593851630 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:32.097 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.132+0000 7f40af7fe700 1 -- 192.168.123.105:0/1593851630 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f40a00087c0 con 0x7f40b01a0400 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.133+0000 7f40af7fe700 1 --2- 192.168.123.105:0/1593851630 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b01a0400 0x7f40b01a0810 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f40a0006b20 tx=0x7f40a0004630 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.133+0000 7f40acff9700 1 -- 192.168.123.105:0/1593851630 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f40a0010040 con 0x7f40b01a0400 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.133+0000 7f40b4f52700 1 -- 192.168.123.105:0/1593851630 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f40b01a0d50 con 0x7f40b01a0400 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.133+0000 7f40b4f52700 1 -- 192.168.123.105:0/1593851630 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f40b01a39e0 con 0x7f40b01a0400 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.133+0000 7f40acff9700 1 -- 192.168.123.105:0/1593851630 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f40a000de70 con 0x7f40b01a0400 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.133+0000 7f40acff9700 1 
-- 192.168.123.105:0/1593851630 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f40a0021910 con 0x7f40b01a0400 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.135+0000 7f40acff9700 1 -- 192.168.123.105:0/1593851630 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 8) v1 ==== 45079+0+0 (secure 0 0 0) 0x7f40a001d070 con 0x7f40b01a0400 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.135+0000 7f40acff9700 1 --2- 192.168.123.105:0/1593851630 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4098038460 0x7f409803a910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.135+0000 7f40acff9700 1 -- 192.168.123.105:0/1593851630 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f40a004f340 con 0x7f40b01a0400 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.135+0000 7f40aeffd700 1 --2- 192.168.123.105:0/1593851630 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4098038460 0x7f409803a910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.136+0000 7f40aeffd700 1 --2- 192.168.123.105:0/1593851630 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4098038460 0x7f409803a910 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f40a800ad30 tx=0x7f40a80093f0 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:31.136+0000 7f40b4f52700 1 -- 192.168.123.105:0/1593851630 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f409c005320 con 0x7f40b01a0400 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.140+0000 7f40acff9700 1 -- 192.168.123.105:0/1593851630 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f40a001b070 con 0x7f40b01a0400 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.290+0000 7f40b4f52700 1 -- 192.168.123.105:0/1593851630 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}) v1 -- 0x7f409c000bf0 con 0x7f4098038460 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.534+0000 7f40acff9700 1 -- 192.168.123.105:0/1593851630 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+34 (secure 0 0 0) 0x7f409c000bf0 con 0x7f4098038460 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.537+0000 7f40b4f52700 1 -- 192.168.123.105:0/1593851630 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4098038460 msgr2=0x7f409803a910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.537+0000 7f40b4f52700 1 --2- 192.168.123.105:0/1593851630 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4098038460 0x7f409803a910 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f40a800ad30 tx=0x7f40a80093f0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.097 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.537+0000 7f40b4f52700 1 -- 192.168.123.105:0/1593851630 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b01a0400 msgr2=0x7f40b01a0810 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.537+0000 7f40b4f52700 1 --2- 192.168.123.105:0/1593851630 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b01a0400 0x7f40b01a0810 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f40a0006b20 tx=0x7f40a0004630 comp rx=0 tx=0).stop 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.537+0000 7f40b4f52700 1 -- 192.168.123.105:0/1593851630 shutdown_connections 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.537+0000 7f40b4f52700 1 --2- 192.168.123.105:0/1593851630 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4098038460 0x7f409803a910 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.537+0000 7f40b4f52700 1 --2- 192.168.123.105:0/1593851630 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f40b01a0400 0x7f40b01a0810 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.537+0000 7f40b4f52700 1 -- 192.168.123.105:0/1593851630 >> 192.168.123.105:0/1593851630 conn(0x7f40b006c9d0 msgr2=0x7f40b006e220 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.537+0000 7f40b4f52700 1 -- 192.168.123.105:0/1593851630 shutdown_connections 
2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:31.537+0000 7f40b4f52700 1 -- 192.168.123.105:0/1593851630 wait complete. 2026-03-09T14:55:32.097 INFO:teuthology.orchestra.run.vm05.stdout:Deploying alertmanager service with default placement... 2026-03-09T14:55:32.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:32 vm05 ceph-mon[50611]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:32.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:32 vm05 ceph-mon[50611]: Saving service ceph-exporter spec with placement * 2026-03-09T14:55:32.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:32 vm05 ceph-mon[50611]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:32.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:32 vm05 ceph-mon[50611]: Saving service prometheus spec with placement count:1 2026-03-09T14:55:32.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:32 vm05 ceph-mon[50611]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:32.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:32 vm05 ceph-mon[50611]: Saving service grafana spec with placement count:1 2026-03-09T14:55:32.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:32 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:32.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:32 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:32.457 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled alertmanager update... 
2026-03-09T14:55:32.457 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.242+0000 7fcfc59b4700 1 Processor -- start 2026-03-09T14:55:32.457 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.242+0000 7fcfc59b4700 1 -- start start 2026-03-09T14:55:32.457 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.243+0000 7fcfc59b4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 0x7fcfc007a360 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:32.457 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.243+0000 7fcfc59b4700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcfc007a8a0 con 0x7fcfc007be60 2026-03-09T14:55:32.457 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.243+0000 7fcfc49b2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 0x7fcfc007a360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:32.457 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.243+0000 7fcfc49b2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 0x7fcfc007a360 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35174/0 (socket says 192.168.123.105:35174) 2026-03-09T14:55:32.457 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.243+0000 7fcfc49b2700 1 -- 192.168.123.105:0/738731254 learned_addr learned my addr 192.168.123.105:0/738731254 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:32.457 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.243+0000 7fcfc49b2700 1 -- 192.168.123.105:0/738731254 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcfc007a9e0 con 0x7fcfc007be60 2026-03-09T14:55:32.457 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.244+0000 7fcfc49b2700 1 --2- 192.168.123.105:0/738731254 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 0x7fcfc007a360 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7fcfb4009cf0 tx=0x7fcfb400b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=77500299ee775125 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:32.457 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.244+0000 7fcfbf7fe700 1 -- 192.168.123.105:0/738731254 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcfb4004030 con 0x7fcfc007be60 2026-03-09T14:55:32.457 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.244+0000 7fcfbf7fe700 1 -- 192.168.123.105:0/738731254 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fcfb400b810 con 0x7fcfc007be60 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.244+0000 7fcfbf7fe700 1 -- 192.168.123.105:0/738731254 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcfb4003b10 con 0x7fcfc007be60 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.244+0000 7fcfc59b4700 1 -- 192.168.123.105:0/738731254 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 msgr2=0x7fcfc007a360 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:32.244+0000 7fcfc59b4700 1 --2- 192.168.123.105:0/738731254 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 0x7fcfc007a360 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7fcfb4009cf0 tx=0x7fcfb400b0e0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.245+0000 7fcfc59b4700 1 -- 192.168.123.105:0/738731254 shutdown_connections 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.245+0000 7fcfc59b4700 1 --2- 192.168.123.105:0/738731254 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 0x7fcfc007a360 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.245+0000 7fcfc59b4700 1 -- 192.168.123.105:0/738731254 >> 192.168.123.105:0/738731254 conn(0x7fcfc0101220 msgr2=0x7fcfc0103650 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.248+0000 7fcfc59b4700 1 -- 192.168.123.105:0/738731254 shutdown_connections 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.248+0000 7fcfc59b4700 1 -- 192.168.123.105:0/738731254 wait complete. 
2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.248+0000 7fcfc59b4700 1 Processor -- start 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.248+0000 7fcfc59b4700 1 -- start start 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.248+0000 7fcfc59b4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 0x7fcfc019ba00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.249+0000 7fcfc49b2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 0x7fcfc019ba00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.249+0000 7fcfc49b2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 0x7fcfc019ba00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35178/0 (socket says 192.168.123.105:35178) 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.249+0000 7fcfc49b2700 1 -- 192.168.123.105:0/1847117477 learned_addr learned my addr 192.168.123.105:0/1847117477 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.249+0000 7fcfc59b4700 1 -- 192.168.123.105:0/1847117477 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcfc019bf40 con 0x7fcfc007be60 2026-03-09T14:55:32.458 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.249+0000 7fcfc49b2700 1 -- 192.168.123.105:0/1847117477 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcfb4009740 con 0x7fcfc007be60 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.249+0000 7fcfc49b2700 1 --2- 192.168.123.105:0/1847117477 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 0x7fcfc019ba00 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fcfb40037e0 tx=0x7fcfb4011770 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.249+0000 7fcfbdffb700 1 -- 192.168.123.105:0/1847117477 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcfb4011a10 con 0x7fcfc007be60 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.249+0000 7fcfbdffb700 1 -- 192.168.123.105:0/1847117477 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fcfb4011b70 con 0x7fcfc007be60 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.249+0000 7fcfc59b4700 1 -- 192.168.123.105:0/1847117477 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcfc019c140 con 0x7fcfc007be60 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.249+0000 7fcfbdffb700 1 -- 192.168.123.105:0/1847117477 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fcfb401a520 con 0x7fcfc007be60 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.249+0000 7fcfc59b4700 1 
-- 192.168.123.105:0/1847117477 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcfc019c5e0 con 0x7fcfc007be60 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.251+0000 7fcfbdffb700 1 -- 192.168.123.105:0/1847117477 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 8) v1 ==== 45079+0+0 (secure 0 0 0) 0x7fcfb4011ce0 con 0x7fcfc007be60 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.252+0000 7fcfc59b4700 1 -- 192.168.123.105:0/1847117477 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcfc0062380 con 0x7fcfc007be60 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.252+0000 7fcfbdffb700 1 --2- 192.168.123.105:0/1847117477 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcfb0038530 0x7fcfb003a9e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.252+0000 7fcfbdffb700 1 -- 192.168.123.105:0/1847117477 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fcfb404bb10 con 0x7fcfc007be60 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.252+0000 7fcfbffff700 1 --2- 192.168.123.105:0/1847117477 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcfb0038530 0x7fcfb003a9e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.255+0000 7fcfbffff700 1 --2- 192.168.123.105:0/1847117477 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcfb0038530 0x7fcfb003a9e0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fcfac006fd0 tx=0x7fcfac006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.255+0000 7fcfbdffb700 1 -- 192.168.123.105:0/1847117477 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fcfb4018db0 con 0x7fcfc007be60 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.399+0000 7fcfc59b4700 1 -- 192.168.123.105:0/1847117477 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}) v1 -- 0x7fcfc019efa0 con 0x7fcfb0038530 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.403+0000 7fcfbdffb700 1 -- 192.168.123.105:0/1847117477 <== mgr.14120 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+33 (secure 0 0 0) 0x7fcfc019efa0 con 0x7fcfb0038530 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.406+0000 7fcfc59b4700 1 -- 192.168.123.105:0/1847117477 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcfb0038530 msgr2=0x7fcfb003a9e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.406+0000 7fcfc59b4700 1 --2- 192.168.123.105:0/1847117477 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcfb0038530 0x7fcfb003a9e0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fcfac006fd0 tx=0x7fcfac006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:32.458 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.406+0000 7fcfc59b4700 1 -- 192.168.123.105:0/1847117477 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 msgr2=0x7fcfc019ba00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.406+0000 7fcfc59b4700 1 --2- 192.168.123.105:0/1847117477 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 0x7fcfc019ba00 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fcfb40037e0 tx=0x7fcfb4011770 comp rx=0 tx=0).stop 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.406+0000 7fcfc59b4700 1 -- 192.168.123.105:0/1847117477 shutdown_connections 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.406+0000 7fcfc59b4700 1 --2- 192.168.123.105:0/1847117477 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcfb0038530 0x7fcfb003a9e0 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.406+0000 7fcfc59b4700 1 --2- 192.168.123.105:0/1847117477 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc007be60 0x7fcfc019ba00 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.406+0000 7fcfc59b4700 1 -- 192.168.123.105:0/1847117477 >> 192.168.123.105:0/1847117477 conn(0x7fcfc0101220 msgr2=0x7fcfc0101e50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.407+0000 7fcfc59b4700 1 -- 192.168.123.105:0/1847117477 shutdown_connections 
2026-03-09T14:55:32.458 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.407+0000 7fcfc59b4700 1 -- 192.168.123.105:0/1847117477 wait complete. 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.579+0000 7f7882795700 1 Processor -- start 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.580+0000 7f7882795700 1 -- start start 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.580+0000 7f7882795700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 0x7f787c108350 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.580+0000 7f7882795700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f787c108890 con 0x7f787c107f40 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.580+0000 7f787bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 0x7f787c108350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.580+0000 7f787bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 0x7f787c108350 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35188/0 (socket says 192.168.123.105:35188) 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.580+0000 7f787bfff700 1 -- 
192.168.123.105:0/430400515 learned_addr learned my addr 192.168.123.105:0/430400515 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.581+0000 7f787bfff700 1 -- 192.168.123.105:0/430400515 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f787c1089d0 con 0x7f787c107f40 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.581+0000 7f787bfff700 1 --2- 192.168.123.105:0/430400515 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 0x7f787c108350 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f7864009a90 tx=0x7f7864009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=46849000b8033093 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.581+0000 7f787affd700 1 -- 192.168.123.105:0/430400515 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7864004030 con 0x7f787c107f40 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.581+0000 7f787affd700 1 -- 192.168.123.105:0/430400515 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f786400b7e0 con 0x7f787c107f40 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.581+0000 7f787affd700 1 -- 192.168.123.105:0/430400515 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7864003ae0 con 0x7f787c107f40 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.582+0000 7f7882795700 1 -- 192.168.123.105:0/430400515 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 msgr2=0x7f787c108350 
secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.582+0000 7f7882795700 1 --2- 192.168.123.105:0/430400515 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 0x7f787c108350 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f7864009a90 tx=0x7f7864009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.582+0000 7f7882795700 1 -- 192.168.123.105:0/430400515 shutdown_connections 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.582+0000 7f7882795700 1 --2- 192.168.123.105:0/430400515 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 0x7f787c108350 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.582+0000 7f7882795700 1 -- 192.168.123.105:0/430400515 >> 192.168.123.105:0/430400515 conn(0x7f787c103770 msgr2=0x7f787c105b50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.583+0000 7f7882795700 1 -- 192.168.123.105:0/430400515 shutdown_connections 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.583+0000 7f7882795700 1 -- 192.168.123.105:0/430400515 wait complete. 
2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.583+0000 7f7882795700 1 Processor -- start 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.583+0000 7f7882795700 1 -- start start 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.584+0000 7f7882795700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 0x7f787c19bc70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.584+0000 7f7882795700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f787c19c1b0 con 0x7f787c107f40 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.584+0000 7f787bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 0x7f787c19bc70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.584+0000 7f787bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 0x7f787c19bc70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35194/0 (socket says 192.168.123.105:35194) 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.584+0000 7f787bfff700 1 -- 192.168.123.105:0/640535508 learned_addr learned my addr 192.168.123.105:0/640535508 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:32.763 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.584+0000 7f787bfff700 1 -- 192.168.123.105:0/640535508 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7864009740 con 0x7f787c107f40 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.585+0000 7f787bfff700 1 --2- 192.168.123.105:0/640535508 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 0x7f787c19bc70 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f7864000c00 tx=0x7f786400bfa0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:32.763 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.585+0000 7f78797fa700 1 -- 192.168.123.105:0/640535508 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7864004160 con 0x7f787c107f40 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.585+0000 7f78797fa700 1 -- 192.168.123.105:0/640535508 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f78640042c0 con 0x7f787c107f40 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.585+0000 7f78797fa700 1 -- 192.168.123.105:0/640535508 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7864011670 con 0x7f787c107f40 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.585+0000 7f7882795700 1 -- 192.168.123.105:0/640535508 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f787c19c3b0 con 0x7f787c107f40 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.585+0000 7f7882795700 1 -- 
192.168.123.105:0/640535508 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f787c19c850 con 0x7f787c107f40 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.586+0000 7f78797fa700 1 -- 192.168.123.105:0/640535508 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 8) v1 ==== 45079+0+0 (secure 0 0 0) 0x7f78640117d0 con 0x7f787c107f40 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.586+0000 7f78797fa700 1 --2- 192.168.123.105:0/640535508 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7868038470 0x7f786803a920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.586+0000 7f78797fa700 1 -- 192.168.123.105:0/640535508 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f786404d250 con 0x7f787c107f40 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.587+0000 7f787b7fe700 1 --2- 192.168.123.105:0/640535508 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7868038470 0x7f786803a920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.587+0000 7f787b7fe700 1 --2- 192.168.123.105:0/640535508 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7868038470 0x7f786803a920 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f786c006fd0 tx=0x7f786c006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:32.587+0000 7f7882795700 1 -- 192.168.123.105:0/640535508 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f787c04f9e0 con 0x7f787c107f40 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.590+0000 7f78797fa700 1 -- 192.168.123.105:0/640535508 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f7864011a80 con 0x7f787c107f40 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.697+0000 7f7882795700 1 -- 192.168.123.105:0/640535508 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1 -- 0x7f787c07b6c0 con 0x7f787c107f40 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.706+0000 7f78797fa700 1 -- 192.168.123.105:0/640535508 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/container_init}]=0 v7) v1 ==== 142+0+0 (secure 0 0 0) 0x7f7864018b40 con 0x7f787c107f40 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.708+0000 7f7882795700 1 -- 192.168.123.105:0/640535508 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7868038470 msgr2=0x7f786803a920 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.708+0000 7f7882795700 1 --2- 192.168.123.105:0/640535508 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7868038470 0x7f786803a920 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f786c006fd0 tx=0x7f786c006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:32.764 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.708+0000 7f7882795700 1 -- 192.168.123.105:0/640535508 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 msgr2=0x7f787c19bc70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.708+0000 7f7882795700 1 --2- 192.168.123.105:0/640535508 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 0x7f787c19bc70 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f7864000c00 tx=0x7f786400bfa0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.709+0000 7f7882795700 1 -- 192.168.123.105:0/640535508 shutdown_connections 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.709+0000 7f7882795700 1 --2- 192.168.123.105:0/640535508 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7868038470 0x7f786803a920 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.709+0000 7f7882795700 1 --2- 192.168.123.105:0/640535508 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f787c107f40 0x7f787c19bc70 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.709+0000 7f7882795700 1 -- 192.168.123.105:0/640535508 >> 192.168.123.105:0/640535508 conn(0x7f787c103770 msgr2=0x7f787c105390 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.709+0000 7f7882795700 1 -- 192.168.123.105:0/640535508 shutdown_connections 
2026-03-09T14:55:32.764 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.709+0000 7f7882795700 1 -- 192.168.123.105:0/640535508 wait complete. 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.901+0000 7f7f99131700 1 Processor -- start 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.902+0000 7f7f99131700 1 -- start start 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.902+0000 7f7f99131700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 0x7f7f94108350 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.902+0000 7f7f99131700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7f94108890 con 0x7f7f94107f40 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.902+0000 7f7f92d9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 0x7f7f94108350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.902+0000 7f7f92d9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 0x7f7f94108350 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35206/0 (socket says 192.168.123.105:35206) 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.902+0000 7f7f92d9d700 1 -- 
192.168.123.105:0/1568281259 learned_addr learned my addr 192.168.123.105:0/1568281259 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.903+0000 7f7f92d9d700 1 -- 192.168.123.105:0/1568281259 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7f941089d0 con 0x7f7f94107f40 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.903+0000 7f7f92d9d700 1 --2- 192.168.123.105:0/1568281259 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 0x7f7f94108350 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f7f84009a90 tx=0x7f7f84009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=463cc11d6462f806 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.903+0000 7f7f9259c700 1 -- 192.168.123.105:0/1568281259 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7f84004030 con 0x7f7f94107f40 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.903+0000 7f7f9259c700 1 -- 192.168.123.105:0/1568281259 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f7f8400b7e0 con 0x7f7f94107f40 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.903+0000 7f7f9259c700 1 -- 192.168.123.105:0/1568281259 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7f84003b30 con 0x7f7f94107f40 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.904+0000 7f7f99131700 1 -- 192.168.123.105:0/1568281259 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 
msgr2=0x7f7f94108350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.904+0000 7f7f99131700 1 --2- 192.168.123.105:0/1568281259 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 0x7f7f94108350 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f7f84009a90 tx=0x7f7f84009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.904+0000 7f7f99131700 1 -- 192.168.123.105:0/1568281259 shutdown_connections 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.904+0000 7f7f99131700 1 --2- 192.168.123.105:0/1568281259 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 0x7f7f94108350 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:33.096 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.904+0000 7f7f99131700 1 -- 192.168.123.105:0/1568281259 >> 192.168.123.105:0/1568281259 conn(0x7f7f94103770 msgr2=0x7f7f94105b50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.904+0000 7f7f99131700 1 -- 192.168.123.105:0/1568281259 shutdown_connections 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.904+0000 7f7f99131700 1 -- 192.168.123.105:0/1568281259 wait complete. 
2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.905+0000 7f7f99131700 1 Processor -- start 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.905+0000 7f7f99131700 1 -- start start 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.905+0000 7f7f99131700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 0x7f7f9419bc20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.905+0000 7f7f99131700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7f9419c160 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.906+0000 7f7f92d9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 0x7f7f9419bc20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.906+0000 7f7f92d9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 0x7f7f9419bc20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35216/0 (socket says 192.168.123.105:35216) 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.906+0000 7f7f92d9d700 1 -- 192.168.123.105:0/3975421928 learned_addr learned my addr 192.168.123.105:0/3975421928 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:33.097 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.906+0000 7f7f92d9d700 1 -- 192.168.123.105:0/3975421928 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7f84009740 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.906+0000 7f7f92d9d700 1 --2- 192.168.123.105:0/3975421928 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 0x7f7f9419bc20 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f7f8400bef0 tx=0x7f7f8400bfd0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.907+0000 7f7f90d99700 1 -- 192.168.123.105:0/3975421928 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7f84004140 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.907+0000 7f7f90d99700 1 -- 192.168.123.105:0/3975421928 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f7f840042a0 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.907+0000 7f7f99131700 1 -- 192.168.123.105:0/3975421928 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7f9419c360 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.907+0000 7f7f90d99700 1 -- 192.168.123.105:0/3975421928 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7f84011590 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.907+0000 7f7f99131700 1 
-- 192.168.123.105:0/3975421928 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7f9419c800 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.908+0000 7f7f90d99700 1 -- 192.168.123.105:0/3975421928 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 8) v1 ==== 45079+0+0 (secure 0 0 0) 0x7f7f840116f0 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.908+0000 7f7f90d99700 1 --2- 192.168.123.105:0/3975421928 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7f80038470 0x7f7f8003a920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.909+0000 7f7f90d99700 1 -- 192.168.123.105:0/3975421928 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f7f8404cc70 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.909+0000 7f7f8a3ff700 1 --2- 192.168.123.105:0/3975421928 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7f80038470 0x7f7f8003a920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.909+0000 7f7f99131700 1 -- 192.168.123.105:0/3975421928 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7f94062380 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.913+0000 7f7f8a3ff700 1 --2- 192.168.123.105:0/3975421928 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7f80038470 0x7f7f8003a920 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f7f7c006fd0 tx=0x7f7f7c006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:32.913+0000 7f7f90d99700 1 -- 192.168.123.105:0/3975421928 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f7f8401e070 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.030+0000 7f7f99131700 1 -- 192.168.123.105:0/3975421928 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0) v1 -- 0x7f7f9410c3d0 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.034+0000 7f7f90d99700 1 -- 192.168.123.105:0/3975421928 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/dashboard/ssl_server_port}]=0 v8) v1 ==== 130+0+0 (secure 0 0 0) 0x7f7f8404b0c0 con 0x7f7f94107f40 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.041+0000 7f7f99131700 1 -- 192.168.123.105:0/3975421928 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7f80038470 msgr2=0x7f7f8003a920 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.041+0000 7f7f99131700 1 --2- 192.168.123.105:0/3975421928 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7f80038470 0x7f7f8003a920 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f7f7c006fd0 tx=0x7f7f7c006e40 comp rx=0 tx=0).stop 
2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.041+0000 7f7f99131700 1 -- 192.168.123.105:0/3975421928 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 msgr2=0x7f7f9419bc20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.041+0000 7f7f99131700 1 --2- 192.168.123.105:0/3975421928 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 0x7f7f9419bc20 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f7f8400bef0 tx=0x7f7f8400bfd0 comp rx=0 tx=0).stop 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.041+0000 7f7f99131700 1 -- 192.168.123.105:0/3975421928 shutdown_connections 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.041+0000 7f7f99131700 1 --2- 192.168.123.105:0/3975421928 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7f80038470 0x7f7f8003a920 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.041+0000 7f7f99131700 1 --2- 192.168.123.105:0/3975421928 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7f94107f40 0x7f7f9419bc20 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.041+0000 7f7f99131700 1 -- 192.168.123.105:0/3975421928 >> 192.168.123.105:0/3975421928 conn(0x7f7f94103770 msgr2=0x7f7f94105350 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.042+0000 7f7f99131700 1 -- 192.168.123.105:0/3975421928 
shutdown_connections 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.042+0000 7f7f99131700 1 -- 192.168.123.105:0/3975421928 wait complete. 2026-03-09T14:55:33.097 INFO:teuthology.orchestra.run.vm05.stdout:Enabling the dashboard module... 2026-03-09T14:55:33.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:33 vm05 ceph-mon[50611]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:33.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:33 vm05 ceph-mon[50611]: Saving service node-exporter spec with placement * 2026-03-09T14:55:33.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:33 vm05 ceph-mon[50611]: from='mgr.14120 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:33.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:33 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/640535508' entity='client.admin' 2026-03-09T14:55:33.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:33 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/3975421928' entity='client.admin' 2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.233+0000 7f598e0e0700 1 Processor -- start 2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.233+0000 7f598e0e0700 1 -- start start 2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.233+0000 7f598e0e0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 0x7f5988108350 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.233+0000 7f598e0e0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5988108890 con 0x7f5988107f40 2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.233+0000 7f59877fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 0x7f5988108350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.234+0000 7f59877fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 0x7f5988108350 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35228/0 (socket says 192.168.123.105:35228) 2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.234+0000 7f59877fe700 1 -- 192.168.123.105:0/369937191 learned_addr learned my addr 192.168.123.105:0/369937191 (peer_addr_for_me v2:192.168.123.105:0/0) 
2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.234+0000 7f59877fe700 1 -- 192.168.123.105:0/369937191 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f59881089d0 con 0x7f5988107f40 2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.234+0000 7f59877fe700 1 --2- 192.168.123.105:0/369937191 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 0x7f5988108350 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f5970009a90 tx=0x7f5970009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=eb7591d939a1f762 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.234+0000 7f59867fc700 1 -- 192.168.123.105:0/369937191 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5970004030 con 0x7f5988107f40 2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.234+0000 7f59867fc700 1 -- 192.168.123.105:0/369937191 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f597000b7e0 con 0x7f5988107f40 2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.235+0000 7f598e0e0700 1 -- 192.168.123.105:0/369937191 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 msgr2=0x7f5988108350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:34.143 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.235+0000 7f598e0e0700 1 --2- 192.168.123.105:0/369937191 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 0x7f5988108350 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f5970009a90 tx=0x7f5970009da0 comp rx=0 
tx=0).stop 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.235+0000 7f598e0e0700 1 -- 192.168.123.105:0/369937191 shutdown_connections 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.235+0000 7f598e0e0700 1 --2- 192.168.123.105:0/369937191 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 0x7f5988108350 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.235+0000 7f598e0e0700 1 -- 192.168.123.105:0/369937191 >> 192.168.123.105:0/369937191 conn(0x7f5988103770 msgr2=0x7f5988105b50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.235+0000 7f598e0e0700 1 -- 192.168.123.105:0/369937191 shutdown_connections 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.235+0000 7f598e0e0700 1 -- 192.168.123.105:0/369937191 wait complete. 
2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.236+0000 7f598e0e0700 1 Processor -- start 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.236+0000 7f598e0e0700 1 -- start start 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.236+0000 7f598e0e0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 0x7f5988197960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.236+0000 7f598e0e0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5988108890 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.236+0000 7f59877fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 0x7f5988197960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.236+0000 7f59877fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 0x7f5988197960 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35244/0 (socket says 192.168.123.105:35244) 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.236+0000 7f59877fe700 1 -- 192.168.123.105:0/1378416553 learned_addr learned my addr 192.168.123.105:0/1378416553 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:34.144 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.236+0000 7f59877fe700 1 -- 192.168.123.105:0/1378416553 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5970009740 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.236+0000 7f59877fe700 1 --2- 192.168.123.105:0/1378416553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 0x7f5988197960 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f5970006fa0 tx=0x7f5970003d50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.236+0000 7f5984ff9700 1 -- 192.168.123.105:0/1378416553 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5970004180 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.237+0000 7f598e0e0700 1 -- 192.168.123.105:0/1378416553 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5988197ea0 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.237+0000 7f598e0e0700 1 -- 192.168.123.105:0/1378416553 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f59881982c0 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.237+0000 7f5984ff9700 1 -- 192.168.123.105:0/1378416553 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f59700042e0 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.237+0000 7f5984ff9700 1 
-- 192.168.123.105:0/1378416553 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5970011600 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.237+0000 7f5984ff9700 1 -- 192.168.123.105:0/1378416553 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 8) v1 ==== 45079+0+0 (secure 0 0 0) 0x7f5970011760 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.237+0000 7f5984ff9700 1 --2- 192.168.123.105:0/1378416553 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5974040ca0 0x7f5974043150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.237+0000 7f5984ff9700 1 -- 192.168.123.105:0/1378416553 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f597004d150 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.238+0000 7f5986ffd700 1 --2- 192.168.123.105:0/1378416553 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5974040ca0 0x7f5974043150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.240+0000 7f5986ffd700 1 --2- 192.168.123.105:0/1378416553 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5974040ca0 0x7f5974043150 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f5978006fd0 tx=0x7f5978006e40 comp rx=0 tx=0).ready entity=mgr.14120 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:33.240+0000 7f598e0e0700 1 -- 192.168.123.105:0/1378416553 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f59680052f0 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.243+0000 7f5984ff9700 1 -- 192.168.123.105:0/1378416553 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f5970011a10 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:33.387+0000 7f598e0e0700 1 -- 192.168.123.105:0/1378416553 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0) v1 -- 0x7f5968005160 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.093+0000 7f5984ff9700 1 -- 192.168.123.105:0/1378416553 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mgrmap(e 9) v1 ==== 45092+0+0 (secure 0 0 0) 0x7f597002b950 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.095+0000 7f5984ff9700 1 -- 192.168.123.105:0/1378416553 <== mon.0 v2:192.168.123.105:3300/0 8 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "dashboard"}]=0 v9) v1 ==== 88+0+0 (secure 0 0 0) 0x7f59700276c0 con 0x7f5988107f40 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.097+0000 7f598e0e0700 1 -- 192.168.123.105:0/1378416553 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5974040ca0 msgr2=0x7f5974043150 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.097+0000 7f598e0e0700 1 
--2- 192.168.123.105:0/1378416553 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5974040ca0 0x7f5974043150 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f5978006fd0 tx=0x7f5978006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.097+0000 7f598e0e0700 1 -- 192.168.123.105:0/1378416553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 msgr2=0x7f5988197960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.097+0000 7f598e0e0700 1 --2- 192.168.123.105:0/1378416553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 0x7f5988197960 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f5970006fa0 tx=0x7f5970003d50 comp rx=0 tx=0).stop 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.097+0000 7f598e0e0700 1 -- 192.168.123.105:0/1378416553 shutdown_connections 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.097+0000 7f598e0e0700 1 --2- 192.168.123.105:0/1378416553 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5974040ca0 0x7f5974043150 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.097+0000 7f598e0e0700 1 --2- 192.168.123.105:0/1378416553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5988107f40 0x7f5988197960 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.097+0000 7f598e0e0700 1 -- 192.168.123.105:0/1378416553 >> 192.168.123.105:0/1378416553 conn(0x7f5988103770 
msgr2=0x7f598807eed0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.097+0000 7f598e0e0700 1 -- 192.168.123.105:0/1378416553 shutdown_connections 2026-03-09T14:55:34.144 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.098+0000 7f598e0e0700 1 -- 192.168.123.105:0/1378416553 wait complete. 2026-03-09T14:55:34.343 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:34 vm05 ceph-mon[50611]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:34.344 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:34 vm05 ceph-mon[50611]: Saving service alertmanager spec with placement count:1 2026-03-09T14:55:34.344 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:34 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/1378416553' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T14:55:34.533 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "active_name": "vm05.lhsexd", 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.330+0000 7fe53f59e700 1 Processor -- start 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.330+0000 7fe53f59e700 1 -- start start 
2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.331+0000 7fe53f59e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 0x7fe540071090 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.331+0000 7fe53f59e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe5400715d0 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.331+0000 7fe53e59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 0x7fe540071090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.331+0000 7fe53e59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 0x7fe540071090 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35268/0 (socket says 192.168.123.105:35268) 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.331+0000 7fe53e59c700 1 -- 192.168.123.105:0/188376246 learned_addr learned my addr 192.168.123.105:0/188376246 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.332+0000 7fe53e59c700 1 -- 192.168.123.105:0/188376246 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe540071710 con 0x7fe540072a40 2026-03-09T14:55:34.534 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.332+0000 7fe53e59c700 1 --2- 192.168.123.105:0/188376246 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 0x7fe540071090 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fe53401ab30 tx=0x7fe53401ae40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=1e3889c7aadd654 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.333+0000 7fe53d59a700 1 -- 192.168.123.105:0/188376246 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe534004030 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.333+0000 7fe53d59a700 1 -- 192.168.123.105:0/188376246 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fe53401c8b0 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.333+0000 7fe53d59a700 1 -- 192.168.123.105:0/188376246 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe534003b50 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.333+0000 7fe53f59e700 1 -- 192.168.123.105:0/188376246 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 msgr2=0x7fe540071090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.333+0000 7fe53f59e700 1 --2- 192.168.123.105:0/188376246 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 0x7fe540071090 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fe53401ab30 tx=0x7fe53401ae40 comp rx=0 tx=0).stop 2026-03-09T14:55:34.534 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.333+0000 7fe53f59e700 1 -- 192.168.123.105:0/188376246 shutdown_connections 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.333+0000 7fe53f59e700 1 --2- 192.168.123.105:0/188376246 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 0x7fe540071090 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.333+0000 7fe53f59e700 1 -- 192.168.123.105:0/188376246 >> 192.168.123.105:0/188376246 conn(0x7fe54006c9d0 msgr2=0x7fe54006ee00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.333+0000 7fe53f59e700 1 -- 192.168.123.105:0/188376246 shutdown_connections 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.334+0000 7fe53f59e700 1 -- 192.168.123.105:0/188376246 wait complete. 
2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.334+0000 7fe53f59e700 1 Processor -- start 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.334+0000 7fe53f59e700 1 -- start start 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.334+0000 7fe53f59e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 0x7fe54019fe80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.334+0000 7fe53f59e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe5401a03c0 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.335+0000 7fe53e59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 0x7fe54019fe80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.335+0000 7fe53e59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 0x7fe54019fe80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35278/0 (socket says 192.168.123.105:35278) 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.335+0000 7fe53e59c700 1 -- 192.168.123.105:0/3968169291 learned_addr learned my addr 192.168.123.105:0/3968169291 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:34.534 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.335+0000 7fe53e59c700 1 -- 192.168.123.105:0/3968169291 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe53401a7e0 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.335+0000 7fe53e59c700 1 --2- 192.168.123.105:0/3968169291 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 0x7fe54019fe80 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fe534006b20 tx=0x7fe534004060 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.336+0000 7fe52f7fe700 1 -- 192.168.123.105:0/3968169291 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe5340043c0 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.336+0000 7fe53f59e700 1 -- 192.168.123.105:0/3968169291 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe5401a05c0 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.336+0000 7fe53f59e700 1 -- 192.168.123.105:0/3968169291 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe5401a0a60 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.337+0000 7fe53f59e700 1 -- 192.168.123.105:0/3968169291 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe540062380 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:34.338+0000 7fe52f7fe700 1 -- 192.168.123.105:0/3968169291 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fe534004520 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.342+0000 7fe52f7fe700 1 -- 192.168.123.105:0/3968169291 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe534022820 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.342+0000 7fe52f7fe700 1 -- 192.168.123.105:0/3968169291 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 9) v1 ==== 45092+0+0 (secure 0 0 0) 0x7fe534022a40 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.342+0000 7fe52f7fe700 1 --2- 192.168.123.105:0/3968169291 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe528038550 0x7fe52803aa00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.342+0000 7fe53dd9b700 1 -- 192.168.123.105:0/3968169291 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe528038550 msgr2=0x7fe52803aa00 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.342+0000 7fe53dd9b700 1 --2- 192.168.123.105:0/3968169291 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe528038550 0x7fe52803aa00 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.342+0000 7fe52f7fe700 1 -- 192.168.123.105:0/3968169291 
<== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7fe53405ecc0 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.343+0000 7fe52f7fe700 1 -- 192.168.123.105:0/3968169291 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fe534029760 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.483+0000 7fe53f59e700 1 -- 192.168.123.105:0/3968169291 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) v1 -- 0x7fe5401a3510 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.484+0000 7fe52f7fe700 1 -- 192.168.123.105:0/3968169291 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v9) v1 ==== 56+0+98 (secure 0 0 0) 0x7fe53402b3d0 con 0x7fe540072a40 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.490+0000 7fe52d7fa700 1 -- 192.168.123.105:0/3968169291 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe528038550 msgr2=0x7fe52803aa00 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.490+0000 7fe52d7fa700 1 --2- 192.168.123.105:0/3968169291 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe528038550 0x7fe52803aa00 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.490+0000 7fe52d7fa700 1 -- 192.168.123.105:0/3968169291 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 
msgr2=0x7fe54019fe80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.490+0000 7fe52d7fa700 1 --2- 192.168.123.105:0/3968169291 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 0x7fe54019fe80 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fe534006b20 tx=0x7fe534004060 comp rx=0 tx=0).stop 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.490+0000 7fe52d7fa700 1 -- 192.168.123.105:0/3968169291 shutdown_connections 2026-03-09T14:55:34.534 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.490+0000 7fe52d7fa700 1 --2- 192.168.123.105:0/3968169291 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe528038550 0x7fe52803aa00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:34.535 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.490+0000 7fe52d7fa700 1 --2- 192.168.123.105:0/3968169291 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe540072a40 0x7fe54019fe80 secure :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fe534006b20 tx=0x7fe534004060 comp rx=0 tx=0).stop 2026-03-09T14:55:34.535 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.490+0000 7fe52d7fa700 1 -- 192.168.123.105:0/3968169291 >> 192.168.123.105:0/3968169291 conn(0x7fe54006c9d0 msgr2=0x7fe54006d6c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:34.535 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.490+0000 7fe52d7fa700 1 -- 192.168.123.105:0/3968169291 shutdown_connections 2026-03-09T14:55:34.535 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.490+0000 7fe52d7fa700 1 -- 192.168.123.105:0/3968169291 wait complete. 
2026-03-09T14:55:34.535 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for the mgr to restart... 2026-03-09T14:55:34.535 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr epoch 9... 2026-03-09T14:55:35.298 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:35 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/1378416553' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T14:55:35.298 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:35 vm05 ceph-mon[50611]: mgrmap e9: vm05.lhsexd(active, since 9s) 2026-03-09T14:55:35.298 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:35 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/3968169291' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: Active manager daemon vm05.lhsexd restarted 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: Activating manager daemon vm05.lhsexd 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: osdmap e3: 0 total, 0 up, 0 in 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: mgrmap e10: vm05.lhsexd(active, starting, since 0.0283244s) 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm05.lhsexd", "id": "vm05.lhsexd"}]: dispatch 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' 
entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: Manager daemon vm05.lhsexd is now available 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:55:39.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:39 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.689+0000 7f5ee49ec700 1 Processor -- start 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:34.689+0000 7f5ee49ec700 1 -- start start 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.689+0000 7f5ee49ec700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee0071410 0x7f5ee0071820 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.689+0000 7f5ee49ec700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5ee0071d60 con 0x7f5ee0071410 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.690+0000 7f5eded9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee0071410 0x7f5ee0071820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.690+0000 7f5eded9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee0071410 0x7f5ee0071820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35280/0 (socket says 192.168.123.105:35280) 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.690+0000 7f5eded9d700 1 -- 192.168.123.105:0/1877824941 learned_addr learned my addr 192.168.123.105:0/1877824941 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.690+0000 7f5eded9d700 1 -- 192.168.123.105:0/1877824941 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5ee0071ea0 con 0x7f5ee0071410 
2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.690+0000 7f5eded9d700 1 --2- 192.168.123.105:0/1877824941 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee0071410 0x7f5ee0071820 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f5ed000d180 tx=0x7f5ed000d490 comp rx=0 tx=0).ready entity=mon.0 client_cookie=5c207be040c9157e server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.691+0000 7f5eddd9b700 1 -- 192.168.123.105:0/1877824941 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5ed0010070 con 0x7f5ee0071410 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.691+0000 7f5eddd9b700 1 -- 192.168.123.105:0/1877824941 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5ed0004510 con 0x7f5ee0071410 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.691+0000 7f5ee49ec700 1 -- 192.168.123.105:0/1877824941 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee0071410 msgr2=0x7f5ee0071820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.691+0000 7f5ee49ec700 1 --2- 192.168.123.105:0/1877824941 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee0071410 0x7f5ee0071820 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f5ed000d180 tx=0x7f5ed000d490 comp rx=0 tx=0).stop 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.691+0000 7f5ee49ec700 1 -- 192.168.123.105:0/1877824941 shutdown_connections 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:34.691+0000 7f5ee49ec700 1 --2- 192.168.123.105:0/1877824941 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee0071410 0x7f5ee0071820 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.691+0000 7f5ee49ec700 1 -- 192.168.123.105:0/1877824941 >> 192.168.123.105:0/1877824941 conn(0x7f5ee006c9d0 msgr2=0x7f5ee006ee00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.691+0000 7f5ee49ec700 1 -- 192.168.123.105:0/1877824941 shutdown_connections 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.691+0000 7f5ee49ec700 1 -- 192.168.123.105:0/1877824941 wait complete. 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.692+0000 7f5ee49ec700 1 Processor -- start 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.692+0000 7f5ee49ec700 1 -- start start 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.692+0000 7f5ee49ec700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee019ff10 0x7f5ee01a0320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.692+0000 7f5ee49ec700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5ed0003c20 con 0x7f5ee019ff10 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.692+0000 7f5eded9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee019ff10 0x7f5ee01a0320 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.692+0000 7f5eded9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee019ff10 0x7f5ee01a0320 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:35294/0 (socket says 192.168.123.105:35294) 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.692+0000 7f5eded9d700 1 -- 192.168.123.105:0/2129659010 learned_addr learned my addr 192.168.123.105:0/2129659010 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.693+0000 7f5eded9d700 1 -- 192.168.123.105:0/2129659010 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5ed00087c0 con 0x7f5ee019ff10 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.693+0000 7f5eded9d700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee019ff10 0x7f5ee01a0320 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f5ed0008c10 tx=0x7f5ed0008cf0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.694+0000 7f5ec7fff700 1 -- 192.168.123.105:0/2129659010 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5ed0010050 con 0x7f5ee019ff10 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.694+0000 7f5ee49ec700 1 -- 
192.168.123.105:0/2129659010 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5ee01a0860 con 0x7f5ee019ff10 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.694+0000 7f5ee49ec700 1 -- 192.168.123.105:0/2129659010 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5ee01a14e0 con 0x7f5ee019ff10 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.694+0000 7f5ec7fff700 1 -- 192.168.123.105:0/2129659010 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5ed000deb0 con 0x7f5ee019ff10 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.694+0000 7f5ec7fff700 1 -- 192.168.123.105:0/2129659010 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5ed0016440 con 0x7f5ee019ff10 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.695+0000 7f5ec7fff700 1 -- 192.168.123.105:0/2129659010 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 9) v1 ==== 45092+0+0 (secure 0 0 0) 0x7f5ed00165a0 con 0x7f5ee019ff10 2026-03-09T14:55:40.126 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.696+0000 7f5ec7fff700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 0x7f5ec803a9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.696+0000 7f5ede59c700 1 -- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 msgr2=0x7f5ec803a9b0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to 
v2:192.168.123.105:6800/2 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.696+0000 7f5ede59c700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 0x7f5ec803a9b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.696+0000 7f5ec7fff700 1 -- 192.168.123.105:0/2129659010 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- command(tid 0: {"prefix": "get_command_descriptions"}) v1 -- 0x7f5ec803b0c0 con 0x7f5ec8038500 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.696+0000 7f5ec7fff700 1 -- 192.168.123.105:0/2129659010 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(2..2 src has 1..2) v4 ==== 940+0+0 (secure 0 0 0) 0x7f5ed004ca00 con 0x7f5ee019ff10 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.896+0000 7f5ede59c700 1 -- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 msgr2=0x7f5ec803a9b0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:34.896+0000 7f5ede59c700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 0x7f5ec803a9b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:35.297+0000 7f5ede59c700 1 -- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 msgr2=0x7f5ec803a9b0 unknown 
:-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:35.297+0000 7f5ede59c700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 0x7f5ec803a9b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:36.098+0000 7f5ede59c700 1 -- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 msgr2=0x7f5ec803a9b0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:36.098+0000 7f5ede59c700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 0x7f5ec803a9b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:37.699+0000 7f5ede59c700 1 -- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 msgr2=0x7f5ec803a9b0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:37.699+0000 7f5ede59c700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 0x7f5ec803a9b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 3.200000 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:39.057+0000 7f5ec7fff700 1 -- 192.168.123.105:0/2129659010 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mgrmap(e 10) v1 ==== 44859+0+0 (secure 0 0 0) 0x7f5ed0014dc0 con 0x7f5ee019ff10 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:39.057+0000 7f5ec7fff700 1 -- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 msgr2=0x7f5ec803a9b0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:39.057+0000 7f5ec7fff700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 0x7f5ec803a9b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.061+0000 7f5ec7fff700 1 -- 192.168.123.105:0/2129659010 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mgrmap(e 11) v1 ==== 44986+0+0 (secure 0 0 0) 0x7f5ed004cdc0 con 0x7f5ee019ff10 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.062+0000 7f5ec7fff700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 0x7f5ec803a9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.062+0000 7f5ec7fff700 1 -- 192.168.123.105:0/2129659010 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- command(tid 0: {"prefix": "get_command_descriptions"}) v1 -- 0x7f5ec803b0c0 con 0x7f5ec8038500 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.063+0000 7f5ede59c700 1 --2- 192.168.123.105:0/2129659010 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 0x7f5ec803a9b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.063+0000 7f5ede59c700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 0x7f5ec803a9b0 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7f5ed8003de0 tx=0x7f5ed80073e0 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.065+0000 7f5ec7fff700 1 -- 192.168.123.105:0/2129659010 <== mgr.14164 v2:192.168.123.105:6800/2 1 ==== command_reply(tid 0: 0 ) v1 ==== 8+0+6910 (secure 0 0 0) 0x7f5ec803b0c0 con 0x7f5ec8038500 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.068+0000 7f5ec5ffb700 1 -- 192.168.123.105:0/2129659010 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- command(tid 1: {"prefix": "mgr_status"}) v1 -- 0x7f5ee01a1290 con 0x7f5ec8038500 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.069+0000 7f5ec7fff700 1 -- 192.168.123.105:0/2129659010 <== mgr.14164 v2:192.168.123.105:6800/2 2 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+52 (secure 0 0 0) 0x7f5ee01a1290 con 0x7f5ec8038500 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.070+0000 7f5ee49ec700 1 -- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 msgr2=0x7f5ec803a9b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:40.070+0000 7f5ee49ec700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 0x7f5ec803a9b0 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7f5ed8003de0 tx=0x7f5ed80073e0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.070+0000 7f5ee49ec700 1 -- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee019ff10 msgr2=0x7f5ee01a0320 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.070+0000 7f5ee49ec700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee019ff10 0x7f5ee01a0320 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f5ed0008c10 tx=0x7f5ed0008cf0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.074+0000 7f5ee49ec700 1 -- 192.168.123.105:0/2129659010 shutdown_connections 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.074+0000 7f5ee49ec700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5ec8038500 0x7f5ec803a9b0 unknown :-1 s=CLOSED pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.074+0000 7f5ee49ec700 1 --2- 192.168.123.105:0/2129659010 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5ee019ff10 0x7f5ee01a0320 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.074+0000 7f5ee49ec700 1 -- 192.168.123.105:0/2129659010 >> 
192.168.123.105:0/2129659010 conn(0x7f5ee006c9d0 msgr2=0x7f5ee006d460 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.074+0000 7f5ee49ec700 1 -- 192.168.123.105:0/2129659010 shutdown_connections 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.075+0000 7f5ee49ec700 1 -- 192.168.123.105:0/2129659010 wait complete. 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:mgr epoch 9 is available 2026-03-09T14:55:40.127 INFO:teuthology.orchestra.run.vm05.stdout:Generating a dashboard self-signed certificate... 2026-03-09T14:55:40.399 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:40 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/trash_purge_schedule"}]: dispatch 2026-03-09T14:55:40.399 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:40 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:40.399 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:40 vm05 ceph-mon[50611]: mgrmap e11: vm05.lhsexd(active, since 1.03098s) 2026-03-09T14:55:40.494 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-09T14:55:40.494 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.275+0000 7fd1a1041700 1 Processor -- start 2026-03-09T14:55:40.494 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.275+0000 7fd1a1041700 1 -- start start 2026-03-09T14:55:40.494 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.275+0000 7fd1a1041700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 0x7fd19c1068a0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0).connect 2026-03-09T14:55:40.494 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.275+0000 7fd1a1041700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd19c0745b0 con 0x7fd19c104480 2026-03-09T14:55:40.494 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.275+0000 7fd19ad9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 0x7fd19c1068a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:40.494 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.275+0000 7fd19ad9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 0x7fd19c1068a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43800/0 (socket says 192.168.123.105:43800) 2026-03-09T14:55:40.494 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.275+0000 7fd19ad9d700 1 -- 192.168.123.105:0/898665463 learned_addr learned my addr 192.168.123.105:0/898665463 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:40.494 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.276+0000 7fd19ad9d700 1 -- 192.168.123.105:0/898665463 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd19c0746f0 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.276+0000 7fd19ad9d700 1 --2- 192.168.123.105:0/898665463 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 0x7fd19c1068a0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7fd18c009a90 tx=0x7fd18c009da0 comp rx=0 tx=0).ready 
entity=mon.0 client_cookie=7d5791f70b03a0dc server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.276+0000 7fd199d9b700 1 -- 192.168.123.105:0/898665463 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd18c004030 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.276+0000 7fd199d9b700 1 -- 192.168.123.105:0/898665463 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd18c00b7e0 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.276+0000 7fd199d9b700 1 -- 192.168.123.105:0/898665463 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd18c003ae0 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.277+0000 7fd1a1041700 1 -- 192.168.123.105:0/898665463 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 msgr2=0x7fd19c1068a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.277+0000 7fd1a1041700 1 --2- 192.168.123.105:0/898665463 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 0x7fd19c1068a0 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7fd18c009a90 tx=0x7fd18c009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.277+0000 7fd1a1041700 1 -- 192.168.123.105:0/898665463 shutdown_connections 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.277+0000 7fd1a1041700 1 --2- 192.168.123.105:0/898665463 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 0x7fd19c1068a0 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.277+0000 7fd1a1041700 1 -- 192.168.123.105:0/898665463 >> 192.168.123.105:0/898665463 conn(0x7fd19c1000f0 msgr2=0x7fd19c102500 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.280+0000 7fd1a1041700 1 -- 192.168.123.105:0/898665463 shutdown_connections 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.280+0000 7fd1a1041700 1 -- 192.168.123.105:0/898665463 wait complete. 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.280+0000 7fd1a1041700 1 Processor -- start 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.280+0000 7fd1a1041700 1 -- start start 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.281+0000 7fd1a1041700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 0x7fd19c19ff00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.281+0000 7fd1a1041700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd19c1a0440 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.281+0000 7fd19ad9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 0x7fd19c19ff00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.281+0000 7fd19ad9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 0x7fd19c19ff00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43802/0 (socket says 192.168.123.105:43802) 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.281+0000 7fd19ad9d700 1 -- 192.168.123.105:0/439059627 learned_addr learned my addr 192.168.123.105:0/439059627 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.282+0000 7fd19ad9d700 1 -- 192.168.123.105:0/439059627 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd18c009740 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.282+0000 7fd19ad9d700 1 --2- 192.168.123.105:0/439059627 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 0x7fd19c19ff00 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fd18c009710 tx=0x7fd18c00bfa0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.282+0000 7fd183fff700 1 -- 192.168.123.105:0/439059627 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd18c004160 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.282+0000 7fd183fff700 1 -- 192.168.123.105:0/439059627 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 
1139+0+0 (secure 0 0 0) 0x7fd18c0042c0 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.282+0000 7fd183fff700 1 -- 192.168.123.105:0/439059627 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd18c0115a0 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.282+0000 7fd1a1041700 1 -- 192.168.123.105:0/439059627 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd19c1a0640 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.283+0000 7fd1a1041700 1 -- 192.168.123.105:0/439059627 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd19c1a0ae0 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.283+0000 7fd1a1041700 1 -- 192.168.123.105:0/439059627 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd19c199760 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.286+0000 7fd183fff700 1 -- 192.168.123.105:0/439059627 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 11) v1 ==== 44986+0+0 (secure 0 0 0) 0x7fd18c028020 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.286+0000 7fd183fff700 1 --2- 192.168.123.105:0/439059627 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd184038070 0x7fd18403a520 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.286+0000 
7fd183fff700 1 -- 192.168.123.105:0/439059627 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fd18c04c300 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.286+0000 7fd183fff700 1 -- 192.168.123.105:0/439059627 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fd18c04c6e0 con 0x7fd19c104480 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.286+0000 7fd19a59c700 1 --2- 192.168.123.105:0/439059627 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd184038070 0x7fd18403a520 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.287+0000 7fd19a59c700 1 --2- 192.168.123.105:0/439059627 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd184038070 0x7fd18403a520 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fd190006fd0 tx=0x7fd190006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.395+0000 7fd1a1041700 1 -- 192.168.123.105:0/439059627 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}) v1 -- 0x7fd19c061190 con 0x7fd184038070 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.452+0000 7fd183fff700 1 -- 192.168.123.105:0/439059627 <== mgr.14164 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7fd19c061190 con 
0x7fd184038070 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.454+0000 7fd1a1041700 1 -- 192.168.123.105:0/439059627 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd184038070 msgr2=0x7fd18403a520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.454+0000 7fd1a1041700 1 --2- 192.168.123.105:0/439059627 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd184038070 0x7fd18403a520 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fd190006fd0 tx=0x7fd190006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.454+0000 7fd1a1041700 1 -- 192.168.123.105:0/439059627 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 msgr2=0x7fd19c19ff00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.454+0000 7fd1a1041700 1 --2- 192.168.123.105:0/439059627 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 0x7fd19c19ff00 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fd18c009710 tx=0x7fd18c00bfa0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.454+0000 7fd1a1041700 1 -- 192.168.123.105:0/439059627 shutdown_connections 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.454+0000 7fd1a1041700 1 --2- 192.168.123.105:0/439059627 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd184038070 0x7fd18403a520 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:40.454+0000 7fd1a1041700 1 --2- 192.168.123.105:0/439059627 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd19c104480 0x7fd19c19ff00 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.454+0000 7fd1a1041700 1 -- 192.168.123.105:0/439059627 >> 192.168.123.105:0/439059627 conn(0x7fd19c1000f0 msgr2=0x7fd19c1024d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:40.495 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.454+0000 7fd1a1041700 1 -- 192.168.123.105:0/439059627 shutdown_connections 2026-03-09T14:55:40.496 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.454+0000 7fd1a1041700 1 -- 192.168.123.105:0/439059627 wait complete. 2026-03-09T14:55:40.496 INFO:teuthology.orchestra.run.vm05.stdout:Creating initial admin user... 2026-03-09T14:55:40.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$xpSEymv7bXf7Um7hMswQD.G2wkRPYDLYm13JAF7O0BHyJJd4YwU4i", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773068140, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T14:55:40.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.644+0000 7ff4aa65f700 1 Processor -- start 2026-03-09T14:55:40.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.644+0000 7ff4aa65f700 1 -- start start 2026-03-09T14:55:40.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.644+0000 7ff4aa65f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 0x7ff4a4108160 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:40.989 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.644+0000 7ff4aa65f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff4a41086a0 con 0x7ff4a4107d50 2026-03-09T14:55:40.989 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.644+0000 7ff4a965d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 0x7ff4a4108160 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.644+0000 7ff4a965d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 0x7ff4a4108160 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43812/0 (socket says 192.168.123.105:43812) 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.644+0000 7ff4a965d700 1 -- 192.168.123.105:0/1780842718 learned_addr learned my addr 192.168.123.105:0/1780842718 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.646+0000 7ff4a965d700 1 -- 192.168.123.105:0/1780842718 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff4a41087e0 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.646+0000 7ff4a965d700 1 --2- 192.168.123.105:0/1780842718 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 0x7ff4a4108160 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7ff49c00b0d0 tx=0x7ff49c00b490 comp rx=0 tx=0).ready entity=mon.0 
client_cookie=212a98fee29b1a62 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.646+0000 7ff49bfff700 1 -- 192.168.123.105:0/1780842718 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff49c00e070 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.646+0000 7ff49bfff700 1 -- 192.168.123.105:0/1780842718 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff49c003a20 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.646+0000 7ff49bfff700 1 -- 192.168.123.105:0/1780842718 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff49c0046b0 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.647+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1780842718 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 msgr2=0x7ff4a4108160 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.647+0000 7ff4aa65f700 1 --2- 192.168.123.105:0/1780842718 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 0x7ff4a4108160 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7ff49c00b0d0 tx=0x7ff49c00b490 comp rx=0 tx=0).stop 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.648+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1780842718 shutdown_connections 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.648+0000 7ff4aa65f700 1 --2- 192.168.123.105:0/1780842718 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 0x7ff4a4108160 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.648+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1780842718 >> 192.168.123.105:0/1780842718 conn(0x7ff4a41035a0 msgr2=0x7ff4a4105980 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.648+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1780842718 shutdown_connections 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.648+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1780842718 wait complete. 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.648+0000 7ff4aa65f700 1 Processor -- start 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.649+0000 7ff4aa65f700 1 -- start start 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.649+0000 7ff4aa65f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 0x7ff4a41976d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.649+0000 7ff4aa65f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff4a4197c10 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.649+0000 7ff4a965d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 0x7ff4a41976d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.649+0000 7ff4a965d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 0x7ff4a41976d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43814/0 (socket says 192.168.123.105:43814) 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.649+0000 7ff4a965d700 1 -- 192.168.123.105:0/1911688896 learned_addr learned my addr 192.168.123.105:0/1911688896 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.649+0000 7ff4a965d700 1 -- 192.168.123.105:0/1911688896 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff49c009d20 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.650+0000 7ff4a965d700 1 --2- 192.168.123.105:0/1911688896 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 0x7ff4a41976d0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7ff49c015040 tx=0x7ff49c00bd60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.651+0000 7ff49a7fc700 1 -- 192.168.123.105:0/1911688896 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff49c00e040 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.651+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1911688896 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({mgrmap=0+}) v3 -- 0x7ff4a4197e10 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.651+0000 7ff49a7fc700 1 -- 192.168.123.105:0/1911688896 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff49c01d950 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.651+0000 7ff49a7fc700 1 -- 192.168.123.105:0/1911688896 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff49c012980 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.651+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1911688896 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff4a41982b0 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.652+0000 7ff49a7fc700 1 -- 192.168.123.105:0/1911688896 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 11) v1 ==== 44986+0+0 (secure 0 0 0) 0x7ff49c019070 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.652+0000 7ff49a7fc700 1 --2- 192.168.123.105:0/1911688896 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff490038030 0x7ff49003a4e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.652+0000 7ff49a7fc700 1 -- 192.168.123.105:0/1911688896 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7ff49c051600 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.653+0000 7ff4a8e5c700 1 --2- 
192.168.123.105:0/1911688896 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff490038030 0x7ff49003a4e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.653+0000 7ff4a8e5c700 1 --2- 192.168.123.105:0/1911688896 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff490038030 0x7ff49003a4e0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7ff4a0009940 tx=0x7ff4a0006e30 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.653+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1911688896 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff4a404efc0 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.658+0000 7ff49a7fc700 1 -- 192.168.123.105:0/1911688896 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7ff49c017070 con 0x7ff4a4107d50 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.780+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1911688896 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}) v1 -- 0x7ff4a4062380 con 0x7ff490038030 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.957+0000 7ff49a7fc700 1 -- 192.168.123.105:0/1911688896 <== mgr.14164 
v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+252 (secure 0 0 0) 0x7ff4a4062380 con 0x7ff490038030 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.960+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1911688896 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff490038030 msgr2=0x7ff49003a4e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.960+0000 7ff4aa65f700 1 --2- 192.168.123.105:0/1911688896 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff490038030 0x7ff49003a4e0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7ff4a0009940 tx=0x7ff4a0006e30 comp rx=0 tx=0).stop 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.960+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1911688896 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 msgr2=0x7ff4a41976d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.960+0000 7ff4aa65f700 1 --2- 192.168.123.105:0/1911688896 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 0x7ff4a41976d0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7ff49c015040 tx=0x7ff49c00bd60 comp rx=0 tx=0).stop 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.960+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1911688896 shutdown_connections 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.960+0000 7ff4aa65f700 1 --2- 192.168.123.105:0/1911688896 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff490038030 0x7ff49003a4e0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.960+0000 7ff4aa65f700 1 --2- 192.168.123.105:0/1911688896 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff4a4107d50 0x7ff4a41976d0 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.960+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1911688896 >> 192.168.123.105:0/1911688896 conn(0x7ff4a41035a0 msgr2=0x7ff4a41052c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.960+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1911688896 shutdown_connections 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:40.960+0000 7ff4aa65f700 1 -- 192.168.123.105:0/1911688896 wait complete. 2026-03-09T14:55:40.990 INFO:teuthology.orchestra.run.vm05.stdout:Fetching dashboard port number... 
2026-03-09T14:55:41.253 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:41 vm05 ceph-mon[50611]: [09/Mar/2026:14:55:39] ENGINE Bus STARTING 2026-03-09T14:55:41.253 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:41 vm05 ceph-mon[50611]: [09/Mar/2026:14:55:39] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T14:55:41.253 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:41 vm05 ceph-mon[50611]: [09/Mar/2026:14:55:39] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T14:55:41.253 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:41 vm05 ceph-mon[50611]: [09/Mar/2026:14:55:39] ENGINE Bus STARTED 2026-03-09T14:55:41.253 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:41 vm05 ceph-mon[50611]: from='client.14168 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T14:55:41.253 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:41 vm05 ceph-mon[50611]: from='client.14168 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T14:55:41.253 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:41 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:41.253 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:41 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:41.253 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:41 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:41.253 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:41 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 8443 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.128+0000 7f56cbb5d700 1 Processor -- start 2026-03-09T14:55:41.286 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.128+0000 7f56cbb5d700 1 -- start start 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.128+0000 7f56cbb5d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 0x7f56c407cbf0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.128+0000 7f56cbb5d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f56c407d130 con 0x7f56c407c7e0 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.129+0000 7f56c98f9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 0x7f56c407cbf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.129+0000 7f56c98f9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 0x7f56c407cbf0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43826/0 (socket says 192.168.123.105:43826) 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.129+0000 7f56c98f9700 1 -- 192.168.123.105:0/3411706223 learned_addr learned my addr 192.168.123.105:0/3411706223 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.129+0000 7f56c98f9700 1 -- 192.168.123.105:0/3411706223 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f56c407d270 con 0x7f56c407c7e0 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.129+0000 7f56c98f9700 1 --2- 192.168.123.105:0/3411706223 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 0x7f56c407cbf0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f56bc01c0d0 tx=0x7f56bc01c3e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=343db02e2afd302e server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.129+0000 7f56c88f7700 1 -- 192.168.123.105:0/3411706223 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f56bc01f070 con 0x7f56c407c7e0 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.129+0000 7f56c88f7700 1 -- 192.168.123.105:0/3411706223 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f56bc003a20 con 0x7f56c407c7e0 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.130+0000 7f56cbb5d700 1 -- 192.168.123.105:0/3411706223 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 msgr2=0x7f56c407cbf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:41.286 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.130+0000 7f56cbb5d700 1 --2- 192.168.123.105:0/3411706223 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 0x7f56c407cbf0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f56bc01c0d0 tx=0x7f56bc01c3e0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.131+0000 7f56cbb5d700 1 -- 192.168.123.105:0/3411706223 shutdown_connections 2026-03-09T14:55:41.287 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.131+0000 7f56cbb5d700 1 --2- 192.168.123.105:0/3411706223 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 0x7f56c407cbf0 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.131+0000 7f56cbb5d700 1 -- 192.168.123.105:0/3411706223 >> 192.168.123.105:0/3411706223 conn(0x7f56c407b4b0 msgr2=0x7f56c407b8d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.131+0000 7f56cbb5d700 1 -- 192.168.123.105:0/3411706223 shutdown_connections 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.131+0000 7f56cbb5d700 1 -- 192.168.123.105:0/3411706223 wait complete. 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.132+0000 7f56cbb5d700 1 Processor -- start 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.132+0000 7f56cbb5d700 1 -- start start 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.132+0000 7f56cbb5d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 0x7f56c4197850 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.132+0000 7f56cbb5d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f56bc004560 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.132+0000 7f56c98f9700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 0x7f56c4197850 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.132+0000 7f56c98f9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 0x7f56c4197850 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43836/0 (socket says 192.168.123.105:43836) 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.132+0000 7f56c98f9700 1 -- 192.168.123.105:0/662845874 learned_addr learned my addr 192.168.123.105:0/662845874 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.132+0000 7f56c98f9700 1 -- 192.168.123.105:0/662845874 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f56bc01ad30 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.132+0000 7f56c98f9700 1 --2- 192.168.123.105:0/662845874 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 0x7f56c4197850 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f56bc026040 tx=0x7f56bc01cef0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.133+0000 7f56baffd700 1 -- 192.168.123.105:0/662845874 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f56bc01f040 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.133+0000 7f56cbb5d700 1 -- 192.168.123.105:0/662845874 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f56c4197d90 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.133+0000 7f56cbb5d700 1 -- 192.168.123.105:0/662845874 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f56c4198230 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.133+0000 7f56baffd700 1 -- 192.168.123.105:0/662845874 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f56bc023b20 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.133+0000 7f56baffd700 1 -- 192.168.123.105:0/662845874 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f56bc02ecc0 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.133+0000 7f56baffd700 1 -- 192.168.123.105:0/662845874 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 11) v1 ==== 44986+0+0 (secure 0 0 0) 0x7f56bc02a070 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.133+0000 7f56baffd700 1 --2- 192.168.123.105:0/662845874 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f56b0038390 0x7f56b003a840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.133+0000 7f56baffd700 1 -- 192.168.123.105:0/662845874 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 
1069+0+0 (secure 0 0 0) 0x7f56bc05e380 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.135+0000 7f56c90f8700 1 --2- 192.168.123.105:0/662845874 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f56b0038390 0x7f56b003a840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.135+0000 7f56cbb5d700 1 -- 192.168.123.105:0/662845874 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f56a80052f0 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.138+0000 7f56baffd700 1 -- 192.168.123.105:0/662845874 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f56bc028070 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.138+0000 7f56c90f8700 1 --2- 192.168.123.105:0/662845874 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f56b0038390 0x7f56b003a840 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f56b4006fd0 tx=0x7f56b4006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.251+0000 7f56cbb5d700 1 -- 192.168.123.105:0/662845874 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"} v 0) v1 -- 0x7f56a8005f40 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:41.254+0000 7f56baffd700 1 -- 192.168.123.105:0/662845874 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]=0 v8) v1 ==== 112+0+5 (secure 0 0 0) 0x7f56bc036360 con 0x7f56c407c7e0 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.256+0000 7f56cbb5d700 1 -- 192.168.123.105:0/662845874 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f56b0038390 msgr2=0x7f56b003a840 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.256+0000 7f56cbb5d700 1 --2- 192.168.123.105:0/662845874 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f56b0038390 0x7f56b003a840 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f56b4006fd0 tx=0x7f56b4006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.256+0000 7f56cbb5d700 1 -- 192.168.123.105:0/662845874 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 msgr2=0x7f56c4197850 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.256+0000 7f56cbb5d700 1 --2- 192.168.123.105:0/662845874 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 0x7f56c4197850 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f56bc026040 tx=0x7f56bc01cef0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.256+0000 7f56cbb5d700 1 -- 192.168.123.105:0/662845874 shutdown_connections 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.256+0000 7f56cbb5d700 1 --2- 192.168.123.105:0/662845874 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f56b0038390 0x7f56b003a840 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.256+0000 7f56cbb5d700 1 --2- 192.168.123.105:0/662845874 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f56c407c7e0 0x7f56c4197850 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.256+0000 7f56cbb5d700 1 -- 192.168.123.105:0/662845874 >> 192.168.123.105:0/662845874 conn(0x7f56c407b4b0 msgr2=0x7f56c4106c10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.256+0000 7f56cbb5d700 1 -- 192.168.123.105:0/662845874 shutdown_connections 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.257+0000 7f56cbb5d700 1 -- 192.168.123.105:0/662845874 wait complete. 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present 2026-03-09T14:55:41.287 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to open ports <[8443]>. 
firewalld.service is not available 2026-03-09T14:55:41.288 INFO:teuthology.orchestra.run.vm05.stdout:Ceph Dashboard is now available at: 2026-03-09T14:55:41.288 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:55:41.288 INFO:teuthology.orchestra.run.vm05.stdout: URL: https://vm05.local:8443/ 2026-03-09T14:55:41.288 INFO:teuthology.orchestra.run.vm05.stdout: User: admin 2026-03-09T14:55:41.288 INFO:teuthology.orchestra.run.vm05.stdout: Password: 3ka9kmevm4 2026-03-09T14:55:41.288 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:55:41.288 INFO:teuthology.orchestra.run.vm05.stdout:Saving cluster configuration to /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config directory 2026-03-09T14:55:41.289 INFO:teuthology.orchestra.run.vm05.stdout:Enabling autotune for osd_memory_target 2026-03-09T14:55:41.574 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.423+0000 7f97ed09b700 1 Processor -- start 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.424+0000 7f97ed09b700 1 -- start start 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.424+0000 7f97ed09b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 0x7f97e8106120 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.424+0000 7f97ed09b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f97e8106660 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.424+0000 7f97e6d9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 0x7f97e8106120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.424+0000 7f97e6d9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 0x7f97e8106120 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43842/0 (socket says 192.168.123.105:43842) 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.424+0000 7f97e6d9d700 1 -- 192.168.123.105:0/2046512073 learned_addr learned my addr 192.168.123.105:0/2046512073 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.424+0000 7f97e6d9d700 1 -- 192.168.123.105:0/2046512073 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f97e81067a0 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.424+0000 7f97e6d9d700 1 --2- 192.168.123.105:0/2046512073 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 0x7f97e8106120 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f97dc009cf0 tx=0x7f97dc00b0e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=3a95bbb30e559a22 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.425+0000 7f97e5d9b700 1 -- 192.168.123.105:0/2046512073 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f97dc004030 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.425+0000 7f97e5d9b700 1 -- 192.168.123.105:0/2046512073 <== mon.0 v2:192.168.123.105:3300/0 2 ==== 
config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f97dc00b810 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.425+0000 7f97e5d9b700 1 -- 192.168.123.105:0/2046512073 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f97dc003b10 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.425+0000 7f97ed09b700 1 -- 192.168.123.105:0/2046512073 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 msgr2=0x7f97e8106120 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.426+0000 7f97ed09b700 1 --2- 192.168.123.105:0/2046512073 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 0x7f97e8106120 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f97dc009cf0 tx=0x7f97dc00b0e0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.426+0000 7f97ed09b700 1 -- 192.168.123.105:0/2046512073 shutdown_connections 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.426+0000 7f97ed09b700 1 --2- 192.168.123.105:0/2046512073 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 0x7f97e8106120 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.426+0000 7f97ed09b700 1 -- 192.168.123.105:0/2046512073 >> 192.168.123.105:0/2046512073 conn(0x7f97e8101360 msgr2=0x7f97e8103790 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.426+0000 7f97ed09b700 1 -- 
192.168.123.105:0/2046512073 shutdown_connections 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.426+0000 7f97ed09b700 1 -- 192.168.123.105:0/2046512073 wait complete. 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.427+0000 7f97ed09b700 1 Processor -- start 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.427+0000 7f97ed09b700 1 -- start start 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.427+0000 7f97ed09b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 0x7f97e8199a60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.427+0000 7f97ed09b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f97e8199fa0 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.427+0000 7f97e6d9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 0x7f97e8199a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.427+0000 7f97e6d9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 0x7f97e8199a60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43854/0 (socket says 192.168.123.105:43854) 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-09T14:55:41.427+0000 7f97e6d9d700 1 -- 192.168.123.105:0/1539135577 learned_addr learned my addr 192.168.123.105:0/1539135577 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.427+0000 7f97e6d9d700 1 -- 192.168.123.105:0/1539135577 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f97dc009740 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.427+0000 7f97e6d9d700 1 --2- 192.168.123.105:0/1539135577 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 0x7f97e8199a60 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f97dc0037e0 tx=0x7f97dc011770 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.428+0000 7f97cffff700 1 -- 192.168.123.105:0/1539135577 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f97dc011a10 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.428+0000 7f97cffff700 1 -- 192.168.123.105:0/1539135577 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f97dc011b70 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.428+0000 7f97ed09b700 1 -- 192.168.123.105:0/1539135577 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f97e819a0e0 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.428+0000 7f97ed09b700 1 -- 192.168.123.105:0/1539135577 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
-- mon_subscribe({osdmap=0}) v3 -- 0x7f97e819a580 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.429+0000 7f97cffff700 1 -- 192.168.123.105:0/1539135577 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f97dc01a520 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.429+0000 7f97ed09b700 1 -- 192.168.123.105:0/1539135577 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f97d4005320 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.430+0000 7f97cffff700 1 -- 192.168.123.105:0/1539135577 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 11) v1 ==== 44986+0+0 (secure 0 0 0) 0x7f97dc01a740 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.430+0000 7f97cffff700 1 --2- 192.168.123.105:0/1539135577 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f97d0038070 0x7f97d003a520 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.430+0000 7f97cffff700 1 -- 192.168.123.105:0/1539135577 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f97dc04d4f0 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.432+0000 7f97e659c700 1 --2- 192.168.123.105:0/1539135577 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f97d0038070 0x7f97d003a520 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.433+0000 7f97e659c700 1 --2- 192.168.123.105:0/1539135577 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f97d0038070 0x7f97d003a520 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f97d8006fd0 tx=0x7f97d8006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.433+0000 7f97cffff700 1 -- 192.168.123.105:0/1539135577 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f97dc018350 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.541+0000 7f97ed09b700 1 -- 192.168.123.105:0/1539135577 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1 -- 0x7f97d4005f70 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.544+0000 7f97cffff700 1 -- 192.168.123.105:0/1539135577 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{prefix=config set, name=osd_memory_target_autotune}]=0 v8) v1 ==== 127+0+0 (secure 0 0 0) 0x7f97dc018350 con 0x7f97e8105d10 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.546+0000 7f97ed09b700 1 -- 192.168.123.105:0/1539135577 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f97d0038070 msgr2=0x7f97d003a520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.546+0000 7f97ed09b700 1 --2- 192.168.123.105:0/1539135577 >>
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f97d0038070 0x7f97d003a520 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f97d8006fd0 tx=0x7f97d8006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.546+0000 7f97ed09b700 1 -- 192.168.123.105:0/1539135577 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 msgr2=0x7f97e8199a60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.546+0000 7f97ed09b700 1 --2- 192.168.123.105:0/1539135577 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 0x7f97e8199a60 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f97dc0037e0 tx=0x7f97dc011770 comp rx=0 tx=0).stop 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.546+0000 7f97ed09b700 1 -- 192.168.123.105:0/1539135577 shutdown_connections 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.546+0000 7f97ed09b700 1 --2- 192.168.123.105:0/1539135577 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f97d0038070 0x7f97d003a520 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.546+0000 7f97ed09b700 1 --2- 192.168.123.105:0/1539135577 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f97e8105d10 0x7f97e8199a60 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.546+0000 7f97ed09b700 1 -- 192.168.123.105:0/1539135577 >> 192.168.123.105:0/1539135577 conn(0x7f97e8101360 msgr2=0x7f97e8102020 unknown :-1 
s=STATE_NONE l=0).mark_down 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.547+0000 7f97ed09b700 1 -- 192.168.123.105:0/1539135577 shutdown_connections 2026-03-09T14:55:41.575 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.547+0000 7f97ed09b700 1 -- 192.168.123.105:0/1539135577 wait complete. 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.706+0000 7f20975b1700 1 Processor -- start 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.706+0000 7f20975b1700 1 -- start start 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.707+0000 7f20975b1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 0x7f2090108330 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.707+0000 7f20975b1700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2090108870 con 0x7f2090107f20 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.707+0000 7f209534d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 0x7f2090108330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.707+0000 7f209534d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 0x7f2090108330 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:43864/0 (socket says 192.168.123.105:43864) 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.707+0000 7f209534d700 1 -- 192.168.123.105:0/4282743763 learned_addr learned my addr 192.168.123.105:0/4282743763 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.707+0000 7f209534d700 1 -- 192.168.123.105:0/4282743763 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f20901089b0 con 0x7f2090107f20 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.707+0000 7f209534d700 1 --2- 192.168.123.105:0/4282743763 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 0x7f2090108330 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f2080009a90 tx=0x7f2080009da0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=1bcb5b3f187a3d05 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.708+0000 7f2087fff700 1 -- 192.168.123.105:0/4282743763 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2080004030 con 0x7f2090107f20 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.708+0000 7f2087fff700 1 -- 192.168.123.105:0/4282743763 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f208000b7e0 con 0x7f2090107f20 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.708+0000 7f2087fff700 1 -- 192.168.123.105:0/4282743763 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2080003ae0 con 0x7f2090107f20 2026-03-09T14:55:41.939 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.708+0000 7f20975b1700 1 -- 192.168.123.105:0/4282743763 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 msgr2=0x7f2090108330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.708+0000 7f20975b1700 1 --2- 192.168.123.105:0/4282743763 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 0x7f2090108330 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f2080009a90 tx=0x7f2080009da0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.709+0000 7f20975b1700 1 -- 192.168.123.105:0/4282743763 shutdown_connections 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.709+0000 7f20975b1700 1 --2- 192.168.123.105:0/4282743763 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 0x7f2090108330 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.709+0000 7f20975b1700 1 -- 192.168.123.105:0/4282743763 >> 192.168.123.105:0/4282743763 conn(0x7f209007b4b0 msgr2=0x7f209007b8d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.709+0000 7f20975b1700 1 -- 192.168.123.105:0/4282743763 shutdown_connections 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.709+0000 7f20975b1700 1 -- 192.168.123.105:0/4282743763 wait complete. 
2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.709+0000 7f20975b1700 1 Processor -- start 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.710+0000 7f20975b1700 1 -- start start 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.710+0000 7f20975b1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 0x7f209019bc30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.710+0000 7f20975b1700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f209019c170 con 0x7f2090107f20 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.710+0000 7f209534d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 0x7f209019bc30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.710+0000 7f209534d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 0x7f209019bc30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43870/0 (socket says 192.168.123.105:43870) 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.710+0000 7f209534d700 1 -- 192.168.123.105:0/3502861017 learned_addr learned my addr 192.168.123.105:0/3502861017 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:41.939 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.710+0000 7f209534d700 1 -- 192.168.123.105:0/3502861017 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2080009740 con 0x7f2090107f20 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.711+0000 7f209534d700 1 --2- 192.168.123.105:0/3502861017 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 0x7f209019bc30 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f2080000c00 tx=0x7f208000bfa0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:41.939 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.711+0000 7f20867fc700 1 -- 192.168.123.105:0/3502861017 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2080004160 con 0x7f2090107f20 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.711+0000 7f20867fc700 1 -- 192.168.123.105:0/3502861017 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f20800042c0 con 0x7f2090107f20 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.711+0000 7f20867fc700 1 -- 192.168.123.105:0/3502861017 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2080011620 con 0x7f2090107f20 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.711+0000 7f20975b1700 1 -- 192.168.123.105:0/3502861017 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f209019c370 con 0x7f2090107f20 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.711+0000 7f20975b1700 1 
-- 192.168.123.105:0/3502861017 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f209019c810 con 0x7f2090107f20 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.712+0000 7f20867fc700 1 -- 192.168.123.105:0/3502861017 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 11) v1 ==== 44986+0+0 (secure 0 0 0) 0x7f2080011780 con 0x7f2090107f20 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.712+0000 7f20867fc700 1 --2- 192.168.123.105:0/3502861017 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f207c0383f0 0x7f207c03a8a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.712+0000 7f20867fc700 1 -- 192.168.123.105:0/3502861017 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f208004d0d0 con 0x7f2090107f20 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.713+0000 7f2094b4c700 1 --2- 192.168.123.105:0/3502861017 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f207c0383f0 0x7f207c03a8a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.713+0000 7f20975b1700 1 -- 192.168.123.105:0/3502861017 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2090062380 con 0x7f2090107f20 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.714+0000 7f2094b4c700 1 --2- 192.168.123.105:0/3502861017 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f207c0383f0 0x7f207c03a8a0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f208c006fd0 tx=0x7f208c006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.716+0000 7f20867fc700 1 -- 192.168.123.105:0/3502861017 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f2080011a80 con 0x7f2090107f20 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.883+0000 7f20975b1700 1 -- 192.168.123.105:0/3502861017 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1 -- 0x7f209010c3b0 con 0x7f2090107f20 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.885+0000 7f20867fc700 1 -- 192.168.123.105:0/3502861017 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{prefix=config-key set, key=mgr/dashboard/cluster/status}]=0 set mgr/dashboard/cluster/status v34) v1 ==== 153+0+0 (secure 0 0 0) 0x7f2080018b40 con 0x7f2090107f20 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.887+0000 7f20975b1700 1 -- 192.168.123.105:0/3502861017 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f207c0383f0 msgr2=0x7f207c03a8a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.887+0000 7f20975b1700 1 --2- 192.168.123.105:0/3502861017 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f207c0383f0 0x7f207c03a8a0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto 
rx=0x7f208c006fd0 tx=0x7f208c006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.888+0000 7f20975b1700 1 -- 192.168.123.105:0/3502861017 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 msgr2=0x7f209019bc30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.888+0000 7f20975b1700 1 --2- 192.168.123.105:0/3502861017 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 0x7f209019bc30 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f2080000c00 tx=0x7f208000bfa0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.888+0000 7f20975b1700 1 -- 192.168.123.105:0/3502861017 shutdown_connections 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.888+0000 7f20975b1700 1 --2- 192.168.123.105:0/3502861017 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f207c0383f0 0x7f207c03a8a0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.888+0000 7f20975b1700 1 --2- 192.168.123.105:0/3502861017 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2090107f20 0x7f209019bc30 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.888+0000 7f20975b1700 1 -- 192.168.123.105:0/3502861017 >> 192.168.123.105:0/3502861017 conn(0x7f209007b4b0 msgr2=0x7f20901055d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.888+0000 
7f20975b1700 1 -- 192.168.123.105:0/3502861017 shutdown_connections 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-09T14:55:41.888+0000 7f20975b1700 1 -- 192.168.123.105:0/3502861017 wait complete. 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:Or, if you are only running a single cluster on this host: 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout: ceph telemetry on 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:For more information see: 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:55:41.940 INFO:teuthology.orchestra.run.vm05.stdout:Bootstrap 
complete. 2026-03-09T14:55:41.969 INFO:tasks.cephadm:Fetching config... 2026-03-09T14:55:41.969 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T14:55:41.969 DEBUG:teuthology.orchestra.run.vm05:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T14:55:41.994 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T14:55:41.994 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T14:55:41.994 DEBUG:teuthology.orchestra.run.vm05:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T14:55:42.057 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-09T14:55:42.057 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T14:55:42.057 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/keyring of=/dev/stdout 2026-03-09T14:55:42.133 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-09T14:55:42.133 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T14:55:42.133 DEBUG:teuthology.orchestra.run.vm05:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-09T14:55:42.193 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-09T14:55:42.193 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyOqi4+YN0cFEuNE+epK3rFx/DtF3RriFSg+xXPyigspQYGcYModV+a15nnuQOq+70+6SZcl8zIoMKlVKwRvKe24eW+xGf34oQgEkNWDgCxR6ChKdLruaB2Tl18lA4iiSF4Y0heN2xcDiLE8lpMEQFevtwrLXRaY/eoKwhgMjlOAKAmiVt1barDGwFDxhUSewyKLlpIy5B73IVcjQiWVhZNWADGwKPYQHuI/gTp1voLSEdZBr0LjP9sA7p+fT2i9N+yoRxNYyaYqwOdjeZJGJJdEROgddNoGGj7Nsu7pp8zS6Bd2RTw6DjKZtvyHkZTu+/ujq1rtLRwXDF+9dBZMniSzGqWATBDnH3tGxG/tiLpaqYjHFRpXmT71RyU5QwJJN7omUPr1L/o1u469LmDN9FIe45FLy+BBBTOJgzf3K0zCvdfPMhIRqvYYUiizGzy8MPJd7ttmelaXxPB4XBeeTk7hgKpmR2GrVmMPEq8BwDF9NGSusH0RC45p1W6+O9alU= ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T14:55:42.260 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:42 vm05 ceph-mon[50611]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:42.260 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:42 vm05 ceph-mon[50611]: from='client.14178 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:42.260 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:42 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/662845874' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T14:55:42.260 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:42 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/3502861017' entity='client.admin' 2026-03-09T14:55:42.260 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:42 vm05 ceph-mon[50611]: mgrmap e12: vm05.lhsexd(active, since 2s) 2026-03-09T14:55:42.298 INFO:teuthology.orchestra.run.vm05.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyOqi4+YN0cFEuNE+epK3rFx/DtF3RriFSg+xXPyigspQYGcYModV+a15nnuQOq+70+6SZcl8zIoMKlVKwRvKe24eW+xGf34oQgEkNWDgCxR6ChKdLruaB2Tl18lA4iiSF4Y0heN2xcDiLE8lpMEQFevtwrLXRaY/eoKwhgMjlOAKAmiVt1barDGwFDxhUSewyKLlpIy5B73IVcjQiWVhZNWADGwKPYQHuI/gTp1voLSEdZBr0LjP9sA7p+fT2i9N+yoRxNYyaYqwOdjeZJGJJdEROgddNoGGj7Nsu7pp8zS6Bd2RTw6DjKZtvyHkZTu+/ujq1rtLRwXDF+9dBZMniSzGqWATBDnH3tGxG/tiLpaqYjHFRpXmT71RyU5QwJJN7omUPr1L/o1u469LmDN9FIe45FLy+BBBTOJgzf3K0zCvdfPMhIRqvYYUiizGzy8MPJd7ttmelaXxPB4XBeeTk7hgKpmR2GrVmMPEq8BwDF9NGSusH0RC45p1W6+O9alU= ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T14:55:42.313 DEBUG:teuthology.orchestra.run.vm09:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyOqi4+YN0cFEuNE+epK3rFx/DtF3RriFSg+xXPyigspQYGcYModV+a15nnuQOq+70+6SZcl8zIoMKlVKwRvKe24eW+xGf34oQgEkNWDgCxR6ChKdLruaB2Tl18lA4iiSF4Y0heN2xcDiLE8lpMEQFevtwrLXRaY/eoKwhgMjlOAKAmiVt1barDGwFDxhUSewyKLlpIy5B73IVcjQiWVhZNWADGwKPYQHuI/gTp1voLSEdZBr0LjP9sA7p+fT2i9N+yoRxNYyaYqwOdjeZJGJJdEROgddNoGGj7Nsu7pp8zS6Bd2RTw6DjKZtvyHkZTu+/ujq1rtLRwXDF+9dBZMniSzGqWATBDnH3tGxG/tiLpaqYjHFRpXmT71RyU5QwJJN7omUPr1L/o1u469LmDN9FIe45FLy+BBBTOJgzf3K0zCvdfPMhIRqvYYUiizGzy8MPJd7ttmelaXxPB4XBeeTk7hgKpmR2GrVmMPEq8BwDF9NGSusH0RC45p1W6+O9alU= ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T14:55:42.352 INFO:teuthology.orchestra.run.vm09.stdout:ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyOqi4+YN0cFEuNE+epK3rFx/DtF3RriFSg+xXPyigspQYGcYModV+a15nnuQOq+70+6SZcl8zIoMKlVKwRvKe24eW+xGf34oQgEkNWDgCxR6ChKdLruaB2Tl18lA4iiSF4Y0heN2xcDiLE8lpMEQFevtwrLXRaY/eoKwhgMjlOAKAmiVt1barDGwFDxhUSewyKLlpIy5B73IVcjQiWVhZNWADGwKPYQHuI/gTp1voLSEdZBr0LjP9sA7p+fT2i9N+yoRxNYyaYqwOdjeZJGJJdEROgddNoGGj7Nsu7pp8zS6Bd2RTw6DjKZtvyHkZTu+/ujq1rtLRwXDF+9dBZMniSzGqWATBDnH3tGxG/tiLpaqYjHFRpXmT71RyU5QwJJN7omUPr1L/o1u469LmDN9FIe45FLy+BBBTOJgzf3K0zCvdfPMhIRqvYYUiizGzy8MPJd7ttmelaXxPB4XBeeTk7hgKpmR2GrVmMPEq8BwDF9NGSusH0RC45p1W6+O9alU= ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T14:55:42.365 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-09T14:55:42.515 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:55:43.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.461+0000 7fda6ee75700 1 -- 192.168.123.105:0/1876237772 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fda680713c0 msgr2=0x7fda680717d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:43.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.461+0000 7fda6ee75700 1 --2- 192.168.123.105:0/1876237772 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fda680713c0 0x7fda680717d0 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7fda64009b00 tx=0x7fda64009e10 comp rx=0 tx=0).stop 2026-03-09T14:55:43.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.461+0000 7fda6ee75700 1 -- 192.168.123.105:0/1876237772 shutdown_connections 2026-03-09T14:55:43.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.461+0000 7fda6ee75700 1 --2- 192.168.123.105:0/1876237772 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fda680713c0 0x7fda680717d0 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:43.463 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.461+0000 7fda6ee75700 1 -- 192.168.123.105:0/1876237772 >> 192.168.123.105:0/1876237772 conn(0x7fda6806cd30 msgr2=0x7fda6806f180 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.462+0000 7fda6ee75700 1 -- 192.168.123.105:0/1876237772 shutdown_connections 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.462+0000 7fda6ee75700 1 -- 192.168.123.105:0/1876237772 wait complete. 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.462+0000 7fda6ee75700 1 Processor -- start 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.462+0000 7fda6ee75700 1 -- start start 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.462+0000 7fda6ee75700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fda680713c0 0x7fda681acab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.462+0000 7fda6ee75700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fda681acff0 con 0x7fda680713c0 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.462+0000 7fda6de73700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fda680713c0 0x7fda681acab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.462+0000 7fda6de73700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fda680713c0 0x7fda681acab0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43888/0 (socket says 192.168.123.105:43888) 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.462+0000 7fda6de73700 1 -- 192.168.123.105:0/3683666000 learned_addr learned my addr 192.168.123.105:0/3683666000 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.463+0000 7fda6de73700 1 -- 192.168.123.105:0/3683666000 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fda640097e0 con 0x7fda680713c0 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.463+0000 7fda6de73700 1 --2- 192.168.123.105:0/3683666000 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fda680713c0 0x7fda681acab0 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7fda64004f40 tx=0x7fda64004740 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.463+0000 7fda5effd700 1 -- 192.168.123.105:0/3683666000 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fda6401c070 con 0x7fda680713c0 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.463+0000 7fda6ee75700 1 -- 192.168.123.105:0/3683666000 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fda681ad1f0 con 0x7fda680713c0 2026-03-09T14:55:43.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.463+0000 7fda6ee75700 1 -- 192.168.123.105:0/3683666000 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 
0x7fda681ad610 con 0x7fda680713c0 2026-03-09T14:55:43.465 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.464+0000 7fda5effd700 1 -- 192.168.123.105:0/3683666000 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fda640053f0 con 0x7fda680713c0 2026-03-09T14:55:43.465 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.464+0000 7fda5effd700 1 -- 192.168.123.105:0/3683666000 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fda6400f550 con 0x7fda680713c0 2026-03-09T14:55:43.466 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.465+0000 7fda6ee75700 1 -- 192.168.123.105:0/3683666000 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fda68110430 con 0x7fda680713c0 2026-03-09T14:55:43.468 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.467+0000 7fda5effd700 1 -- 192.168.123.105:0/3683666000 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 12) v1 ==== 45092+0+0 (secure 0 0 0) 0x7fda6400f6f0 con 0x7fda680713c0 2026-03-09T14:55:43.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.468+0000 7fda5effd700 1 --2- 192.168.123.105:0/3683666000 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fda54038550 0x7fda5403aa00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:43.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.468+0000 7fda5effd700 1 -- 192.168.123.105:0/3683666000 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fda6404d440 con 0x7fda680713c0 2026-03-09T14:55:43.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.468+0000 7fda6d672700 1 --2- 192.168.123.105:0/3683666000 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fda54038550 0x7fda5403aa00 unknown :-1 s=BANNER_CONNECTING 
pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:43.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.468+0000 7fda5effd700 1 -- 192.168.123.105:0/3683666000 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fda6404f800 con 0x7fda680713c0 2026-03-09T14:55:43.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.474+0000 7fda6d672700 1 --2- 192.168.123.105:0/3683666000 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fda54038550 0x7fda5403aa00 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fda6000ad30 tx=0x7fda600093f0 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:43.594 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.592+0000 7fda6ee75700 1 -- 192.168.123.105:0/3683666000 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/allow_ptrace}] v 0) v1 -- 0x7fda68062380 con 0x7fda680713c0 2026-03-09T14:55:43.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.596+0000 7fda5effd700 1 -- 192.168.123.105:0/3683666000 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/allow_ptrace}]=0 v9) v1 ==== 125+0+0 (secure 0 0 0) 0x7fda64026030 con 0x7fda680713c0 2026-03-09T14:55:43.613 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.612+0000 7fda6ee75700 1 -- 192.168.123.105:0/3683666000 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fda54038550 msgr2=0x7fda5403aa00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:43.614 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.612+0000 7fda6ee75700 1 --2- 192.168.123.105:0/3683666000 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fda54038550 
0x7fda5403aa00 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7fda6000ad30 tx=0x7fda600093f0 comp rx=0 tx=0).stop 2026-03-09T14:55:43.614 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.612+0000 7fda6ee75700 1 -- 192.168.123.105:0/3683666000 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fda680713c0 msgr2=0x7fda681acab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:43.614 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.612+0000 7fda6ee75700 1 --2- 192.168.123.105:0/3683666000 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fda680713c0 0x7fda681acab0 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7fda64004f40 tx=0x7fda64004740 comp rx=0 tx=0).stop 2026-03-09T14:55:43.614 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.612+0000 7fda6ee75700 1 -- 192.168.123.105:0/3683666000 shutdown_connections 2026-03-09T14:55:43.614 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.612+0000 7fda6ee75700 1 --2- 192.168.123.105:0/3683666000 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fda54038550 0x7fda5403aa00 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:43.614 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.612+0000 7fda6ee75700 1 --2- 192.168.123.105:0/3683666000 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fda680713c0 0x7fda681acab0 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:43.614 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.612+0000 7fda6ee75700 1 -- 192.168.123.105:0/3683666000 >> 192.168.123.105:0/3683666000 conn(0x7fda6806cd30 msgr2=0x7fda6806ea70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:43.614 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.613+0000 7fda6ee75700 1 -- 192.168.123.105:0/3683666000 shutdown_connections 
2026-03-09T14:55:43.615 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:43.614+0000 7fda6ee75700 1 -- 192.168.123.105:0/3683666000 wait complete. 2026-03-09T14:55:43.687 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-09T14:55:43.687 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-09T14:55:43.856 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:55:44.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.150+0000 7ff09c928700 1 -- 192.168.123.105:0/1934553989 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0940fe140 msgr2=0x7ff0940fe550 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:44.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.150+0000 7ff09c928700 1 --2- 192.168.123.105:0/1934553989 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0940fe140 0x7ff0940fe550 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7ff088009b50 tx=0x7ff088009e60 comp rx=0 tx=0).stop 2026-03-09T14:55:44.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.159+0000 7ff09c928700 1 -- 192.168.123.105:0/1934553989 shutdown_connections 2026-03-09T14:55:44.161 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.159+0000 7ff09c928700 1 --2- 192.168.123.105:0/1934553989 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0940fe140 0x7ff0940fe550 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:44.161 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.159+0000 7ff09c928700 1 -- 192.168.123.105:0/1934553989 >> 
192.168.123.105:0/1934553989 conn(0x7ff0940f9ab0 msgr2=0x7ff0940fbee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:44.161 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.159+0000 7ff09c928700 1 -- 192.168.123.105:0/1934553989 shutdown_connections 2026-03-09T14:55:44.161 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.159+0000 7ff09c928700 1 -- 192.168.123.105:0/1934553989 wait complete. 2026-03-09T14:55:44.161 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.160+0000 7ff09c928700 1 Processor -- start 2026-03-09T14:55:44.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.163+0000 7ff09c928700 1 -- start start 2026-03-09T14:55:44.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.163+0000 7ff09c928700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0940fe140 0x7ff09410dff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:44.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.163+0000 7ff09a6c4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0940fe140 0x7ff09410dff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:44.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.163+0000 7ff09a6c4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0940fe140 0x7ff09410dff0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43914/0 (socket says 192.168.123.105:43914) 2026-03-09T14:55:44.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.163+0000 7ff09a6c4700 1 -- 192.168.123.105:0/3749335280 learned_addr learned my addr 192.168.123.105:0/3749335280 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:44.165 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.163+0000 7ff09c928700 1 -- 192.168.123.105:0/3749335280 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff09410f9a0 con 0x7ff0940fe140 2026-03-09T14:55:44.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.164+0000 7ff09a6c4700 1 -- 192.168.123.105:0/3749335280 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff0880097e0 con 0x7ff0940fe140 2026-03-09T14:55:44.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.164+0000 7ff09a6c4700 1 --2- 192.168.123.105:0/3749335280 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0940fe140 0x7ff09410dff0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7ff088006010 tx=0x7ff0880050d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:44.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.164+0000 7ff0877fe700 1 -- 192.168.123.105:0/3749335280 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff08801c070 con 0x7ff0940fe140 2026-03-09T14:55:44.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.165+0000 7ff09c928700 1 -- 192.168.123.105:0/3749335280 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff09410fba0 con 0x7ff0940fe140 2026-03-09T14:55:44.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.165+0000 7ff09c928700 1 -- 192.168.123.105:0/3749335280 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff09410e700 con 0x7ff0940fe140 2026-03-09T14:55:44.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.165+0000 7ff0877fe700 1 -- 192.168.123.105:0/3749335280 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff088005710 con 0x7ff0940fe140 
2026-03-09T14:55:44.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.166+0000 7ff0877fe700 1 -- 192.168.123.105:0/3749335280 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7ff088003bd0 con 0x7ff0940fe140 2026-03-09T14:55:44.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.167+0000 7ff0877fe700 1 -- 192.168.123.105:0/3749335280 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 12) v1 ==== 45092+0+0 (secure 0 0 0) 0x7ff088021470 con 0x7ff0940fe140 2026-03-09T14:55:44.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.167+0000 7ff0877fe700 1 --2- 192.168.123.105:0/3749335280 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff080038200 0x7ff08003a6b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:44.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.167+0000 7ff0877fe700 1 -- 192.168.123.105:0/3749335280 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7ff08804d830 con 0x7ff0940fe140 2026-03-09T14:55:44.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.167+0000 7ff099ec3700 1 --2- 192.168.123.105:0/3749335280 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff080038200 0x7ff08003a6b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:44.169 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.168+0000 7ff099ec3700 1 --2- 192.168.123.105:0/3749335280 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff080038200 0x7ff08003a6b0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7ff08c006fd0 tx=0x7ff08c006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:44.179 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.178+0000 7ff09c928700 1 -- 192.168.123.105:0/3749335280 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff078005320 con 0x7ff0940fe140 2026-03-09T14:55:44.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.184+0000 7ff0877fe700 1 -- 192.168.123.105:0/3749335280 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7ff08802a9d0 con 0x7ff0940fe140 2026-03-09T14:55:44.307 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.306+0000 7ff09c928700 1 -- 192.168.123.105:0/3749335280 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}) v1 -- 0x7ff078000bf0 con 0x7ff080038200 2026-03-09T14:55:44.310 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.309+0000 7ff0877fe700 1 -- 192.168.123.105:0/3749335280 <== mgr.14164 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+0 (secure 0 0 0) 0x7ff078000bf0 con 0x7ff080038200 2026-03-09T14:55:44.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.312+0000 7ff09c928700 1 -- 192.168.123.105:0/3749335280 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff080038200 msgr2=0x7ff08003a6b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:44.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.312+0000 7ff09c928700 1 --2- 192.168.123.105:0/3749335280 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff080038200 0x7ff08003a6b0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7ff08c006fd0 tx=0x7ff08c006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:44.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.312+0000 
7ff09c928700 1 -- 192.168.123.105:0/3749335280 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0940fe140 msgr2=0x7ff09410dff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:44.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.312+0000 7ff09c928700 1 --2- 192.168.123.105:0/3749335280 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0940fe140 0x7ff09410dff0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7ff088006010 tx=0x7ff0880050d0 comp rx=0 tx=0).stop 2026-03-09T14:55:44.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.312+0000 7ff09c928700 1 -- 192.168.123.105:0/3749335280 shutdown_connections 2026-03-09T14:55:44.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.312+0000 7ff09c928700 1 --2- 192.168.123.105:0/3749335280 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff080038200 0x7ff08003a6b0 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:44.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.312+0000 7ff09c928700 1 --2- 192.168.123.105:0/3749335280 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0940fe140 0x7ff09410dff0 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:44.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.312+0000 7ff09c928700 1 -- 192.168.123.105:0/3749335280 >> 192.168.123.105:0/3749335280 conn(0x7ff0940f9ab0 msgr2=0x7ff094104dc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:44.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.312+0000 7ff09c928700 1 -- 192.168.123.105:0/3749335280 shutdown_connections 2026-03-09T14:55:44.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.312+0000 7ff09c928700 1 -- 192.168.123.105:0/3749335280 wait complete. 
2026-03-09T14:55:44.378 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm09 2026-03-09T14:55:44.378 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T14:55:44.378 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.conf 2026-03-09T14:55:44.395 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T14:55:44.395 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:55:44.452 INFO:tasks.cephadm:Adding host vm09 to orchestrator... 2026-03-09T14:55:44.452 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph orch host add vm09 2026-03-09T14:55:44.645 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:55:44.742 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:44 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/3683666000' entity='client.admin' 2026-03-09T14:55:44.742 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:44 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:44.743 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:44 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:44.743 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:44 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:44.743 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:44 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:55:44.743 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:44 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:55:44.743 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:44 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.985+0000 7fdb3bfff700 1 -- 192.168.123.105:0/1218980903 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb3c0713c0 msgr2=0x7fdb3c0717d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.985+0000 7fdb3bfff700 1 --2- 192.168.123.105:0/1218980903 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb3c0713c0 0x7fdb3c0717d0 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7fdb2c009b00 tx=0x7fdb2c009e10 comp rx=0 tx=0).stop 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.986+0000 
7fdb3bfff700 1 -- 192.168.123.105:0/1218980903 shutdown_connections 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.986+0000 7fdb3bfff700 1 --2- 192.168.123.105:0/1218980903 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb3c0713c0 0x7fdb3c0717d0 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.986+0000 7fdb3bfff700 1 -- 192.168.123.105:0/1218980903 >> 192.168.123.105:0/1218980903 conn(0x7fdb3c06cd30 msgr2=0x7fdb3c06f180 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.986+0000 7fdb3bfff700 1 -- 192.168.123.105:0/1218980903 shutdown_connections 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.986+0000 7fdb3bfff700 1 -- 192.168.123.105:0/1218980903 wait complete. 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.987+0000 7fdb3bfff700 1 Processor -- start 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.987+0000 7fdb3bfff700 1 -- start start 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.988+0000 7fdb3bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb3c0713c0 0x7fdb3c1acab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.988+0000 7fdb3bfff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdb2c012070 con 0x7fdb3c0713c0 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.988+0000 7fdb3affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb3c0713c0 0x7fdb3c1acab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 
crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.988+0000 7fdb3affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb3c0713c0 0x7fdb3c1acab0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43928/0 (socket says 192.168.123.105:43928) 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.988+0000 7fdb3affd700 1 -- 192.168.123.105:0/936932382 learned_addr learned my addr 192.168.123.105:0/936932382 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.988+0000 7fdb3affd700 1 -- 192.168.123.105:0/936932382 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fdb2c0097e0 con 0x7fdb3c0713c0 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.988+0000 7fdb3affd700 1 --2- 192.168.123.105:0/936932382 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb3c0713c0 0x7fdb3c1acab0 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7fdb2c006010 tx=0x7fdb2c005800 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:45.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.988+0000 7fdb23fff700 1 -- 192.168.123.105:0/936932382 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fdb2c01d070 con 0x7fdb3c0713c0 2026-03-09T14:55:45.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.988+0000 7fdb3bfff700 1 -- 192.168.123.105:0/936932382 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fdb3c1acff0 con 0x7fdb3c0713c0 2026-03-09T14:55:45.053 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.988+0000 7fdb3bfff700 1 -- 192.168.123.105:0/936932382 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fdb3c1ad490 con 0x7fdb3c0713c0 2026-03-09T14:55:45.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.989+0000 7fdb3bfff700 1 -- 192.168.123.105:0/936932382 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fdb3c110b20 con 0x7fdb3c0713c0 2026-03-09T14:55:45.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.989+0000 7fdb23fff700 1 -- 192.168.123.105:0/936932382 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fdb2c00f460 con 0x7fdb3c0713c0 2026-03-09T14:55:45.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.989+0000 7fdb23fff700 1 -- 192.168.123.105:0/936932382 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fdb2c0175c0 con 0x7fdb3c0713c0 2026-03-09T14:55:45.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.994+0000 7fdb23fff700 1 -- 192.168.123.105:0/936932382 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 12) v1 ==== 45092+0+0 (secure 0 0 0) 0x7fdb2c017720 con 0x7fdb3c0713c0 2026-03-09T14:55:45.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.994+0000 7fdb23fff700 1 --2- 192.168.123.105:0/936932382 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdb24038500 0x7fdb2403a9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:45.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.994+0000 7fdb23fff700 1 -- 192.168.123.105:0/936932382 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fdb2c04c650 con 0x7fdb3c0713c0 2026-03-09T14:55:45.053 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.995+0000 7fdb3a7fc700 1 --2- 192.168.123.105:0/936932382 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdb24038500 0x7fdb2403a9b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:45.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.995+0000 7fdb23fff700 1 -- 192.168.123.105:0/936932382 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fdb2c015350 con 0x7fdb3c0713c0 2026-03-09T14:55:45.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:44.998+0000 7fdb3a7fc700 1 --2- 192.168.123.105:0/936932382 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdb24038500 0x7fdb2403a9b0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7fdb3400ad30 tx=0x7fdb340093f0 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:45.118 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:45.117+0000 7fdb3bfff700 1 -- 192.168.123.105:0/936932382 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}) v1 -- 0x7fdb3c02d020 con 0x7fdb24038500 2026-03-09T14:55:45.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:45.601+0000 7fdb23fff700 1 -- 192.168.123.105:0/936932382 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fdb2c00fab0 con 0x7fdb3c0713c0 2026-03-09T14:55:45.964 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:45 vm05 ceph-mon[50611]: from='client.14188 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:45.964 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:45 vm05 ceph-mon[50611]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T14:55:45.964 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:45 vm05 ceph-mon[50611]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T14:55:45.964 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:45 vm05 ceph-mon[50611]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:55:45.964 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:45 vm05 ceph-mon[50611]: from='client.14190 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:46.807 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: mgrmap e13: vm05.lhsexd(active, since 6s) 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: Deploying cephadm binary to vm09 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", 
"osd", "allow r"]}]: dispatch 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: Deploying daemon ceph-exporter.vm05 on vm05 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", 
"profile crash"]}]': finished 2026-03-09T14:55:46.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:46 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:55:47.020 INFO:teuthology.orchestra.run.vm05.stdout:Added host 'vm09' with addr '192.168.123.109' 2026-03-09T14:55:47.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.016+0000 7fdb23fff700 1 -- 192.168.123.105:0/936932382 <== mgr.14164 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+46 (secure 0 0 0) 0x7fdb3c02d020 con 0x7fdb24038500 2026-03-09T14:55:47.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.021+0000 7fdb21ffb700 1 -- 192.168.123.105:0/936932382 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdb24038500 msgr2=0x7fdb2403a9b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:47.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.021+0000 7fdb21ffb700 1 --2- 192.168.123.105:0/936932382 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdb24038500 0x7fdb2403a9b0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7fdb3400ad30 tx=0x7fdb340093f0 comp rx=0 tx=0).stop 2026-03-09T14:55:47.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.021+0000 7fdb21ffb700 1 -- 192.168.123.105:0/936932382 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb3c0713c0 msgr2=0x7fdb3c1acab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:47.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.021+0000 7fdb21ffb700 1 --2- 192.168.123.105:0/936932382 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb3c0713c0 0x7fdb3c1acab0 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7fdb2c006010 tx=0x7fdb2c005800 comp rx=0 tx=0).stop 2026-03-09T14:55:47.026 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.025+0000 7fdb21ffb700 1 -- 192.168.123.105:0/936932382 shutdown_connections 2026-03-09T14:55:47.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.025+0000 7fdb21ffb700 1 --2- 192.168.123.105:0/936932382 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdb24038500 0x7fdb2403a9b0 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:47.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.025+0000 7fdb21ffb700 1 --2- 192.168.123.105:0/936932382 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb3c0713c0 0x7fdb3c1acab0 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:47.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.025+0000 7fdb21ffb700 1 -- 192.168.123.105:0/936932382 >> 192.168.123.105:0/936932382 conn(0x7fdb3c06cd30 msgr2=0x7fdb3c06fa10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:47.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.025+0000 7fdb21ffb700 1 -- 192.168.123.105:0/936932382 shutdown_connections 2026-03-09T14:55:47.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.025+0000 7fdb21ffb700 1 -- 192.168.123.105:0/936932382 wait complete. 
2026-03-09T14:55:47.196 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph orch host ls --format=json 2026-03-09T14:55:47.481 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.832+0000 7f11216f6700 1 -- 192.168.123.105:0/1012977149 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f111c102320 msgr2=0x7f111c1026f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.832+0000 7f11216f6700 1 --2- 192.168.123.105:0/1012977149 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f111c102320 0x7f111c1026f0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f110c009b50 tx=0x7f110c009e60 comp rx=0 tx=0).stop 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.834+0000 7f11216f6700 1 -- 192.168.123.105:0/1012977149 shutdown_connections 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.834+0000 7f11216f6700 1 --2- 192.168.123.105:0/1012977149 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f111c102320 0x7f111c1026f0 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.834+0000 7f11216f6700 1 -- 192.168.123.105:0/1012977149 >> 192.168.123.105:0/1012977149 conn(0x7f111c0fdc00 msgr2=0x7f111c100010 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.834+0000 7f11216f6700 1 -- 192.168.123.105:0/1012977149 shutdown_connections 2026-03-09T14:55:47.839 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.834+0000 7f11216f6700 1 -- 192.168.123.105:0/1012977149 wait complete. 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.834+0000 7f11216f6700 1 Processor -- start 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.834+0000 7f11216f6700 1 -- start start 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.834+0000 7f11216f6700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f111c102320 0x7f111c072660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.834+0000 7f11216f6700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f110c012070 con 0x7f111c102320 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.835+0000 7f111bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f111c102320 0x7f111c072660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.835+0000 7f111bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f111c102320 0x7f111c072660 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:43952/0 (socket says 192.168.123.105:43952) 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.835+0000 7f111bfff700 1 -- 192.168.123.105:0/1529591074 learned_addr learned my addr 192.168.123.105:0/1529591074 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:55:47.839 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.836+0000 7f111bfff700 1 -- 192.168.123.105:0/1529591074 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f110c0097e0 con 0x7f111c102320 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.836+0000 7f111bfff700 1 --2- 192.168.123.105:0/1529591074 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f111c102320 0x7f111c072660 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f110c00bc90 tx=0x7f110c0056a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.836+0000 7f11197fa700 1 -- 192.168.123.105:0/1529591074 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f110c01d070 con 0x7f111c102320 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.836+0000 7f11197fa700 1 -- 192.168.123.105:0/1529591074 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f110c004e50 con 0x7f111c102320 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.836+0000 7f11197fa700 1 -- 192.168.123.105:0/1529591074 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f110c00f460 con 0x7f111c102320 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.836+0000 7f11216f6700 1 -- 192.168.123.105:0/1529591074 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f111c072c00 con 0x7f111c102320 2026-03-09T14:55:47.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.836+0000 7f11216f6700 1 -- 192.168.123.105:0/1529591074 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f111c06d830 con 
0x7f111c102320
2026-03-09T14:55:47.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.839+0000 7f11197fa700 1 -- 192.168.123.105:0/1529591074 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f110c003780 con 0x7f111c102320
2026-03-09T14:55:47.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.839+0000 7f11197fa700 1 --2- 192.168.123.105:0/1529591074 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1104038580 0x7f110403aa30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:47.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.839+0000 7f11197fa700 1 -- 192.168.123.105:0/1529591074 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f110c04c510 con 0x7f111c102320
2026-03-09T14:55:47.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.839+0000 7f111b7fe700 1 --2- 192.168.123.105:0/1529591074 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1104038580 0x7f110403aa30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:47.844 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.839+0000 7f11216f6700 1 -- 192.168.123.105:0/1529591074 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f111c072d90 con 0x7f111c102320
2026-03-09T14:55:47.844 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.843+0000 7f11197fa700 1 -- 192.168.123.105:0/1529591074 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f110c00f5c0 con 0x7f111c102320
2026-03-09T14:55:47.845 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.843+0000 7f111b7fe700 1 --2- 192.168.123.105:0/1529591074 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1104038580 0x7f110403aa30 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f1110006fd0 tx=0x7f1110006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:47.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.953+0000 7f11216f6700 1 -- 192.168.123.105:0/1529591074 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}) v1 -- 0x7f111c06ec00 con 0x7f1104038580
2026-03-09T14:55:47.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.954+0000 7f11197fa700 1 -- 192.168.123.105:0/1529591074 <== mgr.14164 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+155 (secure 0 0 0) 0x7f111c06ec00 con 0x7f1104038580
2026-03-09T14:55:47.955 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:55:47.956 INFO:teuthology.orchestra.run.vm05.stdout:[{"addr": "192.168.123.105", "hostname": "vm05", "labels": [], "status": ""}, {"addr": "192.168.123.109", "hostname": "vm09", "labels": [], "status": ""}]
2026-03-09T14:55:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.958+0000 7f11216f6700 1 -- 192.168.123.105:0/1529591074 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1104038580 msgr2=0x7f110403aa30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.958+0000 7f11216f6700 1 --2- 192.168.123.105:0/1529591074 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1104038580 0x7f110403aa30 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f1110006fd0 tx=0x7f1110006e40 comp rx=0 tx=0).stop
2026-03-09T14:55:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.959+0000 7f11216f6700 1 -- 192.168.123.105:0/1529591074 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f111c102320 msgr2=0x7f111c072660 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.959+0000 7f11216f6700 1 --2- 192.168.123.105:0/1529591074 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f111c102320 0x7f111c072660 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f110c00bc90 tx=0x7f110c0056a0 comp rx=0 tx=0).stop
2026-03-09T14:55:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.959+0000 7f11216f6700 1 -- 192.168.123.105:0/1529591074 shutdown_connections
2026-03-09T14:55:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.959+0000 7f11216f6700 1 --2- 192.168.123.105:0/1529591074 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1104038580 0x7f110403aa30 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.959+0000 7f11216f6700 1 --2- 192.168.123.105:0/1529591074 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f111c102320 0x7f111c072660 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:47.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.960+0000 7f11216f6700 1 -- 192.168.123.105:0/1529591074 >> 192.168.123.105:0/1529591074 conn(0x7f111c0fdc00 msgr2=0x7f111c103c10 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:47.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.960+0000 7f11216f6700 1 -- 192.168.123.105:0/1529591074 shutdown_connections
2026-03-09T14:55:47.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:47.960+0000 7f11216f6700 1 -- 192.168.123.105:0/1529591074 wait complete.
2026-03-09T14:55:48.030 INFO:tasks.cephadm:Setting crush tunables to default
2026-03-09T14:55:48.030 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd crush tunables default
2026-03-09T14:55:48.278 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:48 vm05 ceph-mon[50611]: Deploying daemon crash.vm05 on vm05
2026-03-09T14:55:48.279 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:48 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:48.279 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:48 vm05 ceph-mon[50611]: Added host vm09
2026-03-09T14:55:48.279 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:48 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:48.279 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:48 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:48.279 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:48 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:48.279 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:48 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:48.343 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config
2026-03-09T14:55:48.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.629+0000 7fdf1cbda700 1 -- 192.168.123.105:0/1214038290 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdf18073070 msgr2=0x7fdf18073440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:48.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.629+0000 7fdf1cbda700 1 --2- 192.168.123.105:0/1214038290 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdf18073070 0x7fdf18073440 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7fdf00009b50 tx=0x7fdf00009e60 comp rx=0 tx=0).stop
2026-03-09T14:55:48.631 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.630+0000 7fdf1cbda700 1 -- 192.168.123.105:0/1214038290 shutdown_connections
2026-03-09T14:55:48.631 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.630+0000 7fdf1cbda700 1 --2- 192.168.123.105:0/1214038290 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdf18073070 0x7fdf18073440 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:48.631 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.630+0000 7fdf1cbda700 1 -- 192.168.123.105:0/1214038290 >> 192.168.123.105:0/1214038290 conn(0x7fdf180fbb70 msgr2=0x7fdf180fdf80 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:48.631 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.631+0000 7fdf1cbda700 1 -- 192.168.123.105:0/1214038290 shutdown_connections
2026-03-09T14:55:48.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.631+0000 7fdf1cbda700 1 -- 192.168.123.105:0/1214038290 wait complete.
2026-03-09T14:55:48.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.631+0000 7fdf1cbda700 1 Processor -- start
2026-03-09T14:55:48.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.631+0000 7fdf1cbda700 1 -- start start
2026-03-09T14:55:48.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.631+0000 7fdf1cbda700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdf18073070 0x7fdf18102fe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:48.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.631+0000 7fdf1cbda700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdf18109fc0 con 0x7fdf18073070
2026-03-09T14:55:48.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.632+0000 7fdf1659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdf18073070 0x7fdf18102fe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:48.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.632+0000 7fdf1659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdf18073070 0x7fdf18102fe0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:41754/0 (socket says 192.168.123.105:41754)
2026-03-09T14:55:48.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.632+0000 7fdf1659c700 1 -- 192.168.123.105:0/2111663480 learned_addr learned my addr 192.168.123.105:0/2111663480 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T14:55:48.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.632+0000 7fdf1659c700 1 -- 192.168.123.105:0/2111663480 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fdf000097e0 con 0x7fdf18073070
2026-03-09T14:55:48.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.632+0000 7fdf1659c700 1 --2- 192.168.123.105:0/2111663480 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdf18073070 0x7fdf18102fe0 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7fdf00005f50 tx=0x7fdf00004de0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:48.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.632+0000 7fdf0f7fe700 1 -- 192.168.123.105:0/2111663480 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fdf0001c070 con 0x7fdf18073070
2026-03-09T14:55:48.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.632+0000 7fdf0f7fe700 1 -- 192.168.123.105:0/2111663480 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fdf000055f0 con 0x7fdf18073070
2026-03-09T14:55:48.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.633+0000 7fdf1cbda700 1 -- 192.168.123.105:0/2111663480 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fdf1810a1c0 con 0x7fdf18073070
2026-03-09T14:55:48.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.633+0000 7fdf1cbda700 1 -- 192.168.123.105:0/2111663480 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fdf18103750 con 0x7fdf18073070
2026-03-09T14:55:48.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.633+0000 7fdf0f7fe700 1 -- 192.168.123.105:0/2111663480 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fdf0001c070 con 0x7fdf18073070
2026-03-09T14:55:48.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.633+0000 7fdf1cbda700 1 -- 192.168.123.105:0/2111663480 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fdf1804f9e0 con 0x7fdf18073070
2026-03-09T14:55:48.637 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.635+0000 7fdf0f7fe700 1 -- 192.168.123.105:0/2111663480 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fdf00005110 con 0x7fdf18073070
2026-03-09T14:55:48.637 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.635+0000 7fdf0f7fe700 1 --2- 192.168.123.105:0/2111663480 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdf04038450 0x7fdf0403a900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:48.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.635+0000 7fdf15d9b700 1 --2- 192.168.123.105:0/2111663480 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdf04038450 0x7fdf0403a900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:48.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.635+0000 7fdf15d9b700 1 --2- 192.168.123.105:0/2111663480 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdf04038450 0x7fdf0403a900 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7fdf08006fd0 tx=0x7fdf08006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:48.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.635+0000 7fdf0f7fe700 1 -- 192.168.123.105:0/2111663480 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(3..3 src has 1..3) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fdf00030080 con 0x7fdf18073070
2026-03-09T14:55:48.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.637+0000 7fdf0f7fe700 1 -- 192.168.123.105:0/2111663480 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fdf0002b7c0 con 0x7fdf18073070
2026-03-09T14:55:48.752 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:48.751+0000 7fdf1cbda700 1 -- 192.168.123.105:0/2111663480 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd crush tunables", "profile": "default"} v 0) v1 -- 0x7fdf18062380 con 0x7fdf18073070
2026-03-09T14:55:49.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:49.019+0000 7fdf0f7fe700 1 -- 192.168.123.105:0/2111663480 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd crush tunables", "profile": "default"}]=0 adjusted tunables profile to default v4) v1 ==== 124+0+0 (secure 0 0 0) 0x7fdf0002b7c0 con 0x7fdf18073070
2026-03-09T14:55:49.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:49.024+0000 7fdf1cbda700 1 -- 192.168.123.105:0/2111663480 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdf04038450 msgr2=0x7fdf0403a900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:49.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:49.024+0000 7fdf1cbda700 1 --2- 192.168.123.105:0/2111663480 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdf04038450 0x7fdf0403a900 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7fdf08006fd0 tx=0x7fdf08006e40 comp rx=0 tx=0).stop
2026-03-09T14:55:49.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:49.024+0000 7fdf1cbda700 1 -- 192.168.123.105:0/2111663480 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdf18073070 msgr2=0x7fdf18102fe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:49.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:49.024+0000 7fdf1cbda700 1 --2- 192.168.123.105:0/2111663480 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdf18073070 0x7fdf18102fe0 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7fdf00005f50 tx=0x7fdf00004de0 comp rx=0 tx=0).stop
2026-03-09T14:55:49.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:49.025+0000 7fdf1cbda700 1 -- 192.168.123.105:0/2111663480 shutdown_connections
2026-03-09T14:55:49.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:49.025+0000 7fdf1cbda700 1 --2- 192.168.123.105:0/2111663480 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdf04038450 0x7fdf0403a900 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:49.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:49.025+0000 7fdf1cbda700 1 --2- 192.168.123.105:0/2111663480 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdf18073070 0x7fdf18102fe0 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:49.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:49.025+0000 7fdf1cbda700 1 -- 192.168.123.105:0/2111663480 >> 192.168.123.105:0/2111663480 conn(0x7fdf180fbb70 msgr2=0x7fdf1806c760 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:49.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:49.025+0000 7fdf1cbda700 1 -- 192.168.123.105:0/2111663480 shutdown_connections
2026-03-09T14:55:49.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:55:49.025+0000 7fdf1cbda700 1 -- 192.168.123.105:0/2111663480 wait complete.
2026-03-09T14:55:49.026 INFO:teuthology.orchestra.run.vm05.stderr:adjusted tunables profile to default
2026-03-09T14:55:49.066 INFO:tasks.cephadm:Adding mon.vm05 on vm05
2026-03-09T14:55:49.066 INFO:tasks.cephadm:Adding mon.vm09 on vm09
2026-03-09T14:55:49.066 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph orch apply mon '2;vm05:192.168.123.105=vm05;vm09:192.168.123.109=vm09'
2026-03-09T14:55:49.221 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T14:55:49.257 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T14:55:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:49 vm05 ceph-mon[50611]: Deploying daemon node-exporter.vm05 on vm05
2026-03-09T14:55:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:49 vm05 ceph-mon[50611]: from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-09T14:55:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:49 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/2111663480' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-09T14:55:50.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:50 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/2111663480' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-09T14:55:50.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:50 vm05 ceph-mon[50611]: osdmap e4: 0 total, 0 up, 0 in
2026-03-09T14:55:50.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:50 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:50.374 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.374+0000 7fd11ffff700 1 -- 192.168.123.109:0/2046677320 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd120101460 msgr2=0x7fd120101870 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:50.375 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.374+0000 7fd11ffff700 1 --2- 192.168.123.109:0/2046677320 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd120101460 0x7fd120101870 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fd108009b00 tx=0x7fd108009e10 comp rx=0 tx=0).stop
2026-03-09T14:55:50.375 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.375+0000 7fd11ffff700 1 -- 192.168.123.109:0/2046677320 shutdown_connections
2026-03-09T14:55:50.375 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.375+0000 7fd11ffff700 1 --2- 192.168.123.109:0/2046677320 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd120101460 0x7fd120101870 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:50.375 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.375+0000 7fd11ffff700 1 -- 192.168.123.109:0/2046677320 >> 192.168.123.109:0/2046677320 conn(0x7fd1200f9ab0 msgr2=0x7fd1200fbec0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:50.375 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.376+0000 7fd11ffff700 1 -- 192.168.123.109:0/2046677320 shutdown_connections
2026-03-09T14:55:50.375 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.376+0000 7fd11ffff700 1 -- 192.168.123.109:0/2046677320 wait complete.
2026-03-09T14:55:50.375 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.376+0000 7fd11ffff700 1 Processor -- start
2026-03-09T14:55:50.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.377+0000 7fd11ffff700 1 -- start start
2026-03-09T14:55:50.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.377+0000 7fd11ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd120101460 0x7fd120194fe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:50.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.377+0000 7fd11ffff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd120195520 con 0x7fd120101460
2026-03-09T14:55:50.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.377+0000 7fd11effd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd120101460 0x7fd120194fe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:50.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.377+0000 7fd11effd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd120101460 0x7fd120194fe0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:44030/0 (socket says 192.168.123.109:44030)
2026-03-09T14:55:50.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.377+0000 7fd11effd700 1 -- 192.168.123.109:0/1113749162 learned_addr learned my addr 192.168.123.109:0/1113749162 (peer_addr_for_me v2:192.168.123.109:0/0)
2026-03-09T14:55:50.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.377+0000 7fd11effd700 1 -- 192.168.123.109:0/1113749162 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd1080097e0 con 0x7fd120101460
2026-03-09T14:55:50.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.378+0000 7fd11effd700 1 --2- 192.168.123.109:0/1113749162 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd120101460 0x7fd120194fe0 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7fd108004750 tx=0x7fd108005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:50.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.378+0000 7fd117fff700 1 -- 192.168.123.109:0/1113749162 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd10801c070 con 0x7fd120101460
2026-03-09T14:55:50.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.378+0000 7fd117fff700 1 -- 192.168.123.109:0/1113749162 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd108021470 con 0x7fd120101460
2026-03-09T14:55:50.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.378+0000 7fd117fff700 1 -- 192.168.123.109:0/1113749162 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd10800f460 con 0x7fd120101460
2026-03-09T14:55:50.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.378+0000 7fd11ffff700 1 -- 192.168.123.109:0/1113749162 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd120195720 con 0x7fd120101460
2026-03-09T14:55:50.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.378+0000 7fd11ffff700 1 -- 192.168.123.109:0/1113749162 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd120195bc0 con 0x7fd120101460
2026-03-09T14:55:50.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.379+0000 7fd11ffff700 1 -- 192.168.123.109:0/1113749162 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd12018ecb0 con 0x7fd120101460
2026-03-09T14:55:50.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.379+0000 7fd117fff700 1 -- 192.168.123.109:0/1113749162 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fd108021ac0 con 0x7fd120101460
2026-03-09T14:55:50.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.379+0000 7fd117fff700 1 --2- 192.168.123.109:0/1113749162 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd10c038560 0x7fd10c03aa10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:50.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.379+0000 7fd117fff700 1 -- 192.168.123.109:0/1113749162 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fd10804c500 con 0x7fd120101460
2026-03-09T14:55:50.379 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.380+0000 7fd11e7fc700 1 --2- 192.168.123.109:0/1113749162 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd10c038560 0x7fd10c03aa10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:50.379 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.380+0000 7fd11e7fc700 1 --2- 192.168.123.109:0/1113749162 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd10c038560 0x7fd10c03aa10 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7fd110006fd0 tx=0x7fd110006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:50.381 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.382+0000 7fd117fff700 1 -- 192.168.123.109:0/1113749162 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fd10802ae40 con 0x7fd120101460
2026-03-09T14:55:50.493 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.492+0000 7fd11ffff700 1 -- 192.168.123.109:0/1113749162 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "placement": "2;vm05:192.168.123.105=vm05;vm09:192.168.123.109=vm09", "target": ["mon-mgr", ""]}) v1 -- 0x7fd120061190 con 0x7fd10c038560
2026-03-09T14:55:50.498 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.498+0000 7fd117fff700 1 -- 192.168.123.109:0/1113749162 <== mgr.14164 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+24 (secure 0 0 0) 0x7fd120061190 con 0x7fd10c038560
2026-03-09T14:55:50.498 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mon update...
2026-03-09T14:55:50.500 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.501+0000 7fd11ffff700 1 -- 192.168.123.109:0/1113749162 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd10c038560 msgr2=0x7fd10c03aa10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:50.500 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.501+0000 7fd11ffff700 1 --2- 192.168.123.109:0/1113749162 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd10c038560 0x7fd10c03aa10 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7fd110006fd0 tx=0x7fd110006e40 comp rx=0 tx=0).stop
2026-03-09T14:55:50.501 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.501+0000 7fd11ffff700 1 -- 192.168.123.109:0/1113749162 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd120101460 msgr2=0x7fd120194fe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:50.501 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.501+0000 7fd11ffff700 1 --2- 192.168.123.109:0/1113749162 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd120101460 0x7fd120194fe0 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7fd108004750 tx=0x7fd108005dc0 comp rx=0 tx=0).stop
2026-03-09T14:55:50.501 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.501+0000 7fd11ffff700 1 -- 192.168.123.109:0/1113749162 shutdown_connections
2026-03-09T14:55:50.501 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.501+0000 7fd11ffff700 1 --2- 192.168.123.109:0/1113749162 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd10c038560 0x7fd10c03aa10 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:50.501 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.501+0000 7fd11ffff700 1 --2- 192.168.123.109:0/1113749162 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd120101460 0x7fd120194fe0 unknown :-1 s=CLOSED pgs=104 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:50.501 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.501+0000 7fd11ffff700 1 -- 192.168.123.109:0/1113749162 >> 192.168.123.109:0/1113749162 conn(0x7fd1200f9ab0 msgr2=0x7fd1200fa710 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:50.501 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.501+0000 7fd11ffff700 1 -- 192.168.123.109:0/1113749162 shutdown_connections
2026-03-09T14:55:50.501 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:50.501+0000 7fd11ffff700 1 -- 192.168.123.109:0/1113749162 wait complete.
2026-03-09T14:55:50.544 DEBUG:teuthology.orchestra.run.vm09:mon.vm09> sudo journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm09.service
2026-03-09T14:55:50.546 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-09T14:55:50.546 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json
2026-03-09T14:55:50.727 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T14:55:50.765 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T14:55:51.023 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.024+0000 7fa3b06ec700 1 -- 192.168.123.109:0/349193561 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa3a80fe910 msgr2=0x7fa3a80fed20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:51.023 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.024+0000 7fa3b06ec700 1 --2- 192.168.123.109:0/349193561 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa3a80fe910 0x7fa3a80fed20 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7fa398009b00 tx=0x7fa398009e10 comp rx=0 tx=0).stop
2026-03-09T14:55:51.024 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.024+0000 7fa3b06ec700 1 -- 192.168.123.109:0/349193561 shutdown_connections
2026-03-09T14:55:51.024 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.024+0000 7fa3b06ec700 1 --2- 192.168.123.109:0/349193561 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa3a80fe910 0x7fa3a80fed20 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:51.024 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.024+0000 7fa3b06ec700 1 -- 192.168.123.109:0/349193561 >> 192.168.123.109:0/349193561 conn(0x7fa3a80fa4a0 msgr2=0x7fa3a80fc8f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:51.024 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.025+0000 7fa3b06ec700 1 -- 192.168.123.109:0/349193561 shutdown_connections
2026-03-09T14:55:51.024 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.025+0000 7fa3b06ec700 1 -- 192.168.123.109:0/349193561 wait complete.
2026-03-09T14:55:51.024 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.025+0000 7fa3b06ec700 1 Processor -- start
2026-03-09T14:55:51.024 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.025+0000 7fa3b06ec700 1 -- start start
2026-03-09T14:55:51.024 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.025+0000 7fa3b06ec700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa3a80fe910 0x7fa3a8197360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:51.024 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.025+0000 7fa3b06ec700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa3a81978a0 con 0x7fa3a80fe910
2026-03-09T14:55:51.025 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.026+0000 7fa3ae488700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa3a80fe910 0x7fa3a8197360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:51.025 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.026+0000 7fa3ae488700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa3a80fe910 0x7fa3a8197360 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:44060/0 (socket says 192.168.123.109:44060)
2026-03-09T14:55:51.025 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.026+0000 7fa3ae488700 1 -- 192.168.123.109:0/2936443567 learned_addr learned my addr 192.168.123.109:0/2936443567 (peer_addr_for_me v2:192.168.123.109:0/0)
2026-03-09T14:55:51.025 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.026+0000 7fa3ae488700 1 -- 192.168.123.109:0/2936443567 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa3980097e0 con 0x7fa3a80fe910
2026-03-09T14:55:51.025 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.026+0000 7fa3ae488700 1 --2- 192.168.123.109:0/2936443567 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa3a80fe910 0x7fa3a8197360 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7fa398004f40 tx=0x7fa398005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:51.026 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.026+0000 7fa39f7fe700 1 -- 192.168.123.109:0/2936443567 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa39801c070 con 0x7fa3a80fe910
2026-03-09T14:55:51.026 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.026+0000 7fa39f7fe700 1 -- 192.168.123.109:0/2936443567 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa3980053b0 con 0x7fa3a80fe910
2026-03-09T14:55:51.026 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.026+0000 7fa39f7fe700 1 -- 192.168.123.109:0/2936443567 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa39800f460 con 0x7fa3a80fe910
2026-03-09T14:55:51.026 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.026+0000 7fa3b06ec700 1 -- 192.168.123.109:0/2936443567 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa3a8197aa0 con 0x7fa3a80fe910
2026-03-09T14:55:51.026 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.026+0000 7fa3b06ec700 1 -- 192.168.123.109:0/2936443567 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa3a8197e80 con 0x7fa3a80fe910
2026-03-09T14:55:51.027 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.027+0000 7fa39f7fe700 1 -- 192.168.123.109:0/2936443567 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fa39800f5e0 con 0x7fa3a80fe910
2026-03-09T14:55:51.027 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.028+0000 7fa3b06ec700 1 -- 192.168.123.109:0/2936443567 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa3a804fa50 con 0x7fa3a80fe910
2026-03-09T14:55:51.027 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.028+0000 7fa39f7fe700 1 --2- 192.168.123.109:0/2936443567 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa394038510 0x7fa39403a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:51.027 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.028+0000 7fa39f7fe700 1 -- 192.168.123.109:0/2936443567 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fa39804d520 con 0x7fa3a80fe910
2026-03-09T14:55:51.027 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.028+0000 7fa3adc87700 1 --2- 192.168.123.109:0/2936443567 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa394038510 0x7fa39403a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:51.027 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.028+0000 7fa3adc87700 1 --2- 192.168.123.109:0/2936443567 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa394038510 0x7fa39403a9c0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fa3a4006fd0 tx=0x7fa3a4006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:51.030 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.031+0000 7fa39f7fe700 1 -- 192.168.123.109:0/2936443567 <== mon.0 v2:192.168.123.105:3300/0 6 ====
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fa398017490 con 0x7fa3a80fe910 2026-03-09T14:55:51.181 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.182+0000 7fa3b06ec700 1 -- 192.168.123.109:0/2936443567 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fa3a802d050 con 0x7fa3a80fe910 2026-03-09T14:55:51.182 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.183+0000 7fa39f7fe700 1 -- 192.168.123.109:0/2936443567 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fa398026030 con 0x7fa3a80fe910 2026-03-09T14:55:51.182 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:55:51.182 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:55:51.184 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.185+0000 7fa3b06ec700 1 -- 192.168.123.109:0/2936443567 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa394038510 msgr2=0x7fa39403a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:51.184 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.185+0000 7fa3b06ec700 1 --2- 192.168.123.109:0/2936443567 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa394038510 0x7fa39403a9c0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fa3a4006fd0 tx=0x7fa3a4006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:51.185 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.185+0000 7fa3b06ec700 1 -- 192.168.123.109:0/2936443567 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa3a80fe910 msgr2=0x7fa3a8197360 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:51.185 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.185+0000 7fa3b06ec700 1 --2- 192.168.123.109:0/2936443567 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa3a80fe910 0x7fa3a8197360 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7fa398004f40 tx=0x7fa398005e70 comp rx=0 tx=0).stop 2026-03-09T14:55:51.185 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.186+0000 7fa3b06ec700 1 -- 192.168.123.109:0/2936443567 shutdown_connections 2026-03-09T14:55:51.185 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.186+0000 7fa3b06ec700 1 --2- 192.168.123.109:0/2936443567 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa394038510 0x7fa39403a9c0 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:51.185 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.186+0000 7fa3b06ec700 1 --2- 192.168.123.109:0/2936443567 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa3a80fe910 0x7fa3a8197360 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:51.185 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.186+0000 7fa3b06ec700 1 -- 192.168.123.109:0/2936443567 >> 192.168.123.109:0/2936443567 conn(0x7fa3a80fa4a0 msgr2=0x7fa3a80fb170 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T14:55:51.185 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.186+0000 7fa3b06ec700 1 -- 192.168.123.109:0/2936443567 shutdown_connections 2026-03-09T14:55:51.185 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:51.186+0000 7fa3b06ec700 1 -- 192.168.123.109:0/2936443567 wait complete. 2026-03-09T14:55:51.186 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:55:51.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:51 vm05 ceph-mon[50611]: from='client.14197 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm05:192.168.123.105=vm05;vm09:192.168.123.109=vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:55:51.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:51 vm05 ceph-mon[50611]: Saving service mon spec with placement vm05:192.168.123.105=vm05;vm09:192.168.123.109=vm09;count:2 2026-03-09T14:55:51.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:51 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:51.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:51 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:51.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:51 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:51.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:51 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:51.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:51 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:55:51.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:51 vm05 ceph-mon[50611]: Deploying daemon alertmanager.vm05 on vm05 2026-03-09T14:55:51.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:51 vm05 
ceph-mon[50611]: from='client.? 192.168.123.109:0/2936443567' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:55:52.258 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:55:52.258 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:55:52.415 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:55:52.460 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:55:52.704 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.704+0000 7effadff0700 1 -- 192.168.123.109:0/3598465238 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7effa8100ab0 msgr2=0x7effa8102e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:52.704 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.704+0000 7effadff0700 1 --2- 192.168.123.109:0/3598465238 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7effa8100ab0 0x7effa8102e90 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7eff90009b00 tx=0x7eff90009e10 comp rx=0 tx=0).stop 2026-03-09T14:55:52.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.705+0000 7effadff0700 1 -- 192.168.123.109:0/3598465238 shutdown_connections 2026-03-09T14:55:52.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.705+0000 7effadff0700 1 --2- 192.168.123.109:0/3598465238 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7effa8100ab0 0x7effa8102e90 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:52.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.705+0000 7effadff0700 1 -- 192.168.123.109:0/3598465238 >> 192.168.123.109:0/3598465238 
conn(0x7effa80fa4f0 msgr2=0x7effa80fc900 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:52.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.706+0000 7effadff0700 1 -- 192.168.123.109:0/3598465238 shutdown_connections 2026-03-09T14:55:52.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.706+0000 7effadff0700 1 -- 192.168.123.109:0/3598465238 wait complete. 2026-03-09T14:55:52.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.706+0000 7effadff0700 1 Processor -- start 2026-03-09T14:55:52.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.706+0000 7effadff0700 1 -- start start 2026-03-09T14:55:52.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.707+0000 7effadff0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7effa8100ab0 0x7effa8197380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:52.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.707+0000 7effadff0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7effa81978c0 con 0x7effa8100ab0 2026-03-09T14:55:52.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.707+0000 7effa77fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7effa8100ab0 0x7effa8197380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:52.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.707+0000 7effa77fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7effa8100ab0 0x7effa8197380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:44070/0 (socket says 192.168.123.109:44070) 2026-03-09T14:55:52.706 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.707+0000 7effa77fe700 1 -- 192.168.123.109:0/1072352531 learned_addr learned my addr 192.168.123.109:0/1072352531 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:55:52.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.707+0000 7effa77fe700 1 -- 192.168.123.109:0/1072352531 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7eff900097e0 con 0x7effa8100ab0 2026-03-09T14:55:52.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.707+0000 7effa77fe700 1 --2- 192.168.123.109:0/1072352531 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7effa8100ab0 0x7effa8197380 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7eff90004f40 tx=0x7eff90005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:52.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.707+0000 7effa4ff9700 1 -- 192.168.123.109:0/1072352531 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7eff9001c070 con 0x7effa8100ab0 2026-03-09T14:55:52.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.707+0000 7effa4ff9700 1 -- 192.168.123.109:0/1072352531 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7eff900053b0 con 0x7effa8100ab0 2026-03-09T14:55:52.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.707+0000 7effadff0700 1 -- 192.168.123.109:0/1072352531 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7effa8197ac0 con 0x7effa8100ab0 2026-03-09T14:55:52.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.707+0000 7effadff0700 1 -- 192.168.123.109:0/1072352531 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7effa8197f60 con 0x7effa8100ab0 
2026-03-09T14:55:52.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.708+0000 7effa4ff9700 1 -- 192.168.123.109:0/1072352531 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7eff9000f550 con 0x7effa8100ab0 2026-03-09T14:55:52.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.708+0000 7effa4ff9700 1 -- 192.168.123.109:0/1072352531 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7eff9000f770 con 0x7effa8100ab0 2026-03-09T14:55:52.709 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.709+0000 7effadff0700 1 -- 192.168.123.109:0/1072352531 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7eff88005320 con 0x7effa8100ab0 2026-03-09T14:55:52.709 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.709+0000 7effa4ff9700 1 --2- 192.168.123.109:0/1072352531 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7eff94040d70 0x7eff94043220 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:52.709 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.709+0000 7effa4ff9700 1 -- 192.168.123.109:0/1072352531 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7eff9004d4b0 con 0x7effa8100ab0 2026-03-09T14:55:52.710 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.711+0000 7effa6ffd700 1 --2- 192.168.123.109:0/1072352531 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7eff94040d70 0x7eff94043220 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:52.710 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.711+0000 7effa6ffd700 1 --2- 192.168.123.109:0/1072352531 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7eff94040d70 0x7eff94043220 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7eff98006fd0 tx=0x7eff98006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:52.712 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.713+0000 7effa4ff9700 1 -- 192.168.123.109:0/1072352531 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7eff90029bb0 con 0x7effa8100ab0 2026-03-09T14:55:52.860 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.860+0000 7effadff0700 1 -- 192.168.123.109:0/1072352531 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7eff88005190 con 0x7effa8100ab0 2026-03-09T14:55:52.860 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.861+0000 7effa4ff9700 1 -- 192.168.123.109:0/1072352531 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7eff90026030 con 0x7effa8100ab0 2026-03-09T14:55:52.861 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:55:52.861 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: 
":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:55:52.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.865+0000 7effadff0700 1 -- 192.168.123.109:0/1072352531 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7eff94040d70 msgr2=0x7eff94043220 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:52.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.865+0000 7effadff0700 1 --2- 192.168.123.109:0/1072352531 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7eff94040d70 0x7eff94043220 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7eff98006fd0 tx=0x7eff98006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:52.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.865+0000 7effadff0700 1 -- 192.168.123.109:0/1072352531 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7effa8100ab0 msgr2=0x7effa8197380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:52.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.865+0000 7effadff0700 1 --2- 192.168.123.109:0/1072352531 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7effa8100ab0 0x7effa8197380 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7eff90004f40 tx=0x7eff90005e70 comp rx=0 tx=0).stop 2026-03-09T14:55:52.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.865+0000 7effadff0700 1 -- 192.168.123.109:0/1072352531 shutdown_connections 2026-03-09T14:55:52.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.865+0000 
7effadff0700 1 --2- 192.168.123.109:0/1072352531 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7eff94040d70 0x7eff94043220 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:52.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.865+0000 7effadff0700 1 --2- 192.168.123.109:0/1072352531 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7effa8100ab0 0x7effa8197380 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:52.865 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.865+0000 7effadff0700 1 -- 192.168.123.109:0/1072352531 >> 192.168.123.109:0/1072352531 conn(0x7effa80fa4f0 msgr2=0x7effa80fc8d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:52.865 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.865+0000 7effadff0700 1 -- 192.168.123.109:0/1072352531 shutdown_connections 2026-03-09T14:55:52.865 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:52.866+0000 7effadff0700 1 -- 192.168.123.109:0/1072352531 wait complete. 2026-03-09T14:55:52.866 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:55:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:52 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/1072352531' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:55:53.913 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T14:55:53.913 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:55:54.049 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:55:54.110 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:55:54.595 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.595+0000 7fdd33847700 1 -- 192.168.123.109:0/2313149619 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdd2c102230 msgr2=0x7fdd2c102640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:54.595 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.595+0000 7fdd33847700 1 --2- 192.168.123.109:0/2313149619 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdd2c102230 0x7fdd2c102640 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fdd1c009b00 tx=0x7fdd1c009e10 comp rx=0 tx=0).stop 2026-03-09T14:55:54.596 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.596+0000 7fdd33847700 1 -- 192.168.123.109:0/2313149619 shutdown_connections 2026-03-09T14:55:54.596 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.596+0000 7fdd33847700 1 --2- 192.168.123.109:0/2313149619 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdd2c102230 0x7fdd2c102640 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:54.596 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.596+0000 7fdd33847700 1 -- 192.168.123.109:0/2313149619 >> 192.168.123.109:0/2313149619 conn(0x7fdd2c0fd8d0 msgr2=0x7fdd2c0ffcb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:54.596 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.596+0000 7fdd33847700 1 -- 192.168.123.109:0/2313149619 
shutdown_connections 2026-03-09T14:55:54.596 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.596+0000 7fdd33847700 1 -- 192.168.123.109:0/2313149619 wait complete. 2026-03-09T14:55:54.596 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.597+0000 7fdd33847700 1 Processor -- start 2026-03-09T14:55:54.596 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.597+0000 7fdd33847700 1 -- start start 2026-03-09T14:55:54.596 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.597+0000 7fdd33847700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdd2c102230 0x7fdd2c19b7b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:55:54.596 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.597+0000 7fdd33847700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdd2c19bcf0 con 0x7fdd2c102230 2026-03-09T14:55:54.597 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.598+0000 7fdd315e3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdd2c102230 0x7fdd2c19b7b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:54.597 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.598+0000 7fdd315e3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdd2c102230 0x7fdd2c19b7b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:44074/0 (socket says 192.168.123.109:44074) 2026-03-09T14:55:54.597 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.598+0000 7fdd315e3700 1 -- 192.168.123.109:0/1751774406 learned_addr learned my addr 192.168.123.109:0/1751774406 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:55:54.597 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.598+0000 7fdd315e3700 1 -- 192.168.123.109:0/1751774406 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fdd1c0097e0 con 0x7fdd2c102230 2026-03-09T14:55:54.597 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.598+0000 7fdd315e3700 1 --2- 192.168.123.109:0/1751774406 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdd2c102230 0x7fdd2c19b7b0 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fdd1c004f40 tx=0x7fdd1c005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:54.597 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.598+0000 7fdd227fc700 1 -- 192.168.123.109:0/1751774406 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fdd1c01c070 con 0x7fdd2c102230 2026-03-09T14:55:54.597 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.598+0000 7fdd33847700 1 -- 192.168.123.109:0/1751774406 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fdd2c19bef0 con 0x7fdd2c102230 2026-03-09T14:55:54.598 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.599+0000 7fdd33847700 1 -- 192.168.123.109:0/1751774406 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fdd2c19c390 con 0x7fdd2c102230 2026-03-09T14:55:54.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.599+0000 7fdd227fc700 1 -- 192.168.123.109:0/1751774406 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fdd1c0053b0 con 0x7fdd2c102230 2026-03-09T14:55:54.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.599+0000 7fdd227fc700 1 -- 192.168.123.109:0/1751774406 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fdd1c00f460 con 
0x7fdd2c102230
2026-03-09T14:55:54.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.600+0000 7fdd227fc700 1 -- 192.168.123.109:0/1751774406 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fdd1c00f6c0 con 0x7fdd2c102230
2026-03-09T14:55:54.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.600+0000 7fdd227fc700 1 --2- 192.168.123.109:0/1751774406 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdd18038510 0x7fdd1803a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:54.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.600+0000 7fdd227fc700 1 -- 192.168.123.109:0/1751774406 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fdd1c04d570 con 0x7fdd2c102230
2026-03-09T14:55:54.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.600+0000 7fdd30de2700 1 --2- 192.168.123.109:0/1751774406 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdd18038510 0x7fdd1803a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:54.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.600+0000 7fdd33847700 1 -- 192.168.123.109:0/1751774406 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fdd2c062380 con 0x7fdd2c102230
2026-03-09T14:55:54.600 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.601+0000 7fdd30de2700 1 --2- 192.168.123.109:0/1751774406 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdd18038510 0x7fdd1803a9c0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fdd28006fd0 tx=0x7fdd28006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:54.603 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.604+0000 7fdd227fc700 1 -- 192.168.123.109:0/1751774406 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fdd1c026070 con 0x7fdd2c102230
2026-03-09T14:55:54.764 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.764+0000 7fdd33847700 1 -- 192.168.123.109:0/1751774406 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fdd2c19ed50 con 0x7fdd2c102230
2026-03-09T14:55:54.764 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.765+0000 7fdd227fc700 1 -- 192.168.123.109:0/1751774406 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fdd1c029bb0 con 0x7fdd2c102230
2026-03-09T14:55:54.765 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T14:55:54.765 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-09T14:55:54.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.769+0000 7fdd33847700 1 -- 192.168.123.109:0/1751774406 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdd18038510 msgr2=0x7fdd1803a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:54.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.769+0000 7fdd33847700 1 --2- 192.168.123.109:0/1751774406 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdd18038510 0x7fdd1803a9c0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fdd28006fd0 tx=0x7fdd28006e40 comp rx=0 tx=0).stop
2026-03-09T14:55:54.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.769+0000 7fdd33847700 1 -- 192.168.123.109:0/1751774406 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdd2c102230 msgr2=0x7fdd2c19b7b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:54.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.769+0000 7fdd33847700 1 --2- 192.168.123.109:0/1751774406 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdd2c102230 0x7fdd2c19b7b0 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fdd1c004f40 tx=0x7fdd1c005e70 comp rx=0 tx=0).stop
2026-03-09T14:55:54.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.769+0000 7fdd33847700 1 -- 192.168.123.109:0/1751774406 shutdown_connections
2026-03-09T14:55:54.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.770+0000 7fdd33847700 1 --2- 192.168.123.109:0/1751774406 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdd18038510 0x7fdd1803a9c0 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:54.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.770+0000 7fdd33847700 1 --2- 192.168.123.109:0/1751774406 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdd2c102230 0x7fdd2c19b7b0 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:54.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.770+0000 7fdd33847700 1 -- 192.168.123.109:0/1751774406 >> 192.168.123.109:0/1751774406 conn(0x7fdd2c0fd8d0 msgr2=0x7fdd2c0fe530 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:54.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.770+0000 7fdd33847700 1 -- 192.168.123.109:0/1751774406 shutdown_connections
2026-03-09T14:55:54.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:54.770+0000 7fdd33847700 1 -- 192.168.123.109:0/1751774406 wait complete.
2026-03-09T14:55:54.770 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/1751774406' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: Regenerating cephadm self-signed grafana TLS certificates
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:55:55.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:55 vm05 ceph-mon[50611]: Deploying daemon grafana.vm05 on vm05
2026-03-09T14:55:55.842 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-09T14:55:55.842 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json
2026-03-09T14:55:55.979 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T14:55:56.010 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T14:55:56.248 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.249+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2890928137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74dc102230 msgr2=0x7f74dc102640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:56.248 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.249+0000 7f74e1c1b700 1 --2- 192.168.123.109:0/2890928137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74dc102230 0x7f74dc102640 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f74c4009b00 tx=0x7f74c4009e10 comp rx=0 tx=0).stop
2026-03-09T14:55:56.248 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.249+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2890928137 shutdown_connections
2026-03-09T14:55:56.249 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.249+0000 7f74e1c1b700 1 --2- 192.168.123.109:0/2890928137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74dc102230 0x7f74dc102640 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:56.249 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.249+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2890928137 >> 192.168.123.109:0/2890928137 conn(0x7f74dc0fd8d0 msgr2=0x7f74dc0ffcb0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:56.249 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.249+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2890928137 shutdown_connections
2026-03-09T14:55:56.249 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.250+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2890928137 wait complete.
2026-03-09T14:55:56.249 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.250+0000 7f74e1c1b700 1 Processor -- start
2026-03-09T14:55:56.249 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.250+0000 7f74e1c1b700 1 -- start start
2026-03-09T14:55:56.249 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.250+0000 7f74e1c1b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74dc102230 0x7f74dc197380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:56.249 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.250+0000 7f74e1c1b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f74dc1978c0 con 0x7f74dc102230
2026-03-09T14:55:56.250 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.251+0000 7f74db7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74dc102230 0x7f74dc197380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:56.250 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.251+0000 7f74db7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74dc102230 0x7f74dc197380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:36860/0 (socket says 192.168.123.109:36860)
2026-03-09T14:55:56.250 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.251+0000 7f74db7fe700 1 -- 192.168.123.109:0/2290274839 learned_addr learned my addr 192.168.123.109:0/2290274839 (peer_addr_for_me v2:192.168.123.109:0/0)
2026-03-09T14:55:56.250 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.251+0000 7f74db7fe700 1 -- 192.168.123.109:0/2290274839 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f74c40097e0 con 0x7f74dc102230
2026-03-09T14:55:56.250 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.251+0000 7f74db7fe700 1 --2- 192.168.123.109:0/2290274839 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74dc102230 0x7f74dc197380 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f74c4004750 tx=0x7f74c4005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:56.251 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.251+0000 7f74d8ff9700 1 -- 192.168.123.109:0/2290274839 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f74c401c070 con 0x7f74dc102230
2026-03-09T14:55:56.251 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.252+0000 7f74d8ff9700 1 -- 192.168.123.109:0/2290274839 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f74c4021470 con 0x7f74dc102230
2026-03-09T14:55:56.251 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.252+0000 7f74d8ff9700 1 -- 192.168.123.109:0/2290274839 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f74c400f460 con 0x7f74dc102230
2026-03-09T14:55:56.251 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.252+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2290274839 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f74dc197ac0 con 0x7f74dc102230
2026-03-09T14:55:56.252 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.252+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2290274839 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f74dc197f60 con 0x7f74dc102230
2026-03-09T14:55:56.252 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.253+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2290274839 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f74dc191080 con 0x7f74dc102230
2026-03-09T14:55:56.252 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.253+0000 7f74d8ff9700 1 -- 192.168.123.109:0/2290274839 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f74c4021ac0 con 0x7f74dc102230
2026-03-09T14:55:56.252 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.253+0000 7f74d8ff9700 1 --2- 192.168.123.109:0/2290274839 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f74c8038510 0x7f74c803a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:56.252 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.253+0000 7f74d8ff9700 1 -- 192.168.123.109:0/2290274839 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f74c404c410 con 0x7f74dc102230
2026-03-09T14:55:56.252 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.253+0000 7f74daffd700 1 --2- 192.168.123.109:0/2290274839 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f74c8038510 0x7f74c803a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:56.253 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.254+0000 7f74daffd700 1 --2- 192.168.123.109:0/2290274839 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f74c8038510 0x7f74c803a9c0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f74cc006fd0 tx=0x7f74cc006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:56.255 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.256+0000 7f74d8ff9700 1 -- 192.168.123.109:0/2290274839 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f74c400f5c0 con 0x7f74dc102230
2026-03-09T14:55:56.405 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.406+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2290274839 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f74dc062380 con 0x7f74dc102230
2026-03-09T14:55:56.407 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.408+0000 7f74d8ff9700 1 -- 192.168.123.109:0/2290274839 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f74c4026030 con 0x7f74dc102230
2026-03-09T14:55:56.407 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T14:55:56.407 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-09T14:55:56.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.411+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2290274839 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f74c8038510 msgr2=0x7f74c803a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:56.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.411+0000 7f74e1c1b700 1 --2- 192.168.123.109:0/2290274839 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f74c8038510 0x7f74c803a9c0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f74cc006fd0 tx=0x7f74cc006e40 comp rx=0 tx=0).stop
2026-03-09T14:55:56.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.411+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2290274839 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74dc102230 msgr2=0x7f74dc197380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:56.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.411+0000 7f74e1c1b700 1 --2- 192.168.123.109:0/2290274839 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74dc102230 0x7f74dc197380 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f74c4004750 tx=0x7f74c4005dc0 comp rx=0 tx=0).stop
2026-03-09T14:55:56.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.411+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2290274839 shutdown_connections
2026-03-09T14:55:56.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.411+0000 7f74e1c1b700 1 --2- 192.168.123.109:0/2290274839 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f74c8038510 0x7f74c803a9c0 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:56.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.411+0000 7f74e1c1b700 1 --2- 192.168.123.109:0/2290274839 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f74dc102230 0x7f74dc197380 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:56.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.411+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2290274839 >> 192.168.123.109:0/2290274839 conn(0x7f74dc0fd8d0 msgr2=0x7f74dc0fe530 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:56.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.411+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2290274839 shutdown_connections
2026-03-09T14:55:56.410 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:56.411+0000 7f74e1c1b700 1 -- 192.168.123.109:0/2290274839 wait complete.
2026-03-09T14:55:56.411 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1
2026-03-09T14:55:56.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:56 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/2290274839' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T14:55:57.472 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-09T14:55:57.472 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json
2026-03-09T14:55:57.633 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T14:55:57.672 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T14:55:57.935 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.936+0000 7f9f4b254700 1 -- 192.168.123.109:0/3429066278 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f44074d80 msgr2=0x7f9f440731e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:57.935 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.936+0000 7f9f4b254700 1 --2- 192.168.123.109:0/3429066278 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f44074d80 0x7f9f440731e0 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f9f30009b00 tx=0x7f9f30009e10 comp rx=0 tx=0).stop
2026-03-09T14:55:57.936 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.936+0000 7f9f4b254700 1 -- 192.168.123.109:0/3429066278 shutdown_connections
2026-03-09T14:55:57.936 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.936+0000 7f9f4b254700 1 --2- 192.168.123.109:0/3429066278 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f44074d80 0x7f9f440731e0 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:57.936 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.936+0000 7f9f4b254700 1 -- 192.168.123.109:0/3429066278 >> 192.168.123.109:0/3429066278 conn(0x7f9f440fb5b0 msgr2=0x7f9f440fda00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:57.936 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.937+0000 7f9f4b254700 1 -- 192.168.123.109:0/3429066278 shutdown_connections
2026-03-09T14:55:57.936 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.937+0000 7f9f4b254700 1 -- 192.168.123.109:0/3429066278 wait complete.
2026-03-09T14:55:57.936 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.937+0000 7f9f4b254700 1 Processor -- start
2026-03-09T14:55:57.936 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.937+0000 7f9f4b254700 1 -- start start
2026-03-09T14:55:57.936 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.937+0000 7f9f4b254700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f44074d80 0x7f9f44197380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:57.936 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.937+0000 7f9f4b254700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9f441978c0 con 0x7f9f44074d80
2026-03-09T14:55:57.937 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.937+0000 7f9f48ff0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f44074d80 0x7f9f44197380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:57.937 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.937+0000 7f9f48ff0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f44074d80 0x7f9f44197380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:36882/0 (socket says 192.168.123.109:36882)
2026-03-09T14:55:57.937 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.937+0000 7f9f48ff0700 1 -- 192.168.123.109:0/3756055233 learned_addr learned my addr 192.168.123.109:0/3756055233 (peer_addr_for_me v2:192.168.123.109:0/0)
2026-03-09T14:55:57.937 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.938+0000 7f9f48ff0700 1 -- 192.168.123.109:0/3756055233 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9f300097e0 con 0x7f9f44074d80
2026-03-09T14:55:57.937 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.938+0000 7f9f48ff0700 1 --2- 192.168.123.109:0/3756055233 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f44074d80 0x7f9f44197380 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f9f30004f40 tx=0x7f9f30005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:57.937 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.938+0000 7f9f41ffb700 1 -- 192.168.123.109:0/3756055233 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9f3001c070 con 0x7f9f44074d80
2026-03-09T14:55:57.937 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.938+0000 7f9f41ffb700 1 -- 192.168.123.109:0/3756055233 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9f300053b0 con 0x7f9f44074d80
2026-03-09T14:55:57.939 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.938+0000 7f9f41ffb700 1 -- 192.168.123.109:0/3756055233 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f9f3000f460 con 0x7f9f44074d80
2026-03-09T14:55:57.939 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.938+0000 7f9f4b254700 1 -- 192.168.123.109:0/3756055233 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9f44197ac0 con 0x7f9f44074d80
2026-03-09T14:55:57.939 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.938+0000 7f9f4b254700 1 -- 192.168.123.109:0/3756055233 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9f44197f60 con 0x7f9f44074d80
2026-03-09T14:55:57.939 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.939+0000 7f9f4b254700 1 -- 192.168.123.109:0/3756055233 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9f44191070 con 0x7f9f44074d80
2026-03-09T14:55:57.939 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.940+0000 7f9f41ffb700 1 -- 192.168.123.109:0/3756055233 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f9f30005520 con 0x7f9f44074d80
2026-03-09T14:55:57.939 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.940+0000 7f9f41ffb700 1 --2- 192.168.123.109:0/3756055233 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9f34038190 0x7f9f3403a640 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:57.939 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.940+0000 7f9f41ffb700 1 -- 192.168.123.109:0/3756055233 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f9f3004c470 con 0x7f9f44074d80
2026-03-09T14:55:57.939 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.940+0000 7f9f43fff700 1 --2- 192.168.123.109:0/3756055233 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9f34038190 0x7f9f3403a640 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:57.940 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.941+0000 7f9f43fff700 1 --2- 192.168.123.109:0/3756055233 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9f34038190 0x7f9f3403a640 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f9f38006fd0 tx=0x7f9f38006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:57.942 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:57.943+0000 7f9f41ffb700 1 -- 192.168.123.109:0/3756055233 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f9f3002aa30 con 0x7f9f44074d80
2026-03-09T14:55:58.098 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:58.099+0000 7f9f4b254700 1 -- 192.168.123.109:0/3756055233 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f9f44062380 con 0x7f9f44074d80
2026-03-09T14:55:58.099 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:58.100+0000 7f9f41ffb700 1 -- 192.168.123.109:0/3756055233 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f9f30026020 con 0x7f9f44074d80
2026-03-09T14:55:58.100 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T14:55:58.100 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-09T14:55:58.102 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:58.103+0000 7f9f4b254700 1 -- 192.168.123.109:0/3756055233 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9f34038190 msgr2=0x7f9f3403a640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:58.103 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:58.103+0000 7f9f4b254700 1 --2- 192.168.123.109:0/3756055233 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9f34038190 0x7f9f3403a640 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f9f38006fd0 tx=0x7f9f38006e40 comp rx=0 tx=0).stop
2026-03-09T14:55:58.103 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:58.104+0000 7f9f4b254700 1 -- 192.168.123.109:0/3756055233 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f44074d80 msgr2=0x7f9f44197380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:58.103 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:58.104+0000 7f9f4b254700 1 --2- 192.168.123.109:0/3756055233 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f44074d80 0x7f9f44197380 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f9f30004f40 tx=0x7f9f30005e70 comp rx=0 tx=0).stop
2026-03-09T14:55:58.103 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:58.104+0000 7f9f4b254700 1 -- 192.168.123.109:0/3756055233 shutdown_connections
2026-03-09T14:55:58.103 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:58.104+0000 7f9f4b254700 1 --2- 192.168.123.109:0/3756055233 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9f34038190 0x7f9f3403a640 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:58.103 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:58.104+0000 7f9f4b254700 1 --2- 192.168.123.109:0/3756055233 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9f44074d80 0x7f9f44197380 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:58.103 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:58.104+0000 7f9f4b254700 1 -- 192.168.123.109:0/3756055233 >> 192.168.123.109:0/3756055233 conn(0x7f9f440fb5b0 msgr2=0x7f9f440fc280 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:58.104 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:58.105+0000 7f9f4b254700 1 -- 192.168.123.109:0/3756055233 shutdown_connections
2026-03-09T14:55:58.104 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:58.105+0000 7f9f4b254700 1 -- 192.168.123.109:0/3756055233 wait complete.
2026-03-09T14:55:58.105 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1
2026-03-09T14:55:58.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:55:58 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/3756055233' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T14:55:59.172 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-09T14:55:59.172 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json
2026-03-09T14:55:59.321 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T14:55:59.363 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-09T14:55:59.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.649+0000 7f206c4e2700 1 -- 192.168.123.109:0/3049659684 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2064102240 msgr2=0x7f2064102650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:55:59.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.649+0000 7f206c4e2700 1 --2- 192.168.123.109:0/3049659684 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2064102240 0x7f2064102650 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7f2054009b00 tx=0x7f2054009e10 comp rx=0 tx=0).stop
2026-03-09T14:55:59.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.650+0000 7f206c4e2700 1 -- 192.168.123.109:0/3049659684 shutdown_connections
2026-03-09T14:55:59.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.650+0000 7f206c4e2700 1 --2- 192.168.123.109:0/3049659684 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2064102240 0x7f2064102650 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:55:59.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.650+0000 7f206c4e2700 1 -- 192.168.123.109:0/3049659684 >> 192.168.123.109:0/3049659684 conn(0x7f20640fd8d0 msgr2=0x7f20640ffcb0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:55:59.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.650+0000 7f206c4e2700 1 -- 192.168.123.109:0/3049659684 shutdown_connections
2026-03-09T14:55:59.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.650+0000 7f206c4e2700 1 -- 192.168.123.109:0/3049659684 wait complete.
2026-03-09T14:55:59.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.651+0000 7f206c4e2700 1 Processor -- start
2026-03-09T14:55:59.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.651+0000 7f206c4e2700 1 -- start start
2026-03-09T14:55:59.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.651+0000 7f206c4e2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2064102240 0x7f2064197450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:59.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.651+0000 7f206c4e2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2064197990 con 0x7f2064102240
2026-03-09T14:55:59.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.652+0000 7f206a27e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2064102240 0x7f2064197450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:55:59.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.652+0000 7f206a27e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2064102240 0x7f2064197450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:36888/0 (socket says 192.168.123.109:36888)
2026-03-09T14:55:59.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.652+0000 7f206a27e700 1 -- 192.168.123.109:0/3918698020 learned_addr learned my addr 192.168.123.109:0/3918698020 (peer_addr_for_me v2:192.168.123.109:0/0)
2026-03-09T14:55:59.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.652+0000 7f206a27e700 1 -- 192.168.123.109:0/3918698020 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f20540097e0 con 0x7f2064102240
2026-03-09T14:55:59.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.652+0000 7f206a27e700 1 --2- 192.168.123.109:0/3918698020 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2064102240 0x7f2064197450 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7f2054004d40 tx=0x7f2054004e20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:55:59.652 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.652+0000 7f205b7fe700 1 -- 192.168.123.109:0/3918698020 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f205401c070 con 0x7f2064102240
2026-03-09T14:55:59.652 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.653+0000 7f206c4e2700 1 -- 192.168.123.109:0/3918698020 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2064197b90 con 0x7f2064102240
2026-03-09T14:55:59.652 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.653+0000 7f206c4e2700 1 -- 192.168.123.109:0/3918698020 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2064198030 con 0x7f2064102240
2026-03-09T14:55:59.653 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.653+0000 7f205b7fe700 1 -- 192.168.123.109:0/3918698020 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f20540056f0 con 0x7f2064102240
2026-03-09T14:55:59.653 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.653+0000 7f205b7fe700 1 -- 192.168.123.109:0/3918698020 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f2054017440 con 0x7f2064102240
2026-03-09T14:55:59.653 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.654+0000 7f205b7fe700 1 -- 192.168.123.109:0/3918698020 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f2054017660 con 0x7f2064102240
2026-03-09T14:55:59.653 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.654+0000 7f205b7fe700 1 --2- 192.168.123.109:0/3918698020 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f2050038510 0x7f205003a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:55:59.653 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.654+0000 7f205b7fe700 1 -- 192.168.123.109:0/3918698020 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f205404d190 con 0x7f2064102240
2026-03-09T14:55:59.653 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.654+0000 7f206c4e2700 1 -- 192.168.123.109:0/3918698020 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2048005320 con 0x7f2064102240
2026-03-09T14:55:59.654
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.654+0000 7f2069a7d700 1 --2- 192.168.123.109:0/3918698020 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f2050038510 0x7f205003a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:55:59.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.655+0000 7f2069a7d700 1 --2- 192.168.123.109:0/3918698020 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f2050038510 0x7f205003a9c0 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f2060006fd0 tx=0x7f2060006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:55:59.657 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.658+0000 7f205b7fe700 1 -- 192.168.123.109:0/3918698020 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f2054025070 con 0x7f2064102240 2026-03-09T14:55:59.811 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.810+0000 7f206c4e2700 1 -- 192.168.123.109:0/3918698020 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f2048005190 con 0x7f2064102240 2026-03-09T14:55:59.812 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.813+0000 7f205b7fe700 1 -- 192.168.123.109:0/3918698020 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f205402a430 con 0x7f2064102240 2026-03-09T14:55:59.812 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:55:59.812 
INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:55:59.814 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.815+0000 7f206c4e2700 1 -- 192.168.123.109:0/3918698020 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f2050038510 msgr2=0x7f205003a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:59.814 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.815+0000 7f206c4e2700 1 --2- 192.168.123.109:0/3918698020 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f2050038510 0x7f205003a9c0 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f2060006fd0 tx=0x7f2060006e40 comp rx=0 tx=0).stop 2026-03-09T14:55:59.814 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.815+0000 7f206c4e2700 1 -- 192.168.123.109:0/3918698020 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2064102240 msgr2=0x7f2064197450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:55:59.814 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.815+0000 7f206c4e2700 1 --2- 192.168.123.109:0/3918698020 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2064102240 0x7f2064197450 secure :-1 s=READY pgs=116 cs=0 
l=1 rev1=1 crypto rx=0x7f2054004d40 tx=0x7f2054004e20 comp rx=0 tx=0).stop 2026-03-09T14:55:59.815 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.815+0000 7f206c4e2700 1 -- 192.168.123.109:0/3918698020 shutdown_connections 2026-03-09T14:55:59.815 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.815+0000 7f206c4e2700 1 --2- 192.168.123.109:0/3918698020 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f2050038510 0x7f205003a9c0 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:59.815 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.815+0000 7f206c4e2700 1 --2- 192.168.123.109:0/3918698020 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2064102240 0x7f2064197450 unknown :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:55:59.815 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.816+0000 7f206c4e2700 1 -- 192.168.123.109:0/3918698020 >> 192.168.123.109:0/3918698020 conn(0x7f20640fd8d0 msgr2=0x7f20640fe530 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:55:59.815 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.816+0000 7f206c4e2700 1 -- 192.168.123.109:0/3918698020 shutdown_connections 2026-03-09T14:55:59.815 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:55:59.816+0000 7f206c4e2700 1 -- 192.168.123.109:0/3918698020 wait complete. 2026-03-09T14:55:59.816 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:00 vm05 ceph-mon[50611]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:00 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:00 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/3918698020' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:00.914 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:00.914 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:01.066 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:01.110 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:01.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.468+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/1296036340 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe08102240 msgr2=0x7fbe08102650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:01.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.468+0000 7fbe0ca8b700 1 --2- 192.168.123.109:0/1296036340 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe08102240 0x7fbe08102650 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7fbdf0009b00 tx=0x7fbdf0009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:01.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.470+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/1296036340 shutdown_connections 2026-03-09T14:56:01.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.470+0000 7fbe0ca8b700 1 --2- 192.168.123.109:0/1296036340 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe08102240 0x7fbe08102650 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:01.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.470+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/1296036340 >> 192.168.123.109:0/1296036340 conn(0x7fbe080fd8d0 msgr2=0x7fbe080ffcb0 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:01.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.470+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/1296036340 shutdown_connections 2026-03-09T14:56:01.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.470+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/1296036340 wait complete. 2026-03-09T14:56:01.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.471+0000 7fbe0ca8b700 1 Processor -- start 2026-03-09T14:56:01.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.471+0000 7fbe0ca8b700 1 -- start start 2026-03-09T14:56:01.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.471+0000 7fbe0ca8b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe08102240 0x7fbe08197450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:01.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.471+0000 7fbe0ca8b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbe08197990 con 0x7fbe08102240 2026-03-09T14:56:01.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.471+0000 7fbe0659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe08102240 0x7fbe08197450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:01.471 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.471+0000 7fbe0659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe08102240 0x7fbe08197450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:36906/0 (socket says 192.168.123.109:36906) 2026-03-09T14:56:01.471 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.471+0000 7fbe0659c700 
1 -- 192.168.123.109:0/3830558250 learned_addr learned my addr 192.168.123.109:0/3830558250 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:01.471 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.472+0000 7fbe0659c700 1 -- 192.168.123.109:0/3830558250 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbdf00097e0 con 0x7fbe08102240 2026-03-09T14:56:01.471 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.472+0000 7fbe0659c700 1 --2- 192.168.123.109:0/3830558250 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe08102240 0x7fbe08197450 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7fbdf0004d40 tx=0x7fbdf0004e20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:01.471 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.472+0000 7fbdff7fe700 1 -- 192.168.123.109:0/3830558250 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbdf001c070 con 0x7fbe08102240 2026-03-09T14:56:01.471 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.472+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/3830558250 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbe08197b90 con 0x7fbe08102240 2026-03-09T14:56:01.472 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.472+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/3830558250 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbe08198030 con 0x7fbe08102240 2026-03-09T14:56:01.472 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.472+0000 7fbdff7fe700 1 -- 192.168.123.109:0/3830558250 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fbdf00054e0 con 0x7fbe08102240 2026-03-09T14:56:01.472 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.472+0000 
7fbdff7fe700 1 -- 192.168.123.109:0/3830558250 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbdf0003c50 con 0x7fbe08102240 2026-03-09T14:56:01.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.474+0000 7fbdff7fe700 1 -- 192.168.123.109:0/3830558250 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fbdf000f460 con 0x7fbe08102240 2026-03-09T14:56:01.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.474+0000 7fbdff7fe700 1 --2- 192.168.123.109:0/3830558250 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbdf4038510 0x7fbdf403a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:01.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.474+0000 7fbdff7fe700 1 -- 192.168.123.109:0/3830558250 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fbdf004d350 con 0x7fbe08102240 2026-03-09T14:56:01.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.474+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/3830558250 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbde8005320 con 0x7fbe08102240 2026-03-09T14:56:01.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.474+0000 7fbe05d9b700 1 --2- 192.168.123.109:0/3830558250 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbdf4038510 0x7fbdf403a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:01.474 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.475+0000 7fbe05d9b700 1 --2- 192.168.123.109:0/3830558250 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbdf4038510 0x7fbdf403a9c0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto 
rx=0x7fbdf8006fd0 tx=0x7fbdf8006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:01.477 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.478+0000 7fbdff7fe700 1 -- 192.168.123.109:0/3830558250 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fbdf0029980 con 0x7fbe08102240 2026-03-09T14:56:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:01 vm05 ceph-mon[50611]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:01.625 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.626+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/3830558250 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fbde8005190 con 0x7fbe08102240 2026-03-09T14:56:01.625 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.626+0000 7fbdff7fe700 1 -- 192.168.123.109:0/3830558250 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fbdf0029390 con 0x7fbe08102240 2026-03-09T14:56:01.626 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:01.626 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: 
":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:01.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.629+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/3830558250 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbdf4038510 msgr2=0x7fbdf403a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:01.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.629+0000 7fbe0ca8b700 1 --2- 192.168.123.109:0/3830558250 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbdf4038510 0x7fbdf403a9c0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7fbdf8006fd0 tx=0x7fbdf8006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:01.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.629+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/3830558250 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe08102240 msgr2=0x7fbe08197450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:01.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.629+0000 7fbe0ca8b700 1 --2- 192.168.123.109:0/3830558250 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe08102240 0x7fbe08197450 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7fbdf0004d40 tx=0x7fbdf0004e20 comp rx=0 tx=0).stop 2026-03-09T14:56:01.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.629+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/3830558250 shutdown_connections 2026-03-09T14:56:01.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.629+0000 
7fbe0ca8b700 1 --2- 192.168.123.109:0/3830558250 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbdf4038510 0x7fbdf403a9c0 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:01.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.629+0000 7fbe0ca8b700 1 --2- 192.168.123.109:0/3830558250 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe08102240 0x7fbe08197450 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:01.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.629+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/3830558250 >> 192.168.123.109:0/3830558250 conn(0x7fbe080fd8d0 msgr2=0x7fbe080fe530 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:01.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.629+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/3830558250 shutdown_connections 2026-03-09T14:56:01.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:01.629+0000 7fbe0ca8b700 1 -- 192.168.123.109:0/3830558250 wait complete. 2026-03-09T14:56:01.629 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:02.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:02 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/3830558250' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:02.676 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T14:56:02.676 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:02.825 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:02.876 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:03.149 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.149+0000 7f1f11e98700 1 -- 192.168.123.109:0/802758650 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f0c100010 msgr2=0x7f1f0c100420 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:03.149 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.149+0000 7f1f11e98700 1 --2- 192.168.123.109:0/802758650 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f0c100010 0x7f1f0c100420 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f1ef4009b00 tx=0x7f1ef4009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:03.149 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.150+0000 7f1f11e98700 1 -- 192.168.123.109:0/802758650 shutdown_connections 2026-03-09T14:56:03.149 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.150+0000 7f1f11e98700 1 --2- 192.168.123.109:0/802758650 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f0c100010 0x7f1f0c100420 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:03.149 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.150+0000 7f1f11e98700 1 -- 192.168.123.109:0/802758650 >> 192.168.123.109:0/802758650 conn(0x7f1f0c0fb5a0 msgr2=0x7f1f0c0fd9f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:03.149 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.150+0000 7f1f11e98700 1 -- 192.168.123.109:0/802758650 
shutdown_connections 2026-03-09T14:56:03.149 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.150+0000 7f1f11e98700 1 -- 192.168.123.109:0/802758650 wait complete. 2026-03-09T14:56:03.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.151+0000 7f1f11e98700 1 Processor -- start 2026-03-09T14:56:03.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.151+0000 7f1f11e98700 1 -- start start 2026-03-09T14:56:03.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.151+0000 7f1f11e98700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f0c100010 0x7f1f0c1973a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:03.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.151+0000 7f1f11e98700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1f0c1978e0 con 0x7f1f0c100010 2026-03-09T14:56:03.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.151+0000 7f1f0b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f0c100010 0x7f1f0c1973a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:03.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.151+0000 7f1f0b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f0c100010 0x7f1f0c1973a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:36922/0 (socket says 192.168.123.109:36922) 2026-03-09T14:56:03.151 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.151+0000 7f1f0b7fe700 1 -- 192.168.123.109:0/3339832556 learned_addr learned my addr 192.168.123.109:0/3339832556 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:03.151 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.152+0000 7f1f0b7fe700 1 -- 192.168.123.109:0/3339832556 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1ef40097e0 con 0x7f1f0c100010 2026-03-09T14:56:03.151 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.152+0000 7f1f0b7fe700 1 --2- 192.168.123.109:0/3339832556 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f0c100010 0x7f1f0c1973a0 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f1ef4004750 tx=0x7f1ef4005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:03.152 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.152+0000 7f1f08ff9700 1 -- 192.168.123.109:0/3339832556 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1ef401c070 con 0x7f1f0c100010 2026-03-09T14:56:03.152 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.152+0000 7f1f11e98700 1 -- 192.168.123.109:0/3339832556 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1f0c197ae0 con 0x7f1f0c100010 2026-03-09T14:56:03.152 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.152+0000 7f1f11e98700 1 -- 192.168.123.109:0/3339832556 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1f0c197f80 con 0x7f1f0c100010 2026-03-09T14:56:03.153 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.153+0000 7f1f08ff9700 1 -- 192.168.123.109:0/3339832556 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1ef4021470 con 0x7f1f0c100010 2026-03-09T14:56:03.153 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.153+0000 7f1f08ff9700 1 -- 192.168.123.109:0/3339832556 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f1ef400f460 con 
0x7f1f0c100010 2026-03-09T14:56:03.153 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.154+0000 7f1f08ff9700 1 -- 192.168.123.109:0/3339832556 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f1ef400f600 con 0x7f1f0c100010 2026-03-09T14:56:03.153 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.154+0000 7f1f11e98700 1 -- 192.168.123.109:0/3339832556 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1f0c191260 con 0x7f1f0c100010 2026-03-09T14:56:03.153 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.154+0000 7f1f08ff9700 1 --2- 192.168.123.109:0/3339832556 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1ef8038510 0x7f1ef803a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:03.153 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.154+0000 7f1f08ff9700 1 -- 192.168.123.109:0/3339832556 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f1ef404d4a0 con 0x7f1f0c100010 2026-03-09T14:56:03.153 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.154+0000 7f1f0affd700 1 --2- 192.168.123.109:0/3339832556 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1ef8038510 0x7f1ef803a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:03.154 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.155+0000 7f1f0affd700 1 --2- 192.168.123.109:0/3339832556 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1ef8038510 0x7f1ef803a9c0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f1efc006fd0 tx=0x7f1efc006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:03.157 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.157+0000 7f1f08ff9700 1 -- 192.168.123.109:0/3339832556 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f1ef4029950 con 0x7f1f0c100010 2026-03-09T14:56:03.306 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.307+0000 7f1f11e98700 1 -- 192.168.123.109:0/3339832556 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f1f0c062380 con 0x7f1f0c100010 2026-03-09T14:56:03.307 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.308+0000 7f1f08ff9700 1 -- 192.168.123.109:0/3339832556 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f1ef4026030 con 0x7f1f0c100010 2026-03-09T14:56:03.307 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:03.307 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:03.310 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.311+0000 7f1f11e98700 1 -- 192.168.123.109:0/3339832556 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1ef8038510 msgr2=0x7f1ef803a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:03.310 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.311+0000 7f1f11e98700 1 --2- 192.168.123.109:0/3339832556 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1ef8038510 0x7f1ef803a9c0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f1efc006fd0 tx=0x7f1efc006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:03.310 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.311+0000 7f1f11e98700 1 -- 192.168.123.109:0/3339832556 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f0c100010 msgr2=0x7f1f0c1973a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:03.310 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.311+0000 7f1f11e98700 1 --2- 192.168.123.109:0/3339832556 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f0c100010 0x7f1f0c1973a0 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f1ef4004750 tx=0x7f1ef4005dc0 comp rx=0 tx=0).stop 2026-03-09T14:56:03.311 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.311+0000 7f1f11e98700 1 -- 192.168.123.109:0/3339832556 shutdown_connections 2026-03-09T14:56:03.311 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.311+0000 7f1f11e98700 1 --2- 192.168.123.109:0/3339832556 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1ef8038510 0x7f1ef803a9c0 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:03.311 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.311+0000 7f1f11e98700 1 --2- 192.168.123.109:0/3339832556 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f0c100010 0x7f1f0c1973a0 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:03.311 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.311+0000 7f1f11e98700 1 -- 192.168.123.109:0/3339832556 >> 192.168.123.109:0/3339832556 conn(0x7f1f0c0fb5a0 msgr2=0x7f1f0c0fd9f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:03.311 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.311+0000 7f1f11e98700 1 -- 192.168.123.109:0/3339832556 shutdown_connections 2026-03-09T14:56:03.311 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:03.311+0000 7f1f11e98700 1 -- 192.168.123.109:0/3339832556 wait complete. 2026-03-09T14:56:03.312 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:03.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:03 vm05 ceph-mon[50611]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:04.380 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:04.380 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:04.531 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:04.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:04 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/3339832556' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:04.577 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:04.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.859+0000 7f749e818700 1 -- 192.168.123.109:0/549080665 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7498102240 msgr2=0x7f7498102650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:04.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.859+0000 7f749e818700 1 --2- 192.168.123.109:0/549080665 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7498102240 0x7f7498102650 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f7480009b00 tx=0x7f7480009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:04.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.859+0000 7f749e818700 1 -- 192.168.123.109:0/549080665 shutdown_connections 2026-03-09T14:56:04.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.859+0000 7f749e818700 1 --2- 192.168.123.109:0/549080665 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7498102240 0x7f7498102650 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:04.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.859+0000 7f749e818700 1 -- 192.168.123.109:0/549080665 >> 192.168.123.109:0/549080665 conn(0x7f74980fd8d0 msgr2=0x7f74980ffcb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:04.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.860+0000 7f749e818700 1 -- 192.168.123.109:0/549080665 shutdown_connections 2026-03-09T14:56:04.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.860+0000 7f749e818700 1 -- 192.168.123.109:0/549080665 wait complete. 
2026-03-09T14:56:04.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.860+0000 7f749e818700 1 Processor -- start 2026-03-09T14:56:04.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.860+0000 7f749e818700 1 -- start start 2026-03-09T14:56:04.860 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.861+0000 7f749e818700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7498102240 0x7f7498197450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:04.860 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.861+0000 7f749e818700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7498197990 con 0x7f7498102240 2026-03-09T14:56:04.860 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.861+0000 7f7497fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7498102240 0x7f7498197450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:04.860 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.861+0000 7f7497fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7498102240 0x7f7498197450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:50010/0 (socket says 192.168.123.109:50010) 2026-03-09T14:56:04.860 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.861+0000 7f7497fff700 1 -- 192.168.123.109:0/2742717716 learned_addr learned my addr 192.168.123.109:0/2742717716 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:04.860 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.861+0000 7f7497fff700 1 -- 192.168.123.109:0/2742717716 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f74800097e0 con 0x7f7498102240 2026-03-09T14:56:04.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.861+0000 7f7497fff700 1 --2- 192.168.123.109:0/2742717716 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7498102240 0x7f7498197450 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f7480004d40 tx=0x7f7480004e20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:04.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.862+0000 7f74957fa700 1 -- 192.168.123.109:0/2742717716 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f748001c070 con 0x7f7498102240 2026-03-09T14:56:04.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.862+0000 7f749e818700 1 -- 192.168.123.109:0/2742717716 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7498197b90 con 0x7f7498102240 2026-03-09T14:56:04.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.862+0000 7f749e818700 1 -- 192.168.123.109:0/2742717716 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7498198030 con 0x7f7498102240 2026-03-09T14:56:04.862 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.862+0000 7f74957fa700 1 -- 192.168.123.109:0/2742717716 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f74800056f0 con 0x7f7498102240 2026-03-09T14:56:04.862 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.862+0000 7f74957fa700 1 -- 192.168.123.109:0/2742717716 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f7480017440 con 0x7f7498102240 2026-03-09T14:56:04.862 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.863+0000 7f749e818700 1 -- 192.168.123.109:0/2742717716 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7478005320 con 0x7f7498102240 2026-03-09T14:56:04.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.864+0000 7f74957fa700 1 -- 192.168.123.109:0/2742717716 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f7480005210 con 0x7f7498102240 2026-03-09T14:56:04.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.864+0000 7f74957fa700 1 --2- 192.168.123.109:0/2742717716 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7484038510 0x7f748403a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:04.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.864+0000 7f74957fa700 1 -- 192.168.123.109:0/2742717716 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f748004bf50 con 0x7f7498102240 2026-03-09T14:56:04.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.865+0000 7f74977fe700 1 --2- 192.168.123.109:0/2742717716 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7484038510 0x7f748403a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:04.864 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.865+0000 7f74977fe700 1 --2- 192.168.123.109:0/2742717716 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7484038510 0x7f748403a9c0 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f7488006fd0 tx=0x7f7488006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:04.871 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:04.871+0000 7f74957fa700 1 -- 192.168.123.109:0/2742717716 <== mon.0 v2:192.168.123.105:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f7480028950 con 0x7f7498102240 2026-03-09T14:56:05.026 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:05.026+0000 7f749e818700 1 -- 192.168.123.109:0/2742717716 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f7478005190 con 0x7f7498102240 2026-03-09T14:56:05.027 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:05.028+0000 7f74957fa700 1 -- 192.168.123.109:0/2742717716 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f7480025030 con 0x7f7498102240 2026-03-09T14:56:05.028 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:05.028 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:05.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:05.031+0000 7f749e818700 1 -- 192.168.123.109:0/2742717716 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7484038510 msgr2=0x7f748403a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:05.031 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:05.031+0000 7f749e818700 1 --2- 192.168.123.109:0/2742717716 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7484038510 0x7f748403a9c0 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f7488006fd0 tx=0x7f7488006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:05.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:05.032+0000 7f749e818700 1 -- 192.168.123.109:0/2742717716 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7498102240 msgr2=0x7f7498197450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:05.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:05.032+0000 7f749e818700 1 --2- 192.168.123.109:0/2742717716 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7498102240 0x7f7498197450 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f7480004d40 tx=0x7f7480004e20 comp rx=0 tx=0).stop 2026-03-09T14:56:05.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:05.032+0000 7f749e818700 1 -- 192.168.123.109:0/2742717716 shutdown_connections 2026-03-09T14:56:05.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:05.032+0000 7f749e818700 1 --2- 192.168.123.109:0/2742717716 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7484038510 0x7f748403a9c0 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:05.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:05.032+0000 7f749e818700 1 --2- 192.168.123.109:0/2742717716 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7498102240 0x7f7498197450 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:05.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:05.032+0000 7f749e818700 1 -- 192.168.123.109:0/2742717716 >> 192.168.123.109:0/2742717716 conn(0x7f74980fd8d0 msgr2=0x7f74980fe530 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T14:56:05.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:05.032+0000 7f749e818700 1 -- 192.168.123.109:0/2742717716 shutdown_connections 2026-03-09T14:56:05.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:05.032+0000 7f749e818700 1 -- 192.168.123.109:0/2742717716 wait complete. 2026-03-09T14:56:05.032 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:05.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:05 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/2742717716' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:05.290 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:05 vm05 ceph-mon[50611]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:06.094 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:06.094 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:06.241 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:06.279 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:06.549 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.549+0000 7fe1ce41f700 1 -- 192.168.123.109:0/2181084548 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1c8102230 msgr2=0x7fe1c8102640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:06.549 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.549+0000 7fe1ce41f700 1 --2- 192.168.123.109:0/2181084548 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1c8102230 0x7fe1c8102640 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7fe1b0009b00 tx=0x7fe1b0009e10 comp rx=0 tx=0).stop 
2026-03-09T14:56:06.549 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.550+0000 7fe1ce41f700 1 -- 192.168.123.109:0/2181084548 shutdown_connections 2026-03-09T14:56:06.549 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.550+0000 7fe1ce41f700 1 --2- 192.168.123.109:0/2181084548 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1c8102230 0x7fe1c8102640 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:06.550 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.550+0000 7fe1ce41f700 1 -- 192.168.123.109:0/2181084548 >> 192.168.123.109:0/2181084548 conn(0x7fe1c80fd8d0 msgr2=0x7fe1c80ffcb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:06.550 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.550+0000 7fe1ce41f700 1 -- 192.168.123.109:0/2181084548 shutdown_connections 2026-03-09T14:56:06.550 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.550+0000 7fe1ce41f700 1 -- 192.168.123.109:0/2181084548 wait complete. 
2026-03-09T14:56:06.550 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.551+0000 7fe1ce41f700 1 Processor -- start 2026-03-09T14:56:06.550 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.551+0000 7fe1ce41f700 1 -- start start 2026-03-09T14:56:06.550 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.551+0000 7fe1ce41f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1c8102230 0x7fe1c8197380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:06.550 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.551+0000 7fe1ce41f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe1c81978c0 con 0x7fe1c8102230 2026-03-09T14:56:06.551 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.551+0000 7fe1c7fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1c8102230 0x7fe1c8197380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:06.551 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.551+0000 7fe1c7fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1c8102230 0x7fe1c8197380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:50020/0 (socket says 192.168.123.109:50020) 2026-03-09T14:56:06.551 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.551+0000 7fe1c7fff700 1 -- 192.168.123.109:0/150528292 learned_addr learned my addr 192.168.123.109:0/150528292 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:06.551 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.552+0000 7fe1c7fff700 1 -- 192.168.123.109:0/150528292 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe1b00097e0 con 0x7fe1c8102230 2026-03-09T14:56:06.552 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.552+0000 7fe1c7fff700 1 --2- 192.168.123.109:0/150528292 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1c8102230 0x7fe1c8197380 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7fe1b0004750 tx=0x7fe1b0005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:06.552 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.552+0000 7fe1c57fa700 1 -- 192.168.123.109:0/150528292 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe1b001c070 con 0x7fe1c8102230 2026-03-09T14:56:06.552 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.552+0000 7fe1c57fa700 1 -- 192.168.123.109:0/150528292 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fe1b0021470 con 0x7fe1c8102230 2026-03-09T14:56:06.552 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.552+0000 7fe1ce41f700 1 -- 192.168.123.109:0/150528292 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe1c8197ac0 con 0x7fe1c8102230 2026-03-09T14:56:06.552 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.552+0000 7fe1ce41f700 1 -- 192.168.123.109:0/150528292 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe1c8197f60 con 0x7fe1c8102230 2026-03-09T14:56:06.552 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.552+0000 7fe1c57fa700 1 -- 192.168.123.109:0/150528292 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fe1b000f460 con 0x7fe1c8102230 2026-03-09T14:56:06.552 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.553+0000 7fe1c57fa700 1 -- 192.168.123.109:0/150528292 <== mon.0 v2:192.168.123.105:3300/0 4 
==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fe1b0021ac0 con 0x7fe1c8102230 2026-03-09T14:56:06.552 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.553+0000 7fe1ce41f700 1 -- 192.168.123.109:0/150528292 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe1c8191080 con 0x7fe1c8102230 2026-03-09T14:56:06.553 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.554+0000 7fe1c57fa700 1 --2- 192.168.123.109:0/150528292 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe1b4038560 0x7fe1b403aa10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:06.553 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.554+0000 7fe1c57fa700 1 -- 192.168.123.109:0/150528292 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fe1b004c370 con 0x7fe1c8102230 2026-03-09T14:56:06.555 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.555+0000 7fe1c77fe700 1 --2- 192.168.123.109:0/150528292 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe1b4038560 0x7fe1b403aa10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:06.555 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.556+0000 7fe1c57fa700 1 -- 192.168.123.109:0/150528292 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fe1b0005420 con 0x7fe1c8102230 2026-03-09T14:56:06.555 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.556+0000 7fe1c77fe700 1 --2- 192.168.123.109:0/150528292 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe1b4038560 0x7fe1b403aa10 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fe1b8006fd0 tx=0x7fe1b8006e40 comp rx=0 
tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:06.702 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.703+0000 7fe1ce41f700 1 -- 192.168.123.109:0/150528292 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fe1c8062380 con 0x7fe1c8102230 2026-03-09T14:56:06.704 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.705+0000 7fe1c57fa700 1 -- 192.168.123.109:0/150528292 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fe1b0026030 con 0x7fe1c8102230 2026-03-09T14:56:06.704 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:06.704 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:06.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.708+0000 7fe1ce41f700 1 -- 192.168.123.109:0/150528292 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe1b4038560 msgr2=0x7fe1b403aa10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:06.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.708+0000 
7fe1ce41f700 1 --2- 192.168.123.109:0/150528292 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe1b4038560 0x7fe1b403aa10 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7fe1b8006fd0 tx=0x7fe1b8006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:06.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.708+0000 7fe1ce41f700 1 -- 192.168.123.109:0/150528292 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1c8102230 msgr2=0x7fe1c8197380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:06.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.708+0000 7fe1ce41f700 1 --2- 192.168.123.109:0/150528292 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1c8102230 0x7fe1c8197380 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7fe1b0004750 tx=0x7fe1b0005dc0 comp rx=0 tx=0).stop 2026-03-09T14:56:06.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.708+0000 7fe1ce41f700 1 -- 192.168.123.109:0/150528292 shutdown_connections 2026-03-09T14:56:06.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.708+0000 7fe1ce41f700 1 --2- 192.168.123.109:0/150528292 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe1b4038560 0x7fe1b403aa10 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:06.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.708+0000 7fe1ce41f700 1 --2- 192.168.123.109:0/150528292 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1c8102230 0x7fe1c8197380 unknown :-1 s=CLOSED pgs=124 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:06.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.708+0000 7fe1ce41f700 1 -- 192.168.123.109:0/150528292 >> 192.168.123.109:0/150528292 conn(0x7fe1c80fd8d0 msgr2=0x7fe1c80fe530 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:06.708 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.708+0000 7fe1ce41f700 1 -- 192.168.123.109:0/150528292 shutdown_connections 2026-03-09T14:56:06.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:06.708+0000 7fe1ce41f700 1 -- 192.168.123.109:0/150528292 wait complete. 2026-03-09T14:56:06.708 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:07.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:06 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/150528292' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:07.777 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:07.778 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:07.926 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:07.969 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:08.230 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.230+0000 7f04092c4700 1 -- 192.168.123.109:0/3824217854 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0404100b40 msgr2=0x7f0404102f20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:08.230 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.230+0000 7f04092c4700 1 --2- 192.168.123.109:0/3824217854 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0404100b40 0x7f0404102f20 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f03ec009b00 tx=0x7f03ec009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:08.230 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.231+0000 7f04092c4700 1 -- 192.168.123.109:0/3824217854 shutdown_connections 2026-03-09T14:56:08.230 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.231+0000 7f04092c4700 1 --2- 192.168.123.109:0/3824217854 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0404100b40 0x7f0404102f20 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:08.230 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.231+0000 7f04092c4700 1 -- 192.168.123.109:0/3824217854 >> 192.168.123.109:0/3824217854 conn(0x7f04040fa4a0 msgr2=0x7f04040fc8f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:08.230 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.231+0000 7f04092c4700 1 -- 192.168.123.109:0/3824217854 shutdown_connections 2026-03-09T14:56:08.230 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.231+0000 7f04092c4700 1 -- 192.168.123.109:0/3824217854 wait complete. 2026-03-09T14:56:08.231 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.232+0000 7f04092c4700 1 Processor -- start 2026-03-09T14:56:08.231 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.232+0000 7f04092c4700 1 -- start start 2026-03-09T14:56:08.231 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.232+0000 7f04092c4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0404100b40 0x7f04041004a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:08.231 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.232+0000 7f04092c4700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f04041009e0 con 0x7f0404100b40 2026-03-09T14:56:08.231 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.232+0000 7f0402ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0404100b40 0x7f04041004a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T14:56:08.232 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.232+0000 7f0402ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0404100b40 0x7f04041004a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:50042/0 (socket says 192.168.123.109:50042) 2026-03-09T14:56:08.232 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.232+0000 7f0402ffd700 1 -- 192.168.123.109:0/1344289083 learned_addr learned my addr 192.168.123.109:0/1344289083 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:08.232 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.233+0000 7f0402ffd700 1 -- 192.168.123.109:0/1344289083 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f03ec0097e0 con 0x7f0404100b40 2026-03-09T14:56:08.232 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.233+0000 7f0402ffd700 1 --2- 192.168.123.109:0/1344289083 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0404100b40 0x7f04041004a0 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f03ec004f40 tx=0x7f03ec005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:08.232 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.233+0000 7f03fbfff700 1 -- 192.168.123.109:0/1344289083 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f03ec01c070 con 0x7f0404100b40 2026-03-09T14:56:08.233 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.233+0000 7f03fbfff700 1 -- 192.168.123.109:0/1344289083 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f03ec0053b0 con 0x7f0404100b40 2026-03-09T14:56:08.234 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.233+0000 7f04092c4700 1 -- 
192.168.123.109:0/1344289083 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f04040feaf0 con 0x7f0404100b40 2026-03-09T14:56:08.234 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.233+0000 7f03fbfff700 1 -- 192.168.123.109:0/1344289083 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f03ec00f460 con 0x7f0404100b40 2026-03-09T14:56:08.234 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.233+0000 7f04092c4700 1 -- 192.168.123.109:0/1344289083 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f04040fef90 con 0x7f0404100b40 2026-03-09T14:56:08.234 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.234+0000 7f03fbfff700 1 -- 192.168.123.109:0/1344289083 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f03ec021470 con 0x7f0404100b40 2026-03-09T14:56:08.234 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.234+0000 7f03fbfff700 1 --2- 192.168.123.109:0/1344289083 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f03f0038510 0x7f03f003a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:08.234 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.234+0000 7f04092c4700 1 -- 192.168.123.109:0/1344289083 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f040404fa50 con 0x7f0404100b40 2026-03-09T14:56:08.234 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.235+0000 7f04027fc700 1 --2- 192.168.123.109:0/1344289083 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f03f0038510 0x7f03f003a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:08.234 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.235+0000 7f03fbfff700 1 -- 192.168.123.109:0/1344289083 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f03ec04c380 con 0x7f0404100b40 2026-03-09T14:56:08.234 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.235+0000 7f04027fc700 1 --2- 192.168.123.109:0/1344289083 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f03f0038510 0x7f03f003a9c0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f03f4006fd0 tx=0x7f03f4006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:08.236 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.237+0000 7f03fbfff700 1 -- 192.168.123.109:0/1344289083 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f03ec00f5e0 con 0x7f0404100b40 2026-03-09T14:56:08.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:07 vm05 ceph-mon[50611]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:08.380 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.381+0000 7f04092c4700 1 -- 192.168.123.109:0/1344289083 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f040418ee40 con 0x7f0404100b40 2026-03-09T14:56:08.381 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.382+0000 7f03fbfff700 1 -- 192.168.123.109:0/1344289083 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f03ec026030 con 0x7f0404100b40 2026-03-09T14:56:08.381 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:08.381 
INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:08.383 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.384+0000 7f04092c4700 1 -- 192.168.123.109:0/1344289083 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f03f0038510 msgr2=0x7f03f003a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:08.384 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.385+0000 7f04092c4700 1 --2- 192.168.123.109:0/1344289083 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f03f0038510 0x7f03f003a9c0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f03f4006fd0 tx=0x7f03f4006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:08.384 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.385+0000 7f04092c4700 1 -- 192.168.123.109:0/1344289083 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0404100b40 msgr2=0x7f04041004a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:08.384 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.385+0000 7f04092c4700 1 --2- 192.168.123.109:0/1344289083 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0404100b40 0x7f04041004a0 secure :-1 s=READY pgs=126 cs=0 
l=1 rev1=1 crypto rx=0x7f03ec004f40 tx=0x7f03ec005e70 comp rx=0 tx=0).stop 2026-03-09T14:56:08.384 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.385+0000 7f04092c4700 1 -- 192.168.123.109:0/1344289083 shutdown_connections 2026-03-09T14:56:08.384 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.385+0000 7f04092c4700 1 --2- 192.168.123.109:0/1344289083 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f03f0038510 0x7f03f003a9c0 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:08.384 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.385+0000 7f04092c4700 1 --2- 192.168.123.109:0/1344289083 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0404100b40 0x7f04041004a0 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:08.384 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.385+0000 7f04092c4700 1 -- 192.168.123.109:0/1344289083 >> 192.168.123.109:0/1344289083 conn(0x7f04040fa4a0 msgr2=0x7f04040fb170 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:08.384 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.385+0000 7f04092c4700 1 -- 192.168.123.109:0/1344289083 shutdown_connections 2026-03-09T14:56:08.384 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:08.385+0000 7f04092c4700 1 -- 192.168.123.109:0/1344289083 wait complete. 2026-03-09T14:56:08.386 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:09.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:08 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/1344289083' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:09.455 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T14:56:09.455 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:09.605 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:09.647 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:09.927 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.927+0000 7f65c9258700 1 -- 192.168.123.109:0/2042611807 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65c4100b40 msgr2=0x7f65c4102f20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:09.927 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.927+0000 7f65c9258700 1 --2- 192.168.123.109:0/2042611807 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65c4100b40 0x7f65c4102f20 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f65ac009b00 tx=0x7f65ac009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:09.927 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.928+0000 7f65c9258700 1 -- 192.168.123.109:0/2042611807 shutdown_connections 2026-03-09T14:56:09.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.928+0000 7f65c9258700 1 --2- 192.168.123.109:0/2042611807 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65c4100b40 0x7f65c4102f20 unknown :-1 s=CLOSED pgs=127 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:09.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.928+0000 7f65c9258700 1 -- 192.168.123.109:0/2042611807 >> 192.168.123.109:0/2042611807 conn(0x7f65c40fa4a0 msgr2=0x7f65c40fc8f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:09.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.928+0000 7f65c9258700 1 -- 192.168.123.109:0/2042611807 
shutdown_connections 2026-03-09T14:56:09.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.928+0000 7f65c9258700 1 -- 192.168.123.109:0/2042611807 wait complete. 2026-03-09T14:56:09.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.929+0000 7f65c9258700 1 Processor -- start 2026-03-09T14:56:09.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.929+0000 7f65c9258700 1 -- start start 2026-03-09T14:56:09.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.929+0000 7f65c9258700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65c4100b40 0x7f65c4197360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:09.928 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.929+0000 7f65c9258700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f65c41978a0 con 0x7f65c4100b40 2026-03-09T14:56:09.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.929+0000 7f65c2d9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65c4100b40 0x7f65c4197360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:09.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.929+0000 7f65c2d9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65c4100b40 0x7f65c4197360 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:50064/0 (socket says 192.168.123.109:50064) 2026-03-09T14:56:09.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.929+0000 7f65c2d9d700 1 -- 192.168.123.109:0/78175681 learned_addr learned my addr 192.168.123.109:0/78175681 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:09.929 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.930+0000 7f65c2d9d700 1 -- 192.168.123.109:0/78175681 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f65ac0097e0 con 0x7f65c4100b40 2026-03-09T14:56:09.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.930+0000 7f65c2d9d700 1 --2- 192.168.123.109:0/78175681 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65c4100b40 0x7f65c4197360 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f65ac004f40 tx=0x7f65ac005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:09.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.930+0000 7f65bbfff700 1 -- 192.168.123.109:0/78175681 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f65ac01c070 con 0x7f65c4100b40 2026-03-09T14:56:09.929 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.930+0000 7f65c9258700 1 -- 192.168.123.109:0/78175681 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f65c4197aa0 con 0x7f65c4100b40 2026-03-09T14:56:09.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.930+0000 7f65c9258700 1 -- 192.168.123.109:0/78175681 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f65c4197f40 con 0x7f65c4100b40 2026-03-09T14:56:09.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.931+0000 7f65bbfff700 1 -- 192.168.123.109:0/78175681 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f65ac0053b0 con 0x7f65c4100b40 2026-03-09T14:56:09.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.931+0000 7f65bbfff700 1 -- 192.168.123.109:0/78175681 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f65ac00f460 con 0x7f65c4100b40 
2026-03-09T14:56:09.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.931+0000 7f65bbfff700 1 -- 192.168.123.109:0/78175681 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f65ac00f680 con 0x7f65c4100b40 2026-03-09T14:56:09.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.931+0000 7f65c9258700 1 -- 192.168.123.109:0/78175681 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f65a4005320 con 0x7f65c4100b40 2026-03-09T14:56:09.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.931+0000 7f65bbfff700 1 --2- 192.168.123.109:0/78175681 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f65b0038510 0x7f65b003a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:09.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.931+0000 7f65bbfff700 1 -- 192.168.123.109:0/78175681 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f65ac04d520 con 0x7f65c4100b40 2026-03-09T14:56:09.931 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.932+0000 7f65c259c700 1 --2- 192.168.123.109:0/78175681 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f65b0038510 0x7f65b003a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:09.932 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.932+0000 7f65c259c700 1 --2- 192.168.123.109:0/78175681 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f65b0038510 0x7f65b003a9c0 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f65b4006fd0 tx=0x7f65b4006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:09.934 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:09.934+0000 7f65bbfff700 1 -- 192.168.123.109:0/78175681 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f65ac029b30 con 0x7f65c4100b40 2026-03-09T14:56:10.086 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:10.086+0000 7f65c9258700 1 -- 192.168.123.109:0/78175681 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f65a4005190 con 0x7f65c4100b40 2026-03-09T14:56:10.087 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:10.088+0000 7f65bbfff700 1 -- 192.168.123.109:0/78175681 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f65ac026030 con 0x7f65c4100b40 2026-03-09T14:56:10.087 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:10.087 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:10.090 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:10.091+0000 7f65c9258700 1 -- 192.168.123.109:0/78175681 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f65b0038510 msgr2=0x7f65b003a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:10.090 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:10.091+0000 7f65c9258700 1 --2- 192.168.123.109:0/78175681 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f65b0038510 0x7f65b003a9c0 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f65b4006fd0 tx=0x7f65b4006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:10.090 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:10.091+0000 7f65c9258700 1 -- 192.168.123.109:0/78175681 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65c4100b40 msgr2=0x7f65c4197360 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:10.090 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:10.091+0000 7f65c9258700 1 --2- 192.168.123.109:0/78175681 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65c4100b40 0x7f65c4197360 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7f65ac004f40 tx=0x7f65ac005e70 comp rx=0 tx=0).stop 2026-03-09T14:56:10.090 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:10.091+0000 7f65c9258700 1 -- 192.168.123.109:0/78175681 shutdown_connections 2026-03-09T14:56:10.090 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:10.091+0000 7f65c9258700 1 --2- 192.168.123.109:0/78175681 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f65b0038510 0x7f65b003a9c0 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:10.091 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:10.091+0000 7f65c9258700 1 --2- 192.168.123.109:0/78175681 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65c4100b40 0x7f65c4197360 unknown :-1 s=CLOSED pgs=128 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:10.091 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:10.091+0000 7f65c9258700 1 -- 192.168.123.109:0/78175681 >> 192.168.123.109:0/78175681 conn(0x7f65c40fa4a0 msgr2=0x7f65c40fc8f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:10.091 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:10.091+0000 7f65c9258700 1 -- 192.168.123.109:0/78175681 shutdown_connections 2026-03-09T14:56:10.091 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:10.091+0000 7f65c9258700 1 -- 192.168.123.109:0/78175681 wait complete. 2026-03-09T14:56:10.091 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:10.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:09 vm05 ceph-mon[50611]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:11.161 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:11.161 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:11.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:10 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/78175681' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:11.309 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:11.346 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:11.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.627+0000 7fd39398f700 1 -- 192.168.123.109:0/3533233648 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd38c0fe910 msgr2=0x7fd38c0fed20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:11.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.627+0000 7fd39398f700 1 --2- 192.168.123.109:0/3533233648 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd38c0fe910 0x7fd38c0fed20 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7fd37c009b00 tx=0x7fd37c009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:11.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.628+0000 7fd39398f700 1 -- 192.168.123.109:0/3533233648 shutdown_connections 2026-03-09T14:56:11.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.628+0000 7fd39398f700 1 --2- 192.168.123.109:0/3533233648 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd38c0fe910 0x7fd38c0fed20 unknown :-1 s=CLOSED pgs=129 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:11.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.628+0000 7fd39398f700 1 -- 192.168.123.109:0/3533233648 >> 192.168.123.109:0/3533233648 conn(0x7fd38c0fa4a0 msgr2=0x7fd38c0fc8f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:11.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.628+0000 7fd39398f700 1 -- 192.168.123.109:0/3533233648 shutdown_connections 2026-03-09T14:56:11.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.628+0000 7fd39398f700 1 -- 192.168.123.109:0/3533233648 wait 
complete. 2026-03-09T14:56:11.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.629+0000 7fd39398f700 1 Processor -- start 2026-03-09T14:56:11.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.629+0000 7fd39398f700 1 -- start start 2026-03-09T14:56:11.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.629+0000 7fd39398f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd38c0fe910 0x7fd38c1973b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:11.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.629+0000 7fd39398f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd38c1978f0 con 0x7fd38c0fe910 2026-03-09T14:56:11.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.629+0000 7fd39172b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd38c0fe910 0x7fd38c1973b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:11.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.629+0000 7fd39172b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd38c0fe910 0x7fd38c1973b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:50084/0 (socket says 192.168.123.109:50084) 2026-03-09T14:56:11.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.629+0000 7fd39172b700 1 -- 192.168.123.109:0/1927769155 learned_addr learned my addr 192.168.123.109:0/1927769155 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:11.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.629+0000 7fd39172b700 1 -- 192.168.123.109:0/1927769155 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd37c0097e0 con 0x7fd38c0fe910 2026-03-09T14:56:11.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.630+0000 7fd39172b700 1 --2- 192.168.123.109:0/1927769155 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd38c0fe910 0x7fd38c1973b0 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7fd37c004f40 tx=0x7fd37c005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:11.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.630+0000 7fd3827fc700 1 -- 192.168.123.109:0/1927769155 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd37c01d070 con 0x7fd38c0fe910 2026-03-09T14:56:11.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.630+0000 7fd39398f700 1 -- 192.168.123.109:0/1927769155 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd38c197af0 con 0x7fd38c0fe910 2026-03-09T14:56:11.629 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.630+0000 7fd39398f700 1 -- 192.168.123.109:0/1927769155 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd38c197f90 con 0x7fd38c0fe910 2026-03-09T14:56:11.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.631+0000 7fd3827fc700 1 -- 192.168.123.109:0/1927769155 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd37c022470 con 0x7fd38c0fe910 2026-03-09T14:56:11.631 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.631+0000 7fd3827fc700 1 -- 192.168.123.109:0/1927769155 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd37c00f460 con 0x7fd38c0fe910 2026-03-09T14:56:11.631 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.631+0000 7fd3827fc700 1 -- 192.168.123.109:0/1927769155 <== mon.0 
v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fd37c00f650 con 0x7fd38c0fe910 2026-03-09T14:56:11.631 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.631+0000 7fd3827fc700 1 --2- 192.168.123.109:0/1927769155 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd378038550 0x7fd37803aa00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:11.631 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.631+0000 7fd3827fc700 1 -- 192.168.123.109:0/1927769155 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fd37c04d470 con 0x7fd38c0fe910 2026-03-09T14:56:11.631 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.631+0000 7fd39398f700 1 -- 192.168.123.109:0/1927769155 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd370005320 con 0x7fd38c0fe910 2026-03-09T14:56:11.631 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.631+0000 7fd390f2a700 1 --2- 192.168.123.109:0/1927769155 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd378038550 0x7fd37803aa00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:11.632 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.633+0000 7fd390f2a700 1 --2- 192.168.123.109:0/1927769155 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd378038550 0x7fd37803aa00 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7fd388006fd0 tx=0x7fd388006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:11.634 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.634+0000 7fd3827fc700 1 -- 192.168.123.109:0/1927769155 <== mon.0 v2:192.168.123.105:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fd37c027070 con 0x7fd38c0fe910 2026-03-09T14:56:11.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.777+0000 7fd39398f700 1 -- 192.168.123.109:0/1927769155 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fd370005190 con 0x7fd38c0fe910 2026-03-09T14:56:11.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.778+0000 7fd3827fc700 1 -- 192.168.123.109:0/1927769155 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fd37c02a720 con 0x7fd38c0fe910 2026-03-09T14:56:11.779 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:11.779 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:11.781 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.782+0000 7fd39398f700 1 -- 192.168.123.109:0/1927769155 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd378038550 msgr2=0x7fd37803aa00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:11.781 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.782+0000 7fd39398f700 1 --2- 192.168.123.109:0/1927769155 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd378038550 0x7fd37803aa00 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7fd388006fd0 tx=0x7fd388006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:11.781 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.782+0000 7fd39398f700 1 -- 192.168.123.109:0/1927769155 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd38c0fe910 msgr2=0x7fd38c1973b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:11.781 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.782+0000 7fd39398f700 1 --2- 192.168.123.109:0/1927769155 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd38c0fe910 0x7fd38c1973b0 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7fd37c004f40 tx=0x7fd37c005e70 comp rx=0 tx=0).stop 2026-03-09T14:56:11.781 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.782+0000 7fd39398f700 1 -- 192.168.123.109:0/1927769155 shutdown_connections 2026-03-09T14:56:11.781 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.782+0000 7fd39398f700 1 --2- 192.168.123.109:0/1927769155 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd378038550 0x7fd37803aa00 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:11.782 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.783+0000 7fd39398f700 1 --2- 192.168.123.109:0/1927769155 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd38c0fe910 0x7fd38c1973b0 unknown :-1 s=CLOSED pgs=130 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:11.782 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.783+0000 7fd39398f700 1 -- 192.168.123.109:0/1927769155 >> 192.168.123.109:0/1927769155 conn(0x7fd38c0fa4a0 msgr2=0x7fd38c0fb170 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T14:56:11.782 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.783+0000 7fd39398f700 1 -- 192.168.123.109:0/1927769155 shutdown_connections 2026-03-09T14:56:11.782 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:11.783+0000 7fd39398f700 1 -- 192.168.123.109:0/1927769155 wait complete. 2026-03-09T14:56:11.783 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:12.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:11 vm05 ceph-mon[50611]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:12.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:11 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/1927769155' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:12.851 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:12.851 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:12.999 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:13.039 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:13.305 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.305+0000 7f8109a5d700 1 -- 192.168.123.109:0/3379446387 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8104106610 msgr2=0x7f8104106a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:13.305 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.305+0000 7f8109a5d700 1 --2- 192.168.123.109:0/3379446387 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8104106610 0x7f8104106a20 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7f80ec009b00 tx=0x7f80ec009e10 comp rx=0 tx=0).stop 
2026-03-09T14:56:13.305 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.306+0000 7f8109a5d700 1 -- 192.168.123.109:0/3379446387 shutdown_connections 2026-03-09T14:56:13.305 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.306+0000 7f8109a5d700 1 --2- 192.168.123.109:0/3379446387 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8104106610 0x7f8104106a20 unknown :-1 s=CLOSED pgs=131 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:13.305 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.306+0000 7f8109a5d700 1 -- 192.168.123.109:0/3379446387 >> 192.168.123.109:0/3379446387 conn(0x7f8104075940 msgr2=0x7f8104077d90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:13.305 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.306+0000 7f8109a5d700 1 -- 192.168.123.109:0/3379446387 shutdown_connections 2026-03-09T14:56:13.305 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.306+0000 7f8109a5d700 1 -- 192.168.123.109:0/3379446387 wait complete. 
2026-03-09T14:56:13.305 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.306+0000 7f8109a5d700 1 Processor -- start 2026-03-09T14:56:13.305 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.306+0000 7f8109a5d700 1 -- start start 2026-03-09T14:56:13.306 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.307+0000 7f8109a5d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8104106610 0x7f810419b7d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:13.306 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.307+0000 7f8109a5d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f810419bd10 con 0x7f8104106610 2026-03-09T14:56:13.306 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.307+0000 7f8102ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8104106610 0x7f810419b7d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:13.306 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.307+0000 7f8102ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8104106610 0x7f810419b7d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:50100/0 (socket says 192.168.123.109:50100) 2026-03-09T14:56:13.306 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.307+0000 7f8102ffd700 1 -- 192.168.123.109:0/1767722606 learned_addr learned my addr 192.168.123.109:0/1767722606 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:13.306 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.307+0000 7f8102ffd700 1 -- 192.168.123.109:0/1767722606 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f80ec0097e0 con 0x7f8104106610 2026-03-09T14:56:13.306 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.307+0000 7f8102ffd700 1 --2- 192.168.123.109:0/1767722606 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8104106610 0x7f810419b7d0 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7f80ec009fd0 tx=0x7f80ec005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:13.307 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.308+0000 7f8108a5b700 1 -- 192.168.123.109:0/1767722606 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f80ec01d070 con 0x7f8104106610 2026-03-09T14:56:13.307 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.308+0000 7f8109a5d700 1 -- 192.168.123.109:0/1767722606 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f810419bf10 con 0x7f8104106610 2026-03-09T14:56:13.307 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.308+0000 7f8109a5d700 1 -- 192.168.123.109:0/1767722606 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f810419c3b0 con 0x7f8104106610 2026-03-09T14:56:13.308 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.308+0000 7f8108a5b700 1 -- 192.168.123.109:0/1767722606 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f80ec022470 con 0x7f8104106610 2026-03-09T14:56:13.308 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.308+0000 7f8108a5b700 1 -- 192.168.123.109:0/1767722606 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f80ec00f460 con 0x7f8104106610 2026-03-09T14:56:13.308 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.309+0000 7f8108a5b700 1 -- 192.168.123.109:0/1767722606 <== mon.0 
v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f80ec00f610 con 0x7f8104106610 2026-03-09T14:56:13.309 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.309+0000 7f8109a5d700 1 -- 192.168.123.109:0/1767722606 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f80e4005320 con 0x7f8104106610 2026-03-09T14:56:13.309 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.309+0000 7f8108a5b700 1 --2- 192.168.123.109:0/1767722606 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f80f0038550 0x7f80f003aa00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:13.309 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.309+0000 7f8108a5b700 1 -- 192.168.123.109:0/1767722606 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f80ec04d450 con 0x7f8104106610 2026-03-09T14:56:13.309 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.309+0000 7f81027fc700 1 --2- 192.168.123.109:0/1767722606 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f80f0038550 0x7f80f003aa00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:13.309 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.310+0000 7f81027fc700 1 --2- 192.168.123.109:0/1767722606 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f80f0038550 0x7f80f003aa00 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f80f4006fd0 tx=0x7f80f4006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:13.312 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.313+0000 7f8108a5b700 1 -- 192.168.123.109:0/1767722606 <== mon.0 v2:192.168.123.105:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f80ec017490 con 0x7f8104106610 2026-03-09T14:56:13.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.456+0000 7f8109a5d700 1 -- 192.168.123.109:0/1767722606 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f80e4005190 con 0x7f8104106610 2026-03-09T14:56:13.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.457+0000 7f8108a5b700 1 -- 192.168.123.109:0/1767722606 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f80ec027030 con 0x7f8104106610 2026-03-09T14:56:13.457 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:13.457 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:13.459 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.460+0000 7f8109a5d700 1 -- 192.168.123.109:0/1767722606 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f80f0038550 msgr2=0x7f80f003aa00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:13.460 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.460+0000 7f8109a5d700 1 --2- 192.168.123.109:0/1767722606 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f80f0038550 0x7f80f003aa00 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f80f4006fd0 tx=0x7f80f4006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:13.460 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.460+0000 7f8109a5d700 1 -- 192.168.123.109:0/1767722606 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8104106610 msgr2=0x7f810419b7d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:13.460 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.461+0000 7f8109a5d700 1 --2- 192.168.123.109:0/1767722606 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8104106610 0x7f810419b7d0 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7f80ec009fd0 tx=0x7f80ec005e70 comp rx=0 tx=0).stop 2026-03-09T14:56:13.460 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.461+0000 7f8109a5d700 1 -- 192.168.123.109:0/1767722606 shutdown_connections 2026-03-09T14:56:13.460 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.461+0000 7f8109a5d700 1 --2- 192.168.123.109:0/1767722606 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f80f0038550 0x7f80f003aa00 unknown :-1 s=CLOSED pgs=31 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:13.460 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.461+0000 7f8109a5d700 1 --2- 192.168.123.109:0/1767722606 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8104106610 0x7f810419b7d0 unknown :-1 s=CLOSED pgs=132 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:13.460 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.461+0000 7f8109a5d700 1 -- 192.168.123.109:0/1767722606 >> 192.168.123.109:0/1767722606 conn(0x7f8104075940 msgr2=0x7f81040775a0 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T14:56:13.461 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.461+0000 7f8109a5d700 1 -- 192.168.123.109:0/1767722606 shutdown_connections 2026-03-09T14:56:13.461 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:13.462+0000 7f8109a5d700 1 -- 192.168.123.109:0/1767722606 wait complete. 2026-03-09T14:56:13.462 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:14.526 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:14.526 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:14.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:14 vm05 ceph-mon[50611]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:14.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:14 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/1767722606' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:14.661 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:14.697 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:14.947 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.946+0000 7f3d89cd7700 1 -- 192.168.123.109:0/4110461061 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d84102240 msgr2=0x7f3d84102650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:14.947 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.946+0000 7f3d89cd7700 1 --2- 192.168.123.109:0/4110461061 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d84102240 0x7f3d84102650 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7f3d6c009b00 tx=0x7f3d6c009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:14.947 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.947+0000 7f3d89cd7700 1 -- 192.168.123.109:0/4110461061 shutdown_connections 2026-03-09T14:56:14.949 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.947+0000 7f3d89cd7700 1 --2- 192.168.123.109:0/4110461061 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d84102240 0x7f3d84102650 unknown :-1 s=CLOSED pgs=133 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:14.950 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.947+0000 7f3d89cd7700 1 -- 192.168.123.109:0/4110461061 >> 192.168.123.109:0/4110461061 conn(0x7f3d840fd8d0 msgr2=0x7f3d840ffcb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:14.950 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.951+0000 7f3d89cd7700 1 -- 192.168.123.109:0/4110461061 shutdown_connections 2026-03-09T14:56:14.950 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.951+0000 7f3d89cd7700 1 -- 192.168.123.109:0/4110461061 
wait complete. 2026-03-09T14:56:14.950 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.951+0000 7f3d89cd7700 1 Processor -- start 2026-03-09T14:56:14.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.952+0000 7f3d89cd7700 1 -- start start 2026-03-09T14:56:14.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.952+0000 7f3d89cd7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d84102240 0x7f3d84193030 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:14.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.952+0000 7f3d89cd7700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3d84193570 con 0x7f3d84102240 2026-03-09T14:56:14.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.952+0000 7f3d837fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d84102240 0x7f3d84193030 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:14.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.952+0000 7f3d837fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d84102240 0x7f3d84193030 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:48298/0 (socket says 192.168.123.109:48298) 2026-03-09T14:56:14.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.952+0000 7f3d837fe700 1 -- 192.168.123.109:0/1105522557 learned_addr learned my addr 192.168.123.109:0/1105522557 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:14.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.952+0000 7f3d837fe700 1 -- 192.168.123.109:0/1105522557 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3d6c0097e0 con 0x7f3d84102240 2026-03-09T14:56:14.952 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.953+0000 7f3d837fe700 1 --2- 192.168.123.109:0/1105522557 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d84102240 0x7f3d84193030 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f3d6c004d10 tx=0x7f3d6c004df0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:14.952 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.953+0000 7f3d817fa700 1 -- 192.168.123.109:0/1105522557 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f3d6c01d070 con 0x7f3d84102240 2026-03-09T14:56:14.952 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.953+0000 7f3d89cd7700 1 -- 192.168.123.109:0/1105522557 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3d84193770 con 0x7f3d84102240 2026-03-09T14:56:14.952 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.953+0000 7f3d89cd7700 1 -- 192.168.123.109:0/1105522557 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3d84193c10 con 0x7f3d84102240 2026-03-09T14:56:14.953 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.954+0000 7f3d817fa700 1 -- 192.168.123.109:0/1105522557 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3d6c0056f0 con 0x7f3d84102240 2026-03-09T14:56:14.953 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.954+0000 7f3d817fa700 1 -- 192.168.123.109:0/1105522557 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f3d6c017440 con 0x7f3d84102240 2026-03-09T14:56:14.954 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.954+0000 7f3d89cd7700 1 -- 192.168.123.109:0/1105522557 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3d68005320 con 0x7f3d84102240 2026-03-09T14:56:14.954 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.955+0000 7f3d817fa700 1 -- 192.168.123.109:0/1105522557 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f3d6c00f840 con 0x7f3d84102240 2026-03-09T14:56:14.954 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.955+0000 7f3d817fa700 1 --2- 192.168.123.109:0/1105522557 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f3d64038550 0x7f3d6403aa00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:14.954 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.955+0000 7f3d817fa700 1 -- 192.168.123.109:0/1105522557 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f3d6c04c1f0 con 0x7f3d84102240 2026-03-09T14:56:14.954 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.955+0000 7f3d7bfff700 1 --2- 192.168.123.109:0/1105522557 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f3d64038550 0x7f3d6403aa00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:14.955 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.956+0000 7f3d7bfff700 1 --2- 192.168.123.109:0/1105522557 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f3d64038550 0x7f3d6403aa00 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f3d74006fd0 tx=0x7f3d74006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:14.957 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:14.958+0000 7f3d817fa700 1 -- 192.168.123.109:0/1105522557 <== mon.0 v2:192.168.123.105:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f3d6c020780 con 0x7f3d84102240 2026-03-09T14:56:15.102 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:15.102+0000 7f3d89cd7700 1 -- 192.168.123.109:0/1105522557 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f3d68005190 con 0x7f3d84102240 2026-03-09T14:56:15.105 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:15.105+0000 7f3d817fa700 1 -- 192.168.123.109:0/1105522557 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f3d6c027070 con 0x7f3d84102240 2026-03-09T14:56:15.105 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:15.105 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:15.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:15.108+0000 7f3d89cd7700 1 -- 192.168.123.109:0/1105522557 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f3d64038550 msgr2=0x7f3d6403aa00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:15.108 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:15.108+0000 7f3d89cd7700 1 --2- 192.168.123.109:0/1105522557 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f3d64038550 0x7f3d6403aa00 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f3d74006fd0 tx=0x7f3d74006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:15.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:15.108+0000 7f3d89cd7700 1 -- 192.168.123.109:0/1105522557 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d84102240 msgr2=0x7f3d84193030 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:15.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:15.108+0000 7f3d89cd7700 1 --2- 192.168.123.109:0/1105522557 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d84102240 0x7f3d84193030 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f3d6c004d10 tx=0x7f3d6c004df0 comp rx=0 tx=0).stop 2026-03-09T14:56:15.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:15.109+0000 7f3d89cd7700 1 -- 192.168.123.109:0/1105522557 shutdown_connections 2026-03-09T14:56:15.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:15.109+0000 7f3d89cd7700 1 --2- 192.168.123.109:0/1105522557 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f3d64038550 0x7f3d6403aa00 unknown :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:15.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:15.109+0000 7f3d89cd7700 1 --2- 192.168.123.109:0/1105522557 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3d84102240 0x7f3d84193030 unknown :-1 s=CLOSED pgs=134 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:15.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:15.109+0000 7f3d89cd7700 1 -- 192.168.123.109:0/1105522557 >> 192.168.123.109:0/1105522557 conn(0x7f3d840fd8d0 msgr2=0x7f3d840fe530 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T14:56:15.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:15.109+0000 7f3d89cd7700 1 -- 192.168.123.109:0/1105522557 shutdown_connections 2026-03-09T14:56:15.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:15.109+0000 7f3d89cd7700 1 -- 192.168.123.109:0/1105522557 wait complete. 2026-03-09T14:56:15.109 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:15.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:15 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/1105522557' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:16.177 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:16.177 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:16.333 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:16.374 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:16 vm05 ceph-mon[50611]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:16.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.650+0000 7fa328e73700 1 -- 192.168.123.109:0/914307491 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa324074d80 msgr2=0x7fa3240731e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:16.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.650+0000 7fa328e73700 1 --2- 192.168.123.109:0/914307491 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa324074d80 0x7fa3240731e0 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7fa30c009b00 tx=0x7fa30c009e10 comp rx=0 tx=0).stop 
2026-03-09T14:56:16.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.651+0000 7fa328e73700 1 -- 192.168.123.109:0/914307491 shutdown_connections 2026-03-09T14:56:16.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.651+0000 7fa328e73700 1 --2- 192.168.123.109:0/914307491 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa324074d80 0x7fa3240731e0 unknown :-1 s=CLOSED pgs=135 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:16.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.651+0000 7fa328e73700 1 -- 192.168.123.109:0/914307491 >> 192.168.123.109:0/914307491 conn(0x7fa3240fb5a0 msgr2=0x7fa3240fd9f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:16.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.651+0000 7fa328e73700 1 -- 192.168.123.109:0/914307491 shutdown_connections 2026-03-09T14:56:16.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.652+0000 7fa328e73700 1 -- 192.168.123.109:0/914307491 wait complete. 
2026-03-09T14:56:16.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.652+0000 7fa328e73700 1 Processor -- start 2026-03-09T14:56:16.652 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.652+0000 7fa328e73700 1 -- start start 2026-03-09T14:56:16.652 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.653+0000 7fa328e73700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa324074d80 0x7fa32419b760 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:16.652 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.653+0000 7fa328e73700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa32419bca0 con 0x7fa324074d80 2026-03-09T14:56:16.652 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.653+0000 7fa32259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa324074d80 0x7fa32419b760 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:16.652 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.653+0000 7fa32259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa324074d80 0x7fa32419b760 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:48312/0 (socket says 192.168.123.109:48312) 2026-03-09T14:56:16.652 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.653+0000 7fa32259c700 1 -- 192.168.123.109:0/1072707292 learned_addr learned my addr 192.168.123.109:0/1072707292 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:16.653 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.654+0000 7fa32259c700 1 -- 192.168.123.109:0/1072707292 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa30c0097e0 con 0x7fa324074d80 2026-03-09T14:56:16.653 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.654+0000 7fa32259c700 1 --2- 192.168.123.109:0/1072707292 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa324074d80 0x7fa32419b760 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7fa30c004f40 tx=0x7fa30c005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:16.653 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.654+0000 7fa31b7fe700 1 -- 192.168.123.109:0/1072707292 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa30c01c070 con 0x7fa324074d80 2026-03-09T14:56:16.653 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.654+0000 7fa328e73700 1 -- 192.168.123.109:0/1072707292 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa32419bea0 con 0x7fa324074d80 2026-03-09T14:56:16.654 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.654+0000 7fa328e73700 1 -- 192.168.123.109:0/1072707292 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa32419c340 con 0x7fa324074d80 2026-03-09T14:56:16.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.655+0000 7fa31b7fe700 1 -- 192.168.123.109:0/1072707292 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa30c0053b0 con 0x7fa324074d80 2026-03-09T14:56:16.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.655+0000 7fa31b7fe700 1 -- 192.168.123.109:0/1072707292 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa30c00f460 con 0x7fa324074d80 2026-03-09T14:56:16.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.656+0000 7fa31b7fe700 1 -- 192.168.123.109:0/1072707292 <== mon.0 
v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fa30c00f6d0 con 0x7fa324074d80 2026-03-09T14:56:16.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.656+0000 7fa328e73700 1 -- 192.168.123.109:0/1072707292 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa324195460 con 0x7fa324074d80 2026-03-09T14:56:16.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.656+0000 7fa31b7fe700 1 --2- 192.168.123.109:0/1072707292 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa310038510 0x7fa31003a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:16.655 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.656+0000 7fa31b7fe700 1 -- 192.168.123.109:0/1072707292 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fa30c04d5d0 con 0x7fa324074d80 2026-03-09T14:56:16.658 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.658+0000 7fa321d9b700 1 --2- 192.168.123.109:0/1072707292 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa310038510 0x7fa31003a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:16.658 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.659+0000 7fa31b7fe700 1 -- 192.168.123.109:0/1072707292 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fa30c017440 con 0x7fa324074d80 2026-03-09T14:56:16.658 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.659+0000 7fa321d9b700 1 --2- 192.168.123.109:0/1072707292 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa310038510 0x7fa31003a9c0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto 
rx=0x7fa314006fd0 tx=0x7fa314006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:16.807 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.805+0000 7fa328e73700 1 -- 192.168.123.109:0/1072707292 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fa324062380 con 0x7fa324074d80 2026-03-09T14:56:16.807 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.808+0000 7fa31b7fe700 1 -- 192.168.123.109:0/1072707292 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fa30c026020 con 0x7fa324074d80 2026-03-09T14:56:16.808 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:16.808 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:16.810 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.810+0000 7fa328e73700 1 -- 192.168.123.109:0/1072707292 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa310038510 msgr2=0x7fa31003a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:16.810 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.810+0000 7fa328e73700 1 --2- 192.168.123.109:0/1072707292 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa310038510 0x7fa31003a9c0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7fa314006fd0 tx=0x7fa314006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:16.810 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.811+0000 7fa328e73700 1 -- 192.168.123.109:0/1072707292 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa324074d80 msgr2=0x7fa32419b760 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:16.810 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.811+0000 7fa328e73700 1 --2- 192.168.123.109:0/1072707292 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa324074d80 0x7fa32419b760 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7fa30c004f40 tx=0x7fa30c005e70 comp rx=0 tx=0).stop 2026-03-09T14:56:16.810 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.811+0000 7fa328e73700 1 -- 192.168.123.109:0/1072707292 shutdown_connections 2026-03-09T14:56:16.810 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.811+0000 7fa328e73700 1 --2- 192.168.123.109:0/1072707292 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa310038510 0x7fa31003a9c0 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:16.810 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.811+0000 7fa328e73700 1 --2- 192.168.123.109:0/1072707292 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa324074d80 0x7fa32419b760 unknown :-1 s=CLOSED pgs=136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:16.810 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.811+0000 7fa328e73700 1 -- 192.168.123.109:0/1072707292 >> 192.168.123.109:0/1072707292 conn(0x7fa3240fb5a0 msgr2=0x7fa3240fc270 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T14:56:16.810 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.811+0000 7fa328e73700 1 -- 192.168.123.109:0/1072707292 shutdown_connections 2026-03-09T14:56:16.811 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:16.811+0000 7fa328e73700 1 -- 192.168.123.109:0/1072707292 wait complete. 2026-03-09T14:56:16.812 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:17 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/1072707292' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:17.890 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:17.890 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:18.030 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:18.070 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:18.347 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.348+0000 7fd67f555700 1 -- 192.168.123.109:0/1727749553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd678100950 msgr2=0x7fd678102d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:18.347 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.348+0000 7fd67f555700 1 --2- 192.168.123.109:0/1727749553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd678100950 0x7fd678102d30 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7fd674009b00 tx=0x7fd674009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:18.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.348+0000 7fd67f555700 1 -- 192.168.123.109:0/1727749553 
shutdown_connections 2026-03-09T14:56:18.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.348+0000 7fd67f555700 1 --2- 192.168.123.109:0/1727749553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd678100950 0x7fd678102d30 unknown :-1 s=CLOSED pgs=137 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:18.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.348+0000 7fd67f555700 1 -- 192.168.123.109:0/1727749553 >> 192.168.123.109:0/1727749553 conn(0x7fd6780fa310 msgr2=0x7fd6780fc740 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:18.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.348+0000 7fd67f555700 1 -- 192.168.123.109:0/1727749553 shutdown_connections 2026-03-09T14:56:18.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.348+0000 7fd67f555700 1 -- 192.168.123.109:0/1727749553 wait complete. 2026-03-09T14:56:18.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.349+0000 7fd67f555700 1 Processor -- start 2026-03-09T14:56:18.348 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.349+0000 7fd67f555700 1 -- start start 2026-03-09T14:56:18.349 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.349+0000 7fd67f555700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd678100950 0x7fd678194fa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:18.349 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.349+0000 7fd67f555700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd6781954e0 con 0x7fd678100950 2026-03-09T14:56:18.349 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.350+0000 7fd67e553700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd678100950 0x7fd678194fa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:18.349 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.350+0000 7fd67e553700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd678100950 0x7fd678194fa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:48330/0 (socket says 192.168.123.109:48330) 2026-03-09T14:56:18.349 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.350+0000 7fd67e553700 1 -- 192.168.123.109:0/2862187273 learned_addr learned my addr 192.168.123.109:0/2862187273 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:18.349 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.350+0000 7fd67e553700 1 -- 192.168.123.109:0/2862187273 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd6740097e0 con 0x7fd678100950 2026-03-09T14:56:18.349 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.350+0000 7fd67e553700 1 --2- 192.168.123.109:0/2862187273 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd678100950 0x7fd678194fa0 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7fd674004f40 tx=0x7fd674005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:18.349 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.350+0000 7fd66f7fe700 1 -- 192.168.123.109:0/2862187273 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd67401c070 con 0x7fd678100950 2026-03-09T14:56:18.350 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.350+0000 7fd67f555700 1 -- 192.168.123.109:0/2862187273 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd6781956e0 con 0x7fd678100950 2026-03-09T14:56:18.350 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.350+0000 7fd67f555700 1 -- 192.168.123.109:0/2862187273 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd678195b80 con 0x7fd678100950 2026-03-09T14:56:18.350 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.350+0000 7fd66f7fe700 1 -- 192.168.123.109:0/2862187273 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd6740053b0 con 0x7fd678100950 2026-03-09T14:56:18.351 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.350+0000 7fd66f7fe700 1 -- 192.168.123.109:0/2862187273 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd67400f460 con 0x7fd678100950 2026-03-09T14:56:18.351 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.352+0000 7fd67f555700 1 -- 192.168.123.109:0/2862187273 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd65c005320 con 0x7fd678100950 2026-03-09T14:56:18.351 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.352+0000 7fd66f7fe700 1 -- 192.168.123.109:0/2862187273 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fd674021470 con 0x7fd678100950 2026-03-09T14:56:18.352 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.352+0000 7fd66f7fe700 1 --2- 192.168.123.109:0/2862187273 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd664038510 0x7fd66403a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:18.352 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.352+0000 7fd66f7fe700 1 -- 192.168.123.109:0/2862187273 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fd67404c330 con 0x7fd678100950 2026-03-09T14:56:18.352 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.352+0000 7fd67dd52700 1 --2- 192.168.123.109:0/2862187273 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd664038510 0x7fd66403a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:18.352 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.353+0000 7fd67dd52700 1 --2- 192.168.123.109:0/2862187273 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd664038510 0x7fd66403a9c0 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7fd668006fd0 tx=0x7fd668006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:18.354 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.355+0000 7fd66f7fe700 1 -- 192.168.123.109:0/2862187273 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fd67402aba0 con 0x7fd678100950 2026-03-09T14:56:18.499 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.499+0000 7fd67f555700 1 -- 192.168.123.109:0/2862187273 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fd65c005190 con 0x7fd678100950 2026-03-09T14:56:18.501 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.502+0000 7fd66f7fe700 1 -- 192.168.123.109:0/2862187273 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fd674026020 con 0x7fd678100950 2026-03-09T14:56:18.501 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:18.501 
INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:18.503 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.504+0000 7fd67f555700 1 -- 192.168.123.109:0/2862187273 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd664038510 msgr2=0x7fd66403a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:18.503 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.504+0000 7fd67f555700 1 --2- 192.168.123.109:0/2862187273 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd664038510 0x7fd66403a9c0 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7fd668006fd0 tx=0x7fd668006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:18.503 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.504+0000 7fd67f555700 1 -- 192.168.123.109:0/2862187273 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd678100950 msgr2=0x7fd678194fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:18.503 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.504+0000 7fd67f555700 1 --2- 192.168.123.109:0/2862187273 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd678100950 0x7fd678194fa0 secure :-1 s=READY pgs=138 cs=0 
l=1 rev1=1 crypto rx=0x7fd674004f40 tx=0x7fd674005e70 comp rx=0 tx=0).stop 2026-03-09T14:56:18.504 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.504+0000 7fd67f555700 1 -- 192.168.123.109:0/2862187273 shutdown_connections 2026-03-09T14:56:18.504 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.504+0000 7fd67f555700 1 --2- 192.168.123.109:0/2862187273 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd664038510 0x7fd66403a9c0 unknown :-1 s=CLOSED pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:18.504 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.504+0000 7fd67f555700 1 --2- 192.168.123.109:0/2862187273 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd678100950 0x7fd678194fa0 unknown :-1 s=CLOSED pgs=138 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:18.504 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.504+0000 7fd67f555700 1 -- 192.168.123.109:0/2862187273 >> 192.168.123.109:0/2862187273 conn(0x7fd6780fa310 msgr2=0x7fd6780fafc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:18.504 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.505+0000 7fd67f555700 1 -- 192.168.123.109:0/2862187273 shutdown_connections 2026-03-09T14:56:18.504 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:18.505+0000 7fd67f555700 1 -- 192.168.123.109:0/2862187273 wait complete. 2026-03-09T14:56:18.504 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:18.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:18 vm05 ceph-mon[50611]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:19.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:19 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/2862187273' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:19.583 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:19.584 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:19.744 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:19.786 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:20.061 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.061+0000 7f538bf4b700 1 -- 192.168.123.109:0/2921439659 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5384100030 msgr2=0x7f5384100440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:20.062 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.061+0000 7f5388ce5700 1 -- 192.168.123.109:0/2921439659 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f537400bcf0 con 0x7f5384100030 2026-03-09T14:56:20.062 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.061+0000 7f538bf4b700 1 --2- 192.168.123.109:0/2921439659 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5384100030 0x7f5384100440 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7f5374009b00 tx=0x7f5374009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:20.062 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.062+0000 7f538bf4b700 1 -- 192.168.123.109:0/2921439659 shutdown_connections 2026-03-09T14:56:20.062 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.062+0000 7f538bf4b700 1 --2- 192.168.123.109:0/2921439659 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5384100030 0x7f5384100440 unknown :-1 s=CLOSED pgs=139 cs=0 
l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:20.062 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.062+0000 7f538bf4b700 1 -- 192.168.123.109:0/2921439659 >> 192.168.123.109:0/2921439659 conn(0x7f53840fb5e0 msgr2=0x7f53840fda10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:20.062 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.062+0000 7f538bf4b700 1 -- 192.168.123.109:0/2921439659 shutdown_connections 2026-03-09T14:56:20.062 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.062+0000 7f538bf4b700 1 -- 192.168.123.109:0/2921439659 wait complete. 2026-03-09T14:56:20.062 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.063+0000 7f538bf4b700 1 Processor -- start 2026-03-09T14:56:20.062 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.063+0000 7f538bf4b700 1 -- start start 2026-03-09T14:56:20.063 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.063+0000 7f538bf4b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5384100030 0x7f53841951b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:20.063 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.064+0000 7f538bf4b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f53841956f0 con 0x7f5384100030 2026-03-09T14:56:20.063 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.064+0000 7f5389ce7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5384100030 0x7f53841951b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:20.063 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.064+0000 7f5389ce7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5384100030 0x7f53841951b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:48356/0 (socket says 192.168.123.109:48356) 2026-03-09T14:56:20.063 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.064+0000 7f5389ce7700 1 -- 192.168.123.109:0/3795034028 learned_addr learned my addr 192.168.123.109:0/3795034028 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:20.064 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.065+0000 7f5389ce7700 1 -- 192.168.123.109:0/3795034028 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f53740097e0 con 0x7f5384100030 2026-03-09T14:56:20.064 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.065+0000 7f5389ce7700 1 --2- 192.168.123.109:0/3795034028 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5384100030 0x7f53841951b0 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7f5374000c00 tx=0x7f53740047b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:20.065 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.065+0000 7f537affd700 1 -- 192.168.123.109:0/3795034028 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f537401c070 con 0x7f5384100030 2026-03-09T14:56:20.065 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.065+0000 7f537affd700 1 -- 192.168.123.109:0/3795034028 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f53740056f0 con 0x7f5384100030 2026-03-09T14:56:20.065 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.065+0000 7f538bf4b700 1 -- 192.168.123.109:0/3795034028 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f53841958f0 con 0x7f5384100030 2026-03-09T14:56:20.065 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.065+0000 7f538bf4b700 
1 -- 192.168.123.109:0/3795034028 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5384195d90 con 0x7f5384100030 2026-03-09T14:56:20.065 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.066+0000 7f537affd700 1 -- 192.168.123.109:0/3795034028 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f537400f460 con 0x7f5384100030 2026-03-09T14:56:20.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.066+0000 7f537affd700 1 -- 192.168.123.109:0/3795034028 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f537400f720 con 0x7f5384100030 2026-03-09T14:56:20.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.067+0000 7f537affd700 1 --2- 192.168.123.109:0/3795034028 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53700384a0 0x7f537003a950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:20.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.067+0000 7f537affd700 1 -- 192.168.123.109:0/3795034028 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f537404c570 con 0x7f5384100030 2026-03-09T14:56:20.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.067+0000 7f53894e6700 1 --2- 192.168.123.109:0/3795034028 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53700384a0 0x7f537003a950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:20.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.067+0000 7f538bf4b700 1 -- 192.168.123.109:0/3795034028 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f538418ee80 con 0x7f5384100030 2026-03-09T14:56:20.067 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.068+0000 7f53894e6700 1 --2- 192.168.123.109:0/3795034028 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53700384a0 0x7f537003a950 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f5380006fd0 tx=0x7f5380006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:20.070 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.070+0000 7f537affd700 1 -- 192.168.123.109:0/3795034028 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f5374026030 con 0x7f5384100030 2026-03-09T14:56:20.217 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.217+0000 7f538bf4b700 1 -- 192.168.123.109:0/3795034028 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f5384062380 con 0x7f5384100030 2026-03-09T14:56:20.218 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.219+0000 7f537affd700 1 -- 192.168.123.109:0/3795034028 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f5374017440 con 0x7f5384100030 2026-03-09T14:56:20.218 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:20.219 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: 
":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:20.221 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.222+0000 7f538bf4b700 1 -- 192.168.123.109:0/3795034028 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53700384a0 msgr2=0x7f537003a950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:20.221 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.222+0000 7f538bf4b700 1 --2- 192.168.123.109:0/3795034028 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53700384a0 0x7f537003a950 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f5380006fd0 tx=0x7f5380006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:20.221 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.222+0000 7f538bf4b700 1 -- 192.168.123.109:0/3795034028 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5384100030 msgr2=0x7f53841951b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:20.221 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.222+0000 7f538bf4b700 1 --2- 192.168.123.109:0/3795034028 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5384100030 0x7f53841951b0 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7f5374000c00 tx=0x7f53740047b0 comp rx=0 tx=0).stop 2026-03-09T14:56:20.221 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.222+0000 7f538bf4b700 1 -- 192.168.123.109:0/3795034028 shutdown_connections 2026-03-09T14:56:20.221 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.222+0000 
7f538bf4b700 1 --2- 192.168.123.109:0/3795034028 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53700384a0 0x7f537003a950 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:20.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.222+0000 7f538bf4b700 1 --2- 192.168.123.109:0/3795034028 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5384100030 0x7f53841951b0 unknown :-1 s=CLOSED pgs=140 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:20.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.223+0000 7f538bf4b700 1 -- 192.168.123.109:0/3795034028 >> 192.168.123.109:0/3795034028 conn(0x7f53840fb5e0 msgr2=0x7f53840fc290 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:20.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.223+0000 7f538bf4b700 1 -- 192.168.123.109:0/3795034028 shutdown_connections 2026-03-09T14:56:20.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:20.223+0000 7f538bf4b700 1 -- 192.168.123.109:0/3795034028 wait complete. 2026-03-09T14:56:20.223 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:20.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:20 vm05 ceph-mon[50611]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:21.291 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T14:56:21.292 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:21.458 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:21.506 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:21.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:21 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/3795034028' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:21.774 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.774+0000 7f53d2d38700 1 -- 192.168.123.109:0/1030091159 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53cc102230 msgr2=0x7f53cc102640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:21.774 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.774+0000 7f53d2d38700 1 --2- 192.168.123.109:0/1030091159 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53cc102230 0x7f53cc102640 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7f53bc009b00 tx=0x7f53bc009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:21.774 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.775+0000 7f53d2d38700 1 -- 192.168.123.109:0/1030091159 shutdown_connections 2026-03-09T14:56:21.774 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.775+0000 7f53d2d38700 1 --2- 192.168.123.109:0/1030091159 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53cc102230 0x7f53cc102640 unknown :-1 s=CLOSED pgs=141 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:21.775 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.775+0000 7f53d2d38700 1 -- 192.168.123.109:0/1030091159 >> 192.168.123.109:0/1030091159 
conn(0x7f53cc0fd8d0 msgr2=0x7f53cc0ffcb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:21.775 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.775+0000 7f53d2d38700 1 -- 192.168.123.109:0/1030091159 shutdown_connections 2026-03-09T14:56:21.775 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.775+0000 7f53d2d38700 1 -- 192.168.123.109:0/1030091159 wait complete. 2026-03-09T14:56:21.775 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.776+0000 7f53d2d38700 1 Processor -- start 2026-03-09T14:56:21.775 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.776+0000 7f53d2d38700 1 -- start start 2026-03-09T14:56:21.775 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.776+0000 7f53d2d38700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53cc102230 0x7f53cc197380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:21.775 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.776+0000 7f53d2d38700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f53cc1978c0 con 0x7f53cc102230 2026-03-09T14:56:21.775 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.776+0000 7f53d0ad4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53cc102230 0x7f53cc197380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:21.776 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.776+0000 7f53d0ad4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53cc102230 0x7f53cc197380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:48380/0 (socket says 192.168.123.109:48380) 2026-03-09T14:56:21.776 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.776+0000 7f53d0ad4700 1 -- 192.168.123.109:0/3382801191 learned_addr learned my addr 192.168.123.109:0/3382801191 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:21.776 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.777+0000 7f53d0ad4700 1 -- 192.168.123.109:0/3382801191 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f53bc0097e0 con 0x7f53cc102230 2026-03-09T14:56:21.776 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.777+0000 7f53d0ad4700 1 --2- 192.168.123.109:0/3382801191 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53cc102230 0x7f53cc197380 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7f53bc004750 tx=0x7f53bc005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:21.776 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.777+0000 7f53c9ffb700 1 -- 192.168.123.109:0/3382801191 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f53bc01c070 con 0x7f53cc102230 2026-03-09T14:56:21.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.777+0000 7f53c9ffb700 1 -- 192.168.123.109:0/3382801191 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f53bc021470 con 0x7f53cc102230 2026-03-09T14:56:21.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.777+0000 7f53d2d38700 1 -- 192.168.123.109:0/3382801191 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f53cc197ac0 con 0x7f53cc102230 2026-03-09T14:56:21.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.777+0000 7f53c9ffb700 1 -- 192.168.123.109:0/3382801191 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f53bc00f460 con 0x7f53cc102230 
2026-03-09T14:56:21.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.777+0000 7f53d2d38700 1 -- 192.168.123.109:0/3382801191 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f53cc197f60 con 0x7f53cc102230 2026-03-09T14:56:21.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.779+0000 7f53c9ffb700 1 -- 192.168.123.109:0/3382801191 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f53bc00f5c0 con 0x7f53cc102230 2026-03-09T14:56:21.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.779+0000 7f53c9ffb700 1 --2- 192.168.123.109:0/3382801191 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53b4038510 0x7f53b403a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:21.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.779+0000 7f53c9ffb700 1 -- 192.168.123.109:0/3382801191 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f53bc04d4a0 con 0x7f53cc102230 2026-03-09T14:56:21.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.779+0000 7f53cbfff700 1 --2- 192.168.123.109:0/3382801191 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53b4038510 0x7f53b403a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:21.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.779+0000 7f53d2d38700 1 -- 192.168.123.109:0/3382801191 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f53cc191080 con 0x7f53cc102230 2026-03-09T14:56:21.779 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.780+0000 7f53cbfff700 1 --2- 192.168.123.109:0/3382801191 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53b4038510 0x7f53b403a9c0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f53c0006fd0 tx=0x7f53c0006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:21.782 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.782+0000 7f53c9ffb700 1 -- 192.168.123.109:0/3382801191 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f53bc026070 con 0x7f53cc102230 2026-03-09T14:56:21.944 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.944+0000 7f53d2d38700 1 -- 192.168.123.109:0/3382801191 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f53cc062380 con 0x7f53cc102230 2026-03-09T14:56:21.944 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.945+0000 7f53c9ffb700 1 -- 192.168.123.109:0/3382801191 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f53bc029720 con 0x7f53cc102230 2026-03-09T14:56:21.944 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:21.944 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: 
":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:21.947 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.948+0000 7f53d2d38700 1 -- 192.168.123.109:0/3382801191 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53b4038510 msgr2=0x7f53b403a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:21.947 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.948+0000 7f53d2d38700 1 --2- 192.168.123.109:0/3382801191 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53b4038510 0x7f53b403a9c0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f53c0006fd0 tx=0x7f53c0006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:21.947 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.948+0000 7f53d2d38700 1 -- 192.168.123.109:0/3382801191 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53cc102230 msgr2=0x7f53cc197380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:21.947 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.948+0000 7f53d2d38700 1 --2- 192.168.123.109:0/3382801191 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53cc102230 0x7f53cc197380 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7f53bc004750 tx=0x7f53bc005dc0 comp rx=0 tx=0).stop 2026-03-09T14:56:21.947 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.948+0000 7f53d2d38700 1 -- 192.168.123.109:0/3382801191 shutdown_connections 2026-03-09T14:56:21.947 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.948+0000 
7f53d2d38700 1 --2- 192.168.123.109:0/3382801191 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f53b4038510 0x7f53b403a9c0 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:21.947 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.948+0000 7f53d2d38700 1 --2- 192.168.123.109:0/3382801191 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f53cc102230 0x7f53cc197380 unknown :-1 s=CLOSED pgs=142 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:21.947 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.948+0000 7f53d2d38700 1 -- 192.168.123.109:0/3382801191 >> 192.168.123.109:0/3382801191 conn(0x7f53cc0fd8d0 msgr2=0x7f53cc0fe530 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:21.948 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.948+0000 7f53d2d38700 1 -- 192.168.123.109:0/3382801191 shutdown_connections 2026-03-09T14:56:21.948 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:21.948+0000 7f53d2d38700 1 -- 192.168.123.109:0/3382801191 wait complete. 2026-03-09T14:56:21.948 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:22.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:22 vm05 ceph-mon[50611]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:22.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:22 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/3382801191' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:23.025 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T14:56:23.026 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:23.184 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:23.243 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:23.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.534+0000 7fbe40010700 1 -- 192.168.123.109:0/4121385855 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe38102240 msgr2=0x7fbe38102650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:23.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.534+0000 7fbe40010700 1 --2- 192.168.123.109:0/4121385855 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe38102240 0x7fbe38102650 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7fbe28009b00 tx=0x7fbe28009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:23.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.535+0000 7fbe40010700 1 -- 192.168.123.109:0/4121385855 shutdown_connections 2026-03-09T14:56:23.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.535+0000 7fbe40010700 1 --2- 192.168.123.109:0/4121385855 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe38102240 0x7fbe38102650 unknown :-1 s=CLOSED pgs=143 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:23.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.535+0000 7fbe40010700 1 -- 192.168.123.109:0/4121385855 >> 192.168.123.109:0/4121385855 conn(0x7fbe380fd8d0 msgr2=0x7fbe380ffcb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:23.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.536+0000 7fbe40010700 1 -- 192.168.123.109:0/4121385855 
shutdown_connections 2026-03-09T14:56:23.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.536+0000 7fbe40010700 1 -- 192.168.123.109:0/4121385855 wait complete. 2026-03-09T14:56:23.535 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.536+0000 7fbe40010700 1 Processor -- start 2026-03-09T14:56:23.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.536+0000 7fbe40010700 1 -- start start 2026-03-09T14:56:23.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.537+0000 7fbe40010700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe38102240 0x7fbe38197450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:23.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.537+0000 7fbe40010700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbe38197990 con 0x7fbe38102240 2026-03-09T14:56:23.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.537+0000 7fbe3ddac700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe38102240 0x7fbe38197450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:23.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.537+0000 7fbe3ddac700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe38102240 0x7fbe38197450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:48402/0 (socket says 192.168.123.109:48402) 2026-03-09T14:56:23.536 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.537+0000 7fbe3ddac700 1 -- 192.168.123.109:0/598832226 learned_addr learned my addr 192.168.123.109:0/598832226 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:23.537 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.537+0000 7fbe3ddac700 1 -- 192.168.123.109:0/598832226 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbe280097e0 con 0x7fbe38102240 2026-03-09T14:56:23.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.538+0000 7fbe3ddac700 1 --2- 192.168.123.109:0/598832226 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe38102240 0x7fbe38197450 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7fbe28004d40 tx=0x7fbe28004e20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:23.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.538+0000 7fbe2effd700 1 -- 192.168.123.109:0/598832226 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbe2801c070 con 0x7fbe38102240 2026-03-09T14:56:23.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.538+0000 7fbe2effd700 1 -- 192.168.123.109:0/598832226 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fbe280056f0 con 0x7fbe38102240 2026-03-09T14:56:23.537 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.538+0000 7fbe40010700 1 -- 192.168.123.109:0/598832226 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbe38197b90 con 0x7fbe38102240 2026-03-09T14:56:23.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.538+0000 7fbe2effd700 1 -- 192.168.123.109:0/598832226 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fbe28017440 con 0x7fbe38102240 2026-03-09T14:56:23.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.538+0000 7fbe40010700 1 -- 192.168.123.109:0/598832226 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbe38198030 con 0x7fbe38102240 
2026-03-09T14:56:23.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.539+0000 7fbe2effd700 1 -- 192.168.123.109:0/598832226 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fbe280175a0 con 0x7fbe38102240 2026-03-09T14:56:23.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.540+0000 7fbe2effd700 1 --2- 192.168.123.109:0/598832226 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbe24038510 0x7fbe2403a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:23.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.540+0000 7fbe2effd700 1 -- 192.168.123.109:0/598832226 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fbe2804d130 con 0x7fbe38102240 2026-03-09T14:56:23.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.540+0000 7fbe3d5ab700 1 --2- 192.168.123.109:0/598832226 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbe24038510 0x7fbe2403a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:23.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.540+0000 7fbe40010700 1 -- 192.168.123.109:0/598832226 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbe38191090 con 0x7fbe38102240 2026-03-09T14:56:23.540 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.541+0000 7fbe3d5ab700 1 --2- 192.168.123.109:0/598832226 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbe24038510 0x7fbe2403a9c0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fbe34006fd0 tx=0x7fbe34006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:23.543 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.543+0000 7fbe2effd700 1 -- 192.168.123.109:0/598832226 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fbe28025070 con 0x7fbe38102240 2026-03-09T14:56:23.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:23 vm05 ceph-mon[50611]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:23.694 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.694+0000 7fbe40010700 1 -- 192.168.123.109:0/598832226 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fbe38062380 con 0x7fbe38102240 2026-03-09T14:56:23.694 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.695+0000 7fbe2effd700 1 -- 192.168.123.109:0/598832226 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fbe280289b0 con 0x7fbe38102240 2026-03-09T14:56:23.695 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:23.695 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:23.697 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.698+0000 7fbe40010700 1 -- 192.168.123.109:0/598832226 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbe24038510 msgr2=0x7fbe2403a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:23.697 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.698+0000 7fbe40010700 1 --2- 192.168.123.109:0/598832226 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbe24038510 0x7fbe2403a9c0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fbe34006fd0 tx=0x7fbe34006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:23.697 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.698+0000 7fbe40010700 1 -- 192.168.123.109:0/598832226 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe38102240 msgr2=0x7fbe38197450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:23.697 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.698+0000 7fbe40010700 1 --2- 192.168.123.109:0/598832226 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe38102240 0x7fbe38197450 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7fbe28004d40 tx=0x7fbe28004e20 comp rx=0 tx=0).stop 2026-03-09T14:56:23.697 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.698+0000 7fbe40010700 1 -- 192.168.123.109:0/598832226 shutdown_connections 2026-03-09T14:56:23.698 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.698+0000 7fbe40010700 1 --2- 192.168.123.109:0/598832226 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbe24038510 0x7fbe2403a9c0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:23.698 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.698+0000 7fbe40010700 1 --2- 192.168.123.109:0/598832226 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe38102240 0x7fbe38197450 unknown :-1 s=CLOSED pgs=144 cs=0 
l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:23.698 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.699+0000 7fbe40010700 1 -- 192.168.123.109:0/598832226 >> 192.168.123.109:0/598832226 conn(0x7fbe380fd8d0 msgr2=0x7fbe380fe530 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:23.698 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.699+0000 7fbe40010700 1 -- 192.168.123.109:0/598832226 shutdown_connections 2026-03-09T14:56:23.698 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:23.699+0000 7fbe40010700 1 -- 192.168.123.109:0/598832226 wait complete. 2026-03-09T14:56:23.699 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:24.706 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:24 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/598832226' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:24.800 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T14:56:24.800 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:24.938 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:24.972 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:25.285 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.285+0000 7fa639f95700 1 -- 192.168.123.109:0/3877999447 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa634102240 msgr2=0x7fa634102650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:25.286 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.285+0000 7fa639f95700 1 --2- 192.168.123.109:0/3877999447 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa634102240 0x7fa634102650 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7fa61c009b00 tx=0x7fa61c009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:25.286 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.286+0000 7fa639f95700 1 -- 192.168.123.109:0/3877999447 shutdown_connections 2026-03-09T14:56:25.286 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.286+0000 7fa639f95700 1 --2- 192.168.123.109:0/3877999447 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa634102240 0x7fa634102650 unknown :-1 s=CLOSED pgs=145 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:25.286 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.286+0000 7fa639f95700 1 -- 192.168.123.109:0/3877999447 >> 192.168.123.109:0/3877999447 conn(0x7fa6340fd8d0 msgr2=0x7fa6340ffcb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:25.286 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.286+0000 7fa639f95700 1 -- 192.168.123.109:0/3877999447 
shutdown_connections 2026-03-09T14:56:25.286 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.286+0000 7fa639f95700 1 -- 192.168.123.109:0/3877999447 wait complete. 2026-03-09T14:56:25.286 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.287+0000 7fa639f95700 1 Processor -- start 2026-03-09T14:56:25.287 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.287+0000 7fa639f95700 1 -- start start 2026-03-09T14:56:25.287 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.288+0000 7fa639f95700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa634102240 0x7fa634197450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:25.287 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.288+0000 7fa639f95700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa634197990 con 0x7fa634102240 2026-03-09T14:56:25.287 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.288+0000 7fa6337fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa634102240 0x7fa634197450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:25.287 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.288+0000 7fa6337fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa634102240 0x7fa634197450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:38128/0 (socket says 192.168.123.109:38128) 2026-03-09T14:56:25.287 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.288+0000 7fa6337fe700 1 -- 192.168.123.109:0/239197554 learned_addr learned my addr 192.168.123.109:0/239197554 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:25.287 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.288+0000 7fa6337fe700 1 -- 192.168.123.109:0/239197554 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa61c0097e0 con 0x7fa634102240 2026-03-09T14:56:25.288 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.289+0000 7fa6337fe700 1 --2- 192.168.123.109:0/239197554 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa634102240 0x7fa634197450 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7fa61c004d40 tx=0x7fa61c004e20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:25.288 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.289+0000 7fa630ff9700 1 -- 192.168.123.109:0/239197554 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa61c01c070 con 0x7fa634102240 2026-03-09T14:56:25.288 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.289+0000 7fa639f95700 1 -- 192.168.123.109:0/239197554 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa634197b90 con 0x7fa634102240 2026-03-09T14:56:25.288 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.289+0000 7fa639f95700 1 -- 192.168.123.109:0/239197554 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa634198030 con 0x7fa634102240 2026-03-09T14:56:25.289 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.289+0000 7fa630ff9700 1 -- 192.168.123.109:0/239197554 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa61c0056f0 con 0x7fa634102240 2026-03-09T14:56:25.289 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.289+0000 7fa630ff9700 1 -- 192.168.123.109:0/239197554 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa61c017440 con 0x7fa634102240 
2026-03-09T14:56:25.289 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.290+0000 7fa630ff9700 1 -- 192.168.123.109:0/239197554 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fa61c0175a0 con 0x7fa634102240 2026-03-09T14:56:25.289 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.290+0000 7fa630ff9700 1 --2- 192.168.123.109:0/239197554 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa620038560 0x7fa62003aa10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:25.290 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.290+0000 7fa630ff9700 1 -- 192.168.123.109:0/239197554 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fa61c04d120 con 0x7fa634102240 2026-03-09T14:56:25.290 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.291+0000 7fa632ffd700 1 --2- 192.168.123.109:0/239197554 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa620038560 0x7fa62003aa10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:25.290 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.291+0000 7fa639f95700 1 -- 192.168.123.109:0/239197554 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa614005320 con 0x7fa634102240 2026-03-09T14:56:25.290 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.291+0000 7fa632ffd700 1 --2- 192.168.123.109:0/239197554 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa620038560 0x7fa62003aa10 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fa624006fd0 tx=0x7fa624006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:25.293 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.294+0000 7fa630ff9700 1 -- 192.168.123.109:0/239197554 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fa61c025070 con 0x7fa634102240 2026-03-09T14:56:25.452 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.452+0000 7fa639f95700 1 -- 192.168.123.109:0/239197554 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fa614005190 con 0x7fa634102240 2026-03-09T14:56:25.452 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.453+0000 7fa630ff9700 1 -- 192.168.123.109:0/239197554 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fa61c028960 con 0x7fa634102240 2026-03-09T14:56:25.452 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:25.452 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:25.455 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.456+0000 7fa639f95700 1 -- 192.168.123.109:0/239197554 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa620038560 msgr2=0x7fa62003aa10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:25.455 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.456+0000 7fa639f95700 1 --2- 192.168.123.109:0/239197554 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa620038560 0x7fa62003aa10 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fa624006fd0 tx=0x7fa624006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:25.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.456+0000 7fa639f95700 1 -- 192.168.123.109:0/239197554 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa634102240 msgr2=0x7fa634197450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:25.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.456+0000 7fa639f95700 1 --2- 192.168.123.109:0/239197554 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa634102240 0x7fa634197450 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7fa61c004d40 tx=0x7fa61c004e20 comp rx=0 tx=0).stop 2026-03-09T14:56:25.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.456+0000 7fa639f95700 1 -- 192.168.123.109:0/239197554 shutdown_connections 2026-03-09T14:56:25.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.456+0000 7fa639f95700 1 --2- 192.168.123.109:0/239197554 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa620038560 0x7fa62003aa10 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:25.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.456+0000 7fa639f95700 1 --2- 192.168.123.109:0/239197554 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa634102240 0x7fa634197450 unknown :-1 s=CLOSED pgs=146 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:25.456 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.456+0000 7fa639f95700 1 -- 192.168.123.109:0/239197554 >> 192.168.123.109:0/239197554 conn(0x7fa6340fd8d0 msgr2=0x7fa6340fe530 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:25.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.456+0000 7fa639f95700 1 -- 192.168.123.109:0/239197554 shutdown_connections 2026-03-09T14:56:25.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:25.456+0000 7fa639f95700 1 -- 192.168.123.109:0/239197554 wait complete. 2026-03-09T14:56:25.456 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:25.567 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:25 vm05 ceph-mon[50611]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:25.567 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:25 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:26.538 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T14:56:26.539 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:26.657 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:26 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:26.657 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:26 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:26.657 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:26 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:26.657 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:26 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:26.657 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:26 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:26.657 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:26 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/239197554' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:26.657 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:26 vm05 ceph-mon[50611]: Deploying daemon prometheus.vm05 on vm05 2026-03-09T14:56:26.698 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:26.743 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:27.006 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.006+0000 7f71b6dd9700 1 -- 192.168.123.109:0/1608736789 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71b0102240 msgr2=0x7f71b0102650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:27.007 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.006+0000 7f71b6dd9700 1 --2- 192.168.123.109:0/1608736789 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71b0102240 0x7f71b0102650 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7f71a0009b00 tx=0x7f71a0009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:27.007 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.007+0000 7f71b6dd9700 1 -- 192.168.123.109:0/1608736789 shutdown_connections 2026-03-09T14:56:27.007 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.007+0000 7f71b6dd9700 1 --2- 192.168.123.109:0/1608736789 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71b0102240 0x7f71b0102650 unknown :-1 s=CLOSED pgs=147 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:27.007 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.007+0000 7f71b6dd9700 1 -- 192.168.123.109:0/1608736789 >> 192.168.123.109:0/1608736789 conn(0x7f71b00fd8d0 msgr2=0x7f71b00ffcb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:27.007 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.007+0000 7f71b6dd9700 1 -- 192.168.123.109:0/1608736789 shutdown_connections 
2026-03-09T14:56:27.007 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.007+0000 7f71b6dd9700 1 -- 192.168.123.109:0/1608736789 wait complete. 2026-03-09T14:56:27.007 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.008+0000 7f71b6dd9700 1 Processor -- start 2026-03-09T14:56:27.007 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.008+0000 7f71b6dd9700 1 -- start start 2026-03-09T14:56:27.008 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.008+0000 7f71b6dd9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71b0102240 0x7f71b0197450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:27.008 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.008+0000 7f71b6dd9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f71b0197990 con 0x7f71b0102240 2026-03-09T14:56:27.008 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.009+0000 7f71b4b75700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71b0102240 0x7f71b0197450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:27.008 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.009+0000 7f71b4b75700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71b0102240 0x7f71b0197450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:38146/0 (socket says 192.168.123.109:38146) 2026-03-09T14:56:27.008 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.009+0000 7f71b4b75700 1 -- 192.168.123.109:0/2578856591 learned_addr learned my addr 192.168.123.109:0/2578856591 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:27.008 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.009+0000 7f71b4b75700 1 -- 192.168.123.109:0/2578856591 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f71a00097e0 con 0x7f71b0102240 2026-03-09T14:56:27.008 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.009+0000 7f71b4b75700 1 --2- 192.168.123.109:0/2578856591 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71b0102240 0x7f71b0197450 secure :-1 s=READY pgs=148 cs=0 l=1 rev1=1 crypto rx=0x7f71a0004d40 tx=0x7f71a0004e20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:27.010 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.009+0000 7f71adffb700 1 -- 192.168.123.109:0/2578856591 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f71a001c070 con 0x7f71b0102240 2026-03-09T14:56:27.010 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.010+0000 7f71adffb700 1 -- 192.168.123.109:0/2578856591 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f71a00056f0 con 0x7f71b0102240 2026-03-09T14:56:27.010 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.010+0000 7f71adffb700 1 -- 192.168.123.109:0/2578856591 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f71a0017440 con 0x7f71b0102240 2026-03-09T14:56:27.010 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.010+0000 7f71b6dd9700 1 -- 192.168.123.109:0/2578856591 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f71b0197b90 con 0x7f71b0102240 2026-03-09T14:56:27.010 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.010+0000 7f71b6dd9700 1 -- 192.168.123.109:0/2578856591 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f71b0198030 con 
0x7f71b0102240 2026-03-09T14:56:27.010 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.011+0000 7f71adffb700 1 -- 192.168.123.109:0/2578856591 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f71a0005210 con 0x7f71b0102240 2026-03-09T14:56:27.010 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.011+0000 7f71adffb700 1 --2- 192.168.123.109:0/2578856591 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7198038510 0x7f719803a9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:27.010 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.011+0000 7f71adffb700 1 -- 192.168.123.109:0/2578856591 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f71a004c030 con 0x7f71b0102240 2026-03-09T14:56:27.011 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.011+0000 7f71affff700 1 --2- 192.168.123.109:0/2578856591 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7198038510 0x7f719803a9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:27.011 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.012+0000 7f71b6dd9700 1 -- 192.168.123.109:0/2578856591 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f71b0191090 con 0x7f71b0102240 2026-03-09T14:56:27.011 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.012+0000 7f71affff700 1 --2- 192.168.123.109:0/2578856591 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7198038510 0x7f719803a9c0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f71a4006fd0 tx=0x7f71a4006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:27.017 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.017+0000 7f71adffb700 1 -- 192.168.123.109:0/2578856591 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f71a0025070 con 0x7f71b0102240 2026-03-09T14:56:27.178 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.178+0000 7f71b6dd9700 1 -- 192.168.123.109:0/2578856591 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f71b0062380 con 0x7f71b0102240 2026-03-09T14:56:27.179 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.179+0000 7f71adffb700 1 -- 192.168.123.109:0/2578856591 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f71a0028540 con 0x7f71b0102240 2026-03-09T14:56:27.179 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:27.179 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:27.181 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.182+0000 7f71b6dd9700 1 -- 192.168.123.109:0/2578856591 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7198038510 msgr2=0x7f719803a9c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:27.181 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.182+0000 7f71b6dd9700 1 --2- 192.168.123.109:0/2578856591 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7198038510 0x7f719803a9c0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f71a4006fd0 tx=0x7f71a4006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:27.181 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.182+0000 7f71b6dd9700 1 -- 192.168.123.109:0/2578856591 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71b0102240 msgr2=0x7f71b0197450 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:27.181 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.182+0000 7f71b6dd9700 1 --2- 192.168.123.109:0/2578856591 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71b0102240 0x7f71b0197450 secure :-1 s=READY pgs=148 cs=0 l=1 rev1=1 crypto rx=0x7f71a0004d40 tx=0x7f71a0004e20 comp rx=0 tx=0).stop 2026-03-09T14:56:27.181 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.182+0000 7f71b6dd9700 1 -- 192.168.123.109:0/2578856591 shutdown_connections 2026-03-09T14:56:27.181 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.182+0000 7f71b6dd9700 1 --2- 192.168.123.109:0/2578856591 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7198038510 0x7f719803a9c0 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:27.182 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.182+0000 7f71b6dd9700 1 --2- 192.168.123.109:0/2578856591 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71b0102240 0x7f71b0197450 unknown :-1 s=CLOSED pgs=148 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:27.182 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.182+0000 7f71b6dd9700 1 -- 192.168.123.109:0/2578856591 >> 192.168.123.109:0/2578856591 conn(0x7f71b00fd8d0 msgr2=0x7f71b00fe530 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:27.182 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.182+0000 7f71b6dd9700 1 -- 192.168.123.109:0/2578856591 shutdown_connections 2026-03-09T14:56:27.182 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:27.182+0000 7f71b6dd9700 1 -- 192.168.123.109:0/2578856591 wait complete. 2026-03-09T14:56:27.183 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:28.253 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:28.253 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:28.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:27 vm05 ceph-mon[50611]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:28.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:27 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/2578856591' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:28.409 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:28.456 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:28.730 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.729+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1732500535 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc100aa0 msgr2=0x7fd7bc102e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:28.730 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.729+0000 7fd7c0d95700 1 --2- 192.168.123.109:0/1732500535 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc100aa0 0x7fd7bc102e80 secure :-1 s=READY pgs=149 cs=0 l=1 rev1=1 crypto rx=0x7fd7a4009b00 tx=0x7fd7a4009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:28.730 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.730+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1732500535 shutdown_connections 2026-03-09T14:56:28.730 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.730+0000 7fd7c0d95700 1 --2- 192.168.123.109:0/1732500535 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc100aa0 0x7fd7bc102e80 unknown :-1 s=CLOSED pgs=149 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:28.730 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.730+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1732500535 >> 192.168.123.109:0/1732500535 conn(0x7fd7bc0fa4a0 msgr2=0x7fd7bc0fc8f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:28.730 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.730+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1732500535 shutdown_connections 2026-03-09T14:56:28.730 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.731+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1732500535 
wait complete. 2026-03-09T14:56:28.730 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.731+0000 7fd7c0d95700 1 Processor -- start 2026-03-09T14:56:28.731 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.731+0000 7fd7c0d95700 1 -- start start 2026-03-09T14:56:28.731 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.732+0000 7fd7c0d95700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc100aa0 0x7fd7bc192f50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:28.731 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.732+0000 7fd7c0d95700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd7bc193490 con 0x7fd7bc100aa0 2026-03-09T14:56:28.731 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.732+0000 7fd7ba59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc100aa0 0x7fd7bc192f50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:28.732 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.732+0000 7fd7ba59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc100aa0 0x7fd7bc192f50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:38160/0 (socket says 192.168.123.109:38160) 2026-03-09T14:56:28.732 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.732+0000 7fd7ba59c700 1 -- 192.168.123.109:0/1326420354 learned_addr learned my addr 192.168.123.109:0/1326420354 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:28.732 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.733+0000 7fd7ba59c700 1 -- 192.168.123.109:0/1326420354 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd7a40097e0 con 0x7fd7bc100aa0 2026-03-09T14:56:28.732 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.733+0000 7fd7ba59c700 1 --2- 192.168.123.109:0/1326420354 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc100aa0 0x7fd7bc192f50 secure :-1 s=READY pgs=150 cs=0 l=1 rev1=1 crypto rx=0x7fd7a4004f40 tx=0x7fd7a4005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:28.733 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.733+0000 7fd7b37fe700 1 -- 192.168.123.109:0/1326420354 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd7a401c070 con 0x7fd7bc100aa0 2026-03-09T14:56:28.733 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.733+0000 7fd7b37fe700 1 -- 192.168.123.109:0/1326420354 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd7a40053b0 con 0x7fd7bc100aa0 2026-03-09T14:56:28.733 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.733+0000 7fd7b37fe700 1 -- 192.168.123.109:0/1326420354 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd7a400f460 con 0x7fd7bc100aa0 2026-03-09T14:56:28.733 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.733+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1326420354 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd7bc193690 con 0x7fd7bc100aa0 2026-03-09T14:56:28.733 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.733+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1326420354 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd7bc193a70 con 0x7fd7bc100aa0 2026-03-09T14:56:28.734 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.735+0000 7fd7b37fe700 1 -- 192.168.123.109:0/1326420354 <== mon.0 
v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fd7a4021470 con 0x7fd7bc100aa0 2026-03-09T14:56:28.734 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.735+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1326420354 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd7bc04fa50 con 0x7fd7bc100aa0 2026-03-09T14:56:28.734 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.735+0000 7fd7b37fe700 1 --2- 192.168.123.109:0/1326420354 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7a80384f0 0x7fd7a803a9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:28.734 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.735+0000 7fd7b37fe700 1 -- 192.168.123.109:0/1326420354 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fd7a404c360 con 0x7fd7bc100aa0 2026-03-09T14:56:28.735 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.735+0000 7fd7b9d9b700 1 --2- 192.168.123.109:0/1326420354 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7a80384f0 0x7fd7a803a9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:28.735 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.736+0000 7fd7b9d9b700 1 --2- 192.168.123.109:0/1326420354 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7a80384f0 0x7fd7a803a9a0 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fd7ac006fd0 tx=0x7fd7ac006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:28.737 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.738+0000 7fd7b37fe700 1 -- 192.168.123.109:0/1326420354 <== mon.0 v2:192.168.123.105:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fd7a400f5e0 con 0x7fd7bc100aa0 2026-03-09T14:56:28.895 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.895+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1326420354 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fd7bc02d050 con 0x7fd7bc100aa0 2026-03-09T14:56:28.895 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.896+0000 7fd7b37fe700 1 -- 192.168.123.109:0/1326420354 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fd7a4026030 con 0x7fd7bc100aa0 2026-03-09T14:56:28.895 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:28.895 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:28.897 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.898+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1326420354 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7a80384f0 msgr2=0x7fd7a803a9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:28.897 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.898+0000 7fd7c0d95700 1 --2- 192.168.123.109:0/1326420354 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7a80384f0 0x7fd7a803a9a0 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fd7ac006fd0 tx=0x7fd7ac006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:28.897 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.898+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1326420354 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc100aa0 msgr2=0x7fd7bc192f50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:28.897 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.898+0000 7fd7c0d95700 1 --2- 192.168.123.109:0/1326420354 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc100aa0 0x7fd7bc192f50 secure :-1 s=READY pgs=150 cs=0 l=1 rev1=1 crypto rx=0x7fd7a4004f40 tx=0x7fd7a4005e70 comp rx=0 tx=0).stop 2026-03-09T14:56:28.897 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.898+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1326420354 shutdown_connections 2026-03-09T14:56:28.897 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.898+0000 7fd7c0d95700 1 --2- 192.168.123.109:0/1326420354 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7a80384f0 0x7fd7a803a9a0 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:28.898 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.898+0000 7fd7c0d95700 1 --2- 192.168.123.109:0/1326420354 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc100aa0 0x7fd7bc192f50 unknown :-1 s=CLOSED pgs=150 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:28.898 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.898+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1326420354 >> 192.168.123.109:0/1326420354 conn(0x7fd7bc0fa4a0 msgr2=0x7fd7bc0fb170 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T14:56:28.898 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.899+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1326420354 shutdown_connections 2026-03-09T14:56:28.898 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:28.899+0000 7fd7c0d95700 1 -- 192.168.123.109:0/1326420354 wait complete. 2026-03-09T14:56:28.899 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:29.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:28 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/1326420354' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:29.944 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:29.944 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:30.084 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:30.135 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:30.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:29 vm05 ceph-mon[50611]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:30.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:29 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:30.429 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.429+0000 7f621d704700 1 -- 192.168.123.109:0/1668370020 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6218100b40 msgr2=0x7f6218102f20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:30.429 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.429+0000 7f621d704700 1 --2- 192.168.123.109:0/1668370020 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6218100b40 0x7f6218102f20 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7f6200009b00 tx=0x7f6200009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:30.429 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.429+0000 7f621d704700 1 -- 192.168.123.109:0/1668370020 shutdown_connections 2026-03-09T14:56:30.429 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.429+0000 7f621d704700 1 --2- 192.168.123.109:0/1668370020 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6218100b40 0x7f6218102f20 unknown :-1 s=CLOSED pgs=151 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:30.429 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.429+0000 7f621d704700 1 -- 192.168.123.109:0/1668370020 >> 192.168.123.109:0/1668370020 conn(0x7f62180fa4a0 msgr2=0x7f62180fc8f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:30.429 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.429+0000 7f621d704700 1 -- 192.168.123.109:0/1668370020 shutdown_connections 2026-03-09T14:56:30.429 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.430+0000 7f621d704700 1 -- 192.168.123.109:0/1668370020 wait complete. 
2026-03-09T14:56:30.429 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.430+0000 7f621d704700 1 Processor -- start 2026-03-09T14:56:30.430 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.430+0000 7f621d704700 1 -- start start 2026-03-09T14:56:30.430 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.430+0000 7f621d704700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6218100b40 0x7f62181004a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:30.430 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.430+0000 7f621d704700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f62181009e0 con 0x7f6218100b40 2026-03-09T14:56:30.430 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.431+0000 7f6216ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6218100b40 0x7f62181004a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:30.430 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.431+0000 7f6216ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6218100b40 0x7f62181004a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:38174/0 (socket says 192.168.123.109:38174) 2026-03-09T14:56:30.430 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.431+0000 7f6216ffd700 1 -- 192.168.123.109:0/1556082586 learned_addr learned my addr 192.168.123.109:0/1556082586 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:30.430 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.431+0000 7f6216ffd700 1 -- 192.168.123.109:0/1556082586 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f62000097e0 con 0x7f6218100b40 2026-03-09T14:56:30.430 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.431+0000 7f6216ffd700 1 --2- 192.168.123.109:0/1556082586 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6218100b40 0x7f62181004a0 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7f6200004f40 tx=0x7f6200005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:30.431 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.431+0000 7f620ffff700 1 -- 192.168.123.109:0/1556082586 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f620001d070 con 0x7f6218100b40 2026-03-09T14:56:30.431 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.431+0000 7f621d704700 1 -- 192.168.123.109:0/1556082586 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f62180feaf0 con 0x7f6218100b40 2026-03-09T14:56:30.431 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.431+0000 7f621d704700 1 -- 192.168.123.109:0/1556082586 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f62180fef90 con 0x7f6218100b40 2026-03-09T14:56:30.431 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.432+0000 7f620ffff700 1 -- 192.168.123.109:0/1556082586 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f6200022470 con 0x7f6218100b40 2026-03-09T14:56:30.431 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.432+0000 7f620ffff700 1 -- 192.168.123.109:0/1556082586 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f620000f460 con 0x7f6218100b40 2026-03-09T14:56:30.432 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.432+0000 7f620ffff700 1 -- 192.168.123.109:0/1556082586 <== mon.0 
v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7f6200005370 con 0x7f6218100b40 2026-03-09T14:56:30.432 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.432+0000 7f621d704700 1 -- 192.168.123.109:0/1556082586 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f61f8005320 con 0x7f6218100b40 2026-03-09T14:56:30.432 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.432+0000 7f620ffff700 1 --2- 192.168.123.109:0/1556082586 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6204038550 0x7f620403aa00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:30.432 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.432+0000 7f620ffff700 1 -- 192.168.123.109:0/1556082586 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f620004c290 con 0x7f6218100b40 2026-03-09T14:56:30.432 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.433+0000 7f62167fc700 1 --2- 192.168.123.109:0/1556082586 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6204038550 0x7f620403aa00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:30.432 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.433+0000 7f62167fc700 1 --2- 192.168.123.109:0/1556082586 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6204038550 0x7f620403aa00 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f6208006fd0 tx=0x7f6208006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:30.435 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.435+0000 7f620ffff700 1 -- 192.168.123.109:0/1556082586 <== mon.0 v2:192.168.123.105:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f620000f650 con 0x7f6218100b40 2026-03-09T14:56:30.588 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.586+0000 7f621d704700 1 -- 192.168.123.109:0/1556082586 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f61f8005190 con 0x7f6218100b40 2026-03-09T14:56:30.588 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.588+0000 7f620ffff700 1 -- 192.168.123.109:0/1556082586 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f6200027070 con 0x7f6218100b40 2026-03-09T14:56:30.588 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:30.588 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:30.591 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.591+0000 7f621d704700 1 -- 192.168.123.109:0/1556082586 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6204038550 msgr2=0x7f620403aa00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:30.591 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.591+0000 7f621d704700 1 --2- 192.168.123.109:0/1556082586 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6204038550 0x7f620403aa00 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f6208006fd0 tx=0x7f6208006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:30.591 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.592+0000 7f621d704700 1 -- 192.168.123.109:0/1556082586 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6218100b40 msgr2=0x7f62181004a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:30.591 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.592+0000 7f621d704700 1 --2- 192.168.123.109:0/1556082586 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6218100b40 0x7f62181004a0 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7f6200004f40 tx=0x7f6200005e70 comp rx=0 tx=0).stop 2026-03-09T14:56:30.591 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.592+0000 7f621d704700 1 -- 192.168.123.109:0/1556082586 shutdown_connections 2026-03-09T14:56:30.591 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.592+0000 7f621d704700 1 --2- 192.168.123.109:0/1556082586 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6204038550 0x7f620403aa00 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:30.591 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.592+0000 7f621d704700 1 --2- 192.168.123.109:0/1556082586 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6218100b40 0x7f62181004a0 unknown :-1 s=CLOSED pgs=152 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:30.591 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.592+0000 7f621d704700 1 -- 192.168.123.109:0/1556082586 >> 192.168.123.109:0/1556082586 conn(0x7f62180fa4a0 msgr2=0x7f62180fb170 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T14:56:30.591 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.592+0000 7f621d704700 1 -- 192.168.123.109:0/1556082586 shutdown_connections 2026-03-09T14:56:30.592 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:30.592+0000 7f621d704700 1 -- 192.168.123.109:0/1556082586 wait complete. 2026-03-09T14:56:30.592 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:31.048 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:30 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/1556082586' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:31.644 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:31.644 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:31.806 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:31.846 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:32.105 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.105+0000 7fd45b9ef700 1 -- 192.168.123.109:0/2367974348 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4540fef10 msgr2=0x7fd4540ff320 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:32.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.105+0000 7fd45b9ef700 1 --2- 192.168.123.109:0/2367974348 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4540fef10 0x7fd4540ff320 secure :-1 s=READY pgs=153 cs=0 l=1 rev1=1 crypto rx=0x7fd448009b00 tx=0x7fd448009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:32.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.106+0000 7fd45b9ef700 1 -- 192.168.123.109:0/2367974348 
shutdown_connections 2026-03-09T14:56:32.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.106+0000 7fd45b9ef700 1 --2- 192.168.123.109:0/2367974348 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4540fef10 0x7fd4540ff320 unknown :-1 s=CLOSED pgs=153 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:32.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.106+0000 7fd45b9ef700 1 -- 192.168.123.109:0/2367974348 >> 192.168.123.109:0/2367974348 conn(0x7fd4540fa4a0 msgr2=0x7fd4540fc8f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:32.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.106+0000 7fd45b9ef700 1 -- 192.168.123.109:0/2367974348 shutdown_connections 2026-03-09T14:56:32.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.106+0000 7fd45b9ef700 1 -- 192.168.123.109:0/2367974348 wait complete. 2026-03-09T14:56:32.106 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.107+0000 7fd45b9ef700 1 Processor -- start 2026-03-09T14:56:32.107 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.107+0000 7fd45b9ef700 1 -- start start 2026-03-09T14:56:32.107 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.107+0000 7fd45b9ef700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4540fef10 0x7fd454197380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:32.107 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.107+0000 7fd45b9ef700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd4541978c0 con 0x7fd4540fef10 2026-03-09T14:56:32.107 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.108+0000 7fd45978b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4540fef10 0x7fd454197380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:32.107 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.108+0000 7fd45978b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4540fef10 0x7fd454197380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:38198/0 (socket says 192.168.123.109:38198) 2026-03-09T14:56:32.107 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.108+0000 7fd45978b700 1 -- 192.168.123.109:0/3562467274 learned_addr learned my addr 192.168.123.109:0/3562467274 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:32.108 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.108+0000 7fd45978b700 1 -- 192.168.123.109:0/3562467274 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd4480097e0 con 0x7fd4540fef10 2026-03-09T14:56:32.109 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.108+0000 7fd45978b700 1 --2- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4540fef10 0x7fd454197380 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7fd448004f40 tx=0x7fd448005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:32.109 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.108+0000 7fd4467fc700 1 -- 192.168.123.109:0/3562467274 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd44801c070 con 0x7fd4540fef10 2026-03-09T14:56:32.109 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.108+0000 7fd4467fc700 1 -- 192.168.123.109:0/3562467274 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd4480053b0 con 0x7fd4540fef10 2026-03-09T14:56:32.110 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.108+0000 7fd4467fc700 1 -- 192.168.123.109:0/3562467274 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fd44800f460 con 0x7fd4540fef10 2026-03-09T14:56:32.110 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.108+0000 7fd45b9ef700 1 -- 192.168.123.109:0/3562467274 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd454197ac0 con 0x7fd4540fef10 2026-03-09T14:56:32.110 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.109+0000 7fd45b9ef700 1 -- 192.168.123.109:0/3562467274 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd454197f60 con 0x7fd4540fef10 2026-03-09T14:56:32.110 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.110+0000 7fd4467fc700 1 -- 192.168.123.109:0/3562467274 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 13) v1 ==== 45138+0+0 (secure 0 0 0) 0x7fd448021470 con 0x7fd4540fef10 2026-03-09T14:56:32.110 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.110+0000 7fd4467fc700 1 --2- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd440038540 0x7fd44003a9f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:32.110 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.110+0000 7fd4467fc700 1 -- 192.168.123.109:0/3562467274 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7fd44804c1c0 con 0x7fd4540fef10 2026-03-09T14:56:32.110 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.110+0000 7fd45b9ef700 1 -- 192.168.123.109:0/3562467274 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd454190fe0 con 0x7fd4540fef10 2026-03-09T14:56:32.110 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.110+0000 7fd458f8a700 1 --2- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd440038540 0x7fd44003a9f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:32.110 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.111+0000 7fd458f8a700 1 --2- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd440038540 0x7fd44003a9f0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fd450006fd0 tx=0x7fd450006e40 comp rx=0 tx=0).ready entity=mgr.14164 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:32.113 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.114+0000 7fd4467fc700 1 -- 192.168.123.109:0/3562467274 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fd448026070 con 0x7fd4540fef10 2026-03-09T14:56:32.140 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.140+0000 7fd4467fc700 1 -- 192.168.123.109:0/3562467274 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mgrmap(e 14) v1 ==== 45152+0+0 (secure 0 0 0) 0x7fd448013250 con 0x7fd4540fef10 2026-03-09T14:56:32.196 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.175+0000 7fd458f8a700 1 -- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd440038540 msgr2=0x7fd44003a9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 13 2026-03-09T14:56:32.196 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.175+0000 7fd458f8a700 1 -- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd440038540 msgr2=0x7fd44003a9f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 
2026-03-09T14:56:32.196 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.175+0000 7fd458f8a700 1 --2- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd440038540 0x7fd44003a9f0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fd450006fd0 tx=0x7fd450006e40 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T14:56:32.196 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.175+0000 7fd458f8a700 1 --2- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd440038540 0x7fd44003a9f0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fd450006fd0 tx=0x7fd450006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:32.196 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.175+0000 7fd4467fc700 1 -- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd440038540 msgr2=0x7fd44003a9f0 unknown :-1 s=STATE_CLOSED l=1).mark_down 2026-03-09T14:56:32.196 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.175+0000 7fd4467fc700 1 --2- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd440038540 0x7fd44003a9f0 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:32.301 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.301+0000 7fd45b9ef700 1 -- 192.168.123.109:0/3562467274 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fd43c0051a0 con 0x7fd4540fef10 2026-03-09T14:56:32.304 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:32.304 
INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:32.304 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.302+0000 7fd4467fc700 1 -- 192.168.123.109:0/3562467274 <== mon.0 v2:192.168.123.105:3300/0 8 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fd44800e3e0 con 0x7fd4540fef10 2026-03-09T14:56:32.304 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.305+0000 7fd45b9ef700 1 -- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4540fef10 msgr2=0x7fd454197380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:32.304 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.305+0000 7fd45b9ef700 1 --2- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4540fef10 0x7fd454197380 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7fd448004f40 tx=0x7fd448005e70 comp rx=0 tx=0).stop 2026-03-09T14:56:32.304 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.305+0000 7fd45b9ef700 1 -- 192.168.123.109:0/3562467274 shutdown_connections 2026-03-09T14:56:32.304 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.305+0000 7fd45b9ef700 1 --2- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd440038540 0x7fd44003a9f0 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:32.304 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.305+0000 7fd45b9ef700 1 --2- 192.168.123.109:0/3562467274 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd4540fef10 0x7fd454197380 unknown :-1 s=CLOSED pgs=154 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:32.304 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.305+0000 7fd45b9ef700 1 -- 192.168.123.109:0/3562467274 >> 192.168.123.109:0/3562467274 conn(0x7fd4540fa4a0 msgr2=0x7fd4540fb170 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:32.304 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.305+0000 7fd45b9ef700 1 -- 192.168.123.109:0/3562467274 shutdown_connections 2026-03-09T14:56:32.304 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:32.305+0000 7fd45b9ef700 1 -- 192.168.123.109:0/3562467274 wait complete. 
2026-03-09T14:56:32.307 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:32.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:32 vm05 ceph-mon[50611]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:32.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:32 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:32.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:32 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:32.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:32 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:32.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:32 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T14:56:33.379 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:33.380 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:33.440 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:33 vm05 ceph-mon[50611]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T14:56:33.440 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:33 vm05 ceph-mon[50611]: mgrmap e14: vm05.lhsexd(active, since 53s) 2026-03-09T14:56:33.440 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:33 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/3562467274' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:33.538 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:33.583 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:33.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.839+0000 7f2457bef700 1 -- 192.168.123.109:0/2306630501 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24500fe890 msgr2=0x7f2450100cb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:33.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.839+0000 7f2457bef700 1 --2- 192.168.123.109:0/2306630501 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24500fe890 0x7f2450100cb0 secure :-1 s=READY pgs=157 cs=0 l=1 rev1=1 crypto rx=0x7f2444009b00 tx=0x7f2444009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:33.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.839+0000 7f2457bef700 1 -- 192.168.123.109:0/2306630501 shutdown_connections 2026-03-09T14:56:33.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.839+0000 7f2457bef700 1 --2- 192.168.123.109:0/2306630501 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24500fe890 0x7f2450100cb0 unknown :-1 s=CLOSED pgs=157 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:33.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.839+0000 7f2457bef700 1 -- 192.168.123.109:0/2306630501 >> 192.168.123.109:0/2306630501 conn(0x7f24500fa4a0 msgr2=0x7f24500fc8f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:33.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.840+0000 7f2457bef700 1 -- 192.168.123.109:0/2306630501 shutdown_connections 2026-03-09T14:56:33.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.840+0000 7f2457bef700 1 -- 192.168.123.109:0/2306630501 
wait complete. 2026-03-09T14:56:33.840 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.841+0000 7f2457bef700 1 Processor -- start 2026-03-09T14:56:33.840 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.841+0000 7f2457bef700 1 -- start start 2026-03-09T14:56:33.840 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.841+0000 7f2457bef700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24500fe890 0x7f245019b780 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:33.840 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.841+0000 7f2457bef700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f245019bcc0 con 0x7f24500fe890 2026-03-09T14:56:33.841 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.841+0000 7f245598b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24500fe890 0x7f245019b780 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:33.841 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.841+0000 7f245598b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24500fe890 0x7f245019b780 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:38212/0 (socket says 192.168.123.109:38212) 2026-03-09T14:56:33.841 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.841+0000 7f245598b700 1 -- 192.168.123.109:0/3444045500 learned_addr learned my addr 192.168.123.109:0/3444045500 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:33.841 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.842+0000 7f245598b700 1 -- 192.168.123.109:0/3444045500 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f24440097e0 con 0x7f24500fe890 2026-03-09T14:56:33.841 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.842+0000 7f245598b700 1 --2- 192.168.123.109:0/3444045500 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24500fe890 0x7f245019b780 secure :-1 s=READY pgs=158 cs=0 l=1 rev1=1 crypto rx=0x7f2444004f40 tx=0x7f2444005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:33.843 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.842+0000 7f2442ffd700 1 -- 192.168.123.109:0/3444045500 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f244401c070 con 0x7f24500fe890 2026-03-09T14:56:33.843 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.842+0000 7f2442ffd700 1 -- 192.168.123.109:0/3444045500 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f24440053b0 con 0x7f24500fe890 2026-03-09T14:56:33.843 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.842+0000 7f2442ffd700 1 -- 192.168.123.109:0/3444045500 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f244400f460 con 0x7f24500fe890 2026-03-09T14:56:33.843 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.842+0000 7f2457bef700 1 -- 192.168.123.109:0/3444045500 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f245019bec0 con 0x7f24500fe890 2026-03-09T14:56:33.843 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.842+0000 7f2457bef700 1 -- 192.168.123.109:0/3444045500 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f245019c360 con 0x7f24500fe890 2026-03-09T14:56:33.843 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.843+0000 7f2457bef700 1 -- 192.168.123.109:0/3444045500 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2450195b80 con 0x7f24500fe890 2026-03-09T14:56:33.847 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.847+0000 7f2442ffd700 1 -- 192.168.123.109:0/3444045500 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 14) v1 ==== 45152+0+0 (secure 0 0 0) 0x7f2444021470 con 0x7f24500fe890 2026-03-09T14:56:33.847 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.848+0000 7f2442ffd700 1 --2- 192.168.123.109:0/3444045500 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f243c038590 0x7f243c03aa40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:33.847 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.848+0000 7f2442ffd700 1 -- 192.168.123.109:0/3444045500 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f244404cf00 con 0x7f24500fe890 2026-03-09T14:56:33.847 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.848+0000 7f245518a700 1 -- 192.168.123.109:0/3444045500 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f243c038590 msgr2=0x7f243c03aa40 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2 2026-03-09T14:56:33.847 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.848+0000 7f245518a700 1 --2- 192.168.123.109:0/3444045500 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f243c038590 0x7f243c03aa40 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T14:56:33.848 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.848+0000 7f2442ffd700 1 -- 192.168.123.109:0/3444045500 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f244400ece0 con 
0x7f24500fe890 2026-03-09T14:56:33.997 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:33.997+0000 7f2457bef700 1 -- 192.168.123.109:0/3444045500 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f245002cc30 con 0x7f24500fe890 2026-03-09T14:56:33.999 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:34.000+0000 7f2442ffd700 1 -- 192.168.123.109:0/3444045500 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f244402a3e0 con 0x7f24500fe890 2026-03-09T14:56:34.000 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:34.000 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:34.002 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:34.003+0000 7f2457bef700 1 -- 192.168.123.109:0/3444045500 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f243c038590 msgr2=0x7f243c03aa40 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T14:56:34.002 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:34.003+0000 7f2457bef700 1 --2- 192.168.123.109:0/3444045500 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f243c038590 0x7f243c03aa40 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:34.002 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:34.003+0000 7f2457bef700 1 -- 192.168.123.109:0/3444045500 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24500fe890 msgr2=0x7f245019b780 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:34.002 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:34.003+0000 7f2457bef700 1 --2- 192.168.123.109:0/3444045500 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24500fe890 0x7f245019b780 secure :-1 s=READY pgs=158 cs=0 l=1 rev1=1 crypto rx=0x7f2444004f40 tx=0x7f2444005e70 comp rx=0 tx=0).stop 2026-03-09T14:56:34.002 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:34.003+0000 7f2457bef700 1 -- 192.168.123.109:0/3444045500 shutdown_connections 2026-03-09T14:56:34.002 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:34.003+0000 7f2457bef700 1 --2- 192.168.123.109:0/3444045500 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f243c038590 0x7f243c03aa40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:34.002 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:34.003+0000 7f2457bef700 1 --2- 192.168.123.109:0/3444045500 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f24500fe890 0x7f245019b780 unknown :-1 s=CLOSED pgs=158 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:34.002 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:34.003+0000 7f2457bef700 1 -- 192.168.123.109:0/3444045500 >> 192.168.123.109:0/3444045500 conn(0x7f24500fa4a0 msgr2=0x7f24500fb170 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:34.002 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:34.003+0000 7f2457bef700 1 -- 
192.168.123.109:0/3444045500 shutdown_connections 2026-03-09T14:56:34.003 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:34.003+0000 7f2457bef700 1 -- 192.168.123.109:0/3444045500 wait complete. 2026-03-09T14:56:34.004 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:34.458 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:34 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/3444045500' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:35.071 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:35.072 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:35.217 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:35.258 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:35.538 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.538+0000 7f41b18a6700 1 -- 192.168.123.109:0/1798662256 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41ac0fe900 msgr2=0x7f41ac0fed10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:35.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.538+0000 7f41b18a6700 1 --2- 192.168.123.109:0/1798662256 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41ac0fe900 0x7f41ac0fed10 secure :-1 s=READY pgs=159 cs=0 l=1 rev1=1 crypto rx=0x7f4194009b00 tx=0x7f4194009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:35.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.539+0000 7f41b18a6700 1 -- 192.168.123.109:0/1798662256 shutdown_connections 2026-03-09T14:56:35.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.539+0000 7f41b18a6700 1 --2- 
192.168.123.109:0/1798662256 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41ac0fe900 0x7f41ac0fed10 unknown :-1 s=CLOSED pgs=159 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:35.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.539+0000 7f41b18a6700 1 -- 192.168.123.109:0/1798662256 >> 192.168.123.109:0/1798662256 conn(0x7f41ac0fa490 msgr2=0x7f41ac0fc8e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:35.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.539+0000 7f41b18a6700 1 -- 192.168.123.109:0/1798662256 shutdown_connections 2026-03-09T14:56:35.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.539+0000 7f41b18a6700 1 -- 192.168.123.109:0/1798662256 wait complete. 2026-03-09T14:56:35.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.540+0000 7f41b18a6700 1 Processor -- start 2026-03-09T14:56:35.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.540+0000 7f41b18a6700 1 -- start start 2026-03-09T14:56:35.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.540+0000 7f41b18a6700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41ac0fe900 0x7f41ac197380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:35.539 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.540+0000 7f41b18a6700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f41ac1978c0 con 0x7f41ac0fe900 2026-03-09T14:56:35.540 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.540+0000 7f41aaffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41ac0fe900 0x7f41ac197380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:35.540 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.540+0000 
7f41aaffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41ac0fe900 0x7f41ac197380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:50012/0 (socket says 192.168.123.109:50012) 2026-03-09T14:56:35.540 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.540+0000 7f41aaffd700 1 -- 192.168.123.109:0/2572920289 learned_addr learned my addr 192.168.123.109:0/2572920289 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:35.540 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.541+0000 7f41aaffd700 1 -- 192.168.123.109:0/2572920289 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f41940097e0 con 0x7f41ac0fe900 2026-03-09T14:56:35.540 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.541+0000 7f41aaffd700 1 --2- 192.168.123.109:0/2572920289 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41ac0fe900 0x7f41ac197380 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7f4194004f40 tx=0x7f4194005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:35.541 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.541+0000 7f41b08a4700 1 -- 192.168.123.109:0/2572920289 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f419401d070 con 0x7f41ac0fe900 2026-03-09T14:56:35.541 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.542+0000 7f41b18a6700 1 -- 192.168.123.109:0/2572920289 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f41ac197ac0 con 0x7f41ac0fe900 2026-03-09T14:56:35.541 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.542+0000 7f41b18a6700 1 -- 192.168.123.109:0/2572920289 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({osdmap=0}) v3 -- 0x7f41ac197f60 con 0x7f41ac0fe900 2026-03-09T14:56:35.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.542+0000 7f41b08a4700 1 -- 192.168.123.109:0/2572920289 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f4194022470 con 0x7f41ac0fe900 2026-03-09T14:56:35.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.542+0000 7f41b08a4700 1 -- 192.168.123.109:0/2572920289 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f419400f460 con 0x7f41ac0fe900 2026-03-09T14:56:35.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.543+0000 7f41b08a4700 1 -- 192.168.123.109:0/2572920289 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 14) v1 ==== 45152+0+0 (secure 0 0 0) 0x7f4194005370 con 0x7f41ac0fe900 2026-03-09T14:56:35.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.544+0000 7f41b08a4700 1 --2- 192.168.123.109:0/2572920289 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f41980385a0 0x7f419803aa50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:35.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.544+0000 7f41aa7fc700 1 -- 192.168.123.109:0/2572920289 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f41980385a0 msgr2=0x7f419803aa50 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.105:6800/2 2026-03-09T14:56:35.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.544+0000 7f41aa7fc700 1 --2- 192.168.123.109:0/2572920289 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f41980385a0 0x7f419803aa50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T14:56:35.543 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.544+0000 7f41b08a4700 1 -- 
192.168.123.109:0/2572920289 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(4..4 src has 1..4) v4 ==== 1069+0+0 (secure 0 0 0) 0x7f419404c2c0 con 0x7f41ac0fe900 2026-03-09T14:56:35.544 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.544+0000 7f41b18a6700 1 -- 192.168.123.109:0/2572920289 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f418c005320 con 0x7f41ac0fe900 2026-03-09T14:56:35.547 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.547+0000 7f41b08a4700 1 -- 192.168.123.109:0/2572920289 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f4194027070 con 0x7f41ac0fe900 2026-03-09T14:56:35.700 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.700+0000 7f41b18a6700 1 -- 192.168.123.109:0/2572920289 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f418c005190 con 0x7f41ac0fe900 2026-03-09T14:56:35.700 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.701+0000 7f41b08a4700 1 -- 192.168.123.109:0/2572920289 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f41940056c0 con 0x7f41ac0fe900 2026-03-09T14:56:35.700 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:35.700 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: 
":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:35.703 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.704+0000 7f41b18a6700 1 -- 192.168.123.109:0/2572920289 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f41980385a0 msgr2=0x7f419803aa50 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T14:56:35.703 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.704+0000 7f41b18a6700 1 --2- 192.168.123.109:0/2572920289 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f41980385a0 0x7f419803aa50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:35.703 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.704+0000 7f41b18a6700 1 -- 192.168.123.109:0/2572920289 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41ac0fe900 msgr2=0x7f41ac197380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:35.703 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.704+0000 7f41b18a6700 1 --2- 192.168.123.109:0/2572920289 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41ac0fe900 0x7f41ac197380 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7f4194004f40 tx=0x7f4194005e70 comp rx=0 tx=0).stop 2026-03-09T14:56:35.703 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.704+0000 7f41b18a6700 1 -- 192.168.123.109:0/2572920289 shutdown_connections 2026-03-09T14:56:35.703 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.704+0000 7f41b18a6700 1 --2- 
192.168.123.109:0/2572920289 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f41980385a0 0x7f419803aa50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:35.704 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.704+0000 7f41b18a6700 1 --2- 192.168.123.109:0/2572920289 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f41ac0fe900 0x7f41ac197380 unknown :-1 s=CLOSED pgs=160 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:35.704 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.704+0000 7f41b18a6700 1 -- 192.168.123.109:0/2572920289 >> 192.168.123.109:0/2572920289 conn(0x7f41ac0fa490 msgr2=0x7f41ac0fb160 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:35.704 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.705+0000 7f41b18a6700 1 -- 192.168.123.109:0/2572920289 shutdown_connections 2026-03-09T14:56:35.704 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:35.705+0000 7f41b18a6700 1 -- 192.168.123.109:0/2572920289 wait complete. 2026-03-09T14:56:35.705 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:35.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:35 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/2572920289' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:36.755 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T14:56:36.756 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:36.891 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:36.931 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:37.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.221+0000 7f34aa5f3700 1 -- 192.168.123.109:0/3442949885 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f34a4100aa0 msgr2=0x7f34a4102e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:37.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.221+0000 7f34aa5f3700 1 --2- 192.168.123.109:0/3442949885 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f34a4100aa0 0x7f34a4102e80 secure :-1 s=READY pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7f348c009b00 tx=0x7f348c009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:37.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.222+0000 7f34aa5f3700 1 -- 192.168.123.109:0/3442949885 shutdown_connections 2026-03-09T14:56:37.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.222+0000 7f34aa5f3700 1 --2- 192.168.123.109:0/3442949885 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f34a4100aa0 0x7f34a4102e80 unknown :-1 s=CLOSED pgs=168 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:37.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.222+0000 7f34aa5f3700 1 -- 192.168.123.109:0/3442949885 >> 192.168.123.109:0/3442949885 conn(0x7f34a40fa4a0 msgr2=0x7f34a40fc8f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:37.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.222+0000 7f34aa5f3700 1 -- 192.168.123.109:0/3442949885 
shutdown_connections 2026-03-09T14:56:37.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.222+0000 7f34aa5f3700 1 -- 192.168.123.109:0/3442949885 wait complete. 2026-03-09T14:56:37.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.223+0000 7f34aa5f3700 1 Processor -- start 2026-03-09T14:56:37.222 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.223+0000 7f34aa5f3700 1 -- start start 2026-03-09T14:56:37.223 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.223+0000 7f34aa5f3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f34a4100aa0 0x7f34a4195190 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:37.223 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.223+0000 7f34aa5f3700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f34a41956d0 con 0x7f34a4100aa0 2026-03-09T14:56:37.223 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.224+0000 7f34a3fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f34a4100aa0 0x7f34a4195190 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:37.223 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.224+0000 7f34a3fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f34a4100aa0 0x7f34a4195190 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:50034/0 (socket says 192.168.123.109:50034) 2026-03-09T14:56:37.223 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.224+0000 7f34a3fff700 1 -- 192.168.123.109:0/1863028700 learned_addr learned my addr 192.168.123.109:0/1863028700 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:37.223 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.224+0000 7f34a3fff700 1 -- 192.168.123.109:0/1863028700 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f348c0097e0 con 0x7f34a4100aa0 2026-03-09T14:56:37.224 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.224+0000 7f34a3fff700 1 --2- 192.168.123.109:0/1863028700 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f34a4100aa0 0x7f34a4195190 secure :-1 s=READY pgs=169 cs=0 l=1 rev1=1 crypto rx=0x7f348c004f40 tx=0x7f348c005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:37.224 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.225+0000 7f34a17fa700 1 -- 192.168.123.109:0/1863028700 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f348c01d070 con 0x7f34a4100aa0 2026-03-09T14:56:37.224 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.225+0000 7f34aa5f3700 1 -- 192.168.123.109:0/1863028700 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f34a41958d0 con 0x7f34a4100aa0 2026-03-09T14:56:37.224 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.225+0000 7f34aa5f3700 1 -- 192.168.123.109:0/1863028700 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f34a4195cb0 con 0x7f34a4100aa0 2026-03-09T14:56:37.225 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.226+0000 7f34a17fa700 1 -- 192.168.123.109:0/1863028700 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f348c022470 con 0x7f34a4100aa0 2026-03-09T14:56:37.225 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.226+0000 7f34a17fa700 1 -- 192.168.123.109:0/1863028700 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f348c00f460 con 
0x7f34a4100aa0 2026-03-09T14:56:37.225 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.226+0000 7f34a17fa700 1 -- 192.168.123.109:0/1863028700 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 15) v1 ==== 44873+0+0 (secure 0 0 0) 0x7f348c00f650 con 0x7f34a4100aa0 2026-03-09T14:56:37.226 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.226+0000 7f34a17fa700 1 -- 192.168.123.109:0/1863028700 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7f348c017440 con 0x7f34a4100aa0 2026-03-09T14:56:37.226 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.226+0000 7f34aa5f3700 1 -- 192.168.123.109:0/1863028700 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3484005320 con 0x7f34a4100aa0 2026-03-09T14:56:37.229 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.230+0000 7f34a17fa700 1 -- 192.168.123.109:0/1863028700 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f348c017780 con 0x7f34a4100aa0 2026-03-09T14:56:37.389 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.390+0000 7f34aa5f3700 1 -- 192.168.123.109:0/1863028700 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f34840059f0 con 0x7f34a4100aa0 2026-03-09T14:56:37.390 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.391+0000 7f34a17fa700 1 -- 192.168.123.109:0/1863028700 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f348c017920 con 0x7f34a4100aa0 2026-03-09T14:56:37.390 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:37.390 
INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:37.397 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.398+0000 7f349affd700 1 -- 192.168.123.109:0/1863028700 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f34a4100aa0 msgr2=0x7f34a4195190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:37.397 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.398+0000 7f349affd700 1 --2- 192.168.123.109:0/1863028700 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f34a4100aa0 0x7f34a4195190 secure :-1 s=READY pgs=169 cs=0 l=1 rev1=1 crypto rx=0x7f348c004f40 tx=0x7f348c005e70 comp rx=0 tx=0).stop 2026-03-09T14:56:37.397 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.398+0000 7f349affd700 1 -- 192.168.123.109:0/1863028700 shutdown_connections 2026-03-09T14:56:37.397 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.398+0000 7f349affd700 1 --2- 192.168.123.109:0/1863028700 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f34a4100aa0 0x7f34a4195190 unknown :-1 s=CLOSED pgs=169 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:37.397 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.398+0000 7f349affd700 1 -- 192.168.123.109:0/1863028700 >> 192.168.123.109:0/1863028700 conn(0x7f34a40fa4a0 msgr2=0x7f34a40fb170 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:37.398 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.398+0000 7f349affd700 1 -- 192.168.123.109:0/1863028700 shutdown_connections 2026-03-09T14:56:37.398 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:37.398+0000 7f349affd700 1 -- 192.168.123.109:0/1863028700 wait complete. 2026-03-09T14:56:37.400 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: Active manager daemon vm05.lhsexd restarted 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: Activating manager daemon vm05.lhsexd 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: osdmap e5: 0 total, 0 up, 0 in 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: mgrmap e15: vm05.lhsexd(active, starting, since 0.00454509s) 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm05.lhsexd", "id": "vm05.lhsexd"}]: dispatch 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 
ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: Manager daemon vm05.lhsexd is now available 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/trash_purge_schedule"}]: dispatch 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: 
from='client.? 192.168.123.109:0/1863028700' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:38.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:38.477 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:38.478 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:38.633 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:38.679 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T14:56:39.022 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.020+0000 7fa3629ed700 1 -- 192.168.123.109:0/4079773523 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa35c100030 msgr2=0x7fa35c100440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:39.022 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.020+0000 7fa3629ed700 1 --2- 192.168.123.109:0/4079773523 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa35c100030 0x7fa35c100440 secure :-1 s=READY pgs=170 cs=0 l=1 rev1=1 crypto rx=0x7fa344009b00 tx=0x7fa344009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:39.024 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.024+0000 7fa3629ed700 1 -- 192.168.123.109:0/4079773523 shutdown_connections 2026-03-09T14:56:39.026 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.024+0000 7fa3629ed700 1 --2- 192.168.123.109:0/4079773523 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa35c100030 0x7fa35c100440 unknown :-1 s=CLOSED pgs=170 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:39.026 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.024+0000 7fa3629ed700 1 -- 192.168.123.109:0/4079773523 >> 192.168.123.109:0/4079773523 conn(0x7fa35c0fb5e0 msgr2=0x7fa35c0fda10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:39.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.031+0000 7fa3629ed700 1 -- 192.168.123.109:0/4079773523 shutdown_connections 2026-03-09T14:56:39.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.031+0000 7fa3629ed700 1 -- 192.168.123.109:0/4079773523 wait complete. 2026-03-09T14:56:39.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.031+0000 7fa3629ed700 1 Processor -- start 2026-03-09T14:56:39.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.031+0000 7fa3629ed700 1 -- start start 2026-03-09T14:56:39.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.031+0000 7fa3629ed700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa35c100030 0x7fa35c107dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:39.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.031+0000 7fa3629ed700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa35c108310 con 0x7fa35c100030 2026-03-09T14:56:39.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.032+0000 7fa35bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa35c100030 0x7fa35c107dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:39.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.032+0000 7fa35bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa35c100030 0x7fa35c107dd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer 
v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:50046/0 (socket says 192.168.123.109:50046) 2026-03-09T14:56:39.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.032+0000 7fa35bfff700 1 -- 192.168.123.109:0/4284446722 learned_addr learned my addr 192.168.123.109:0/4284446722 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:39.031 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.032+0000 7fa35bfff700 1 -- 192.168.123.109:0/4284446722 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa3440097e0 con 0x7fa35c100030 2026-03-09T14:56:39.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.032+0000 7fa35bfff700 1 --2- 192.168.123.109:0/4284446722 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa35c100030 0x7fa35c107dd0 secure :-1 s=READY pgs=171 cs=0 l=1 rev1=1 crypto rx=0x7fa344004730 tx=0x7fa344005980 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:39.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.032+0000 7fa3597fa700 1 -- 192.168.123.109:0/4284446722 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa344004030 con 0x7fa35c100030 2026-03-09T14:56:39.033 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.033+0000 7fa3629ed700 1 -- 192.168.123.109:0/4284446722 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa35c108510 con 0x7fa35c100030 2026-03-09T14:56:39.033 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.033+0000 7fa3629ed700 1 -- 192.168.123.109:0/4284446722 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa35c104a80 con 0x7fa35c100030 2026-03-09T14:56:39.033 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.033+0000 7fa3597fa700 1 -- 192.168.123.109:0/4284446722 <== mon.0 
v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa344003bc0 con 0x7fa35c100030 2026-03-09T14:56:39.033 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.033+0000 7fa3597fa700 1 -- 192.168.123.109:0/4284446722 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fa344017530 con 0x7fa35c100030 2026-03-09T14:56:39.034 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.034+0000 7fa3597fa700 1 -- 192.168.123.109:0/4284446722 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 16) v1 ==== 45000+0+0 (secure 0 0 0) 0x7fa344017690 con 0x7fa35c100030 2026-03-09T14:56:39.034 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.034+0000 7fa3597fa700 1 --2- 192.168.123.109:0/4284446722 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa3480383f0 0x7fa34803a8a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:39.034 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.034+0000 7fa3597fa700 1 -- 192.168.123.109:0/4284446722 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7fa34404d3b0 con 0x7fa35c100030 2026-03-09T14:56:39.034 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.034+0000 7fa3629ed700 1 -- 192.168.123.109:0/4284446722 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa33c005320 con 0x7fa35c100030 2026-03-09T14:56:39.034 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.034+0000 7fa35b7fe700 1 --2- 192.168.123.109:0/4284446722 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa3480383f0 0x7fa34803a8a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:39.034 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.035+0000 7fa35b7fe700 1 --2- 192.168.123.109:0/4284446722 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa3480383f0 0x7fa34803a8a0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fa34c006fd0 tx=0x7fa34c006e40 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:39.037 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.038+0000 7fa3597fa700 1 -- 192.168.123.109:0/4284446722 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fa34402c430 con 0x7fa35c100030 2026-03-09T14:56:39.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:38 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:39.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:38 vm05 ceph-mon[50611]: mgrmap e16: vm05.lhsexd(active, since 1.01001s) 2026-03-09T14:56:39.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:38 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:39.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:38 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:39.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:38 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:39.192 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.192+0000 7fa3629ed700 1 -- 192.168.123.109:0/4284446722 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fa33c005190 con 0x7fa35c100030 2026-03-09T14:56:39.192 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.193+0000 7fa3597fa700 1 -- 192.168.123.109:0/4284446722 <== mon.0 v2:192.168.123.105:3300/0 7 
==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fa34404a020 con 0x7fa35c100030 2026-03-09T14:56:39.192 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:39.193 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:39.195 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.195+0000 7fa3629ed700 1 -- 192.168.123.109:0/4284446722 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa3480383f0 msgr2=0x7fa34803a8a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:39.195 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.195+0000 7fa3629ed700 1 --2- 192.168.123.109:0/4284446722 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa3480383f0 0x7fa34803a8a0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fa34c006fd0 tx=0x7fa34c006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:39.195 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.195+0000 7fa3629ed700 1 -- 192.168.123.109:0/4284446722 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa35c100030 msgr2=0x7fa35c107dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T14:56:39.195 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.195+0000 7fa3629ed700 1 --2- 192.168.123.109:0/4284446722 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa35c100030 0x7fa35c107dd0 secure :-1 s=READY pgs=171 cs=0 l=1 rev1=1 crypto rx=0x7fa344004730 tx=0x7fa344005980 comp rx=0 tx=0).stop 2026-03-09T14:56:39.195 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.195+0000 7fa3629ed700 1 -- 192.168.123.109:0/4284446722 shutdown_connections 2026-03-09T14:56:39.195 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.195+0000 7fa3629ed700 1 --2- 192.168.123.109:0/4284446722 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa3480383f0 0x7fa34803a8a0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:39.195 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.195+0000 7fa3629ed700 1 --2- 192.168.123.109:0/4284446722 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa35c100030 0x7fa35c107dd0 unknown :-1 s=CLOSED pgs=171 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:39.195 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.195+0000 7fa3629ed700 1 -- 192.168.123.109:0/4284446722 >> 192.168.123.109:0/4284446722 conn(0x7fa35c0fb5e0 msgr2=0x7fa35c0fc290 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:39.195 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.196+0000 7fa3629ed700 1 -- 192.168.123.109:0/4284446722 shutdown_connections 2026-03-09T14:56:39.195 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:39.196+0000 7fa3629ed700 1 -- 192.168.123.109:0/4284446722 wait complete. 2026-03-09T14:56:39.198 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:40.288 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T14:56:40.288 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: [09/Mar/2026:14:56:38] ENGINE Bus STARTING 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: [09/Mar/2026:14:56:38] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: [09/Mar/2026:14:56:38] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: [09/Mar/2026:14:56:38] ENGINE Bus STARTED 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/4284446722' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:39 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' 
entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:56:40.476 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T14:56:40.766 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.765+0000 7f935bfff700 1 -- 192.168.123.109:0/3932769469 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f935c072730 msgr2=0x7f935c10edb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:40.766 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.765+0000 7f935bfff700 1 --2- 192.168.123.109:0/3932769469 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f935c072730 0x7f935c10edb0 secure :-1 s=READY pgs=172 cs=0 l=1 rev1=1 crypto rx=0x7f934c009b00 tx=0x7f934c009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:40.766 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.766+0000 7f935bfff700 1 -- 192.168.123.109:0/3932769469 shutdown_connections 2026-03-09T14:56:40.766 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.766+0000 7f935bfff700 1 --2- 192.168.123.109:0/3932769469 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f935c072730 0x7f935c10edb0 unknown :-1 s=CLOSED pgs=172 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:40.766 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.766+0000 7f935bfff700 1 -- 192.168.123.109:0/3932769469 >> 192.168.123.109:0/3932769469 conn(0x7f935c06c410 msgr2=0x7f935c06c810 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:40.766 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.766+0000 7f935bfff700 1 -- 192.168.123.109:0/3932769469 shutdown_connections 2026-03-09T14:56:40.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.767+0000 7f935bfff700 1 -- 192.168.123.109:0/3932769469 wait complete. 
2026-03-09T14:56:40.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.767+0000 7f935bfff700 1 Processor -- start 2026-03-09T14:56:40.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.767+0000 7f935bfff700 1 -- start start 2026-03-09T14:56:40.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.767+0000 7f935bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f935c116070 0x7f935c116440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:40.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.767+0000 7f935bfff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f934c012070 con 0x7f935c116070 2026-03-09T14:56:40.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.768+0000 7f935affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f935c116070 0x7f935c116440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:40.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.768+0000 7f935affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f935c116070 0x7f935c116440 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:50060/0 (socket says 192.168.123.109:50060) 2026-03-09T14:56:40.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.768+0000 7f935affd700 1 -- 192.168.123.109:0/3024633886 learned_addr learned my addr 192.168.123.109:0/3024633886 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:40.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.768+0000 7f935affd700 1 -- 192.168.123.109:0/3024633886 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f934c0097e0 con 0x7f935c116070 2026-03-09T14:56:40.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.769+0000 7f935affd700 1 --2- 192.168.123.109:0/3024633886 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f935c116070 0x7f935c116440 secure :-1 s=READY pgs=173 cs=0 l=1 rev1=1 crypto rx=0x7f934c00bac0 tx=0x7f934c0052e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:40.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.769+0000 7f9343fff700 1 -- 192.168.123.109:0/3024633886 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f934c01d070 con 0x7f935c116070 2026-03-09T14:56:40.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.769+0000 7f9343fff700 1 -- 192.168.123.109:0/3024633886 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f934c005640 con 0x7f935c116070 2026-03-09T14:56:40.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.770+0000 7f935bfff700 1 -- 192.168.123.109:0/3024633886 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f935c1169e0 con 0x7f935c116070 2026-03-09T14:56:40.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.770+0000 7f935bfff700 1 -- 192.168.123.109:0/3024633886 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f935c1b4910 con 0x7f935c116070 2026-03-09T14:56:40.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.770+0000 7f9343fff700 1 -- 192.168.123.109:0/3024633886 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f934c003e60 con 0x7f935c116070 2026-03-09T14:56:40.771 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.772+0000 7f935bfff700 1 -- 192.168.123.109:0/3024633886 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f935c04efc0 con 0x7f935c116070 2026-03-09T14:56:40.772 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.773+0000 7f9343fff700 1 -- 192.168.123.109:0/3024633886 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 17) v1 ==== 45198+0+0 (secure 0 0 0) 0x7f934c004050 con 0x7f935c116070 2026-03-09T14:56:40.773 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.773+0000 7f9343fff700 1 --2- 192.168.123.109:0/3024633886 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9344038330 0x7f934403a7e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:40.773 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.773+0000 7f9343fff700 1 -- 192.168.123.109:0/3024633886 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7f934c04bf30 con 0x7f935c116070 2026-03-09T14:56:40.774 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.774+0000 7f935a7fc700 1 --2- 192.168.123.109:0/3024633886 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9344038330 0x7f934403a7e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:40.774 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.775+0000 7f935a7fc700 1 --2- 192.168.123.109:0/3024633886 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9344038330 0x7f934403a7e0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f9350006fd0 tx=0x7f9350006e40 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:40.775 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.776+0000 7f9343fff700 1 -- 192.168.123.109:0/3024633886 <== mon.0 v2:192.168.123.105:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f934c00ee40 con 0x7f935c116070 2026-03-09T14:56:40.946 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.947+0000 7f935bfff700 1 -- 192.168.123.109:0/3024633886 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f935c1173d0 con 0x7f935c116070 2026-03-09T14:56:40.948 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.947+0000 7f9343fff700 1 -- 192.168.123.109:0/3024633886 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f934c026030 con 0x7f935c116070 2026-03-09T14:56:40.948 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:40.948 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:40.950 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.951+0000 7f935bfff700 1 -- 192.168.123.109:0/3024633886 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9344038330 msgr2=0x7f934403a7e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:40.950 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.951+0000 7f935bfff700 1 --2- 192.168.123.109:0/3024633886 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9344038330 0x7f934403a7e0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f9350006fd0 tx=0x7f9350006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:40.950 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.951+0000 7f935bfff700 1 -- 192.168.123.109:0/3024633886 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f935c116070 msgr2=0x7f935c116440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:40.950 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.951+0000 7f935bfff700 1 --2- 192.168.123.109:0/3024633886 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f935c116070 0x7f935c116440 secure :-1 s=READY pgs=173 cs=0 l=1 rev1=1 crypto rx=0x7f934c00bac0 tx=0x7f934c0052e0 comp rx=0 tx=0).stop 2026-03-09T14:56:40.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.952+0000 7f935bfff700 1 -- 192.168.123.109:0/3024633886 shutdown_connections 2026-03-09T14:56:40.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.952+0000 7f935bfff700 1 --2- 192.168.123.109:0/3024633886 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9344038330 0x7f934403a7e0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:40.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.952+0000 7f935bfff700 1 --2- 192.168.123.109:0/3024633886 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f935c116070 0x7f935c116440 unknown :-1 s=CLOSED pgs=173 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:40.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.952+0000 7f935bfff700 1 -- 192.168.123.109:0/3024633886 >> 192.168.123.109:0/3024633886 conn(0x7f935c06c410 msgr2=0x7f935c1095f0 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T14:56:40.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.952+0000 7f935bfff700 1 -- 192.168.123.109:0/3024633886 shutdown_connections 2026-03-09T14:56:40.951 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:40.952+0000 7f935bfff700 1 -- 192.168.123.109:0/3024633886 wait complete. 2026-03-09T14:56:40.953 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:40 vm05 ceph-mon[50611]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T14:56:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:40 vm05 ceph-mon[50611]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T14:56:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:40 vm05 ceph-mon[50611]: mgrmap e17: vm05.lhsexd(active, since 2s) 2026-03-09T14:56:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:40 vm05 ceph-mon[50611]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T14:56:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:40 vm05 ceph-mon[50611]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T14:56:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:40 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:40 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:41.999 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T14:56:41.999 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json
2026-03-09T14:56:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:41 vm05 ceph-mon[50611]: Updating vm09:/etc/ceph/ceph.client.admin.keyring
2026-03-09T14:56:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:41 vm05 ceph-mon[50611]: Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-09T14:56:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:41 vm05 ceph-mon[50611]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring
2026-03-09T14:56:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:41 vm05 ceph-mon[50611]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring
2026-03-09T14:56:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:41 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/3024633886' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T14:56:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:41 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:41 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:41 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:41 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-09T14:56:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:41 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished
2026-03-09T14:56:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:41 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:56:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:41 vm05 ceph-mon[50611]: Deploying daemon ceph-exporter.vm09 on vm09
2026-03-09T14:56:42.335 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf
2026-03-09T14:56:42.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.626+0000 7f5965345700 1 -- 192.168.123.109:0/31901160 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f596006d6b0 msgr2=0x7f596010edb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:56:42.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.626+0000 7f5965345700 1 --2- 192.168.123.109:0/31901160 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f596006d6b0 0x7f596010edb0 secure :-1 s=READY pgs=175 cs=0 l=1 rev1=1 crypto rx=0x7f5950009b00 tx=0x7f5950009e10 comp rx=0 tx=0).stop
2026-03-09T14:56:42.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.627+0000 7f5965345700 1 -- 192.168.123.109:0/31901160 shutdown_connections
2026-03-09T14:56:42.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.627+0000 7f5965345700 1 --2- 192.168.123.109:0/31901160 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f596006d6b0 0x7f596010edb0 unknown :-1 s=CLOSED pgs=175 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:56:42.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.627+0000 7f5965345700 1 -- 192.168.123.109:0/31901160 >> 192.168.123.109:0/31901160 conn(0x7f596006c430 msgr2=0x7f596006c830 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:56:42.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.627+0000 7f5965345700 1 -- 192.168.123.109:0/31901160 shutdown_connections
2026-03-09T14:56:42.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.628+0000 7f5965345700 1 -- 192.168.123.109:0/31901160 wait complete.
2026-03-09T14:56:42.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.628+0000 7f5965345700 1 Processor -- start
2026-03-09T14:56:42.627 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.628+0000 7f5965345700 1 -- start start
2026-03-09T14:56:42.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.628+0000 7f5965345700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f59601a41c0 0x7f59601a4590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:56:42.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.628+0000 7f5965345700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5950012070 con 0x7f59601a41c0
2026-03-09T14:56:42.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.629+0000 7f595effd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f59601a41c0 0x7f59601a4590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:56:42.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.629+0000 7f595effd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f59601a41c0 0x7f59601a4590 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:50082/0 (socket says 192.168.123.109:50082)
2026-03-09T14:56:42.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.629+0000 7f595effd700 1 -- 192.168.123.109:0/3225882723 learned_addr learned my addr 192.168.123.109:0/3225882723 (peer_addr_for_me v2:192.168.123.109:0/0)
2026-03-09T14:56:42.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.629+0000 7f595effd700 1 -- 192.168.123.109:0/3225882723 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f59500097e0 con 0x7f59601a41c0
2026-03-09T14:56:42.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.629+0000 7f595effd700 1 --2- 192.168.123.109:0/3225882723 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f59601a41c0 0x7f59601a4590 secure :-1 s=READY pgs=176 cs=0 l=1 rev1=1 crypto rx=0x7f595000c010 tx=0x7f595000bba0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:56:42.628 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.629+0000 7f5947fff700 1 -- 192.168.123.109:0/3225882723 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f595001c070 con 0x7f59601a41c0
2026-03-09T14:56:42.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.629+0000 7f5965345700 1 -- 192.168.123.109:0/3225882723 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f59601a4b30 con 0x7f59601a41c0
2026-03-09T14:56:42.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.629+0000 7f5965345700 1 -- 192.168.123.109:0/3225882723 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f59601a8b20 con 0x7f59601a41c0
2026-03-09T14:56:42.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.630+0000 7f5947fff700 1 -- 192.168.123.109:0/3225882723 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5950003e30 con 0x7f59601a41c0
2026-03-09T14:56:42.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.630+0000 7f5947fff700 1 -- 192.168.123.109:0/3225882723 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f5950017440 con 0x7f59601a41c0
2026-03-09T14:56:42.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.631+0000 7f5947fff700 1 -- 192.168.123.109:0/3225882723 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 17) v1 ==== 45198+0+0 (secure 0 0 0) 0x7f59500175a0 con 0x7f59601a41c0
2026-03-09T14:56:42.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.631+0000 7f5947fff700 1 --2- 192.168.123.109:0/3225882723 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5948038640 0x7f594803aaf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:56:42.630 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.631+0000 7f5947fff700 1 -- 192.168.123.109:0/3225882723 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7f595004d240 con 0x7f59601a41c0
2026-03-09T14:56:42.631 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.631+0000 7f595e7fc700 1 --2- 192.168.123.109:0/3225882723 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5948038640 0x7f594803aaf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:56:42.634 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.635+0000 7f595e7fc700 1 --2- 192.168.123.109:0/3225882723 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5948038640 0x7f594803aaf0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f595800ad30 tx=0x7f59580093f0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:56:42.634 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.635+0000 7f5965345700 1 -- 192.168.123.109:0/3225882723 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f596004efc0 con 0x7f59601a41c0
2026-03-09T14:56:42.638 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.638+0000 7f5947fff700 1 -- 192.168.123.109:0/3225882723 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f595002a430 con 0x7f59601a41c0
2026-03-09T14:56:42.819 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.819+0000 7f5965345700 1 -- 192.168.123.109:0/3225882723 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f5960062380 con 0x7f59601a41c0
2026-03-09T14:56:42.821 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.822+0000 7f5947fff700 1 -- 192.168.123.109:0/3225882723 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f595001fb60 con 0x7f59601a41c0
2026-03-09T14:56:42.822 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T14:56:42.822 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-09T14:56:42.838 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.838+0000 7f5965345700 1 -- 192.168.123.109:0/3225882723 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5948038640 msgr2=0x7f594803aaf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:56:42.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.838+0000 7f5965345700 1 --2- 192.168.123.109:0/3225882723 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5948038640 0x7f594803aaf0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f595800ad30 tx=0x7f59580093f0 comp rx=0 tx=0).stop
2026-03-09T14:56:42.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.838+0000 7f5965345700 1 -- 192.168.123.109:0/3225882723 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f59601a41c0 msgr2=0x7f59601a4590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:56:42.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.838+0000 7f5965345700 1 --2- 192.168.123.109:0/3225882723 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f59601a41c0 0x7f59601a4590 secure :-1 s=READY pgs=176 cs=0 l=1 rev1=1 crypto rx=0x7f595000c010 tx=0x7f595000bba0 comp rx=0 tx=0).stop
2026-03-09T14:56:42.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.839+0000 7f5965345700 1 -- 192.168.123.109:0/3225882723 shutdown_connections
2026-03-09T14:56:42.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.839+0000 7f5965345700 1 --2- 192.168.123.109:0/3225882723 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5948038640 0x7f594803aaf0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:56:42.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.839+0000 7f5965345700 1 --2- 192.168.123.109:0/3225882723 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f59601a41c0 0x7f59601a4590 unknown :-1 s=CLOSED pgs=176 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:56:42.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.839+0000 7f5965345700 1 -- 192.168.123.109:0/3225882723 >> 192.168.123.109:0/3225882723 conn(0x7f596006c430 msgr2=0x7f596010b660 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:56:42.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.839+0000 7f5965345700 1 -- 192.168.123.109:0/3225882723 shutdown_connections
2026-03-09T14:56:42.839 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:42.839+0000 7f5965345700 1 -- 192.168.123.109:0/3225882723 wait complete.
2026-03-09T14:56:42.844 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1
2026-03-09T14:56:43.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:43.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:43.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:43.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:43.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-09T14:56:43.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
2026-03-09T14:56:43.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:56:43.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:43 vm05 ceph-mon[50611]: Deploying daemon crash.vm09 on vm09
2026-03-09T14:56:43.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:43 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/3225882723' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T14:56:44.032 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-09T14:56:44.032 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json
2026-03-09T14:56:44.195 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf
2026-03-09T14:56:44.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.473+0000 7f3249880700 1 -- 192.168.123.109:0/3539495768 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3244106150 msgr2=0x7f3244106520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:56:44.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.473+0000 7f3249880700 1 --2- 192.168.123.109:0/3539495768 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3244106150 0x7f3244106520 secure :-1 s=READY pgs=177 cs=0 l=1 rev1=1 crypto rx=0x7f322c009b00 tx=0x7f322c009e10 comp rx=0 tx=0).stop
2026-03-09T14:56:44.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.474+0000 7f3249880700 1 -- 192.168.123.109:0/3539495768 shutdown_connections
2026-03-09T14:56:44.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.474+0000 7f3249880700 1 --2- 192.168.123.109:0/3539495768 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3244106150 0x7f3244106520 unknown :-1 s=CLOSED pgs=177 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:56:44.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.474+0000 7f3249880700 1 -- 192.168.123.109:0/3539495768 >> 192.168.123.109:0/3539495768 conn(0x7f32440f9d90 msgr2=0x7f32440fc1a0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:56:44.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.474+0000 7f3249880700 1 -- 192.168.123.109:0/3539495768 shutdown_connections
2026-03-09T14:56:44.474 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.474+0000 7f3249880700 1 -- 192.168.123.109:0/3539495768 wait complete.
2026-03-09T14:56:44.474 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.474+0000 7f3249880700 1 Processor -- start
2026-03-09T14:56:44.474 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.475+0000 7f3249880700 1 -- start start
2026-03-09T14:56:44.474 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.475+0000 7f3249880700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3244106150 0x7f3244195590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:56:44.474 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.475+0000 7f3249880700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3244195ad0 con 0x7f3244106150
2026-03-09T14:56:44.474 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.475+0000 7f3242ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3244106150 0x7f3244195590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:56:44.474 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.475+0000 7f3242ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3244106150 0x7f3244195590 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:50104/0 (socket says 192.168.123.109:50104)
2026-03-09T14:56:44.474 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.475+0000 7f3242ffd700 1 -- 192.168.123.109:0/1832913256 learned_addr learned my addr 192.168.123.109:0/1832913256 (peer_addr_for_me v2:192.168.123.109:0/0)
2026-03-09T14:56:44.475 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.475+0000 7f3242ffd700 1 -- 192.168.123.109:0/1832913256 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f322c0097e0 con 0x7f3244106150
2026-03-09T14:56:44.475 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.476+0000 7f3242ffd700 1 --2- 192.168.123.109:0/1832913256 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3244106150 0x7f3244195590 secure :-1 s=READY pgs=178 cs=0 l=1 rev1=1 crypto rx=0x7f322c006010 tx=0x7f322c004dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:56:44.475 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.476+0000 7f324887e700 1 -- 192.168.123.109:0/1832913256 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f322c01c070 con 0x7f3244106150
2026-03-09T14:56:44.475 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.476+0000 7f3249880700 1 -- 192.168.123.109:0/1832913256 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3244195cd0 con 0x7f3244106150
2026-03-09T14:56:44.475 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.476+0000 7f3249880700 1 -- 192.168.123.109:0/1832913256 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f32441960f0 con 0x7f3244106150
2026-03-09T14:56:44.475 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.476+0000 7f324887e700 1 -- 192.168.123.109:0/1832913256 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f322c021470 con 0x7f3244106150
2026-03-09T14:56:44.475 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.476+0000 7f324887e700 1 -- 192.168.123.109:0/1832913256 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f322c021e70 con 0x7f3244106150
2026-03-09T14:56:44.476 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.477+0000 7f324887e700 1 -- 192.168.123.109:0/1832913256 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 17) v1 ==== 45198+0+0 (secure 0 0 0) 0x7f322c005320 con 0x7f3244106150
2026-03-09T14:56:44.476 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.477+0000 7f3249880700 1 -- 192.168.123.109:0/1832913256 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3224005320 con 0x7f3244106150
2026-03-09T14:56:44.476 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.477+0000 7f324887e700 1 --2- 192.168.123.109:0/1832913256 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f3230038600 0x7f323003aab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:56:44.476 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.477+0000 7f324887e700 1 -- 192.168.123.109:0/1832913256 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7f322c04c490 con 0x7f3244106150
2026-03-09T14:56:44.476 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.477+0000 7f32427fc700 1 --2- 192.168.123.109:0/1832913256 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f3230038600 0x7f323003aab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:56:44.477 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.478+0000 7f32427fc700 1 --2- 192.168.123.109:0/1832913256 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f3230038600 0x7f323003aab0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f3234006fd0 tx=0x7f3234006e40 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:56:44.479 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.480+0000 7f324887e700 1 -- 192.168.123.109:0/1832913256 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f322c029b70 con 0x7f3244106150
2026-03-09T14:56:44.633 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.633+0000 7f3249880700 1 -- 192.168.123.109:0/1832913256 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f3224005190 con 0x7f3244106150
2026-03-09T14:56:44.633 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.634+0000 7f324887e700 1 -- 192.168.123.109:0/1832913256 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f322c026030 con 0x7f3244106150
2026-03-09T14:56:44.633 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T14:56:44.633 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-09T14:56:44.636 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.636+0000 7f3249880700 1 -- 192.168.123.109:0/1832913256 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f3230038600 msgr2=0x7f323003aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:56:44.636 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.636+0000 7f3249880700 1 --2- 192.168.123.109:0/1832913256 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f3230038600 0x7f323003aab0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f3234006fd0 tx=0x7f3234006e40 comp rx=0 tx=0).stop
2026-03-09T14:56:44.636 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.636+0000 7f3249880700 1 -- 192.168.123.109:0/1832913256 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3244106150 msgr2=0x7f3244195590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:56:44.636 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.636+0000 7f3249880700 1 --2- 192.168.123.109:0/1832913256 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3244106150 0x7f3244195590 secure :-1 s=READY pgs=178 cs=0 l=1 rev1=1 crypto rx=0x7f322c006010 tx=0x7f322c004dc0 comp rx=0 tx=0).stop
2026-03-09T14:56:44.636 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.637+0000 7f3249880700 1 -- 192.168.123.109:0/1832913256 shutdown_connections
2026-03-09T14:56:44.636 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.637+0000 7f3249880700 1 --2- 192.168.123.109:0/1832913256 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f3230038600 0x7f323003aab0 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:56:44.636 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.637+0000 7f3249880700 1 --2- 192.168.123.109:0/1832913256 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3244106150 0x7f3244195590 unknown :-1 s=CLOSED pgs=178 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:56:44.636 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.637+0000 7f3249880700 1 -- 192.168.123.109:0/1832913256 >> 192.168.123.109:0/1832913256 conn(0x7f32440f9d90 msgr2=0x7f32440fbae0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:56:44.636 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.637+0000 7f3249880700 1 -- 192.168.123.109:0/1832913256 shutdown_connections
2026-03-09T14:56:44.636 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:44.637+0000 7f3249880700 1 -- 192.168.123.109:0/1832913256 wait complete.
2026-03-09T14:56:44.637 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1
2026-03-09T14:56:44.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:44 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:44.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:44 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:44.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:44 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:44.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:44 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:44.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:44 vm05 ceph-mon[50611]: Deploying daemon node-exporter.vm09 on vm09
2026-03-09T14:56:45.711 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-09T14:56:45.711 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json
2026-03-09T14:56:45.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:45 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/1832913256' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T14:56:45.917 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf
2026-03-09T14:56:46.280 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.279+0000 7f630359e700 1 -- 192.168.123.109:0/2932180674 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f62fc099230 msgr2=0x7f62fc099600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:56:46.280 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.280+0000 7f630359e700 1 --2- 192.168.123.109:0/2932180674 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f62fc099230 0x7f62fc099600 secure :-1 s=READY pgs=179 cs=0 l=1 rev1=1 crypto rx=0x7f62f4009b00 tx=0x7f62f4009e10 comp rx=0 tx=0).stop
2026-03-09T14:56:46.280 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.280+0000 7f630359e700 1 -- 192.168.123.109:0/2932180674 shutdown_connections
2026-03-09T14:56:46.280 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.280+0000 7f630359e700 1 --2- 192.168.123.109:0/2932180674 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f62fc099230 0x7f62fc099600 unknown :-1 s=CLOSED pgs=179 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:56:46.280 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.280+0000 7f630359e700 1 -- 192.168.123.109:0/2932180674 >> 192.168.123.109:0/2932180674 conn(0x7f62fc00e300 msgr2=0x7f62fc00e700 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:56:46.283 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.284+0000 7f630359e700 1 -- 192.168.123.109:0/2932180674 shutdown_connections
2026-03-09T14:56:46.283 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.284+0000 7f630359e700 1 -- 192.168.123.109:0/2932180674 wait complete.
2026-03-09T14:56:46.284 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.284+0000 7f630359e700 1 Processor -- start 2026-03-09T14:56:46.284 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.285+0000 7f630359e700 1 -- start start 2026-03-09T14:56:46.284 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.285+0000 7f630359e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f62fc099230 0x7f62fc132640 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:46.284 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.285+0000 7f630359e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f62fc132b80 con 0x7f62fc099230 2026-03-09T14:56:46.284 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.285+0000 7f630259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f62fc099230 0x7f62fc132640 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:46.285 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.285+0000 7f630259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f62fc099230 0x7f62fc132640 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:49388/0 (socket says 192.168.123.109:49388) 2026-03-09T14:56:46.285 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.285+0000 7f630259c700 1 -- 192.168.123.109:0/715853318 learned_addr learned my addr 192.168.123.109:0/715853318 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:46.285 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.286+0000 7f630259c700 1 -- 192.168.123.109:0/715853318 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f62f40097e0 con 0x7f62fc099230 2026-03-09T14:56:46.285 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.286+0000 7f630259c700 1 --2- 192.168.123.109:0/715853318 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f62fc099230 0x7f62fc132640 secure :-1 s=READY pgs=180 cs=0 l=1 rev1=1 crypto rx=0x7f62f4005e00 tx=0x7f62f40050b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:46.285 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.286+0000 7f62f37fe700 1 -- 192.168.123.109:0/715853318 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f62f401c070 con 0x7f62fc099230 2026-03-09T14:56:46.285 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.286+0000 7f630359e700 1 -- 192.168.123.109:0/715853318 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f62fc132de0 con 0x7f62fc099230 2026-03-09T14:56:46.286 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.286+0000 7f62f37fe700 1 -- 192.168.123.109:0/715853318 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f62f40054a0 con 0x7f62fc099230 2026-03-09T14:56:46.286 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.286+0000 7f630359e700 1 -- 192.168.123.109:0/715853318 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f62fc12c840 con 0x7f62fc099230 2026-03-09T14:56:46.286 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.286+0000 7f62f37fe700 1 -- 192.168.123.109:0/715853318 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7f62f400f680 con 0x7f62fc099230 2026-03-09T14:56:46.287 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.287+0000 7f62f17fa700 1 -- 192.168.123.109:0/715853318 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f62fc008730 con 0x7f62fc099230 2026-03-09T14:56:46.287 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.287+0000 7f62f37fe700 1 -- 192.168.123.109:0/715853318 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 17) v1 ==== 45198+0+0 (secure 0 0 0) 0x7f62f4021b50 con 0x7f62fc099230 2026-03-09T14:56:46.287 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.288+0000 7f62f37fe700 1 --2- 192.168.123.109:0/715853318 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f62ec038650 0x7f62ec03ab00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:46.287 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.288+0000 7f62f37fe700 1 -- 192.168.123.109:0/715853318 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7f62f404c620 con 0x7f62fc099230 2026-03-09T14:56:46.287 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.288+0000 7f6301d9b700 1 --2- 192.168.123.109:0/715853318 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f62ec038650 0x7f62ec03ab00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:46.288 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.288+0000 7f6301d9b700 1 --2- 192.168.123.109:0/715853318 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f62ec038650 0x7f62ec03ab00 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f62f8006fd0 tx=0x7f62f8006e40 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:46.290 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.291+0000 7f62f37fe700 1 -- 192.168.123.109:0/715853318 <== mon.0 v2:192.168.123.105:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f62f4021990 con 0x7f62fc099230 2026-03-09T14:56:46.460 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.459+0000 7f62f17fa700 1 -- 192.168.123.109:0/715853318 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f62fc004030 con 0x7f62fc099230 2026-03-09T14:56:46.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.467+0000 7f62f37fe700 1 -- 192.168.123.109:0/715853318 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7f62f4026030 con 0x7f62fc099230 2026-03-09T14:56:46.466 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:46.467 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:46.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.470+0000 7f62f17fa700 1 -- 192.168.123.109:0/715853318 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f62ec038650 msgr2=0x7f62ec03ab00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:46.469 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.470+0000 7f62f17fa700 1 --2- 192.168.123.109:0/715853318 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f62ec038650 0x7f62ec03ab00 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f62f8006fd0 tx=0x7f62f8006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:46.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.470+0000 7f62f17fa700 1 -- 192.168.123.109:0/715853318 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f62fc099230 msgr2=0x7f62fc132640 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:46.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.470+0000 7f62f17fa700 1 --2- 192.168.123.109:0/715853318 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f62fc099230 0x7f62fc132640 secure :-1 s=READY pgs=180 cs=0 l=1 rev1=1 crypto rx=0x7f62f4005e00 tx=0x7f62f40050b0 comp rx=0 tx=0).stop 2026-03-09T14:56:46.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.470+0000 7f62f17fa700 1 -- 192.168.123.109:0/715853318 shutdown_connections 2026-03-09T14:56:46.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.470+0000 7f62f17fa700 1 --2- 192.168.123.109:0/715853318 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f62ec038650 0x7f62ec03ab00 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:46.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.470+0000 7f62f17fa700 1 --2- 192.168.123.109:0/715853318 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f62fc099230 0x7f62fc132640 unknown :-1 s=CLOSED pgs=180 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:46.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.470+0000 7f62f17fa700 1 -- 192.168.123.109:0/715853318 >> 192.168.123.109:0/715853318 conn(0x7f62fc00e300 msgr2=0x7f62fc097140 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T14:56:46.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.470+0000 7f62f17fa700 1 -- 192.168.123.109:0/715853318 shutdown_connections 2026-03-09T14:56:46.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:46.470+0000 7f62f17fa700 1 -- 192.168.123.109:0/715853318 wait complete. 2026-03-09T14:56:46.470 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:47.565 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T14:56:47.565 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/715853318' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.cfuwdz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm09.cfuwdz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: Deploying daemon mgr.vm09.cfuwdz on vm09 2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 
2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:47 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:47.912 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T14:56:48.361 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.360+0000 7fc0fb547700 1 -- 192.168.123.109:0/2736186897 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0f406d6b0 msgr2=0x7fc0f410edb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:48.361 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.360+0000 7fc0fb547700 1 --2- 192.168.123.109:0/2736186897 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0f406d6b0 0x7fc0f410edb0 secure :-1 s=READY pgs=183 cs=0 l=1 rev1=1 crypto rx=0x7fc0f0009b00 tx=0x7fc0f0009e10 comp rx=0 tx=0).stop 2026-03-09T14:56:48.361 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.360+0000 7fc0fb547700 1 -- 192.168.123.109:0/2736186897 shutdown_connections 2026-03-09T14:56:48.361 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.360+0000 7fc0fb547700 1 --2- 192.168.123.109:0/2736186897 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0f406d6b0 0x7fc0f410edb0 unknown :-1 s=CLOSED pgs=183 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:48.361 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.360+0000 7fc0fb547700 1 -- 192.168.123.109:0/2736186897 >> 192.168.123.109:0/2736186897 conn(0x7fc0f406c430 msgr2=0x7fc0f406c830 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:48.361 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.360+0000 7fc0fb547700 1 -- 
192.168.123.109:0/2736186897 shutdown_connections 2026-03-09T14:56:48.361 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.360+0000 7fc0fb547700 1 -- 192.168.123.109:0/2736186897 wait complete. 2026-03-09T14:56:48.361 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.361+0000 7fc0fb547700 1 Processor -- start 2026-03-09T14:56:48.361 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.361+0000 7fc0fb547700 1 -- start start 2026-03-09T14:56:48.361 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.361+0000 7fc0fb547700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0f406d6b0 0x7fc0f4114b70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:48.361 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.361+0000 7fc0fb547700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc0f4117b50 con 0x7fc0f406d6b0 2026-03-09T14:56:48.363 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.362+0000 7fc0f92e3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0f406d6b0 0x7fc0f4114b70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:48.363 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.362+0000 7fc0f92e3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0f406d6b0 0x7fc0f4114b70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.109:49414/0 (socket says 192.168.123.109:49414) 2026-03-09T14:56:48.363 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.362+0000 7fc0f92e3700 1 -- 192.168.123.109:0/1345822051 learned_addr learned my addr 192.168.123.109:0/1345822051 (peer_addr_for_me v2:192.168.123.109:0/0) 
2026-03-09T14:56:48.363 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.362+0000 7fc0f92e3700 1 -- 192.168.123.109:0/1345822051 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc0f00097e0 con 0x7fc0f406d6b0 2026-03-09T14:56:48.364 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.362+0000 7fc0f92e3700 1 --2- 192.168.123.109:0/1345822051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0f406d6b0 0x7fc0f4114b70 secure :-1 s=READY pgs=184 cs=0 l=1 rev1=1 crypto rx=0x7fc0f0006010 tx=0x7fc0f0004dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:48.364 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.362+0000 7fc0ea7fc700 1 -- 192.168.123.109:0/1345822051 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fc0f001c070 con 0x7fc0f406d6b0 2026-03-09T14:56:48.364 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.362+0000 7fc0ea7fc700 1 -- 192.168.123.109:0/1345822051 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc0f0021470 con 0x7fc0f406d6b0 2026-03-09T14:56:48.364 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.362+0000 7fc0ea7fc700 1 -- 192.168.123.109:0/1345822051 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 214+0+0 (secure 0 0 0) 0x7fc0f000f460 con 0x7fc0f406d6b0 2026-03-09T14:56:48.364 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.364+0000 7fc0fb547700 1 -- 192.168.123.109:0/1345822051 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc0f4115110 con 0x7fc0f406d6b0 2026-03-09T14:56:48.364 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.364+0000 7fc0fb547700 1 -- 192.168.123.109:0/1345822051 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 
0x7fc0f4115590 con 0x7fc0f406d6b0 2026-03-09T14:56:48.365 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.365+0000 7fc0ea7fc700 1 -- 192.168.123.109:0/1345822051 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 17) v1 ==== 45198+0+0 (secure 0 0 0) 0x7fc0f00215e0 con 0x7fc0f406d6b0 2026-03-09T14:56:48.365 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.365+0000 7fc0ea7fc700 1 --2- 192.168.123.109:0/1345822051 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc0e0038600 0x7fc0e003aab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:48.365 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.365+0000 7fc0ea7fc700 1 -- 192.168.123.109:0/1345822051 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7fc0f004c620 con 0x7fc0f406d6b0 2026-03-09T14:56:48.365 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.366+0000 7fc0f8ae2700 1 --2- 192.168.123.109:0/1345822051 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc0e0038600 0x7fc0e003aab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:48.365 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.366+0000 7fc0f8ae2700 1 --2- 192.168.123.109:0/1345822051 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc0e0038600 0x7fc0e003aab0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fc0e4006fd0 tx=0x7fc0e4006e40 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:48.366 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.366+0000 7fc0fb547700 1 -- 192.168.123.109:0/1345822051 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc0f410e420 con 0x7fc0f406d6b0 
2026-03-09T14:56:48.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.372+0000 7fc0ea7fc700 1 -- 192.168.123.109:0/1345822051 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fc0f0026030 con 0x7fc0f406d6b0 2026-03-09T14:56:48.635 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.635+0000 7fc0fb547700 1 -- 192.168.123.109:0/1345822051 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7fc0f404efc0 con 0x7fc0f406d6b0 2026-03-09T14:56:48.635 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.636+0000 7fc0ea7fc700 1 -- 192.168.123.109:0/1345822051 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 1 v1) v1 ==== 95+0+751 (secure 0 0 0) 0x7fc0f002aa30 con 0x7fc0f406d6b0 2026-03-09T14:56:48.636 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:48.636 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:55:09.447382Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:56:48.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.643+0000 7fc0fb547700 1 -- 192.168.123.109:0/1345822051 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc0e0038600 msgr2=0x7fc0e003aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:48.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.643+0000 7fc0fb547700 1 --2- 192.168.123.109:0/1345822051 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc0e0038600 0x7fc0e003aab0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fc0e4006fd0 tx=0x7fc0e4006e40 comp rx=0 tx=0).stop 2026-03-09T14:56:48.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.643+0000 7fc0fb547700 1 -- 192.168.123.109:0/1345822051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0f406d6b0 msgr2=0x7fc0f4114b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:48.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.643+0000 7fc0fb547700 1 --2- 192.168.123.109:0/1345822051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0f406d6b0 0x7fc0f4114b70 secure :-1 s=READY pgs=184 cs=0 l=1 rev1=1 crypto rx=0x7fc0f0006010 tx=0x7fc0f0004dc0 comp rx=0 tx=0).stop 2026-03-09T14:56:48.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.643+0000 7fc0fb547700 1 -- 192.168.123.109:0/1345822051 shutdown_connections 2026-03-09T14:56:48.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.643+0000 7fc0fb547700 1 --2- 192.168.123.109:0/1345822051 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc0e0038600 0x7fc0e003aab0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:48.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.643+0000 7fc0fb547700 1 --2- 192.168.123.109:0/1345822051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0f406d6b0 0x7fc0f4114b70 unknown :-1 s=CLOSED pgs=184 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:48.642 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.643+0000 7fc0fb547700 1 -- 192.168.123.109:0/1345822051 >> 192.168.123.109:0/1345822051 conn(0x7fc0f406c430 msgr2=0x7fc0f410b560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:48.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.643+0000 7fc0fb547700 1 -- 192.168.123.109:0/1345822051 shutdown_connections 2026-03-09T14:56:48.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:48.643+0000 7fc0fb547700 1 -- 192.168.123.109:0/1345822051 wait complete. 2026-03-09T14:56:48.646 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T14:56:48.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:48 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:48.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:48 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:48.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:48 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:56:48.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:48 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:48.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:48 vm05 ceph-mon[50611]: Deploying daemon mon.vm09 on vm09 2026-03-09T14:56:49.169 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(???) 
e0 preinit fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).mds e1 new map 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).mds e1 print_map 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout: e1 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout: legacy client fscid: -1 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout: 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout: No filesystems configured 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: 
mon.vm09@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).osd e5 e5: 0 total, 0 up, 0 in 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).osd e5 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 192.168.123.109:0/2862187273' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/3795034028' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 192.168.123.109:0/3382801191' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 192.168.123.109:0/598832226' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14164 
192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 192.168.123.109:0/239197554' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Deploying daemon prometheus.vm05 on vm05 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 192.168.123.109:0/2578856591' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 192.168.123.109:0/1326420354' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/1556082586' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14164 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mgrmap e14: vm05.lhsexd(active, since 53s) 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 192.168.123.109:0/3562467274' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 192.168.123.109:0/3444045500' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/2572920289' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Active manager daemon vm05.lhsexd restarted 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Activating manager daemon vm05.lhsexd 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: osdmap e5: 0 total, 0 up, 0 in 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mgrmap e15: vm05.lhsexd(active, starting, since 0.00454509s) 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm05.lhsexd", "id": "vm05.lhsexd"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Manager daemon vm05.lhsexd is now available 2026-03-09T14:56:49.617 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/trash_purge_schedule"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/1863028700' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mgrmap e16: vm05.lhsexd(active, since 1.01001s) 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: [09/Mar/2026:14:56:38] ENGINE Bus STARTING 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: [09/Mar/2026:14:56:38] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: [09/Mar/2026:14:56:38] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: [09/Mar/2026:14:56:38] ENGINE Bus STARTED 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/4284446722' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' 
entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mgrmap e17: vm05.lhsexd(active, since 2s) 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 
09 14:56:49 vm09 ceph-mon[59673]: from='client.? 192.168.123.109:0/3024633886' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Deploying daemon ceph-exporter.vm09 on vm09 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 
ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Deploying daemon crash.vm09 on vm09 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/3225882723' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Deploying daemon node-exporter.vm09 on vm09 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 192.168.123.109:0/1832913256' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/715853318' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.cfuwdz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm09.cfuwdz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Deploying daemon mgr.vm09.cfuwdz on vm09 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 
2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:49.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:56:49.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:49.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: Deploying daemon mon.vm09 on vm09 2026-03-09T14:56:49.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3 2026-03-09T14:56:49.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:49 vm09 ceph-mon[59673]: mon.vm09@-1(synchronizing) e1 handle_conf_change mon_allow_pool_delete,mon_cluster_log_to_file 2026-03-09T14:56:49.927 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-09T14:56:49.927 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mon dump -f json 2026-03-09T14:56:50.129 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm09/config 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: mon.vm05 calling monitor election 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: mon.vm09 calling monitor election 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.? 192.168.123.109:0/75644177' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/crt"}]: dispatch 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: mon.vm05 is new leader, mons vm05,vm09 in quorum (ranks 0,1) 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: monmap e2: 2 mons at {vm05=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],vm09=[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0]} removed_ranks: {} 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: fsmap 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: osdmap e5: 0 total, 0 up, 0 in 2026-03-09T14:56:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: mgrmap e17: vm05.lhsexd(active, since 17s) 2026-03-09T14:56:54.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: Standby manager daemon vm09.cfuwdz started 2026-03-09T14:56:54.555 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.? 192.168.123.109:0/75644177' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:56:54.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.? 192.168.123.109:0/75644177' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/key"}]: dispatch 2026-03-09T14:56:54.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.? 192.168.123.109:0/75644177' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:56:54.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: overall HEALTH_OK 2026-03-09T14:56:54.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:54.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:54.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:54.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:54.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T14:56:54.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.616 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: mon.vm05 calling monitor election 2026-03-09T14:56:54.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: mon.vm09 calling monitor election 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.? 
192.168.123.109:0/75644177' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/crt"}]: dispatch 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: mon.vm05 is new leader, mons vm05,vm09 in quorum (ranks 0,1) 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: monmap e2: 2 mons at {vm05=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],vm09=[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0]} removed_ranks: {} 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: fsmap 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: osdmap e5: 0 total, 0 up, 0 in 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: mgrmap e17: vm05.lhsexd(active, since 17s) 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: Standby manager daemon vm09.cfuwdz started 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.? 192.168.123.109:0/75644177' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.? 
192.168.123.109:0/75644177' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/key"}]: dispatch 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.? 192.168.123.109:0/75644177' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: overall HEALTH_OK 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:54.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:54.855 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.855+0000 7f49967fc700 1 -- 192.168.123.109:0/2192014847 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f498801d650 con 0x7f4998071980 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.856+0000 7f499c8b3700 1 -- 192.168.123.109:0/2192014847 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4998071980 msgr2=0x7f4980005610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.856+0000 7f499c8b3700 1 --2- 192.168.123.109:0/2192014847 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4998071980 0x7f4980005610 secure :-1 s=READY pgs=187 cs=0 
l=1 rev1=1 crypto rx=0x7f4988005fa0 tx=0x7f498800ff70 comp rx=0 tx=0).stop 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.856+0000 7f499c8b3700 1 -- 192.168.123.109:0/2192014847 shutdown_connections 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.856+0000 7f499c8b3700 1 --2- 192.168.123.109:0/2192014847 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4998071980 0x7f4980005610 secure :-1 s=CLOSED pgs=187 cs=0 l=1 rev1=1 crypto rx=0x7f4988005fa0 tx=0x7f498800ff70 comp rx=0 tx=0).stop 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.856+0000 7f499c8b3700 1 -- 192.168.123.109:0/2192014847 >> 192.168.123.109:0/2192014847 conn(0x7f499806b290 msgr2=0x7f499806b690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.856+0000 7f499c8b3700 1 -- 192.168.123.109:0/2192014847 shutdown_connections 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.856+0000 7f499c8b3700 1 -- 192.168.123.109:0/2192014847 wait complete. 
2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.856+0000 7f499c8b3700 1 Processor -- start 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.856+0000 7f499c8b3700 1 -- start start 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.857+0000 7f499c8b3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49981a47a0 0x7f49981a4b70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.857+0000 7f499c8b3700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f49981a8f60 0x7f49981a93d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.857+0000 7f499c8b3700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f49981a5250 con 0x7f49981a47a0 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.857+0000 7f499c8b3700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f49981a9910 con 0x7f49981a8f60 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.857+0000 7f49977fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49981a47a0 0x7f49981a4b70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.857+0000 7f49977fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49981a47a0 0x7f49981a4b70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.109:40922/0 (socket says 192.168.123.109:40922) 2026-03-09T14:56:54.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.857+0000 7f49977fe700 1 -- 192.168.123.109:0/360086288 learned_addr learned my addr 192.168.123.109:0/360086288 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:56:54.857 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.857+0000 7f4996ffd700 1 --2- 192.168.123.109:0/360086288 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f49981a8f60 0x7f49981a93d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:54.857 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.857+0000 7f49977fe700 1 -- 192.168.123.109:0/360086288 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f49981a8f60 msgr2=0x7f49981a93d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:54.857 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.857+0000 7f49977fe700 1 --2- 192.168.123.109:0/360086288 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f49981a8f60 0x7f49981a93d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:54.857 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.857+0000 7f49977fe700 1 -- 192.168.123.109:0/360086288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f498800f970 con 0x7f49981a47a0 2026-03-09T14:56:54.857 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.858+0000 7f49977fe700 1 --2- 192.168.123.109:0/360086288 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49981a47a0 0x7f49981a4b70 secure :-1 s=READY pgs=188 cs=0 l=1 rev1=1 crypto rx=0x7f49880046c0 tx=0x7f4988012ee0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T14:56:54.857 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.858+0000 7f4994ff9700 1 -- 192.168.123.109:0/360086288 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f49880040e0 con 0x7f49981a47a0 2026-03-09T14:56:54.857 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.858+0000 7f499c8b3700 1 -- 192.168.123.109:0/360086288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f49981a9b60 con 0x7f49981a47a0 2026-03-09T14:56:54.857 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.858+0000 7f499c8b3700 1 -- 192.168.123.109:0/360086288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f49981af3d0 con 0x7f49981a47a0 2026-03-09T14:56:54.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.859+0000 7f4994ff9700 1 -- 192.168.123.109:0/360086288 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f4988004240 con 0x7f49981a47a0 2026-03-09T14:56:54.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.859+0000 7f4994ff9700 1 -- 192.168.123.109:0/360086288 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f498801d440 con 0x7f49981a47a0 2026-03-09T14:56:54.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.860+0000 7f4994ff9700 1 -- 192.168.123.109:0/360086288 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 18) v1 ==== 89854+0+0 (secure 0 0 0) 0x7f4988005030 con 0x7f49981a47a0 2026-03-09T14:56:54.859 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.860+0000 7f497e7fc700 1 -- 192.168.123.109:0/360086288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f49800017f0 con 0x7f49981a47a0 2026-03-09T14:56:54.860 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.860+0000 
7f4994ff9700 1 --2- 192.168.123.109:0/360086288 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f498406c700 0x7f498406ebb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:54.860 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.861+0000 7f4994ff9700 1 -- 192.168.123.109:0/360086288 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7f498808d4f0 con 0x7f49981a47a0 2026-03-09T14:56:54.860 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.861+0000 7f4996ffd700 1 --2- 192.168.123.109:0/360086288 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f498406c700 0x7f498406ebb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:54.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.861+0000 7f4996ffd700 1 --2- 192.168.123.109:0/360086288 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f498406c700 0x7f498406ebb0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f498c0078e0 tx=0x7f498c008040 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:54.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:54.863+0000 7f4994ff9700 1 -- 192.168.123.109:0/360086288 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f4988058d40 con 0x7f49981a47a0 2026-03-09T14:56:55.024 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.025+0000 7f497e7fc700 1 -- 192.168.123.109:0/360086288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) v1 -- 0x7f4980001660 con 0x7f49981a47a0 2026-03-09T14:56:55.026 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.027+0000 
7f4994ff9700 1 -- 192.168.123.109:0/360086288 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 2 v2) v1 ==== 95+0+1032 (secure 0 0 0) 0x7f4988022420 con 0x7f49981a47a0 2026-03-09T14:56:55.026 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:56:55.026 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":2,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","modified":"2026-03-09T14:56:49.179978Z","created":"2026-03-09T14:55:09.447382Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"vm05","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"vm09","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-09T14:56:55.029 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.029+0000 7f497e7fc700 1 -- 192.168.123.109:0/360086288 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f498406c700 msgr2=0x7f498406ebb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:55.029 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.029+0000 7f497e7fc700 1 --2- 192.168.123.109:0/360086288 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f498406c700 0x7f498406ebb0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto 
rx=0x7f498c0078e0 tx=0x7f498c008040 comp rx=0 tx=0).stop 2026-03-09T14:56:55.029 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.030+0000 7f497e7fc700 1 -- 192.168.123.109:0/360086288 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49981a47a0 msgr2=0x7f49981a4b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:55.029 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.030+0000 7f497e7fc700 1 --2- 192.168.123.109:0/360086288 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49981a47a0 0x7f49981a4b70 secure :-1 s=READY pgs=188 cs=0 l=1 rev1=1 crypto rx=0x7f49880046c0 tx=0x7f4988012ee0 comp rx=0 tx=0).stop 2026-03-09T14:56:55.030 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.030+0000 7f497e7fc700 1 -- 192.168.123.109:0/360086288 shutdown_connections 2026-03-09T14:56:55.030 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.030+0000 7f497e7fc700 1 --2- 192.168.123.109:0/360086288 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f498406c700 0x7f498406ebb0 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:55.030 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.030+0000 7f497e7fc700 1 --2- 192.168.123.109:0/360086288 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f49981a47a0 0x7f49981a4b70 unknown :-1 s=CLOSED pgs=188 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:55.030 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.030+0000 7f497e7fc700 1 --2- 192.168.123.109:0/360086288 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f49981a8f60 0x7f49981a93d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:55.030 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.030+0000 7f497e7fc700 1 -- 192.168.123.109:0/360086288 >> 192.168.123.109:0/360086288 conn(0x7f499806b290 
msgr2=0x7f499806fa60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:55.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.032+0000 7f497e7fc700 1 -- 192.168.123.109:0/360086288 shutdown_connections 2026-03-09T14:56:55.032 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:56:55.033+0000 7f497e7fc700 1 -- 192.168.123.109:0/360086288 wait complete. 2026-03-09T14:56:55.037 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 2 2026-03-09T14:56:55.101 INFO:tasks.cephadm:Generating final ceph.conf file... 2026-03-09T14:56:55.101 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph config generate-minimal-conf 2026-03-09T14:56:55.286 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:56:55.383 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:55 vm05 ceph-mon[50611]: mgrmap e18: vm05.lhsexd(active, since 17s), standbys: vm09.cfuwdz 2026-03-09T14:56:55.383 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:55 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm09.cfuwdz", "id": "vm09.cfuwdz"}]: dispatch 2026-03-09T14:56:55.383 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:55 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:56:55.383 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:55 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/360086288' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:55.383 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:55 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:55.383 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:55 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:55.383 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:55 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:55.383 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:55 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:55.383 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:55 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:56:55.383 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:55 vm05 ceph-mon[50611]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T14:56:55.383 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:55 vm05 ceph-mon[50611]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T14:56:55.601 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.599+0000 7f6b74900700 1 -- 192.168.123.105:0/877917051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6b70072730 msgr2=0x7f6b7010edb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:55.601 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.599+0000 7f6b74900700 1 --2- 192.168.123.105:0/877917051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6b70072730 0x7f6b7010edb0 secure :-1 s=READY pgs=189 cs=0 l=1 rev1=1 crypto rx=0x7f6b60009b50 
tx=0x7f6b60009e60 comp rx=0 tx=0).stop 2026-03-09T14:56:55.601 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.599+0000 7f6b74900700 1 -- 192.168.123.105:0/877917051 shutdown_connections 2026-03-09T14:56:55.601 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.599+0000 7f6b74900700 1 --2- 192.168.123.105:0/877917051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6b70072730 0x7f6b7010edb0 unknown :-1 s=CLOSED pgs=189 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:55.601 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.599+0000 7f6b74900700 1 -- 192.168.123.105:0/877917051 >> 192.168.123.105:0/877917051 conn(0x7f6b7006c410 msgr2=0x7f6b7006c810 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:55.601 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.599+0000 7f6b74900700 1 -- 192.168.123.105:0/877917051 shutdown_connections 2026-03-09T14:56:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.599+0000 7f6b74900700 1 -- 192.168.123.105:0/877917051 wait complete. 
2026-03-09T14:56:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.600+0000 7f6b74900700 1 Processor -- start 2026-03-09T14:56:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.600+0000 7f6b74900700 1 -- start start 2026-03-09T14:56:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.600+0000 7f6b74900700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6b70072730 0x7f6b70116770 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.601+0000 7f6b74900700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b70116cb0 0x7f6b701b1790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.601+0000 7f6b74900700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6b701b1cd0 con 0x7f6b70072730 2026-03-09T14:56:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.601+0000 7f6b74900700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6b701b1e40 con 0x7f6b70116cb0 2026-03-09T14:56:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.601+0000 7f6b6effd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b70116cb0 0x7f6b701b1790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.601+0000 7f6b6effd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b70116cb0 0x7f6b701b1790 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:44190/0 (socket says 192.168.123.105:44190) 2026-03-09T14:56:55.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.601+0000 7f6b6effd700 1 -- 192.168.123.105:0/3586636518 learned_addr learned my addr 192.168.123.105:0/3586636518 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:56:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.601+0000 7f6b6f7fe700 1 --2- 192.168.123.105:0/3586636518 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6b70072730 0x7f6b70116770 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.601+0000 7f6b6effd700 1 -- 192.168.123.105:0/3586636518 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b70116cb0 msgr2=0x7f6b701b1790 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 12 2026-03-09T14:56:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.601+0000 7f6b6effd700 1 -- 192.168.123.105:0/3586636518 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b70116cb0 msgr2=0x7f6b701b1790 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed 2026-03-09T14:56:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.601+0000 7f6b6effd700 1 --2- 192.168.123.105:0/3586636518 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b70116cb0 0x7f6b701b1790 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-09T14:56:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.601+0000 7f6b6effd700 1 --2- 192.168.123.105:0/3586636518 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b70116cb0 0x7f6b701b1790 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-09T14:56:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.602+0000 7f6b6f7fe700 1 -- 192.168.123.105:0/3586636518 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b70116cb0 msgr2=0x7f6b701b1790 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T14:56:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.602+0000 7f6b6f7fe700 1 --2- 192.168.123.105:0/3586636518 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b70116cb0 0x7f6b701b1790 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.602+0000 7f6b6f7fe700 1 -- 192.168.123.105:0/3586636518 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6b600097e0 con 0x7f6b70072730 2026-03-09T14:56:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.602+0000 7f6b6f7fe700 1 --2- 192.168.123.105:0/3586636518 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6b70072730 0x7f6b70116770 secure :-1 s=READY pgs=190 cs=0 l=1 rev1=1 crypto rx=0x7f6b60000c00 tx=0x7f6b60005020 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.603+0000 7f6b6cff9700 1 -- 192.168.123.105:0/3586636518 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6b6001d070 con 0x7f6b70072730 2026-03-09T14:56:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.603+0000 7f6b74900700 1 -- 192.168.123.105:0/3586636518 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6b701b20c0 con 0x7f6b70072730 2026-03-09T14:56:55.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.603+0000 7f6b74900700 1 -- 
192.168.123.105:0/3586636518 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6b701b2550 con 0x7f6b70072730 2026-03-09T14:56:55.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.604+0000 7f6b6cff9700 1 -- 192.168.123.105:0/3586636518 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f6b60004500 con 0x7f6b70072730 2026-03-09T14:56:55.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.603+0000 7f6b74900700 1 -- 192.168.123.105:0/3586636518 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6b70116580 con 0x7f6b70072730 2026-03-09T14:56:55.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.604+0000 7f6b6cff9700 1 -- 192.168.123.105:0/3586636518 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6b60003df0 con 0x7f6b70072730 2026-03-09T14:56:55.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.605+0000 7f6b6cff9700 1 -- 192.168.123.105:0/3586636518 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 18) v1 ==== 89854+0+0 (secure 0 0 0) 0x7f6b6000f460 con 0x7f6b70072730 2026-03-09T14:56:55.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.605+0000 7f6b6cff9700 1 --2- 192.168.123.105:0/3586636518 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6b5806c830 0x7f6b5806ece0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:56:55.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.605+0000 7f6b6cff9700 1 -- 192.168.123.105:0/3586636518 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7f6b6008bc10 con 0x7f6b70072730 2026-03-09T14:56:55.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.608+0000 7f6b6effd700 1 --2- 192.168.123.105:0/3586636518 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6b5806c830 0x7f6b5806ece0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:56:55.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.608+0000 7f6b6cff9700 1 -- 192.168.123.105:0/3586636518 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f6b6005bfd0 con 0x7f6b70072730 2026-03-09T14:56:55.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:55 vm09 ceph-mon[59673]: mgrmap e18: vm05.lhsexd(active, since 17s), standbys: vm09.cfuwdz 2026-03-09T14:56:55.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:55 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm09.cfuwdz", "id": "vm09.cfuwdz"}]: dispatch 2026-03-09T14:56:55.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:55 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:56:55.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:55 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/360086288' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:56:55.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:55 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:56:55.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:55 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:55.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:55 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:55.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:55 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:55.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:55 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:56:55.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:55 vm09 ceph-mon[59673]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T14:56:55.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:55 vm09 ceph-mon[59673]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T14:56:55.618 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.610+0000 7f6b6effd700 1 --2- 192.168.123.105:0/3586636518 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6b5806c830 0x7f6b5806ece0 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f6b64009180 tx=0x7f6b64008040 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:56:55.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.785+0000 7f6b74900700 1 -- 192.168.123.105:0/3586636518 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) v1 -- 0x7f6b701178e0 con 0x7f6b70072730 2026-03-09T14:56:55.797 INFO:teuthology.orchestra.run.vm05.stdout:# minimal ceph.conf for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T14:56:55.797 INFO:teuthology.orchestra.run.vm05.stdout:[global] 2026-03-09T14:56:55.797 INFO:teuthology.orchestra.run.vm05.stdout: fsid = d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T14:56:55.797 INFO:teuthology.orchestra.run.vm05.stdout: mon_host = [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 2026-03-09T14:56:55.797 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.794+0000 7f6b6cff9700 1 -- 192.168.123.105:0/3586636518 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v10) v1 ==== 76+0+235 (secure 0 0 0) 0x7f6b6005bb60 con 0x7f6b70072730 2026-03-09T14:56:55.802 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.801+0000 7f6b74900700 1 -- 192.168.123.105:0/3586636518 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6b5806c830 msgr2=0x7f6b5806ece0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:55.802 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.801+0000 7f6b74900700 1 --2- 192.168.123.105:0/3586636518 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6b5806c830 0x7f6b5806ece0 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f6b64009180 tx=0x7f6b64008040 comp rx=0 tx=0).stop 2026-03-09T14:56:55.802 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.801+0000 7f6b74900700 1 -- 192.168.123.105:0/3586636518 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6b70072730 msgr2=0x7f6b70116770 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:56:55.802 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.801+0000 7f6b74900700 1 --2- 192.168.123.105:0/3586636518 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6b70072730 0x7f6b70116770 secure :-1 s=READY pgs=190 cs=0 l=1 rev1=1 crypto rx=0x7f6b60000c00 tx=0x7f6b60005020 comp rx=0 tx=0).stop 2026-03-09T14:56:55.803 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.802+0000 7f6b74900700 1 -- 192.168.123.105:0/3586636518 shutdown_connections 2026-03-09T14:56:55.803 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.802+0000 7f6b74900700 1 --2- 192.168.123.105:0/3586636518 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6b5806c830 0x7f6b5806ece0 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:55.803 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.802+0000 7f6b74900700 1 --2- 192.168.123.105:0/3586636518 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6b70072730 0x7f6b70116770 unknown :-1 s=CLOSED pgs=190 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:55.803 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.802+0000 7f6b74900700 1 --2- 192.168.123.105:0/3586636518 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6b70116cb0 0x7f6b701b1790 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:56:55.803 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.802+0000 7f6b74900700 1 -- 192.168.123.105:0/3586636518 >> 192.168.123.105:0/3586636518 conn(0x7f6b7006c410 msgr2=0x7f6b70070b70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:56:55.804 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.803+0000 7f6b74900700 1 -- 192.168.123.105:0/3586636518 shutdown_connections 2026-03-09T14:56:55.804 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:55.803+0000 7f6b74900700 1 -- 192.168.123.105:0/3586636518 
wait complete. 2026-03-09T14:56:56.194 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-09T14:56:56.195 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T14:56:56.195 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T14:56:56.227 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T14:56:56.227 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:56:56.301 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T14:56:56.301 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T14:56:56.329 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T14:56:56.329 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:56:56.397 INFO:tasks.cephadm:Deploying OSDs... 2026-03-09T14:56:56.398 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T14:56:56.398 DEBUG:teuthology.orchestra.run.vm05:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T14:56:56.417 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:56:56.418 DEBUG:teuthology.orchestra.run.vm05:> ls /dev/[sv]d? 
2026-03-09T14:56:56.474 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vda 2026-03-09T14:56:56.474 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdb 2026-03-09T14:56:56.474 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdc 2026-03-09T14:56:56.474 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdd 2026-03-09T14:56:56.474 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vde 2026-03-09T14:56:56.474 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T14:56:56.474 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T14:56:56.474 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdb 2026-03-09T14:56:56.547 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdb 2026-03-09T14:56:56.547 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T14:56:56.547 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10 2026-03-09T14:56:56.547 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:56:56.547 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T14:56:56.547 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-09 14:55:43.723567992 +0000 2026-03-09T14:56:56.547 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 14:55:43.611567850 +0000 2026-03-09T14:56:56.547 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 14:55:43.611567850 +0000 2026-03-09T14:56:56.547 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-09 14:49:57.318000000 +0000 2026-03-09T14:56:56.548 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T14:56:56.647 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-09T14:56:56.647 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-09T14:56:56.648 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 
0.000145532 s, 3.5 MB/s 2026-03-09T14:56:56.650 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T14:56:56.678 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdc 2026-03-09T14:56:56.740 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdc 2026-03-09T14:56:56.741 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T14:56:56.741 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20 2026-03-09T14:56:56.741 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:56:56.741 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T14:56:56.741 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-09 14:55:43.788568074 +0000 2026-03-09T14:56:56.741 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 14:55:43.617567858 +0000 2026-03-09T14:56:56.741 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 14:55:43.617567858 +0000 2026-03-09T14:56:56.741 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-09 14:49:57.323000000 +0000 2026-03-09T14:56:56.741 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T14:56:56.812 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-09T14:56:56.812 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-09T14:56:56.812 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000166632 s, 3.1 MB/s 2026-03-09T14:56:56.812 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T14:56:56.812 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T14:56:56.813 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/3586636518' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: 
from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.lhsexd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:56:56.813 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:56.814 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T14:56:56.879 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdd 2026-03-09T14:56:56.942 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdd 2026-03-09T14:56:56.942 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T14:56:56.942 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-09T14:56:56.942 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:56:56.942 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T14:56:56.942 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-09 14:55:43.850568152 +0000 2026-03-09T14:56:56.942 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 14:55:43.607567845 +0000 2026-03-09T14:56:56.942 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 14:55:43.607567845 +0000 2026-03-09T14:56:56.942 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-09 14:49:57.327000000 +0000 
2026-03-09T14:56:56.942 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T14:56:57.019 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-09T14:56:57.019 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-09T14:56:57.019 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000213269 s, 2.4 MB/s 2026-03-09T14:56:57.020 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T14:56:57.101 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vde 2026-03-09T14:56:57.115 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/3586636518' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth 
get-or-create", "entity": "mgr.vm05.lhsexd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:56:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:57.162 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vde 2026-03-09T14:56:57.162 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T14:56:57.162 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-09T14:56:57.162 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:56:57.162 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T14:56:57.162 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-09 14:55:43.903568219 +0000 2026-03-09T14:56:57.162 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 14:55:43.606567844 +0000 2026-03-09T14:56:57.162 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 14:55:43.606567844 +0000 2026-03-09T14:56:57.162 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-09 14:49:57.335000000 +0000 2026-03-09T14:56:57.162 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T14:56:57.233 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-09T14:56:57.233 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-09T14:56:57.233 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.00016616 s, 3.1 MB/s 2026-03-09T14:56:57.234 DEBUG:teuthology.orchestra.run.vm05:> ! 
mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T14:56:57.301 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T14:56:57.301 DEBUG:teuthology.orchestra.run.vm09:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T14:56:57.317 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:56:57.317 DEBUG:teuthology.orchestra.run.vm09:> ls /dev/[sv]d? 2026-03-09T14:56:57.373 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vda 2026-03-09T14:56:57.373 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdb 2026-03-09T14:56:57.373 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdc 2026-03-09T14:56:57.373 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdd 2026-03-09T14:56:57.373 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vde 2026-03-09T14:56:57.373 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T14:56:57.373 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T14:56:57.373 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdb 2026-03-09T14:56:57.429 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdb 2026-03-09T14:56:57.429 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T14:56:57.429 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 239 Links: 1 Device type: fc,10 2026-03-09T14:56:57.429 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:56:57.429 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T14:56:57.429 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 14:56:39.372196029 +0000 2026-03-09T14:56:57.429 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 14:56:39.243196168 +0000 2026-03-09T14:56:57.429 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 14:56:39.243196168 +0000 2026-03-09T14:56:57.429 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-09 
14:50:33.230000000 +0000 2026-03-09T14:56:57.430 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T14:56:57.492 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T14:56:57.492 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T14:56:57.492 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000154279 s, 3.3 MB/s 2026-03-09T14:56:57.493 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T14:56:57.550 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdc 2026-03-09T14:56:57.607 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdc 2026-03-09T14:56:57.607 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T14:56:57.607 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20 2026-03-09T14:56:57.607 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:56:57.607 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T14:56:57.607 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 14:56:39.434195962 +0000 2026-03-09T14:56:57.607 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 14:56:39.250196160 +0000 2026-03-09T14:56:57.607 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 14:56:39.250196160 +0000 2026-03-09T14:56:57.607 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-09 14:50:33.236000000 +0000 2026-03-09T14:56:57.607 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T14:56:57.672 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T14:56:57.672 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T14:56:57.672 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000139512 s, 3.7 MB/s 2026-03-09T14:56:57.673 
DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T14:56:57.733 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdd 2026-03-09T14:56:57.791 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdd 2026-03-09T14:56:57.791 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T14:56:57.791 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-09T14:56:57.791 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:56:57.791 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T14:56:57.791 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 14:56:39.510195880 +0000 2026-03-09T14:56:57.791 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 14:56:39.247196164 +0000 2026-03-09T14:56:57.791 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 14:56:39.247196164 +0000 2026-03-09T14:56:57.791 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-09 14:50:33.242000000 +0000 2026-03-09T14:56:57.792 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T14:56:57.856 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T14:56:57.856 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T14:56:57.856 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000182902 s, 2.8 MB/s 2026-03-09T14:56:57.857 DEBUG:teuthology.orchestra.run.vm09:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T14:56:57.915 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vde 2026-03-09T14:56:57.976 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vde 2026-03-09T14:56:57.976 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T14:56:57.976 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-09T14:56:57.976 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:56:57.976 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T14:56:57.976 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 14:56:39.578195806 +0000 2026-03-09T14:56:57.976 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 14:56:39.246196165 +0000 2026-03-09T14:56:57.976 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 14:56:39.246196165 +0000 2026-03-09T14:56:57.976 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-09 14:50:33.248000000 +0000 2026-03-09T14:56:57.976 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T14:56:58.044 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T14:56:58.044 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T14:56:58.044 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000248144 s, 2.1 MB/s 2026-03-09T14:56:58.045 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T14:56:58.106 INFO:tasks.cephadm:Deploying osd.0 on vm05 with /dev/vde... 
2026-03-09T14:56:58.106 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- lvm zap /dev/vde 2026-03-09T14:56:58.334 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: Reconfiguring mon.vm05 (unknown last config time)... 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: Reconfiguring daemon mon.vm05 on vm05 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: Reconfiguring mgr.vm05.lhsexd (unknown last config time)... 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: Reconfiguring daemon mgr.vm05.lhsexd on vm05 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: Reconfiguring ceph-exporter.vm05 (monmap changed)... 
2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: Reconfiguring daemon ceph-exporter.vm05 on vm05 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:58.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:58 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
14:56:58 vm05 ceph-mon[50611]: Reconfiguring mon.vm05 (unknown last config time)... 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: Reconfiguring daemon mon.vm05 on vm05 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: Reconfiguring mgr.vm05.lhsexd (unknown last config time)... 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: Reconfiguring daemon mgr.vm05.lhsexd on vm05 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: Reconfiguring ceph-exporter.vm05 (monmap changed)... 
2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: Reconfiguring daemon ceph-exporter.vm05 on vm05 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:58.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:58 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:56:59.280 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
14:56:59 vm05 ceph-mon[50611]: Reconfiguring crash.vm05 (monmap changed)...
2026-03-09T14:56:59.280 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:59 vm05 ceph-mon[50611]: Reconfiguring daemon crash.vm05 on vm05
2026-03-09T14:56:59.280 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:59 vm05 ceph-mon[50611]: Reconfiguring alertmanager.vm05 (dependencies changed)...
2026-03-09T14:56:59.280 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:59 vm05 ceph-mon[50611]: Reconfiguring daemon alertmanager.vm05 on vm05
2026-03-09T14:56:59.280 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:59 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:59.280 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:56:59 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:59.354 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T14:56:59.365 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph orch daemon add osd vm05:/dev/vde
2026-03-09T14:56:59.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:59 vm09 ceph-mon[59673]: Reconfiguring crash.vm05 (monmap changed)...
2026-03-09T14:56:59.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:59 vm09 ceph-mon[59673]: Reconfiguring daemon crash.vm05 on vm05
2026-03-09T14:56:59.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:59 vm09 ceph-mon[59673]: Reconfiguring alertmanager.vm05 (dependencies changed)...
2026-03-09T14:56:59.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:59 vm09 ceph-mon[59673]: Reconfiguring daemon alertmanager.vm05 on vm05
2026-03-09T14:56:59.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:59 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:59.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:56:59 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:56:59.625 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config
2026-03-09T14:56:59.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.960+0000 7fbba736e700 1 -- 192.168.123.105:0/1794774114 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbba0071a60 msgr2=0x7fbba0071e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:56:59.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.960+0000 7fbba736e700 1 --2- 192.168.123.105:0/1794774114 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbba0071a60 0x7fbba0071e70 secure :-1 s=READY pgs=191 cs=0 l=1 rev1=1 crypto rx=0x7fbb9800b3a0 tx=0x7fbb9800b6b0 comp rx=0 tx=0).stop
2026-03-09T14:56:59.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.961+0000 7fbba736e700 1 -- 192.168.123.105:0/1794774114 shutdown_connections
2026-03-09T14:56:59.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.961+0000 7fbba736e700 1 --2- 192.168.123.105:0/1794774114 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbba0072440 0x7fbba010be90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:56:59.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.961+0000 7fbba736e700 1 --2- 192.168.123.105:0/1794774114 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbba0071a60 0x7fbba0071e70 unknown :-1 s=CLOSED pgs=191 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:56:59.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.961+0000 7fbba736e700 1 -- 192.168.123.105:0/1794774114 >> 192.168.123.105:0/1794774114 conn(0x7fbba006d1a0 msgr2=0x7fbba006f5f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T14:56:59.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.962+0000 7fbba736e700 1 -- 192.168.123.105:0/1794774114 shutdown_connections
2026-03-09T14:56:59.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.966+0000 7fbba736e700 1 -- 192.168.123.105:0/1794774114 wait complete.
2026-03-09T14:56:59.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.966+0000 7fbba736e700 1 Processor -- start
2026-03-09T14:56:59.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.967+0000 7fbba736e700 1 -- start start
2026-03-09T14:56:59.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.967+0000 7fbba736e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbba0072440 0x7fbba0116ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:56:59.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.967+0000 7fbba736e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbba0117010 0x7fbba01b2800 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:56:59.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.967+0000 7fbba736e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbba0117510 con 0x7fbba0072440
2026-03-09T14:56:59.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.967+0000 7fbba736e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbba0117680 con 0x7fbba0117010
2026-03-09T14:56:59.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.967+0000 7fbba636c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbba0072440 0x7fbba0116ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:56:59.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.967+0000 7fbba636c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbba0072440 0x7fbba0116ad0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:36416/0 (socket says 192.168.123.105:36416)
2026-03-09T14:56:59.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.967+0000 7fbba636c700 1 -- 192.168.123.105:0/258174946 learned_addr learned my addr 192.168.123.105:0/258174946 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T14:56:59.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.967+0000 7fbba5b6b700 1 --2- 192.168.123.105:0/258174946 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbba0117010 0x7fbba01b2800 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:56:59.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.968+0000 7fbba636c700 1 -- 192.168.123.105:0/258174946 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbba0117010 msgr2=0x7fbba01b2800 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T14:56:59.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.968+0000 7fbba636c700 1 --2- 192.168.123.105:0/258174946 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbba0117010 0x7fbba01b2800 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T14:56:59.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.968+0000 7fbba636c700 1 -- 192.168.123.105:0/258174946 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbb9800b050 con 0x7fbba0072440
2026-03-09T14:56:59.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.968+0000 7fbba636c700 1 --2- 192.168.123.105:0/258174946 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbba0072440 0x7fbba0116ad0 secure :-1 s=READY pgs=192 cs=0 l=1 rev1=1 crypto rx=0x7fbb98000f80 tx=0x7fbb98007ab0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:56:59.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.969+0000 7fbb977fe700 1 -- 192.168.123.105:0/258174946 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbb9800e050 con 0x7fbba0072440
2026-03-09T14:56:59.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.969+0000 7fbb977fe700 1 -- 192.168.123.105:0/258174946 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fbb98007c00 con 0x7fbba0072440
2026-03-09T14:56:59.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.969+0000 7fbb977fe700 1 -- 192.168.123.105:0/258174946 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbb9801b940 con 0x7fbba0072440
2026-03-09T14:56:59.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.969+0000 7fbba736e700 1 -- 192.168.123.105:0/258174946 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbba01b2d40 con 0x7fbba0072440
2026-03-09T14:56:59.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.969+0000 7fbba736e700 1 -- 192.168.123.105:0/258174946 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbba01b31b0 con 0x7fbba0072440
2026-03-09T14:56:59.972 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.972+0000 7fbba736e700 1 -- 192.168.123.105:0/258174946 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbba004ea50 con 0x7fbba0072440
2026-03-09T14:56:59.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.973+0000 7fbb977fe700 1 -- 192.168.123.105:0/258174946 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 18) v1 ==== 89854+0+0 (secure 0 0 0) 0x7fbb98019040 con 0x7fbba0072440
2026-03-09T14:56:59.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.973+0000 7fbb977fe700 1 --2- 192.168.123.105:0/258174946 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbb8c06c720 0x7fbb8c06ebd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T14:56:59.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.974+0000 7fbba5b6b700 1 --2- 192.168.123.105:0/258174946 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbb8c06c720 0x7fbb8c06ebd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T14:56:59.975 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.974+0000 7fbb977fe700 1 -- 192.168.123.105:0/258174946 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(5..5 src has 1..5) v4 ==== 1198+0+0 (secure 0 0 0) 0x7fbb9808bb40 con 0x7fbba0072440
2026-03-09T14:56:59.976 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.976+0000 7fbba5b6b700 1 --2- 192.168.123.105:0/258174946 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbb8c06c720 0x7fbb8c06ebd0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fbb90005950 tx=0x7fbb9000b410 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T14:56:59.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:56:59.977+0000 7fbb977fe700 1 -- 192.168.123.105:0/258174946 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fbb9805ac10 con 0x7fbba0072440
2026-03-09T14:57:00.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:00.123+0000 7fbba736e700 1 -- 192.168.123.105:0/258174946 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}) v1 -- 0x7fbba01b3460 con 0x7fbb8c06c720
2026-03-09T14:57:00.368 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:00 vm05 ceph-mon[50611]: Reconfiguring grafana.vm05 (dependencies changed)...
2026-03-09T14:57:00.368 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:00 vm05 ceph-mon[50611]: Reconfiguring daemon grafana.vm05 on vm05
2026-03-09T14:57:00.368 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:00 vm05 ceph-mon[50611]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:57:00.368 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:00 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:00.368 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:00 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:00.368 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:00 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:57:00.368 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:00 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:57:00.368 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:00 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:57:00.615 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:00 vm09 ceph-mon[59673]: Reconfiguring grafana.vm05 (dependencies changed)...
2026-03-09T14:57:00.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:00 vm09 ceph-mon[59673]: Reconfiguring daemon grafana.vm05 on vm05
2026-03-09T14:57:00.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:00 vm09 ceph-mon[59673]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:57:00.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:00 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:00.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:00 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:00.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:00 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T14:57:00.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:00 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T14:57:00.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:00 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:57:01.492 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:01 vm05 ceph-mon[50611]: Reconfiguring prometheus.vm05 (dependencies changed)...
2026-03-09T14:57:01.492 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:01 vm05 ceph-mon[50611]: from='client.14288 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:57:01.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:01 vm09 ceph-mon[59673]: Reconfiguring prometheus.vm05 (dependencies changed)...
2026-03-09T14:57:01.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:01 vm09 ceph-mon[59673]: from='client.14288 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:57:02.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:02 vm05 ceph-mon[50611]: Reconfiguring daemon prometheus.vm05 on vm05
2026-03-09T14:57:02.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:02 vm05 ceph-mon[50611]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:57:02.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:02 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/788060230' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1022e816-4ad0-4a27-9052-07d4015a684e"}]: dispatch
2026-03-09T14:57:02.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:02 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/788060230' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1022e816-4ad0-4a27-9052-07d4015a684e"}]': finished
2026-03-09T14:57:02.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:02 vm05 ceph-mon[50611]: osdmap e6: 1 total, 0 up, 1 in
2026-03-09T14:57:02.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:02 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T14:57:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:02 vm09 ceph-mon[59673]: Reconfiguring daemon prometheus.vm05 on vm05
2026-03-09T14:57:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:02 vm09 ceph-mon[59673]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:57:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:02 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/788060230' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1022e816-4ad0-4a27-9052-07d4015a684e"}]: dispatch
2026-03-09T14:57:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:02 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/788060230' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1022e816-4ad0-4a27-9052-07d4015a684e"}]': finished
2026-03-09T14:57:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:02 vm09 ceph-mon[59673]: osdmap e6: 1 total, 0 up, 1 in
2026-03-09T14:57:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:02 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T14:57:03.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:03 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/3437154779' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:57:03.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:03 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/3437154779' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-09T14:57:04.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:04 vm05 ceph-mon[50611]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:57:04.615 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:04 vm09 ceph-mon[59673]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:57:06.190 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:06 vm05 ceph-mon[50611]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:57:06.190 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:06 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:06.190 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:06 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:06.190 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:06 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-09T14:57:06.190 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:06 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:57:06.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:06 vm09 ceph-mon[59673]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:57:06.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:06 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:06.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:06 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:06.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:06 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-09T14:57:06.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:06 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:57:07.198 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: Reconfiguring ceph-exporter.vm09 (monmap changed)...
2026-03-09T14:57:07.198 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: Reconfiguring daemon ceph-exporter.vm09 on vm09
2026-03-09T14:57:07.198 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:07.198 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:07.198 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-09T14:57:07.198 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:57:07.198 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T14:57:07.198 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:57:07.198 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:07.198 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:07.199 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.cfuwdz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T14:57:07.199 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T14:57:07.199 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:57:07.199 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: Reconfiguring ceph-exporter.vm09 (monmap changed)...
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: Reconfiguring daemon ceph-exporter.vm09 on vm09
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.cfuwdz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:57:07.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:57:08.450 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: Reconfiguring crash.vm09 (monmap changed)...
2026-03-09T14:57:08.450 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: Reconfiguring daemon crash.vm09 on vm09
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: Deploying daemon osd.0 on vm05
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: Reconfiguring mgr.vm09.cfuwdz (monmap changed)...
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: Reconfiguring daemon mgr.vm09.cfuwdz on vm09
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.451 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:08 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: Reconfiguring crash.vm09 (monmap changed)...
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: Reconfiguring daemon crash.vm09 on vm09
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: Deploying daemon osd.0 on vm05
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: Reconfiguring mgr.vm09.cfuwdz (monmap changed)...
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: Reconfiguring daemon mgr.vm09.cfuwdz on vm09
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:08.471 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:08 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:57:09.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:09 vm05 ceph-mon[50611]: Reconfiguring mon.vm09 (monmap changed)...
2026-03-09T14:57:09.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:09 vm05 ceph-mon[50611]: Reconfiguring daemon mon.vm09 on vm09
2026-03-09T14:57:09.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:09 vm05 ceph-mon[50611]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-09T14:57:09.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:09 vm05 ceph-mon[50611]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-09T14:57:09.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:09 vm05 ceph-mon[50611]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-09T14:57:09.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:09 vm05 ceph-mon[50611]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch
2026-03-09T14:57:09.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:09 vm05 ceph-mon[50611]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-09T14:57:09.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:09 vm05 ceph-mon[50611]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch
2026-03-09T14:57:09.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:09 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:09.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:09 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:09.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:09 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:09.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:09 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T14:57:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:09 vm09 ceph-mon[59673]: Reconfiguring mon.vm09 (monmap changed)...
2026-03-09T14:57:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:09 vm09 ceph-mon[59673]: Reconfiguring daemon mon.vm09 on vm09
2026-03-09T14:57:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:09 vm09 ceph-mon[59673]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-09T14:57:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:09 vm09 ceph-mon[59673]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-09T14:57:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:09 vm09 ceph-mon[59673]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-09T14:57:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:09 vm09 ceph-mon[59673]: from='mon.0 -' entity='mon.'
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch 2026-03-09T14:57:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:09 vm09 ceph-mon[59673]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:57:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:09 vm09 ceph-mon[59673]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch 2026-03-09T14:57:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:09 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:09 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:09 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:09 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:09.960 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 0 on host 'vm05' 2026-03-09T14:57:09.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:09.958+0000 7fbb977fe700 1 -- 192.168.123.105:0/258174946 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7fbba01b3460 con 0x7fbb8c06c720 2026-03-09T14:57:09.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:09.961+0000 7fbba736e700 1 -- 192.168.123.105:0/258174946 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbb8c06c720 msgr2=0x7fbb8c06ebd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:09.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:09.961+0000 7fbba736e700 1 --2- 
192.168.123.105:0/258174946 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbb8c06c720 0x7fbb8c06ebd0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fbb90005950 tx=0x7fbb9000b410 comp rx=0 tx=0).stop 2026-03-09T14:57:09.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:09.961+0000 7fbba736e700 1 -- 192.168.123.105:0/258174946 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbba0072440 msgr2=0x7fbba0116ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:09.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:09.961+0000 7fbba736e700 1 --2- 192.168.123.105:0/258174946 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbba0072440 0x7fbba0116ad0 secure :-1 s=READY pgs=192 cs=0 l=1 rev1=1 crypto rx=0x7fbb98000f80 tx=0x7fbb98007ab0 comp rx=0 tx=0).stop 2026-03-09T14:57:09.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:09.961+0000 7fbba736e700 1 -- 192.168.123.105:0/258174946 shutdown_connections 2026-03-09T14:57:09.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:09.961+0000 7fbba736e700 1 --2- 192.168.123.105:0/258174946 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbb8c06c720 0x7fbb8c06ebd0 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:09.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:09.961+0000 7fbba736e700 1 --2- 192.168.123.105:0/258174946 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbba0072440 0x7fbba0116ad0 unknown :-1 s=CLOSED pgs=192 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:09.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:09.961+0000 7fbba736e700 1 --2- 192.168.123.105:0/258174946 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbba0117010 0x7fbba01b2800 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T14:57:09.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:09.961+0000 7fbba736e700 1 -- 192.168.123.105:0/258174946 >> 192.168.123.105:0/258174946 conn(0x7fbba006d1a0 msgr2=0x7fbba0070650 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:57:09.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:09.961+0000 7fbba736e700 1 -- 192.168.123.105:0/258174946 shutdown_connections 2026-03-09T14:57:09.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:09.961+0000 7fbba736e700 1 -- 192.168.123.105:0/258174946 wait complete. 2026-03-09T14:57:10.030 DEBUG:teuthology.orchestra.run.vm05:osd.0> sudo journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.0.service 2026-03-09T14:57:10.031 INFO:tasks.cephadm:Deploying osd.1 on vm05 with /dev/vdd... 2026-03-09T14:57:10.031 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- lvm zap /dev/vdd 2026-03-09T14:57:10.295 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:57:10.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:10 vm05 ceph-mon[50611]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:57:10.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:10 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:10.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:10 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:10.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:10 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:10.392 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:10 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:10.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:10 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:10.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:10 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:10.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:10 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:10.392 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:10 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:10.393 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 14:57:10 vm05 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[68904]: 2026-03-09T14:57:10.389+0000 7f1074205640 -1 osd.0 0 log_to_monitors true 2026-03-09T14:57:10.497 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:10 vm09 ceph-mon[59673]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:57:10.497 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:10 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:10.497 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:10 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:10.497 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:10 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:10.497 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:10 vm09 ceph-mon[59673]: from='mgr.14249 
192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:10.497 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:10 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:10.497 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:10 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:10.497 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:10 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:10.497 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:10 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:10.937 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:57:10.963 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph orch daemon add osd vm05:/dev/vdd 2026-03-09T14:57:11.183 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:57:11.375 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:11 vm05 ceph-mon[50611]: from='osd.0 [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:57:11.375 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:11 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:11.375 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:11 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:11.516 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.514+0000 7f83b6259700 1 -- 192.168.123.105:0/221466933 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f83b0071db0 msgr2=0x7f83b00721c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:11.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.514+0000 7f83b6259700 1 --2- 192.168.123.105:0/221466933 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f83b0071db0 0x7f83b00721c0 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7f83a0009b00 tx=0x7f83a0009e10 comp rx=0 tx=0).stop 2026-03-09T14:57:11.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.515+0000 7f83b6259700 1 -- 192.168.123.105:0/221466933 shutdown_connections 2026-03-09T14:57:11.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.515+0000 7f83b6259700 1 --2- 192.168.123.105:0/221466933 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83b0107d50 0x7f83b01081c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:11.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.515+0000 7f83b6259700 1 --2- 192.168.123.105:0/221466933 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f83b0071db0 0x7f83b00721c0 unknown :-1 s=CLOSED pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:11.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.515+0000 7f83b6259700 1 -- 192.168.123.105:0/221466933 >> 192.168.123.105:0/221466933 conn(0x7f83b006d3e0 msgr2=0x7f83b006f830 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:57:11.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.515+0000 7f83b6259700 1 -- 192.168.123.105:0/221466933 shutdown_connections 2026-03-09T14:57:11.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.515+0000 7f83b6259700 1 -- 192.168.123.105:0/221466933 wait complete. 
2026-03-09T14:57:11.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.516+0000 7f83b6259700 1 Processor -- start 2026-03-09T14:57:11.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.516+0000 7f83b6259700 1 -- start start 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.516+0000 7f83b6259700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83b0071db0 0x7f83b01169f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.516+0000 7f83b6259700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f83b0107d50 0x7f83b0116f30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.516+0000 7f83b6259700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f83b0117550 con 0x7f83b0071db0 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.516+0000 7f83b6259700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f83b01b2940 con 0x7f83b0107d50 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.516+0000 7f83af7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83b0071db0 0x7f83b01169f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.516+0000 7f83af7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83b0071db0 0x7f83b01169f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:54718/0 (socket says 192.168.123.105:54718) 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.516+0000 7f83af7fe700 1 -- 192.168.123.105:0/997641324 learned_addr learned my addr 192.168.123.105:0/997641324 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.517+0000 7f83aeffd700 1 --2- 192.168.123.105:0/997641324 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f83b0107d50 0x7f83b0116f30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.517+0000 7f83af7fe700 1 -- 192.168.123.105:0/997641324 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f83b0107d50 msgr2=0x7f83b0116f30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.517+0000 7f83af7fe700 1 --2- 192.168.123.105:0/997641324 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f83b0107d50 0x7f83b0116f30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.517+0000 7f83af7fe700 1 -- 192.168.123.105:0/997641324 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f83a4009710 con 0x7f83b0071db0 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.517+0000 7f83aeffd700 1 --2- 192.168.123.105:0/997641324 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f83b0107d50 0x7f83b0116f30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.517+0000 7f83af7fe700 1 --2- 192.168.123.105:0/997641324 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83b0071db0 0x7f83b01169f0 secure :-1 s=READY pgs=199 cs=0 l=1 rev1=1 crypto rx=0x7f83a000bb70 tx=0x7f83a0004690 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:57:11.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.517+0000 7f83acff9700 1 -- 192.168.123.105:0/997641324 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f83a001d070 con 0x7f83b0071db0 2026-03-09T14:57:11.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.518+0000 7f83b6259700 1 -- 192.168.123.105:0/997641324 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f83a00097e0 con 0x7f83b0071db0 2026-03-09T14:57:11.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.518+0000 7f83b6259700 1 -- 192.168.123.105:0/997641324 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f83b01b2df0 con 0x7f83b0071db0 2026-03-09T14:57:11.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.518+0000 7f83acff9700 1 -- 192.168.123.105:0/997641324 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f83a0004d60 con 0x7f83b0071db0 2026-03-09T14:57:11.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.519+0000 7f83acff9700 1 -- 192.168.123.105:0/997641324 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f83a000f740 con 0x7f83b0071db0 2026-03-09T14:57:11.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.520+0000 7f83b6259700 1 -- 192.168.123.105:0/997641324 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) 
v1 -- 0x7f839c005320 con 0x7f83b0071db0 2026-03-09T14:57:11.525 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.524+0000 7f83acff9700 1 -- 192.168.123.105:0/997641324 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 18) v1 ==== 89854+0+0 (secure 0 0 0) 0x7f83a000f960 con 0x7f83b0071db0 2026-03-09T14:57:11.526 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.525+0000 7f83acff9700 1 --2- 192.168.123.105:0/997641324 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f839806c6c0 0x7f839806eb70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:11.526 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.525+0000 7f83acff9700 1 -- 192.168.123.105:0/997641324 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(7..7 src has 1..7) v4 ==== 1404+0+0 (secure 0 0 0) 0x7f83a008bd80 con 0x7f83b0071db0 2026-03-09T14:57:11.526 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.525+0000 7f83aeffd700 1 --2- 192.168.123.105:0/997641324 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f839806c6c0 0x7f839806eb70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:11.527 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.526+0000 7f83acff9700 1 -- 192.168.123.105:0/997641324 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f83a0090340 con 0x7f83b0071db0 2026-03-09T14:57:11.529 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.526+0000 7f83aeffd700 1 --2- 192.168.123.105:0/997641324 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f839806c6c0 0x7f839806eb70 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f83a4009f60 tx=0x7f83a4009450 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T14:57:11.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:11 vm09 ceph-mon[59673]: from='osd.0 [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:57:11.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:11 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:11.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:11 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:11.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:11.653+0000 7f83b6259700 1 -- 192.168.123.105:0/997641324 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}) v1 -- 0x7f839c000bf0 con 0x7f839806c6c0 2026-03-09T14:57:12.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:12 vm05 ceph-mon[50611]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:57:12.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:12 vm05 ceph-mon[50611]: from='osd.0 [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:57:12.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:12 vm05 ceph-mon[50611]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T14:57:12.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:12 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:57:12.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:12 vm05 ceph-mon[50611]: from='osd.0 [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580]' entity='osd.0' cmd=[{"prefix": 
"osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:57:12.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:12 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:57:12.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:12 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:57:12.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:12 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:12.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:12 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:12.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:12 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:12.305 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 14:57:12 vm05 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[68904]: 2026-03-09T14:57:12.219+0000 7f106907b700 -1 osd.0 0 waiting for initial osdmap 2026-03-09T14:57:12.305 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 14:57:12 vm05 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[68904]: 2026-03-09T14:57:12.241+0000 7f1065671700 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:57:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:12 vm09 ceph-mon[59673]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:57:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:12 vm09 ceph-mon[59673]: from='osd.0 
[v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:57:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:12 vm09 ceph-mon[59673]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T14:57:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:12 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:57:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:12 vm09 ceph-mon[59673]: from='osd.0 [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:57:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:12 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:57:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:12 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:57:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:12 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:12 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:12 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:13.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:13 vm05 
ceph-mon[50611]: from='client.14306 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:57:13.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:13 vm05 ceph-mon[50611]: from='osd.0 [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T14:57:13.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:13 vm05 ceph-mon[50611]: osdmap e8: 1 total, 0 up, 1 in 2026-03-09T14:57:13.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:13 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:57:13.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:13 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/2649798658' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "73421280-9a73-4092-8e8f-854babd94f13"}]: dispatch 2026-03-09T14:57:13.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:13 vm05 ceph-mon[50611]: osd.0 [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580] boot 2026-03-09T14:57:13.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:13 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/2649798658' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "73421280-9a73-4092-8e8f-854babd94f13"}]': finished 2026-03-09T14:57:13.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:13 vm05 ceph-mon[50611]: osdmap e9: 2 total, 1 up, 2 in 2026-03-09T14:57:13.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:13 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:57:13.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:13 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:57:13.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:13 vm05 ceph-mon[50611]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:57:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:13 vm09 ceph-mon[59673]: from='client.14306 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:57:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:13 vm09 ceph-mon[59673]: from='osd.0 [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T14:57:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:13 vm09 ceph-mon[59673]: osdmap e8: 1 total, 0 up, 1 in 2026-03-09T14:57:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:13 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:57:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:13 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/2649798658' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "73421280-9a73-4092-8e8f-854babd94f13"}]: dispatch 2026-03-09T14:57:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:13 vm09 ceph-mon[59673]: osd.0 [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580] boot 2026-03-09T14:57:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:13 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/2649798658' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "73421280-9a73-4092-8e8f-854babd94f13"}]': finished 2026-03-09T14:57:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:13 vm09 ceph-mon[59673]: osdmap e9: 2 total, 1 up, 2 in 2026-03-09T14:57:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:13 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:57:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:13 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:57:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:13 vm09 ceph-mon[59673]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:57:14.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:14 vm05 ceph-mon[50611]: purged_snaps scrub starts 2026-03-09T14:57:14.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:14 vm05 ceph-mon[50611]: purged_snaps scrub ok 2026-03-09T14:57:14.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:14 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/34378695' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:57:14.615 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:14 vm09 ceph-mon[59673]: purged_snaps scrub starts 2026-03-09T14:57:14.615 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:14 vm09 ceph-mon[59673]: purged_snaps scrub ok 2026-03-09T14:57:14.615 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:14 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/34378695' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:57:15.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:15 vm05 ceph-mon[50611]: osdmap e10: 2 total, 1 up, 2 in 2026-03-09T14:57:15.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:15 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:57:15.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:15 vm05 ceph-mon[50611]: pgmap v17: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:57:15.615 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:15 vm09 ceph-mon[59673]: osdmap e10: 2 total, 1 up, 2 in 2026-03-09T14:57:15.615 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:15 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:57:15.615 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:15 vm09 ceph-mon[59673]: pgmap v17: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:57:18.343 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: Detected new or changed devices on vm05 2026-03-09T14:57:18.343 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: pgmap v18: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:57:18.343 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:18.343 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:18.343 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:57:18.343 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:18.343 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:18.343 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:18.343 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:18.343 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:57:18.343 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:18.343 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: from='mgr.14249 
192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:18.344 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: Detected new or changed devices on vm05 2026-03-09T14:57:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: pgmap v18: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:57:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:57:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:18.366 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:57:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:19.287 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:19 vm05 ceph-mon[50611]: Deploying daemon osd.1 on vm05 2026-03-09T14:57:19.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:19 vm09 ceph-mon[59673]: Deploying daemon osd.1 on vm05 2026-03-09T14:57:20.417 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:20 vm05 ceph-mon[50611]: pgmap v19: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:57:20.417 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:20 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:20.417 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:20 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:20.417 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:20 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:20.417 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:20 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T14:57:20.417 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:20 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:20.417 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:20 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:20.417 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:20 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:20.417 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:20 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:20.453 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:20 vm09 ceph-mon[59673]: pgmap v19: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:57:20.453 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:20 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:20.453 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:20 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:20.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:20 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:20.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:20 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:20.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:20 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:20.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:20 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' 
entity='mgr.vm05.lhsexd' 2026-03-09T14:57:20.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:20 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:20.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:20 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:21.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:21.023+0000 7f83acff9700 1 -- 192.168.123.105:0/997641324 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7f839c000bf0 con 0x7f839806c6c0 2026-03-09T14:57:21.026 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 1 on host 'vm05' 2026-03-09T14:57:21.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:21.026+0000 7f83b6259700 1 -- 192.168.123.105:0/997641324 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f839806c6c0 msgr2=0x7f839806eb70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:21.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:21.026+0000 7f83b6259700 1 --2- 192.168.123.105:0/997641324 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f839806c6c0 0x7f839806eb70 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f83a4009f60 tx=0x7f83a4009450 comp rx=0 tx=0).stop 2026-03-09T14:57:21.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:21.026+0000 7f83b6259700 1 -- 192.168.123.105:0/997641324 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83b0071db0 msgr2=0x7f83b01169f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:21.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:21.026+0000 7f83b6259700 1 --2- 192.168.123.105:0/997641324 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83b0071db0 0x7f83b01169f0 secure :-1 s=READY pgs=199 cs=0 l=1 
rev1=1 crypto rx=0x7f83a000bb70 tx=0x7f83a0004690 comp rx=0 tx=0).stop 2026-03-09T14:57:21.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:21.026+0000 7f83b6259700 1 -- 192.168.123.105:0/997641324 shutdown_connections 2026-03-09T14:57:21.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:21.026+0000 7f83b6259700 1 --2- 192.168.123.105:0/997641324 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f839806c6c0 0x7f839806eb70 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:21.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:21.026+0000 7f83b6259700 1 --2- 192.168.123.105:0/997641324 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f83b0071db0 0x7f83b01169f0 unknown :-1 s=CLOSED pgs=199 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:21.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:21.026+0000 7f83b6259700 1 --2- 192.168.123.105:0/997641324 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f83b0107d50 0x7f83b0116f30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:21.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:21.026+0000 7f83b6259700 1 -- 192.168.123.105:0/997641324 >> 192.168.123.105:0/997641324 conn(0x7f83b006d3e0 msgr2=0x7f83b010af80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:57:21.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:21.026+0000 7f83b6259700 1 -- 192.168.123.105:0/997641324 shutdown_connections 2026-03-09T14:57:21.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:21.026+0000 7f83b6259700 1 -- 192.168.123.105:0/997641324 wait complete. 2026-03-09T14:57:21.111 DEBUG:teuthology.orchestra.run.vm05:osd.1> sudo journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.1.service 2026-03-09T14:57:21.113 INFO:tasks.cephadm:Deploying osd.2 on vm05 with /dev/vdc... 
2026-03-09T14:57:21.113 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- lvm zap /dev/vdc 2026-03-09T14:57:21.394 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:57:21.693 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:21 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:21.693 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:21 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:21.693 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:21 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:21.693 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:21 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:21.693 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:21 vm05 ceph-mon[50611]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:57:22.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:21 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:22.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:21 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:22.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:21 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:22.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:21 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:22.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:21 
vm09 ceph-mon[59673]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:57:22.152 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:57:22.198 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph orch daemon add osd vm05:/dev/vdc 2026-03-09T14:57:22.432 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 14:57:22 vm05 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[75070]: 2026-03-09T14:57:22.240+0000 7f1699a4b640 -1 osd.1 0 log_to_monitors true 2026-03-09T14:57:22.504 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:57:22.850 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.848+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/150441408 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7ee0101500 msgr2=0x7f7ee0101950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:22.850 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.848+0000 7f7ee6dd4700 1 --2- 192.168.123.105:0/150441408 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7ee0101500 0x7f7ee0101950 secure :-1 s=READY pgs=207 cs=0 l=1 rev1=1 crypto rx=0x7f7ecc009b00 tx=0x7f7ecc009e10 comp rx=0 tx=0).stop 2026-03-09T14:57:22.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.848+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/150441408 shutdown_connections 2026-03-09T14:57:22.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.848+0000 7f7ee6dd4700 1 --2- 192.168.123.105:0/150441408 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7ee0101500 0x7f7ee0101950 unknown :-1 s=CLOSED pgs=207 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:22.851 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.848+0000 7f7ee6dd4700 1 --2- 192.168.123.105:0/150441408 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7ee0100300 0x7f7ee0100710 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:22.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.848+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/150441408 >> 192.168.123.105:0/150441408 conn(0x7f7ee00fb890 msgr2=0x7f7ee00fdce0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:57:22.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.848+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/150441408 shutdown_connections 2026-03-09T14:57:22.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.848+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/150441408 wait complete. 2026-03-09T14:57:22.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.849+0000 7f7ee6dd4700 1 Processor -- start 2026-03-09T14:57:22.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.850+0000 7f7ee6dd4700 1 -- start start 2026-03-09T14:57:22.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.850+0000 7f7ee6dd4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7ee0100300 0x7f7ee0195a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:22.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.850+0000 7f7ee6dd4700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7ee0101500 0x7f7ee0195fd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:22.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.850+0000 7f7ee4b70700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7ee0100300 0x7f7ee0195a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:22.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.850+0000 7f7ee4b70700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7ee0100300 0x7f7ee0195a90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:34166/0 (socket says 192.168.123.105:34166) 2026-03-09T14:57:22.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.850+0000 7f7ee4b70700 1 -- 192.168.123.105:0/3882025799 learned_addr learned my addr 192.168.123.105:0/3882025799 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:57:22.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.850+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/3882025799 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7ee01965f0 con 0x7f7ee0100300 2026-03-09T14:57:22.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.850+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/3882025799 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7ee0196730 con 0x7f7ee0101500 2026-03-09T14:57:22.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.850+0000 7f7edffff700 1 --2- 192.168.123.105:0/3882025799 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7ee0101500 0x7f7ee0195fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:22.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.851+0000 7f7ee4b70700 1 -- 192.168.123.105:0/3882025799 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7ee0101500 msgr2=0x7f7ee0195fd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:22.852 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.851+0000 7f7ee4b70700 1 --2- 192.168.123.105:0/3882025799 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7ee0101500 0x7f7ee0195fd0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:22.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.851+0000 7f7ee4b70700 1 -- 192.168.123.105:0/3882025799 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7ecc0097e0 con 0x7f7ee0100300 2026-03-09T14:57:22.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.851+0000 7f7ee4b70700 1 --2- 192.168.123.105:0/3882025799 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7ee0100300 0x7f7ee0195a90 secure :-1 s=READY pgs=208 cs=0 l=1 rev1=1 crypto rx=0x7f7ed400b700 tx=0x7f7ed400bac0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:57:22.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.851+0000 7f7eddffb700 1 -- 192.168.123.105:0/3882025799 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7ed4010820 con 0x7f7ee0100300 2026-03-09T14:57:22.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.851+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/3882025799 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7ee019b1e0 con 0x7f7ee0100300 2026-03-09T14:57:22.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.851+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/3882025799 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7ee019b730 con 0x7f7ee0100300 2026-03-09T14:57:22.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.853+0000 7f7eddffb700 1 -- 192.168.123.105:0/3882025799 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 
==== 1139+0+0 (secure 0 0 0) 0x7f7ed4010e60 con 0x7f7ee0100300 2026-03-09T14:57:22.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.853+0000 7f7eddffb700 1 -- 192.168.123.105:0/3882025799 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7ed4017570 con 0x7f7ee0100300 2026-03-09T14:57:22.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.853+0000 7f7eddffb700 1 -- 192.168.123.105:0/3882025799 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 18) v1 ==== 89854+0+0 (secure 0 0 0) 0x7f7ed4017790 con 0x7f7ee0100300 2026-03-09T14:57:22.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.854+0000 7f7eddffb700 1 --2- 192.168.123.105:0/3882025799 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7ed006c720 0x7f7ed006ebd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:22.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.855+0000 7f7edffff700 1 --2- 192.168.123.105:0/3882025799 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7ed006c720 0x7f7ed006ebd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:22.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.855+0000 7f7edffff700 1 --2- 192.168.123.105:0/3882025799 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7ed006c720 0x7f7ed006ebd0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f7ecc006010 tx=0x7f7ecc009f90 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:57:22.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.855+0000 7f7eddffb700 1 -- 192.168.123.105:0/3882025799 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(10..10 src has 1..10) v4 ==== 1915+0+0 (secure 0 0 0) 0x7f7ed408b660 con 0x7f7ee0100300 
2026-03-09T14:57:22.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.855+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/3882025799 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7ec4005320 con 0x7f7ee0100300 2026-03-09T14:57:22.860 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.858+0000 7f7eddffb700 1 -- 192.168.123.105:0/3882025799 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f7ed405a670 con 0x7f7ee0100300 2026-03-09T14:57:22.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:22 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:22.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:22 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:22.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:22 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:57:22.983 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:22 vm05 ceph-mon[50611]: from='osd.1 [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:57:22.985 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:22.983+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/3882025799 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}) v1 -- 0x7f7ec4000bf0 con 0x7f7ed006c720 2026-03-09T14:57:23.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:22 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 
2026-03-09T14:57:23.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:22 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:23.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:22 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:57:23.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:22 vm09 ceph-mon[59673]: from='osd.1 [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:57:23.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='client.14324 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:57:23.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:57:23.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:57:23.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:23.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='osd.1 [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T14:57:23.986 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: osdmap e11: 2 total, 1 up, 2 in 2026-03-09T14:57:23.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:57:23.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='osd.1 [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:57:23.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:57:23.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:23.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:23.986 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:57:24.305 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 14:57:24 vm05 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[75070]: 2026-03-09T14:57:24.021+0000 7f168e8c1700 -1 osd.1 0 waiting for initial osdmap 2026-03-09T14:57:24.305 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 14:57:24 vm05 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[75070]: 2026-03-09T14:57:24.040+0000 7f16896b4700 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:57:24.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
14:57:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:24.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:24.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:24.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='client.14324 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='osd.1 [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": 
"hdd", "ids": ["1"]}]': finished 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: osdmap e11: 2 total, 1 up, 2 in 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='osd.1 [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 
14:57:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:25.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:25 vm05 ceph-mon[50611]: from='osd.1 [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T14:57:25.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:25 vm05 ceph-mon[50611]: osdmap e12: 2 total, 1 up, 2 in 2026-03-09T14:57:25.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:57:25.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:57:25.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:25 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/2618486959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "14d84b3a-06be-48f8-89a7-0e9c83f76e3c"}]: dispatch 2026-03-09T14:57:25.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:25 vm05 ceph-mon[50611]: osd.1 [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416] boot 2026-03-09T14:57:25.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:25 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/2618486959' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "14d84b3a-06be-48f8-89a7-0e9c83f76e3c"}]': finished 2026-03-09T14:57:25.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:25 vm05 ceph-mon[50611]: osdmap e13: 3 total, 2 up, 3 in 2026-03-09T14:57:25.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:57:25.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:57:25.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:25 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/987276904' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:57:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:25 vm09 ceph-mon[59673]: from='osd.1 [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T14:57:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:25 vm09 ceph-mon[59673]: osdmap e12: 2 total, 1 up, 2 in 2026-03-09T14:57:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:57:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:57:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:25 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/2618486959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "14d84b3a-06be-48f8-89a7-0e9c83f76e3c"}]: dispatch 2026-03-09T14:57:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:25 vm09 ceph-mon[59673]: osd.1 [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416] boot 2026-03-09T14:57:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:25 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/2618486959' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "14d84b3a-06be-48f8-89a7-0e9c83f76e3c"}]': finished 2026-03-09T14:57:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:25 vm09 ceph-mon[59673]: osdmap e13: 3 total, 2 up, 3 in 2026-03-09T14:57:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:57:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:57:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:25 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/987276904' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:57:26.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:26 vm05 ceph-mon[50611]: purged_snaps scrub starts 2026-03-09T14:57:26.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:26 vm05 ceph-mon[50611]: purged_snaps scrub ok 2026-03-09T14:57:26.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:26 vm05 ceph-mon[50611]: pgmap v25: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:57:26.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:26 vm05 ceph-mon[50611]: osdmap e14: 3 total, 2 up, 3 in 2026-03-09T14:57:26.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:26 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:57:26.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:26 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:26.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:26 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:26.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:26 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:26.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:26 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:26.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:26 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:26.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:26 vm09 ceph-mon[59673]: purged_snaps scrub starts 2026-03-09T14:57:26.616 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:26 vm09 ceph-mon[59673]: purged_snaps scrub ok 2026-03-09T14:57:26.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:26 vm09 ceph-mon[59673]: pgmap v25: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:57:26.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:26 vm09 ceph-mon[59673]: osdmap e14: 3 total, 2 up, 3 in 2026-03-09T14:57:26.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:26 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:57:26.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:26 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:26.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:26 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:26.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:26 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:26.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:26 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:26.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:26 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:28.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:27 vm05 ceph-mon[50611]: pgmap v27: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:57:28.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:27 vm09 ceph-mon[59673]: pgmap v27: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:57:29.109 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:28 
vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:57:29.109 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:28 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:29.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:28 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:57:29.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:28 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:30.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:29 vm05 ceph-mon[50611]: Deploying daemon osd.2 on vm05 2026-03-09T14:57:30.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:29 vm05 ceph-mon[50611]: pgmap v28: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:57:30.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:29 vm09 ceph-mon[59673]: Deploying daemon osd.2 on vm05 2026-03-09T14:57:30.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:29 vm09 ceph-mon[59673]: pgmap v28: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:57:31.624 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 2 on host 'vm05' 2026-03-09T14:57:31.624 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:31.622+0000 7f7eddffb700 1 -- 192.168.123.105:0/3882025799 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7f7ec4000bf0 con 0x7f7ed006c720 2026-03-09T14:57:31.625 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:31.624+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/3882025799 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7ed006c720 msgr2=0x7f7ed006ebd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:31.626 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:31.624+0000 7f7ee6dd4700 1 --2- 192.168.123.105:0/3882025799 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7ed006c720 0x7f7ed006ebd0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f7ecc006010 tx=0x7f7ecc009f90 comp rx=0 tx=0).stop 2026-03-09T14:57:31.626 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:31.624+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/3882025799 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7ee0100300 msgr2=0x7f7ee0195a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:31.626 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:31.624+0000 7f7ee6dd4700 1 --2- 192.168.123.105:0/3882025799 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7ee0100300 0x7f7ee0195a90 secure :-1 s=READY pgs=208 cs=0 l=1 rev1=1 crypto rx=0x7f7ed400b700 tx=0x7f7ed400bac0 comp rx=0 tx=0).stop 2026-03-09T14:57:31.626 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:31.625+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/3882025799 shutdown_connections 2026-03-09T14:57:31.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:31.625+0000 7f7ee6dd4700 1 --2- 192.168.123.105:0/3882025799 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7ed006c720 0x7f7ed006ebd0 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:31.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:31.625+0000 7f7ee6dd4700 1 --2- 192.168.123.105:0/3882025799 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7ee0100300 0x7f7ee0195a90 unknown :-1 s=CLOSED pgs=208 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:31.627 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:31.625+0000 7f7ee6dd4700 1 --2- 192.168.123.105:0/3882025799 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7ee0101500 0x7f7ee0195fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:31.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:31.625+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/3882025799 >> 192.168.123.105:0/3882025799 conn(0x7f7ee00fb890 msgr2=0x7f7ee00fdb50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:57:31.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:31.625+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/3882025799 shutdown_connections 2026-03-09T14:57:31.627 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:57:31.625+0000 7f7ee6dd4700 1 -- 192.168.123.105:0/3882025799 wait complete. 2026-03-09T14:57:31.683 DEBUG:teuthology.orchestra.run.vm05:osd.2> sudo journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.2.service 2026-03-09T14:57:31.685 INFO:tasks.cephadm:Deploying osd.3 on vm09 with /dev/vde... 
2026-03-09T14:57:31.685 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- lvm zap /dev/vde 2026-03-09T14:57:31.724 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:31 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:31.724 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:31 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:31.724 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:31 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:31.724 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:31 vm05 ceph-mon[50611]: pgmap v29: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:57:31.833 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm09/config 2026-03-09T14:57:31.861 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:31 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:31.861 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:31 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:31.861 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:31 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:31.861 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:31 vm09 ceph-mon[59673]: pgmap v29: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:57:32.386 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:57:32.408 
DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph orch daemon add osd vm09:/dev/vde 2026-03-09T14:57:32.570 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm09/config 2026-03-09T14:57:32.853 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:32 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:32.853 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:32 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:32.853 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:32 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:32.853 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:32 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:32.853 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.852+0000 7fccf88bd700 1 -- 192.168.123.109:0/2587195825 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fccf0105a60 msgr2=0x7fccf0107e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:32.853 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.852+0000 7fccf88bd700 1 --2- 192.168.123.109:0/2587195825 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fccf0105a60 0x7fccf0107e40 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fcce4009b00 tx=0x7fcce4009e10 comp rx=0 tx=0).stop 2026-03-09T14:57:32.853 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.854+0000 7fccf88bd700 1 -- 192.168.123.109:0/2587195825 shutdown_connections 2026-03-09T14:57:32.853 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.854+0000 7fccf88bd700 1 --2- 
192.168.123.109:0/2587195825 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fccf0105a60 0x7fccf0107e40 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:32.853 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.854+0000 7fccf88bd700 1 --2- 192.168.123.109:0/2587195825 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fccf00691a0 0x7fccf0105520 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:32.853 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.854+0000 7fccf88bd700 1 -- 192.168.123.109:0/2587195825 >> 192.168.123.109:0/2587195825 conn(0x7fccf00faa70 msgr2=0x7fccf00fcee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:57:32.854 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.854+0000 7fccf88bd700 1 -- 192.168.123.109:0/2587195825 shutdown_connections 2026-03-09T14:57:32.854 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.854+0000 7fccf88bd700 1 -- 192.168.123.109:0/2587195825 wait complete. 
2026-03-09T14:57:32.854 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.855+0000 7fccf88bd700 1 Processor -- start 2026-03-09T14:57:32.854 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.855+0000 7fccf88bd700 1 -- start start 2026-03-09T14:57:32.855 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.855+0000 7fccf88bd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fccf00691a0 0x7fccf01980e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:32.855 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.855+0000 7fccf88bd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fccf0105a60 0x7fccf0198620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:32.855 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.855+0000 7fccf88bd700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fccf0198c40 con 0x7fccf00691a0 2026-03-09T14:57:32.855 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.855+0000 7fccf88bd700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fccf0198d80 con 0x7fccf0105a60 2026-03-09T14:57:32.855 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.856+0000 7fccf5e58700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fccf0105a60 0x7fccf0198620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:32.855 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.856+0000 7fccf5e58700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fccf0105a60 0x7fccf0198620 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.109:47352/0 (socket says 192.168.123.109:47352) 2026-03-09T14:57:32.855 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.856+0000 7fccf5e58700 1 -- 192.168.123.109:0/303557627 learned_addr learned my addr 192.168.123.109:0/303557627 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:57:32.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.856+0000 7fccf5e58700 1 -- 192.168.123.109:0/303557627 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fccf00691a0 msgr2=0x7fccf01980e0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T14:57:32.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.856+0000 7fccf5e58700 1 --2- 192.168.123.109:0/303557627 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fccf00691a0 0x7fccf01980e0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:32.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.856+0000 7fccf5e58700 1 -- 192.168.123.109:0/303557627 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcce40097e0 con 0x7fccf0105a60 2026-03-09T14:57:32.856 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.856+0000 7fccf5e58700 1 --2- 192.168.123.109:0/303557627 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fccf0105a60 0x7fccf0198620 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fcce40052d0 tx=0x7fcce4004a80 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:57:32.857 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.857+0000 7fcce37fe700 1 -- 192.168.123.109:0/303557627 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcce401d070 con 0x7fccf0105a60 2026-03-09T14:57:32.857 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.857+0000 7fcce37fe700 1 -- 
192.168.123.109:0/303557627 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fcce4004500 con 0x7fccf0105a60 2026-03-09T14:57:32.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.857+0000 7fcce37fe700 1 -- 192.168.123.109:0/303557627 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcce4022470 con 0x7fccf0105a60 2026-03-09T14:57:32.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.857+0000 7fccf88bd700 1 -- 192.168.123.109:0/303557627 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fccf019d7d0 con 0x7fccf0105a60 2026-03-09T14:57:32.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.857+0000 7fccf88bd700 1 -- 192.168.123.109:0/303557627 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fccf019dc60 con 0x7fccf0105a60 2026-03-09T14:57:32.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.858+0000 7fccf88bd700 1 -- 192.168.123.109:0/303557627 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fccf0192240 con 0x7fccf0105a60 2026-03-09T14:57:32.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.858+0000 7fcce37fe700 1 -- 192.168.123.109:0/303557627 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 18) v1 ==== 89854+0+0 (secure 0 0 0) 0x7fcce40225d0 con 0x7fccf0105a60 2026-03-09T14:57:32.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.859+0000 7fcce37fe700 1 --2- 192.168.123.109:0/303557627 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fccdc06c5b0 0x7fccdc06ea60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:32.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.859+0000 7fcce37fe700 1 -- 192.168.123.109:0/303557627 <== mon.1 
v2:192.168.123.109:3300/0 5 ==== osd_map(14..14 src has 1..14) v4 ==== 2347+0+0 (secure 0 0 0) 0x7fcce408c850 con 0x7fccf0105a60 2026-03-09T14:57:32.861 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.862+0000 7fcce37fe700 1 -- 192.168.123.109:0/303557627 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fcce405b6b0 con 0x7fccf0105a60 2026-03-09T14:57:32.862 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.862+0000 7fccf6659700 1 --2- 192.168.123.109:0/303557627 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fccdc06c5b0 0x7fccdc06ea60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:32.863 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.863+0000 7fccf6659700 1 --2- 192.168.123.109:0/303557627 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fccdc06c5b0 0x7fccdc06ea60 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fccf0069570 tx=0x7fcce8006cb0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:57:32.932 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:32 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:32.932 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:32 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:32.932 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:32 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:32.932 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:32 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:32.975 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:32.976+0000 7fccf88bd700 
1 -- 192.168.123.109:0/303557627 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}) v1 -- 0x7fccf0061190 con 0x7fccdc06c5b0 2026-03-09T14:57:33.251 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 14:57:32 vm05 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[81044]: 2026-03-09T14:57:32.929+0000 7fed1bf02640 -1 osd.2 0 log_to_monitors true 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='osd.2 [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='client.24133 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: pgmap v30: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: 
from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:33 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:33.914 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='osd.2 [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='client.24133 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: pgmap v30: 0 pgs: ; 0 B data, 52 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 
2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:33.914 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:33 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:34.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 14:57:33 vm05 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[81044]: 2026-03-09T14:57:33.912+0000 7fed1257b700 -1 osd.2 0 waiting for initial osdmap 2026-03-09T14:57:34.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 14:57:33 vm05 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[81044]: 2026-03-09T14:57:33.928+0000 7fed0ab69700 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 
2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: Detected new or changed devices on vm05 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: from='osd.2 [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: osdmap e15: 3 total, 2 up, 3 in 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: from='osd.2 [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "24dfa5ad-72df-4f80-bf60-0507508104f2"}]: dispatch 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/1589988577' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "24dfa5ad-72df-4f80-bf60-0507508104f2"}]: dispatch 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: from='osd.2 [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "24dfa5ad-72df-4f80-bf60-0507508104f2"}]': finished 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: osdmap e16: 4 total, 2 up, 4 in 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:57:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:34 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/3201655659' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: Detected new or changed devices on vm05 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: from='osd.2 [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: osdmap e15: 3 total, 2 up, 3 in 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: from='osd.2 [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "24dfa5ad-72df-4f80-bf60-0507508104f2"}]: dispatch 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/1589988577' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "24dfa5ad-72df-4f80-bf60-0507508104f2"}]: dispatch 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: from='osd.2 [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "24dfa5ad-72df-4f80-bf60-0507508104f2"}]': finished 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: osdmap e16: 4 total, 2 up, 4 in 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:57:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:34 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/3201655659' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:57:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:35 vm05 ceph-mon[50611]: osd.2 [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520] boot 2026-03-09T14:57:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:35 vm05 ceph-mon[50611]: osdmap e17: 4 total, 3 up, 4 in 2026-03-09T14:57:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:35 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:57:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:35 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:35 vm05 ceph-mon[50611]: pgmap v34: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:57:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:35 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:57:36.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:35 vm09 ceph-mon[59673]: osd.2 [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520] boot 2026-03-09T14:57:36.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:35 vm09 ceph-mon[59673]: osdmap e17: 4 total, 3 up, 4 in 2026-03-09T14:57:36.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:35 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:57:36.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:35 vm09 ceph-mon[59673]: 
from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:36.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:35 vm09 ceph-mon[59673]: pgmap v34: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:57:36.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:35 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:57:37.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:36 vm05 ceph-mon[50611]: purged_snaps scrub starts 2026-03-09T14:57:37.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:36 vm05 ceph-mon[50611]: purged_snaps scrub ok 2026-03-09T14:57:37.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:36 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T14:57:37.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:36 vm05 ceph-mon[50611]: osdmap e18: 4 total, 3 up, 4 in 2026-03-09T14:57:37.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:36 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:37.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:36 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:57:37.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:36 vm09 ceph-mon[59673]: purged_snaps scrub starts 2026-03-09T14:57:37.366 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:36 vm09 ceph-mon[59673]: purged_snaps scrub ok 2026-03-09T14:57:37.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:36 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T14:57:37.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:36 vm09 ceph-mon[59673]: osdmap e18: 4 total, 3 up, 4 in 2026-03-09T14:57:37.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:36 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:37.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:36 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:57:37.933 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 14:57:37 vm05 sudo[84160]: ceph : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/sbin/smartctl -x --json=o /dev/vde 2026-03-09T14:57:37.933 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 14:57:37 vm05 sudo[84160]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T14:57:37.934 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 14:57:37 vm05 sudo[84160]: pam_unix(sudo:session): session opened for user root by (uid=0) 2026-03-09T14:57:37.934 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 14:57:37 vm05 sudo[84160]: pam_unix(sudo:session): session closed for user root 2026-03-09T14:57:37.934 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 14:57:37 vm05 sudo[84163]: ceph : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/sbin/smartctl -x --json=o /dev/vdd 2026-03-09T14:57:37.934 
INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 14:57:37 vm05 sudo[84163]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T14:57:37.934 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 14:57:37 vm05 sudo[84163]: pam_unix(sudo:session): session opened for user root by (uid=0) 2026-03-09T14:57:37.934 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 14:57:37 vm05 sudo[84163]: pam_unix(sudo:session): session closed for user root 2026-03-09T14:57:38.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:37 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T14:57:38.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:37 vm09 ceph-mon[59673]: osdmap e19: 4 total, 3 up, 4 in 2026-03-09T14:57:38.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:37 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:38.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:37 vm09 ceph-mon[59673]: pgmap v37: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:57:38.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:37 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:57:38.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 14:57:37 vm05 sudo[84166]: ceph : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/sbin/smartctl -x --json=o /dev/vdc 2026-03-09T14:57:38.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 14:57:37 vm05 sudo[84166]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T14:57:38.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 14:57:37 vm05 sudo[84166]: 
pam_unix(sudo:session): session opened for user root by (uid=0) 2026-03-09T14:57:38.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 14:57:37 vm05 sudo[84166]: pam_unix(sudo:session): session closed for user root 2026-03-09T14:57:38.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T14:57:38.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:37 vm05 ceph-mon[50611]: osdmap e19: 4 total, 3 up, 4 in 2026-03-09T14:57:38.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:38.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:37 vm05 ceph-mon[50611]: pgmap v37: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:57:38.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:37 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:57:38.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 sudo[84169]: ceph : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/sbin/smartctl -x --json=o /dev/vda 2026-03-09T14:57:38.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 sudo[84169]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T14:57:38.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 sudo[84169]: pam_unix(sudo:session): session opened for user root by (uid=0) 2026-03-09T14:57:38.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 sudo[84169]: pam_unix(sudo:session): session closed for user root 2026-03-09T14:57:38.550 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 sudo[65172]: ceph : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/sbin/smartctl -x --json=o /dev/vda 2026-03-09T14:57:38.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 sudo[65172]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T14:57:38.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 sudo[65172]: pam_unix(sudo:session): session opened for user root by (uid=0) 2026-03-09T14:57:38.550 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 sudo[65172]: pam_unix(sudo:session): session closed for user root 2026-03-09T14:57:39.213 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 ceph-mon[59673]: osdmap e20: 4 total, 3 up, 4 in 2026-03-09T14:57:39.213 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:39.213 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 ceph-mon[59673]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:57:39.213 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 ceph-mon[59673]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:57:39.213 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T14:57:39.213 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:57:39.213 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 
cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T14:57:39.213 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:57:39.214 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 ceph-mon[59673]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:57:39.214 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:57:39.214 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:39.214 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:38 vm09 ceph-mon[59673]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:57:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 ceph-mon[50611]: osdmap e20: 4 total, 3 up, 4 in 2026-03-09T14:57:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 ceph-mon[50611]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:57:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 ceph-mon[50611]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:57:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", 
"id": "vm05"}]: dispatch 2026-03-09T14:57:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:57:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T14:57:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T14:57:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 ceph-mon[50611]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:57:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:57:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:38 vm05 ceph-mon[50611]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:57:39.986 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:39.985+0000 7fcce37fe700 1 -- 192.168.123.109:0/303557627 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fcce400bc50 con 0x7fccf0105a60 2026-03-09T14:57:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:39 vm05 ceph-mon[50611]: Deploying daemon osd.3 on vm09 2026-03-09T14:57:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:39 vm05 
ceph-mon[50611]: pgmap v39: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:57:40.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:39 vm09 ceph-mon[59673]: Deploying daemon osd.3 on vm09 2026-03-09T14:57:40.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:39 vm09 ceph-mon[59673]: pgmap v39: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:57:41.057 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 3 on host 'vm09' 2026-03-09T14:57:41.057 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:41.057+0000 7fcce37fe700 1 -- 192.168.123.109:0/303557627 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7fccf0061190 con 0x7fccdc06c5b0 2026-03-09T14:57:41.061 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:41.060+0000 7fccf88bd700 1 -- 192.168.123.109:0/303557627 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fccdc06c5b0 msgr2=0x7fccdc06ea60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:41.061 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:41.060+0000 7fccf88bd700 1 --2- 192.168.123.109:0/303557627 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fccdc06c5b0 0x7fccdc06ea60 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fccf0069570 tx=0x7fcce8006cb0 comp rx=0 tx=0).stop 2026-03-09T14:57:41.061 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:41.060+0000 7fccf88bd700 1 -- 192.168.123.109:0/303557627 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fccf0105a60 msgr2=0x7fccf0198620 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:41.061 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:41.060+0000 7fccf88bd700 1 --2- 192.168.123.109:0/303557627 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fccf0105a60 0x7fccf0198620 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto 
rx=0x7fcce40052d0 tx=0x7fcce4004a80 comp rx=0 tx=0).stop 2026-03-09T14:57:41.061 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:41.061+0000 7fccf88bd700 1 -- 192.168.123.109:0/303557627 shutdown_connections 2026-03-09T14:57:41.061 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:41.061+0000 7fccf88bd700 1 --2- 192.168.123.109:0/303557627 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fccdc06c5b0 0x7fccdc06ea60 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:41.061 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:41.061+0000 7fccf88bd700 1 --2- 192.168.123.109:0/303557627 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fccf00691a0 0x7fccf01980e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:41.061 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:41.061+0000 7fccf88bd700 1 --2- 192.168.123.109:0/303557627 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fccf0105a60 0x7fccf0198620 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:41.061 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:41.061+0000 7fccf88bd700 1 -- 192.168.123.109:0/303557627 >> 192.168.123.109:0/303557627 conn(0x7fccf00faa70 msgr2=0x7fccf00fcee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:57:41.061 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:41.061+0000 7fccf88bd700 1 -- 192.168.123.109:0/303557627 shutdown_connections 2026-03-09T14:57:41.061 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:41.061+0000 7fccf88bd700 1 -- 192.168.123.109:0/303557627 wait complete. 2026-03-09T14:57:41.118 DEBUG:teuthology.orchestra.run.vm09:osd.3> sudo journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.3.service 2026-03-09T14:57:41.119 INFO:tasks.cephadm:Deploying osd.4 on vm09 with /dev/vdd... 
2026-03-09T14:57:41.119 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- lvm zap /dev/vdd 2026-03-09T14:57:41.184 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:40 vm09 ceph-mon[59673]: mgrmap e19: vm05.lhsexd(active, since 62s), standbys: vm09.cfuwdz 2026-03-09T14:57:41.184 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:40 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:41.184 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:40 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:41.184 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:40 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:40 vm05 ceph-mon[50611]: mgrmap e19: vm05.lhsexd(active, since 62s), standbys: vm09.cfuwdz 2026-03-09T14:57:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:40 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:40 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:40 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:41.322 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm09/config 2026-03-09T14:57:41.884 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:57:41.903 
DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph orch daemon add osd vm09:/dev/vdd 2026-03-09T14:57:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:42 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:42 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:42 vm05 ceph-mon[50611]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:57:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:42 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:42.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:42 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:42.070 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:42 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:42.070 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:42 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:42.070 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:42 vm09 ceph-mon[59673]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:57:42.070 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:42 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:42.070 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:42 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:42.207 
INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm09/config 2026-03-09T14:57:42.616 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.614+0000 7f4f56a01700 1 -- 192.168.123.109:0/1342968874 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4f500ff4c0 msgr2=0x7f4f500ff8d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:42.616 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.615+0000 7f4f56a01700 1 --2- 192.168.123.109:0/1342968874 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4f500ff4c0 0x7f4f500ff8d0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f4f40009ab0 tx=0x7f4f40009dc0 comp rx=0 tx=0).stop 2026-03-09T14:57:42.616 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.616+0000 7f4f56a01700 1 -- 192.168.123.109:0/1342968874 shutdown_connections 2026-03-09T14:57:42.616 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.616+0000 7f4f56a01700 1 --2- 192.168.123.109:0/1342968874 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4f500ffea0 0x7f4f500fe040 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:42.616 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.616+0000 7f4f56a01700 1 --2- 192.168.123.109:0/1342968874 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4f500ff4c0 0x7f4f500ff8d0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:42.616 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.616+0000 7f4f56a01700 1 -- 192.168.123.109:0/1342968874 >> 192.168.123.109:0/1342968874 conn(0x7f4f500f9a90 msgr2=0x7f4f500fbee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:57:42.616 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.616+0000 7f4f56a01700 1 -- 192.168.123.109:0/1342968874 shutdown_connections 
2026-03-09T14:57:42.616 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.616+0000 7f4f56a01700 1 -- 192.168.123.109:0/1342968874 wait complete. 2026-03-09T14:57:42.617 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.617+0000 7f4f56a01700 1 Processor -- start 2026-03-09T14:57:42.617 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.617+0000 7f4f56a01700 1 -- start start 2026-03-09T14:57:42.617 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.618+0000 7f4f56a01700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4f500ff4c0 0x7f4f501053e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:42.617 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.618+0000 7f4f56a01700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4f500ffea0 0x7f4f50105920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:42.617 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.618+0000 7f4f56a01700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4f50101ed0 con 0x7f4f500ffea0 2026-03-09T14:57:42.617 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.618+0000 7f4f56a01700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4f50102040 con 0x7f4f500ff4c0 2026-03-09T14:57:42.617 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.618+0000 7f4f559ff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4f500ff4c0 0x7f4f501053e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:42.618 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.618+0000 7f4f559ff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4f500ff4c0 
0x7f4f501053e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.109:43402/0 (socket says 192.168.123.109:43402) 2026-03-09T14:57:42.618 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.618+0000 7f4f559ff700 1 -- 192.168.123.109:0/3108340267 learned_addr learned my addr 192.168.123.109:0/3108340267 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:57:42.618 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.618+0000 7f4f551fe700 1 --2- 192.168.123.109:0/3108340267 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4f500ffea0 0x7f4f50105920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:42.618 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.618+0000 7f4f559ff700 1 -- 192.168.123.109:0/3108340267 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4f500ffea0 msgr2=0x7f4f50105920 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:42.618 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.618+0000 7f4f559ff700 1 --2- 192.168.123.109:0/3108340267 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4f500ffea0 0x7f4f50105920 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:42.618 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.618+0000 7f4f559ff700 1 -- 192.168.123.109:0/3108340267 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4f40009710 con 0x7f4f500ff4c0 2026-03-09T14:57:42.618 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.619+0000 7f4f551fe700 1 --2- 192.168.123.109:0/3108340267 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4f500ffea0 0x7f4f50105920 unknown :-1 s=CLOSED 
pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T14:57:42.618 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.619+0000 7f4f559ff700 1 --2- 192.168.123.109:0/3108340267 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4f500ff4c0 0x7f4f501053e0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f4f400096e0 tx=0x7f4f4000fab0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:57:42.619 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.619+0000 7f4f4effd700 1 -- 192.168.123.109:0/3108340267 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4f4001d070 con 0x7f4f500ff4c0 2026-03-09T14:57:42.619 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.619+0000 7f4f56a01700 1 -- 192.168.123.109:0/3108340267 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4f501022c0 con 0x7f4f500ff4c0 2026-03-09T14:57:42.619 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.619+0000 7f4f56a01700 1 -- 192.168.123.109:0/3108340267 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4f501027b0 con 0x7f4f500ff4c0 2026-03-09T14:57:42.624 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.621+0000 7f4f4effd700 1 -- 192.168.123.109:0/3108340267 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f4f4000fdb0 con 0x7f4f500ff4c0 2026-03-09T14:57:42.624 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.621+0000 7f4f4effd700 1 -- 192.168.123.109:0/3108340267 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4f40017860 con 0x7f4f500ff4c0 2026-03-09T14:57:42.624 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.621+0000 7f4f4effd700 1 -- 192.168.123.109:0/3108340267 <== mon.1 
v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f4f40017a80 con 0x7f4f500ff4c0 2026-03-09T14:57:42.624 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.622+0000 7f4f4effd700 1 --2- 192.168.123.109:0/3108340267 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4f3806c870 0x7f4f3806ed20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:42.624 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.622+0000 7f4f4effd700 1 -- 192.168.123.109:0/3108340267 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(20..20 src has 1..20) v4 ==== 3165+0+0 (secure 0 0 0) 0x7f4f4008d980 con 0x7f4f500ff4c0 2026-03-09T14:57:42.625 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.622+0000 7f4f56a01700 1 -- 192.168.123.109:0/3108340267 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4f5004ea50 con 0x7f4f500ff4c0 2026-03-09T14:57:42.625 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.622+0000 7f4f551fe700 1 --2- 192.168.123.109:0/3108340267 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4f3806c870 0x7f4f3806ed20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:42.625 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.623+0000 7f4f551fe700 1 --2- 192.168.123.109:0/3108340267 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4f3806c870 0x7f4f3806ed20 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f4f44005950 tx=0x7f4f440058e0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:57:42.625 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.625+0000 7f4f4effd700 1 -- 192.168.123.109:0/3108340267 <== mon.1 v2:192.168.123.109:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f4f4005c450 con 0x7f4f500ff4c0 2026-03-09T14:57:42.763 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:42.763+0000 7f4f56a01700 1 -- 192.168.123.109:0/3108340267 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}) v1 -- 0x7f4f501a21d0 con 0x7f4f3806c870 2026-03-09T14:57:42.871 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 14:57:42 vm09 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[65598]: 2026-03-09T14:57:42.775+0000 7f29fa74b640 -1 osd.3 0 log_to_monitors true 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: Detected new or changed devices on vm09 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: 
from='client.24155 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='osd.3 [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:43 vm05 ceph-mon[50611]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:57:44.116 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 14:57:43 vm09 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[65598]: 2026-03-09T14:57:43.801+0000 7f29ef5c1700 -1 osd.3 0 waiting for initial osdmap 2026-03-09T14:57:44.116 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 14:57:43 vm09 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[65598]: 2026-03-09T14:57:43.817+0000 7f29ebbb7700 -1 osd.3 22 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: Detected new or changed devices on vm09 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='client.24155 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='osd.3 [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:44.116 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:44.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:43 vm09 ceph-mon[59673]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: from='osd.3 [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: osdmap e21: 4 total, 3 up, 4 in 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: from='osd.3 [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c4ddfd7f-8055-4c53-a70a-131428da743a"}]: dispatch 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/3226442566' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c4ddfd7f-8055-4c53-a70a-131428da743a"}]: dispatch 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: from='osd.3 [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c4ddfd7f-8055-4c53-a70a-131428da743a"}]': finished 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: osdmap e22: 5 total, 3 up, 5 in 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:44 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/3585008724' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:57:45.115 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: from='osd.3 [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:57:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: osdmap e21: 4 total, 3 up, 4 in 2026-03-09T14:57:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: from='osd.3 [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T14:57:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c4ddfd7f-8055-4c53-a70a-131428da743a"}]: dispatch 2026-03-09T14:57:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/3226442566' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c4ddfd7f-8055-4c53-a70a-131428da743a"}]: dispatch 2026-03-09T14:57:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: from='osd.3 [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T14:57:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c4ddfd7f-8055-4c53-a70a-131428da743a"}]': finished 2026-03-09T14:57:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: osdmap e22: 5 total, 3 up, 5 in 2026-03-09T14:57:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:44 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/3585008724' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:57:46.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:45 vm09 ceph-mon[59673]: purged_snaps scrub starts 2026-03-09T14:57:46.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:45 vm09 ceph-mon[59673]: purged_snaps scrub ok 2026-03-09T14:57:46.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:45 vm09 ceph-mon[59673]: osd.3 [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815] boot 2026-03-09T14:57:46.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:45 vm09 ceph-mon[59673]: osdmap e23: 5 total, 4 up, 5 in 2026-03-09T14:57:46.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:45 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:46.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:45 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:46.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:45 vm09 ceph-mon[59673]: pgmap v45: 1 pgs: 1 remapped+peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:57:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:45 vm05 ceph-mon[50611]: purged_snaps scrub starts 2026-03-09T14:57:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:45 vm05 ceph-mon[50611]: purged_snaps scrub ok 2026-03-09T14:57:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:45 vm05 ceph-mon[50611]: osd.3 [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815] boot 2026-03-09T14:57:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:45 vm05 ceph-mon[50611]: osdmap e23: 5 total, 4 up, 5 in 2026-03-09T14:57:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:45 vm05 ceph-mon[50611]: 
from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:57:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:45 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:45 vm05 ceph-mon[50611]: pgmap v45: 1 pgs: 1 remapped+peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:57:47.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:46 vm09 ceph-mon[59673]: osdmap e24: 5 total, 4 up, 5 in 2026-03-09T14:57:47.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:46 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:46 vm05 ceph-mon[50611]: osdmap e24: 5 total, 4 up, 5 in 2026-03-09T14:57:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:46 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:47.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:47 vm09 ceph-mon[59673]: osdmap e25: 5 total, 4 up, 5 in 2026-03-09T14:57:47.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:47 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:47.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:47 vm09 ceph-mon[59673]: pgmap v48: 1 pgs: 1 remapped+peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:57:48.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:47 vm05 ceph-mon[50611]: osdmap e25: 5 total, 4 up, 5 in 2026-03-09T14:57:48.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:47 
vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:48.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:47 vm05 ceph-mon[50611]: pgmap v48: 1 pgs: 1 remapped+peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:57:49.048 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:48 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:57:49.048 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:48 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:49.048 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:48 vm09 ceph-mon[59673]: Deploying daemon osd.4 on vm09 2026-03-09T14:57:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:48 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:57:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:48 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:48 vm05 ceph-mon[50611]: Deploying daemon osd.4 on vm09 2026-03-09T14:57:50.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:49 vm09 ceph-mon[59673]: pgmap v49: 1 pgs: 1 remapped+peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:57:50.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:49 vm05 ceph-mon[50611]: pgmap v49: 1 pgs: 1 remapped+peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:57:50.948 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:50 vm09 ceph-mon[59673]: from='mgr.14249 
192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:50.948 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:50 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:50.948 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:50 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:50.993 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 4 on host 'vm09' 2026-03-09T14:57:50.993 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:50.993+0000 7f4f4effd700 1 -- 192.168.123.109:0/3108340267 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7f4f501a21d0 con 0x7f4f3806c870 2026-03-09T14:57:50.995 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:50.995+0000 7f4f56a01700 1 -- 192.168.123.109:0/3108340267 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4f3806c870 msgr2=0x7f4f3806ed20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:50.995 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:50.995+0000 7f4f56a01700 1 --2- 192.168.123.109:0/3108340267 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4f3806c870 0x7f4f3806ed20 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f4f44005950 tx=0x7f4f440058e0 comp rx=0 tx=0).stop 2026-03-09T14:57:50.995 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:50.995+0000 7f4f56a01700 1 -- 192.168.123.109:0/3108340267 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4f500ff4c0 msgr2=0x7f4f501053e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:50.995 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:50.995+0000 7f4f56a01700 1 --2- 192.168.123.109:0/3108340267 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4f500ff4c0 0x7f4f501053e0 secure 
:-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f4f400096e0 tx=0x7f4f4000fab0 comp rx=0 tx=0).stop 2026-03-09T14:57:50.995 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:50.995+0000 7f4f56a01700 1 -- 192.168.123.109:0/3108340267 shutdown_connections 2026-03-09T14:57:50.995 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:50.995+0000 7f4f56a01700 1 --2- 192.168.123.109:0/3108340267 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4f3806c870 0x7f4f3806ed20 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:50.995 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:50.995+0000 7f4f56a01700 1 --2- 192.168.123.109:0/3108340267 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4f500ff4c0 0x7f4f501053e0 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:50.995 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:50.995+0000 7f4f56a01700 1 --2- 192.168.123.109:0/3108340267 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4f500ffea0 0x7f4f50105920 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:50.995 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:50.995+0000 7f4f56a01700 1 -- 192.168.123.109:0/3108340267 >> 192.168.123.109:0/3108340267 conn(0x7f4f500f9a90 msgr2=0x7f4f50107e90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:57:50.995 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:50.995+0000 7f4f56a01700 1 -- 192.168.123.109:0/3108340267 shutdown_connections 2026-03-09T14:57:50.995 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:50.996+0000 7f4f56a01700 1 -- 192.168.123.109:0/3108340267 wait complete. 
2026-03-09T14:57:51.066 DEBUG:teuthology.orchestra.run.vm09:osd.4> sudo journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.4.service 2026-03-09T14:57:51.067 INFO:tasks.cephadm:Deploying osd.5 on vm09 with /dev/vdc... 2026-03-09T14:57:51.067 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- lvm zap /dev/vdc 2026-03-09T14:57:51.301 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm09/config 2026-03-09T14:57:51.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:50 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:51.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:50 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:51.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:50 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:51.855 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T14:57:51.869 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph orch daemon add osd vm09:/dev/vdc 2026-03-09T14:57:52.085 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm09/config 2026-03-09T14:57:52.114 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:51 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:52.114 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:51 vm09 ceph-mon[59673]: from='mgr.14249 
192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:52.114 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:51 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:52.114 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:51 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:52.114 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:51 vm09 ceph-mon[59673]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 72 KiB/s, 0 objects/s recovering 2026-03-09T14:57:52.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:51 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:52.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:51 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:52.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:51 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:52.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:51 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:52.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:51 vm05 ceph-mon[50611]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 72 KiB/s, 0 objects/s recovering 2026-03-09T14:57:52.404 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.403+0000 7f1384992700 1 -- 192.168.123.109:0/19581289 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1380072330 msgr2=0x7f13800770b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:52.404 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.403+0000 7f1384992700 1 --2- 192.168.123.109:0/19581289 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1380072330 
0x7f13800770b0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f137800d3f0 tx=0x7f137800d700 comp rx=0 tx=0).stop 2026-03-09T14:57:52.404 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.404+0000 7f1384992700 1 -- 192.168.123.109:0/19581289 shutdown_connections 2026-03-09T14:57:52.404 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.404+0000 7f1384992700 1 --2- 192.168.123.109:0/19581289 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1380072330 0x7f13800770b0 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:52.404 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.404+0000 7f1384992700 1 --2- 192.168.123.109:0/19581289 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1380071950 0x7f1380071d60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:52.404 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.404+0000 7f1384992700 1 -- 192.168.123.109:0/19581289 >> 192.168.123.109:0/19581289 conn(0x7f138006d1a0 msgr2=0x7f138006f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:57:52.407 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.404+0000 7f1384992700 1 -- 192.168.123.109:0/19581289 shutdown_connections 2026-03-09T14:57:52.408 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.404+0000 7f1384992700 1 -- 192.168.123.109:0/19581289 wait complete. 
2026-03-09T14:57:52.408 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f1384992700 1 Processor -- start 2026-03-09T14:57:52.408 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f1384992700 1 -- start start 2026-03-09T14:57:52.408 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f1384992700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1380071950 0x7f13801313a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:52.408 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f1384992700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13801318e0 0x7f138007f540 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:52.408 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f1384992700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1380131de0 con 0x7f13801318e0 2026-03-09T14:57:52.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f1384992700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1380131f20 con 0x7f1380071950 2026-03-09T14:57:52.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f137effd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13801318e0 0x7f138007f540 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:52.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f137effd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13801318e0 0x7f138007f540 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.109:49458/0 (socket says 192.168.123.109:49458) 2026-03-09T14:57:52.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f137effd700 1 -- 192.168.123.109:0/3543792831 learned_addr learned my addr 192.168.123.109:0/3543792831 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:57:52.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f137f7fe700 1 --2- 192.168.123.109:0/3543792831 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1380071950 0x7f13801313a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:52.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f137effd700 1 -- 192.168.123.109:0/3543792831 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1380071950 msgr2=0x7f13801313a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:57:52.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f137effd700 1 --2- 192.168.123.109:0/3543792831 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1380071950 0x7f13801313a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:57:52.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.408+0000 7f137effd700 1 -- 192.168.123.109:0/3543792831 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1378007ed0 con 0x7f13801318e0 2026-03-09T14:57:52.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.409+0000 7f137effd700 1 --2- 192.168.123.109:0/3543792831 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13801318e0 0x7f138007f540 secure :-1 s=READY pgs=218 cs=0 l=1 rev1=1 crypto rx=0x7f1378004010 tx=0x7f1378004040 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T14:57:52.411 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.412+0000 7f137cff9700 1 -- 192.168.123.109:0/3543792831 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f137801c070 con 0x7f13801318e0 2026-03-09T14:57:52.412 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.412+0000 7f1384992700 1 -- 192.168.123.109:0/3543792831 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f138007fa80 con 0x7f13801318e0 2026-03-09T14:57:52.412 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.412+0000 7f1384992700 1 -- 192.168.123.109:0/3543792831 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f138007ffa0 con 0x7f13801318e0 2026-03-09T14:57:52.412 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.413+0000 7f1384992700 1 -- 192.168.123.109:0/3543792831 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f138012b500 con 0x7f13801318e0 2026-03-09T14:57:52.412 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.413+0000 7f137cff9700 1 -- 192.168.123.109:0/3543792831 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f137800fb40 con 0x7f13801318e0 2026-03-09T14:57:52.413 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.413+0000 7f137cff9700 1 -- 192.168.123.109:0/3543792831 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1378017860 con 0x7f13801318e0 2026-03-09T14:57:52.414 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.414+0000 7f137cff9700 1 -- 192.168.123.109:0/3543792831 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f13780179c0 con 0x7f13801318e0 2026-03-09T14:57:52.414 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.415+0000 
7f137cff9700 1 --2- 192.168.123.109:0/3543792831 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f136806c7a0 0x7f136806ec50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:57:52.414 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.415+0000 7f137cff9700 1 -- 192.168.123.109:0/3543792831 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(25..25 src has 1..25) v4 ==== 3697+0+0 (secure 0 0 0) 0x7f1378013070 con 0x7f13801318e0 2026-03-09T14:57:52.415 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.415+0000 7f137f7fe700 1 --2- 192.168.123.109:0/3543792831 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f136806c7a0 0x7f136806ec50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:57:52.415 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.415+0000 7f137f7fe700 1 --2- 192.168.123.109:0/3543792831 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f136806c7a0 0x7f136806ec50 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f1370009c80 tx=0x7f1370009400 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:57:52.416 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.417+0000 7f137cff9700 1 -- 192.168.123.109:0/3543792831 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f137805cc80 con 0x7f13801318e0 2026-03-09T14:57:52.585 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:57:52.585+0000 7f1384992700 1 -- 192.168.123.109:0/3543792831 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}) v1 -- 0x7f1380061190 con 0x7f136806c7a0 2026-03-09T14:57:53.304 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:57:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:57:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:57:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:57:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:53.304 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:53 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:53.367 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 14:57:53 vm09 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[71018]: 2026-03-09T14:57:53.094+0000 7fb459773640 -1 osd.4 0 log_to_monitors true 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 
14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:57:53.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:53 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: from='client.14362 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: Detected new or changed devices on vm09 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: from='osd.4 [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ff88b04-4b34-4df6-bf77-2132a823172e"}]: dispatch 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/4240764021' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ff88b04-4b34-4df6-bf77-2132a823172e"}]: dispatch 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0ff88b04-4b34-4df6-bf77-2132a823172e"}]': finished 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: osdmap e26: 6 total, 4 up, 6 in 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: from='osd.4 [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T14:57:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:54 vm05 ceph-mon[50611]: from='client.? 
192.168.123.109:0/2180669217' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: from='client.14362 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: Detected new or changed devices on vm09 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: from='osd.4 [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ff88b04-4b34-4df6-bf77-2132a823172e"}]: dispatch 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/4240764021' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ff88b04-4b34-4df6-bf77-2132a823172e"}]: dispatch 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0ff88b04-4b34-4df6-bf77-2132a823172e"}]': finished 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: osdmap e26: 6 total, 4 up, 6 in 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: from='osd.4 [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T14:57:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:54 vm09 ceph-mon[59673]: from='client.? 
192.168.123.109:0/2180669217' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:57:54.866 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 14:57:54 vm09 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[71018]: 2026-03-09T14:57:54.547+0000 7fb44e5e9700 -1 osd.4 0 waiting for initial osdmap 2026-03-09T14:57:54.866 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 14:57:54 vm09 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[71018]: 2026-03-09T14:57:54.566+0000 7fb44abdf700 -1 osd.4 27 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:57:55.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:55 vm05 ceph-mon[50611]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T14:57:55.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:55 vm05 ceph-mon[50611]: osdmap e27: 6 total, 4 up, 6 in 2026-03-09T14:57:55.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:55 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:55.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:55 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:57:55.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:55 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:55.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:55 vm05 ceph-mon[50611]: pgmap v54: 1 pgs: 1 peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:57:55.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:55 vm09 ceph-mon[59673]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd 
crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T14:57:55.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:55 vm09 ceph-mon[59673]: osdmap e27: 6 total, 4 up, 6 in 2026-03-09T14:57:55.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:55 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:55.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:55 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:57:55.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:55 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:55.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:55 vm09 ceph-mon[59673]: pgmap v54: 1 pgs: 1 peering; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:57:56.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:56 vm05 ceph-mon[50611]: purged_snaps scrub starts 2026-03-09T14:57:56.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:56 vm05 ceph-mon[50611]: purged_snaps scrub ok 2026-03-09T14:57:56.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:56 vm05 ceph-mon[50611]: osd.4 [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602] boot 2026-03-09T14:57:56.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:56 vm05 ceph-mon[50611]: osdmap e28: 6 total, 5 up, 6 in 2026-03-09T14:57:56.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:56 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:56.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:56 vm05 ceph-mon[50611]: from='mgr.14249 
192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:57:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:56 vm09 ceph-mon[59673]: purged_snaps scrub starts 2026-03-09T14:57:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:56 vm09 ceph-mon[59673]: purged_snaps scrub ok 2026-03-09T14:57:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:56 vm09 ceph-mon[59673]: osd.4 [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602] boot 2026-03-09T14:57:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:56 vm09 ceph-mon[59673]: osdmap e28: 6 total, 5 up, 6 in 2026-03-09T14:57:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:57:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:56 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:57:57.790 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:57 vm09 ceph-mon[59673]: osdmap e29: 6 total, 5 up, 6 in 2026-03-09T14:57:57.790 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:57 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:57:57.790 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:57 vm09 ceph-mon[59673]: pgmap v57: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:57:58.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:57 vm05 ceph-mon[50611]: osdmap e29: 6 total, 5 up, 6 in 2026-03-09T14:57:58.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:57 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: 
dispatch 2026-03-09T14:57:58.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:57 vm05 ceph-mon[50611]: pgmap v57: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:57:59.073 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:58 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:57:59.073 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:58 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:59.073 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:58 vm09 ceph-mon[59673]: Deploying daemon osd.5 on vm09 2026-03-09T14:57:59.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:58 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:57:59.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:58 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:57:59.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:58 vm05 ceph-mon[50611]: Deploying daemon osd.5 on vm09 2026-03-09T14:57:59.864 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:59 vm09 ceph-mon[59673]: pgmap v58: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:57:59.864 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:59 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:59.864 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:59 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:57:59.864 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:57:59 vm09 ceph-mon[59673]: from='mgr.14249 
192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:58:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:59 vm05 ceph-mon[50611]: pgmap v58: 1 pgs: 1 peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:58:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:59 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:59 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:57:59 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:58:00.713 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 5 on host 'vm09' 2026-03-09T14:58:00.713 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:00.712+0000 7f137cff9700 1 -- 192.168.123.109:0/3543792831 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+32 (secure 0 0 0) 0x7f1380061190 con 0x7f136806c7a0 2026-03-09T14:58:00.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:00.715+0000 7f1384992700 1 -- 192.168.123.109:0/3543792831 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f136806c7a0 msgr2=0x7f136806ec50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:00.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:00.715+0000 7f1384992700 1 --2- 192.168.123.109:0/3543792831 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f136806c7a0 0x7f136806ec50 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f1370009c80 tx=0x7f1370009400 comp rx=0 tx=0).stop 2026-03-09T14:58:00.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:00.715+0000 7f1384992700 1 -- 
192.168.123.109:0/3543792831 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13801318e0 msgr2=0x7f138007f540 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:00.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:00.715+0000 7f1384992700 1 --2- 192.168.123.109:0/3543792831 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13801318e0 0x7f138007f540 secure :-1 s=READY pgs=218 cs=0 l=1 rev1=1 crypto rx=0x7f1378004010 tx=0x7f1378004040 comp rx=0 tx=0).stop 2026-03-09T14:58:00.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:00.715+0000 7f1384992700 1 -- 192.168.123.109:0/3543792831 shutdown_connections 2026-03-09T14:58:00.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:00.715+0000 7f1384992700 1 --2- 192.168.123.109:0/3543792831 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f136806c7a0 0x7f136806ec50 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:00.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:00.715+0000 7f1384992700 1 --2- 192.168.123.109:0/3543792831 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1380071950 0x7f13801313a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:00.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:00.715+0000 7f1384992700 1 --2- 192.168.123.109:0/3543792831 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13801318e0 0x7f138007f540 unknown :-1 s=CLOSED pgs=218 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:00.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:00.715+0000 7f1384992700 1 -- 192.168.123.109:0/3543792831 >> 192.168.123.109:0/3543792831 conn(0x7f138006d1a0 msgr2=0x7f1380076470 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:00.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:00.715+0000 
7f1384992700 1 -- 192.168.123.109:0/3543792831 shutdown_connections 2026-03-09T14:58:00.715 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:00.716+0000 7f1384992700 1 -- 192.168.123.109:0/3543792831 wait complete. 2026-03-09T14:58:00.793 DEBUG:teuthology.orchestra.run.vm09:osd.5> sudo journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.5.service 2026-03-09T14:58:00.795 INFO:tasks.cephadm:Waiting for 6 OSDs to come up... 2026-03-09T14:58:00.795 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd stat -f json 2026-03-09T14:58:00.964 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:01.246 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.244+0000 7efce2dd4700 1 -- 192.168.123.105:0/1625720441 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efcdc102760 msgr2=0x7efcdc102b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:01.246 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.244+0000 7efce2dd4700 1 --2- 192.168.123.105:0/1625720441 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efcdc102760 0x7efcdc102b70 secure :-1 s=READY pgs=220 cs=0 l=1 rev1=1 crypto rx=0x7efccc009b00 tx=0x7efccc009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:01.246 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.245+0000 7efce2dd4700 1 -- 192.168.123.105:0/1625720441 shutdown_connections 2026-03-09T14:58:01.246 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.245+0000 7efce2dd4700 1 --2- 192.168.123.105:0/1625720441 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efcdc103960 0x7efcdc103db0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T14:58:01.246 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.245+0000 7efce2dd4700 1 --2- 192.168.123.105:0/1625720441 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efcdc102760 0x7efcdc102b70 unknown :-1 s=CLOSED pgs=220 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:01.246 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.245+0000 7efce2dd4700 1 -- 192.168.123.105:0/1625720441 >> 192.168.123.105:0/1625720441 conn(0x7efcdc0fdcf0 msgr2=0x7efcdc100140 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:01.247 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.246+0000 7efce2dd4700 1 -- 192.168.123.105:0/1625720441 shutdown_connections 2026-03-09T14:58:01.247 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.246+0000 7efce2dd4700 1 -- 192.168.123.105:0/1625720441 wait complete. 2026-03-09T14:58:01.247 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.246+0000 7efce2dd4700 1 Processor -- start 2026-03-09T14:58:01.247 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.246+0000 7efce2dd4700 1 -- start start 2026-03-09T14:58:01.247 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.247+0000 7efce2dd4700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efcdc103960 0x7efcdc198290 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:01.248 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.247+0000 7efce2dd4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efcdc1987d0 0x7efcdc19d840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:01.248 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.247+0000 7efce2dd4700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efcdc198cd0 con 0x7efcdc1987d0 2026-03-09T14:58:01.248 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.247+0000 7efce2dd4700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efcdc198e40 con 0x7efcdc103960 2026-03-09T14:58:01.248 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.247+0000 7efcdbfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efcdc1987d0 0x7efcdc19d840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:01.248 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.247+0000 7efcdbfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efcdc1987d0 0x7efcdc19d840 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:46118/0 (socket says 192.168.123.105:46118) 2026-03-09T14:58:01.248 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.247+0000 7efcdbfff700 1 -- 192.168.123.105:0/187313358 learned_addr learned my addr 192.168.123.105:0/187313358 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:01.248 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.247+0000 7efcdbfff700 1 -- 192.168.123.105:0/187313358 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efcdc103960 msgr2=0x7efcdc198290 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:01.248 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.247+0000 7efcdbfff700 1 --2- 192.168.123.105:0/187313358 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efcdc103960 0x7efcdc198290 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:01.248 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.247+0000 7efcdbfff700 1 -- 192.168.123.105:0/187313358 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7efccc0097e0 con 0x7efcdc1987d0 2026-03-09T14:58:01.249 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.247+0000 7efcdbfff700 1 --2- 192.168.123.105:0/187313358 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efcdc1987d0 0x7efcdc19d840 secure :-1 s=READY pgs=221 cs=0 l=1 rev1=1 crypto rx=0x7efcd000eb10 tx=0x7efcd000eed0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:01.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.248+0000 7efcd9ffb700 1 -- 192.168.123.105:0/187313358 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efcd000cca0 con 0x7efcdc1987d0 2026-03-09T14:58:01.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.248+0000 7efcd9ffb700 1 -- 192.168.123.105:0/187313358 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7efcd000ce00 con 0x7efcdc1987d0 2026-03-09T14:58:01.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.248+0000 7efcd9ffb700 1 -- 192.168.123.105:0/187313358 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efcd00189c0 con 0x7efcdc1987d0 2026-03-09T14:58:01.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.248+0000 7efce2dd4700 1 -- 192.168.123.105:0/187313358 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7efcdc19dde0 con 0x7efcdc1987d0 2026-03-09T14:58:01.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.248+0000 7efce2dd4700 1 -- 192.168.123.105:0/187313358 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7efcdc19e300 con 0x7efcdc1987d0 2026-03-09T14:58:01.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.249+0000 7efce2dd4700 1 -- 
192.168.123.105:0/187313358 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7efcdc066e40 con 0x7efcdc1987d0 2026-03-09T14:58:01.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.250+0000 7efcd9ffb700 1 -- 192.168.123.105:0/187313358 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7efcd0018b20 con 0x7efcdc1987d0 2026-03-09T14:58:01.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.250+0000 7efcd9ffb700 1 --2- 192.168.123.105:0/187313358 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7efcc406c750 0x7efcc406ec00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:01.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.250+0000 7efcd9ffb700 1 -- 192.168.123.105:0/187313358 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(29..29 src has 1..29) v4 ==== 4129+0+0 (secure 0 0 0) 0x7efcd0014070 con 0x7efcdc1987d0 2026-03-09T14:58:01.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.253+0000 7efce0b70700 1 --2- 192.168.123.105:0/187313358 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7efcc406c750 0x7efcc406ec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:01.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.253+0000 7efce0b70700 1 --2- 192.168.123.105:0/187313358 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7efcc406c750 0x7efcc406ec00 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7efccc00b5c0 tx=0x7efccc005fb0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:01.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.253+0000 7efcd9ffb700 1 -- 192.168.123.105:0/187313358 <== mon.0 
v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7efcd005a430 con 0x7efcdc1987d0 2026-03-09T14:58:01.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.362+0000 7efce2dd4700 1 -- 192.168.123.105:0/187313358 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) v1 -- 0x7efcdc19e5b0 con 0x7efcdc1987d0 2026-03-09T14:58:01.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.363+0000 7efcd9ffb700 1 -- 192.168.123.105:0/187313358 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v29) v1 ==== 74+0+130 (secure 0 0 0) 0x7efcd0059fc0 con 0x7efcdc1987d0 2026-03-09T14:58:01.365 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:01.366 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.365+0000 7efce2dd4700 1 -- 192.168.123.105:0/187313358 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7efcc406c750 msgr2=0x7efcc406ec00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:01.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.365+0000 7efce2dd4700 1 --2- 192.168.123.105:0/187313358 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7efcc406c750 0x7efcc406ec00 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7efccc00b5c0 tx=0x7efccc005fb0 comp rx=0 tx=0).stop 2026-03-09T14:58:01.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.365+0000 7efce2dd4700 1 -- 192.168.123.105:0/187313358 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efcdc1987d0 msgr2=0x7efcdc19d840 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:01.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.365+0000 7efce2dd4700 1 --2- 192.168.123.105:0/187313358 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7efcdc1987d0 0x7efcdc19d840 secure :-1 s=READY pgs=221 cs=0 l=1 rev1=1 crypto rx=0x7efcd000eb10 tx=0x7efcd000eed0 comp rx=0 tx=0).stop 2026-03-09T14:58:01.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.366+0000 7efce2dd4700 1 -- 192.168.123.105:0/187313358 shutdown_connections 2026-03-09T14:58:01.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.366+0000 7efce2dd4700 1 --2- 192.168.123.105:0/187313358 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7efcc406c750 0x7efcc406ec00 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:01.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.366+0000 7efce2dd4700 1 --2- 192.168.123.105:0/187313358 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efcdc103960 0x7efcdc198290 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:01.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.366+0000 7efce2dd4700 1 --2- 192.168.123.105:0/187313358 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efcdc1987d0 0x7efcdc19d840 unknown :-1 s=CLOSED pgs=221 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:01.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.366+0000 7efce2dd4700 1 -- 192.168.123.105:0/187313358 >> 192.168.123.105:0/187313358 conn(0x7efcdc0fdcf0 msgr2=0x7efcdc106b90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:01.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.366+0000 7efce2dd4700 1 -- 192.168.123.105:0/187313358 shutdown_connections 2026-03-09T14:58:01.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:01.366+0000 7efce2dd4700 1 -- 192.168.123.105:0/187313358 wait complete. 
2026-03-09T14:58:01.433 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":29,"num_osds":6,"num_up_osds":5,"osd_up_since":1773068275,"num_in_osds":6,"osd_in_since":1773068273,"num_remapped_pgs":0} 2026-03-09T14:58:02.033 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:01 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:02.033 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:01 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:02.034 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:01 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:02.034 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:01 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:02.034 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:01 vm09 ceph-mon[59673]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:58:02.034 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:01 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/187313358' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:58:02.034 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 14:58:01 vm09 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[76422]: 2026-03-09T14:58:01.847+0000 7f8bfc0ad640 -1 osd.5 0 log_to_monitors true 2026-03-09T14:58:02.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:01 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:02.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:01 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:02.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:01 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:02.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:01 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:02.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:01 vm05 ceph-mon[50611]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:58:02.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:01 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/187313358' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:58:02.434 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd stat -f json 2026-03-09T14:58:02.600 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: from='osd.5 [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: Detected new or changed devices on vm09 2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:58:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:02 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:02.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.883+0000 7f344a385700 1 -- 192.168.123.105:0/2120987754 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3444103a00 msgr2=0x7f3444103e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:02.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.883+0000 7f344a385700 1 --2- 192.168.123.105:0/2120987754 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3444103a00 0x7f3444103e70 secure :-1 s=READY pgs=222 cs=0 l=1 rev1=1 crypto rx=0x7f3434009b00 tx=0x7f3434009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:02.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.884+0000 7f344a385700 1 -- 
192.168.123.105:0/2120987754 shutdown_connections 2026-03-09T14:58:02.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.884+0000 7f344a385700 1 --2- 192.168.123.105:0/2120987754 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3444103a00 0x7f3444103e70 unknown :-1 s=CLOSED pgs=222 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:02.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.884+0000 7f344a385700 1 --2- 192.168.123.105:0/2120987754 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3444102760 0x7f3444102b70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:02.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.884+0000 7f344a385700 1 -- 192.168.123.105:0/2120987754 >> 192.168.123.105:0/2120987754 conn(0x7f34440fddb0 msgr2=0x7f34441001e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:02.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.885+0000 7f344a385700 1 -- 192.168.123.105:0/2120987754 shutdown_connections 2026-03-09T14:58:02.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.885+0000 7f344a385700 1 -- 192.168.123.105:0/2120987754 wait complete. 
2026-03-09T14:58:02.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.885+0000 7f344a385700 1 Processor -- start 2026-03-09T14:58:02.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.885+0000 7f344a385700 1 -- start start 2026-03-09T14:58:02.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.886+0000 7f344a385700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3444102760 0x7f3444197f50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:02.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.886+0000 7f344a385700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3444103a00 0x7f3444198490 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:02.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.886+0000 7f344a385700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3444198ab0 con 0x7f3444103a00 2026-03-09T14:58:02.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.886+0000 7f344a385700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3444198bf0 con 0x7f3444102760 2026-03-09T14:58:02.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.886+0000 7f34437fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3444103a00 0x7f3444198490 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:02.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.886+0000 7f34437fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3444103a00 0x7f3444198490 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:46136/0 (socket says 192.168.123.105:46136) 2026-03-09T14:58:02.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.886+0000 7f34437fe700 1 -- 192.168.123.105:0/884996800 learned_addr learned my addr 192.168.123.105:0/884996800 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:02.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.886+0000 7f3443fff700 1 --2- 192.168.123.105:0/884996800 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3444102760 0x7f3444197f50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:02.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.886+0000 7f34437fe700 1 -- 192.168.123.105:0/884996800 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3444102760 msgr2=0x7f3444197f50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:02.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.886+0000 7f34437fe700 1 --2- 192.168.123.105:0/884996800 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3444102760 0x7f3444197f50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:02.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.886+0000 7f34437fe700 1 -- 192.168.123.105:0/884996800 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f34340097e0 con 0x7f3444103a00 2026-03-09T14:58:02.888 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.887+0000 7f34437fe700 1 --2- 192.168.123.105:0/884996800 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3444103a00 0x7f3444198490 secure :-1 s=READY pgs=223 cs=0 l=1 rev1=1 crypto rx=0x7f3434009fd0 tx=0x7f3434004970 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T14:58:02.888 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.887+0000 7f34417fa700 1 -- 192.168.123.105:0/884996800 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f343401d070 con 0x7f3444103a00 2026-03-09T14:58:02.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.887+0000 7f34417fa700 1 -- 192.168.123.105:0/884996800 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f343400bb40 con 0x7f3444103a00 2026-03-09T14:58:02.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.887+0000 7f34417fa700 1 -- 192.168.123.105:0/884996800 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f343400f670 con 0x7f3444103a00 2026-03-09T14:58:02.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.887+0000 7f344a385700 1 -- 192.168.123.105:0/884996800 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f344419d640 con 0x7f3444103a00 2026-03-09T14:58:02.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.887+0000 7f344a385700 1 -- 192.168.123.105:0/884996800 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f344419db30 con 0x7f3444103a00 2026-03-09T14:58:02.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.888+0000 7f344a385700 1 -- 192.168.123.105:0/884996800 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3444066e40 con 0x7f3444103a00 2026-03-09T14:58:02.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.890+0000 7f34417fa700 1 -- 192.168.123.105:0/884996800 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f3434004d50 con 0x7f3444103a00 2026-03-09T14:58:02.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.890+0000 
7f34417fa700 1 --2- 192.168.123.105:0/884996800 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f343006c7a0 0x7f343006ec50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:02.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.890+0000 7f34417fa700 1 -- 192.168.123.105:0/884996800 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(30..30 src has 1..30) v4 ==== 4150+0+0 (secure 0 0 0) 0x7f343408d2a0 con 0x7f3444103a00 2026-03-09T14:58:02.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.890+0000 7f3443fff700 1 --2- 192.168.123.105:0/884996800 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f343006c7a0 0x7f343006ec50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:02.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.890+0000 7f3443fff700 1 --2- 192.168.123.105:0/884996800 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f343006c7a0 0x7f343006ec50 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f342c009de0 tx=0x7f342c009450 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:02.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.892+0000 7f34417fa700 1 -- 192.168.123.105:0/884996800 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f343405b990 con 0x7f3444103a00 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:02 vm05 ceph-mon[50611]: from='osd.5 [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:02 vm05 ceph-mon[50611]: 
from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:02 vm05 ceph-mon[50611]: Detected new or changed devices on vm09 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:02 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:02 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:02 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:02 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:02 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:02 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:02 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:02 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
14:58:02 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:58:03.001 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:02 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:03.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:02.998+0000 7f344a385700 1 -- 192.168.123.105:0/884996800 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) v1 -- 0x7f344419df00 con 0x7f3444103a00 2026-03-09T14:58:03.002 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:03.001+0000 7f34417fa700 1 -- 192.168.123.105:0/884996800 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v30) v1 ==== 74+0+130 (secure 0 0 0) 0x7f343405b520 con 0x7f3444103a00 2026-03-09T14:58:03.002 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:03.003 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:03.002+0000 7f344a385700 1 -- 192.168.123.105:0/884996800 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f343006c7a0 msgr2=0x7f343006ec50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:03.003 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:03.002+0000 7f344a385700 1 --2- 192.168.123.105:0/884996800 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f343006c7a0 0x7f343006ec50 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f342c009de0 tx=0x7f342c009450 comp rx=0 tx=0).stop 2026-03-09T14:58:03.003 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:03.002+0000 7f344a385700 1 -- 192.168.123.105:0/884996800 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3444103a00 msgr2=0x7f3444198490 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:03.004 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:03.003+0000 7f344a385700 1 --2- 192.168.123.105:0/884996800 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3444103a00 0x7f3444198490 secure :-1 s=READY pgs=223 cs=0 l=1 rev1=1 crypto rx=0x7f3434009fd0 tx=0x7f3434004970 comp rx=0 tx=0).stop 2026-03-09T14:58:03.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:03.003+0000 7f344a385700 1 -- 192.168.123.105:0/884996800 shutdown_connections 2026-03-09T14:58:03.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:03.003+0000 7f344a385700 1 --2- 192.168.123.105:0/884996800 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f343006c7a0 0x7f343006ec50 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:03.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:03.003+0000 7f344a385700 1 --2- 192.168.123.105:0/884996800 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3444102760 0x7f3444197f50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:03.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:03.003+0000 7f344a385700 1 --2- 192.168.123.105:0/884996800 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3444103a00 0x7f3444198490 unknown :-1 s=CLOSED pgs=223 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:03.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:03.003+0000 7f344a385700 1 -- 192.168.123.105:0/884996800 >> 192.168.123.105:0/884996800 conn(0x7f34440fddb0 msgr2=0x7f3444100160 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:03.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:03.003+0000 7f344a385700 1 -- 192.168.123.105:0/884996800 shutdown_connections 2026-03-09T14:58:03.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:03.003+0000 7f344a385700 1 -- 192.168.123.105:0/884996800 wait 
complete. 2026-03-09T14:58:03.075 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":30,"num_osds":6,"num_up_osds":5,"osd_up_since":1773068275,"num_in_osds":6,"osd_in_since":1773068273,"num_remapped_pgs":0} 2026-03-09T14:58:04.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:03 vm05 ceph-mon[50611]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T14:58:04.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:03 vm05 ceph-mon[50611]: osdmap e30: 6 total, 5 up, 6 in 2026-03-09T14:58:04.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:03 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:58:04.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:03 vm05 ceph-mon[50611]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T14:58:04.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:03 vm05 ceph-mon[50611]: from='osd.5 [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T14:58:04.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:03 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/884996800' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:58:04.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:03 vm05 ceph-mon[50611]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:58:04.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:03 vm05 ceph-mon[50611]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T14:58:04.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:03 vm05 ceph-mon[50611]: osdmap e31: 6 total, 5 up, 6 in 2026-03-09T14:58:04.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:03 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:58:04.076 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd stat -f json 2026-03-09T14:58:04.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:03 vm09 ceph-mon[59673]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T14:58:04.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:03 vm09 ceph-mon[59673]: osdmap e30: 6 total, 5 up, 6 in 2026-03-09T14:58:04.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:03 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:58:04.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:03 vm09 ceph-mon[59673]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: 
dispatch 2026-03-09T14:58:04.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:03 vm09 ceph-mon[59673]: from='osd.5 [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T14:58:04.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:03 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/884996800' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:58:04.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:03 vm09 ceph-mon[59673]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:58:04.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:03 vm09 ceph-mon[59673]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T14:58:04.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:03 vm09 ceph-mon[59673]: osdmap e31: 6 total, 5 up, 6 in 2026-03-09T14:58:04.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:03 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:58:04.116 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 14:58:03 vm09 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[76422]: 2026-03-09T14:58:03.724+0000 7f8bf0f23700 -1 osd.5 0 waiting for initial osdmap 2026-03-09T14:58:04.117 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 14:58:03 vm09 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[76422]: 2026-03-09T14:58:03.738+0000 7f8bed519700 -1 osd.5 31 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:58:04.247 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config 
/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:04.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.514+0000 7f8760f7a700 1 -- 192.168.123.105:0/3355844969 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f875c102760 msgr2=0x7f875c102b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:04.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.514+0000 7f8760f7a700 1 --2- 192.168.123.105:0/3355844969 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f875c102760 0x7f875c102b70 secure :-1 s=READY pgs=224 cs=0 l=1 rev1=1 crypto rx=0x7f8744009b00 tx=0x7f8744009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:04.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.515+0000 7f8760f7a700 1 -- 192.168.123.105:0/3355844969 shutdown_connections 2026-03-09T14:58:04.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.515+0000 7f8760f7a700 1 --2- 192.168.123.105:0/3355844969 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f875c103960 0x7f875c103db0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:04.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.515+0000 7f8760f7a700 1 --2- 192.168.123.105:0/3355844969 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f875c102760 0x7f875c102b70 unknown :-1 s=CLOSED pgs=224 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:04.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.515+0000 7f8760f7a700 1 -- 192.168.123.105:0/3355844969 >> 192.168.123.105:0/3355844969 conn(0x7f875c0fdcf0 msgr2=0x7f875c100140 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:04.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.515+0000 7f8760f7a700 1 -- 192.168.123.105:0/3355844969 shutdown_connections 2026-03-09T14:58:04.516 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.515+0000 7f8760f7a700 1 -- 192.168.123.105:0/3355844969 wait complete. 2026-03-09T14:58:04.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.516+0000 7f8760f7a700 1 Processor -- start 2026-03-09T14:58:04.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.516+0000 7f8760f7a700 1 -- start start 2026-03-09T14:58:04.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.516+0000 7f8760f7a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f875c102760 0x7f875c198040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:04.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.516+0000 7f8760f7a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f875c103960 0x7f875c198580 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:04.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.516+0000 7f8760f7a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f875c198ba0 con 0x7f875c103960 2026-03-09T14:58:04.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.516+0000 7f8760f7a700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f875c198ce0 con 0x7f875c102760 2026-03-09T14:58:04.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.517+0000 7f8759d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f875c103960 0x7f875c198580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:04.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.517+0000 7f8759d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f875c103960 0x7f875c198580 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:46158/0 (socket says 192.168.123.105:46158) 2026-03-09T14:58:04.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.517+0000 7f8759d9b700 1 -- 192.168.123.105:0/1130898502 learned_addr learned my addr 192.168.123.105:0/1130898502 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:04.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.517+0000 7f8759d9b700 1 -- 192.168.123.105:0/1130898502 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f875c102760 msgr2=0x7f875c198040 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T14:58:04.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.517+0000 7f8759d9b700 1 --2- 192.168.123.105:0/1130898502 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f875c102760 0x7f875c198040 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:04.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.517+0000 7f8759d9b700 1 -- 192.168.123.105:0/1130898502 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f87440097e0 con 0x7f875c103960 2026-03-09T14:58:04.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.517+0000 7f8759d9b700 1 --2- 192.168.123.105:0/1130898502 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f875c103960 0x7f875c198580 secure :-1 s=READY pgs=225 cs=0 l=1 rev1=1 crypto rx=0x7f874c00ebf0 tx=0x7f874c00c2d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:04.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.517+0000 7f87537fe700 1 -- 192.168.123.105:0/1130898502 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f874c00cd00 con 
0x7f875c103960 2026-03-09T14:58:04.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.517+0000 7f87537fe700 1 -- 192.168.123.105:0/1130898502 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f874c00ce60 con 0x7f875c103960 2026-03-09T14:58:04.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.517+0000 7f87537fe700 1 -- 192.168.123.105:0/1130898502 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f874c010640 con 0x7f875c103960 2026-03-09T14:58:04.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.517+0000 7f8760f7a700 1 -- 192.168.123.105:0/1130898502 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f875c19d790 con 0x7f875c103960 2026-03-09T14:58:04.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.517+0000 7f8760f7a700 1 -- 192.168.123.105:0/1130898502 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f875c0754f0 con 0x7f875c103960 2026-03-09T14:58:04.526 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.524+0000 7f87537fe700 1 -- 192.168.123.105:0/1130898502 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f874c0107a0 con 0x7f875c103960 2026-03-09T14:58:04.526 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.524+0000 7f87537fe700 1 --2- 192.168.123.105:0/1130898502 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f874806c750 0x7f874806ec00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:04.526 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.524+0000 7f87537fe700 1 -- 192.168.123.105:0/1130898502 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(31..31 src has 1..31) v4 ==== 4166+0+0 (secure 0 0 0) 0x7f874c014070 con 0x7f875c103960 2026-03-09T14:58:04.526 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.525+0000 7f875a59c700 1 --2- 192.168.123.105:0/1130898502 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f874806c750 0x7f874806ec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:04.526 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.525+0000 7f8760f7a700 1 -- 192.168.123.105:0/1130898502 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f875c066e40 con 0x7f875c103960 2026-03-09T14:58:04.529 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.526+0000 7f875a59c700 1 --2- 192.168.123.105:0/1130898502 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f874806c750 0x7f874806ec00 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f8744005f50 tx=0x7f8744005dc0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:04.529 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.529+0000 7f87537fe700 1 -- 192.168.123.105:0/1130898502 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f874c05b030 con 0x7f875c103960 2026-03-09T14:58:04.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.633+0000 7f8760f7a700 1 -- 192.168.123.105:0/1130898502 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) v1 -- 0x7f875c075d20 con 0x7f875c103960 2026-03-09T14:58:04.635 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.634+0000 7f87537fe700 1 -- 192.168.123.105:0/1130898502 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v31) v1 ==== 74+0+130 (secure 0 0 0) 0x7f874c005740 con 0x7f875c103960 
2026-03-09T14:58:04.635 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:04.637 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.636+0000 7f8760f7a700 1 -- 192.168.123.105:0/1130898502 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f874806c750 msgr2=0x7f874806ec00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:04.637 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.636+0000 7f8760f7a700 1 --2- 192.168.123.105:0/1130898502 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f874806c750 0x7f874806ec00 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f8744005f50 tx=0x7f8744005dc0 comp rx=0 tx=0).stop 2026-03-09T14:58:04.637 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.636+0000 7f8760f7a700 1 -- 192.168.123.105:0/1130898502 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f875c103960 msgr2=0x7f875c198580 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:04.637 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.636+0000 7f8760f7a700 1 --2- 192.168.123.105:0/1130898502 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f875c103960 0x7f875c198580 secure :-1 s=READY pgs=225 cs=0 l=1 rev1=1 crypto rx=0x7f874c00ebf0 tx=0x7f874c00c2d0 comp rx=0 tx=0).stop 2026-03-09T14:58:04.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.637+0000 7f8760f7a700 1 -- 192.168.123.105:0/1130898502 shutdown_connections 2026-03-09T14:58:04.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.637+0000 7f8760f7a700 1 --2- 192.168.123.105:0/1130898502 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f874806c750 0x7f874806ec00 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:04.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.637+0000 7f8760f7a700 1 --2- 192.168.123.105:0/1130898502 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f875c102760 0x7f875c198040 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:04.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.637+0000 7f8760f7a700 1 --2- 192.168.123.105:0/1130898502 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f875c103960 0x7f875c198580 unknown :-1 s=CLOSED pgs=225 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:04.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.637+0000 7f8760f7a700 1 -- 192.168.123.105:0/1130898502 >> 192.168.123.105:0/1130898502 conn(0x7f875c0fdcf0 msgr2=0x7f875c106b90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:04.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.637+0000 7f8760f7a700 1 -- 192.168.123.105:0/1130898502 shutdown_connections 2026-03-09T14:58:04.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:04.637+0000 7f8760f7a700 1 -- 192.168.123.105:0/1130898502 wait complete. 2026-03-09T14:58:04.707 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":31,"num_osds":6,"num_up_osds":5,"osd_up_since":1773068275,"num_in_osds":6,"osd_in_since":1773068273,"num_remapped_pgs":0} 2026-03-09T14:58:05.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:04 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:58:05.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:04 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/1130898502' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:58:05.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:04 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:58:05.115 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:04 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:58:05.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:04 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/1130898502' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:58:05.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:04 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:58:05.708 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd stat -f json 2026-03-09T14:58:05.877 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:06.030 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:05 vm05 ceph-mon[50611]: purged_snaps scrub starts 2026-03-09T14:58:06.030 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:05 vm05 ceph-mon[50611]: purged_snaps scrub ok 2026-03-09T14:58:06.030 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:05 vm05 ceph-mon[50611]: osd.5 [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478] boot 2026-03-09T14:58:06.030 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:05 vm05 ceph-mon[50611]: osdmap e32: 6 total, 6 up, 6 in 
2026-03-09T14:58:06.030 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:05 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:58:06.030 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:05 vm05 ceph-mon[50611]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:58:06.115 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:05 vm09 ceph-mon[59673]: purged_snaps scrub starts 2026-03-09T14:58:06.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:05 vm09 ceph-mon[59673]: purged_snaps scrub ok 2026-03-09T14:58:06.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:05 vm09 ceph-mon[59673]: osd.5 [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478] boot 2026-03-09T14:58:06.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:05 vm09 ceph-mon[59673]: osdmap e32: 6 total, 6 up, 6 in 2026-03-09T14:58:06.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:05 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:58:06.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:05 vm09 ceph-mon[59673]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:58:06.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.142+0000 7f66af888700 1 -- 192.168.123.105:0/227676016 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f66a8103960 msgr2=0x7f66a8103db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:06.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.142+0000 7f66af888700 1 --2- 192.168.123.105:0/227676016 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f66a8103960 0x7f66a8103db0 secure :-1 s=READY pgs=226 cs=0 l=1 rev1=1 crypto rx=0x7f66a4009b00 
tx=0x7f66a4009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:06.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.142+0000 7f66af888700 1 -- 192.168.123.105:0/227676016 shutdown_connections 2026-03-09T14:58:06.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.142+0000 7f66af888700 1 --2- 192.168.123.105:0/227676016 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f66a8103960 0x7f66a8103db0 unknown :-1 s=CLOSED pgs=226 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:06.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.142+0000 7f66af888700 1 --2- 192.168.123.105:0/227676016 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f66a8102760 0x7f66a8102b70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:06.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.142+0000 7f66af888700 1 -- 192.168.123.105:0/227676016 >> 192.168.123.105:0/227676016 conn(0x7f66a80fdcf0 msgr2=0x7f66a8100140 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:06.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.142+0000 7f66af888700 1 -- 192.168.123.105:0/227676016 shutdown_connections 2026-03-09T14:58:06.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.143+0000 7f66af888700 1 -- 192.168.123.105:0/227676016 wait complete. 
2026-03-09T14:58:06.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.143+0000 7f66af888700 1 Processor -- start 2026-03-09T14:58:06.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.143+0000 7f66af888700 1 -- start start 2026-03-09T14:58:06.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.143+0000 7f66af888700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f66a8102760 0x7f66a81980e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:06.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.143+0000 7f66af888700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f66a8103960 0x7f66a8198620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:06.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.143+0000 7f66af888700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f66a8198c40 con 0x7f66a8102760 2026-03-09T14:58:06.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.143+0000 7f66af888700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f66a8198d80 con 0x7f66a8103960 2026-03-09T14:58:06.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.144+0000 7f66ad624700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f66a8102760 0x7f66a81980e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:06.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.144+0000 7f66ad624700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f66a8102760 0x7f66a81980e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:46172/0 (socket says 192.168.123.105:46172) 2026-03-09T14:58:06.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.144+0000 7f66ad624700 1 -- 192.168.123.105:0/3442185120 learned_addr learned my addr 192.168.123.105:0/3442185120 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:06.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.144+0000 7f66ace23700 1 --2- 192.168.123.105:0/3442185120 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f66a8103960 0x7f66a8198620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:06.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.144+0000 7f66ace23700 1 -- 192.168.123.105:0/3442185120 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f66a8102760 msgr2=0x7f66a81980e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:06.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.144+0000 7f66ace23700 1 --2- 192.168.123.105:0/3442185120 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f66a8102760 0x7f66a81980e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:06.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.144+0000 7f66ace23700 1 -- 192.168.123.105:0/3442185120 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f66a40097e0 con 0x7f66a8103960 2026-03-09T14:58:06.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.145+0000 7f66ace23700 1 --2- 192.168.123.105:0/3442185120 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f66a8103960 0x7f66a8198620 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f66a4000c00 tx=0x7f66a4004a00 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T14:58:06.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.145+0000 7f669e7fc700 1 -- 192.168.123.105:0/3442185120 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f66a401d070 con 0x7f66a8103960 2026-03-09T14:58:06.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.145+0000 7f669e7fc700 1 -- 192.168.123.105:0/3442185120 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f66a400bc50 con 0x7f66a8103960 2026-03-09T14:58:06.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.145+0000 7f66af888700 1 -- 192.168.123.105:0/3442185120 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f66a819d7d0 con 0x7f66a8103960 2026-03-09T14:58:06.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.145+0000 7f669e7fc700 1 -- 192.168.123.105:0/3442185120 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f66a400f740 con 0x7f66a8103960 2026-03-09T14:58:06.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.145+0000 7f66af888700 1 -- 192.168.123.105:0/3442185120 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f66a819dcc0 con 0x7f66a8103960 2026-03-09T14:58:06.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.146+0000 7f66af888700 1 -- 192.168.123.105:0/3442185120 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f66a8066e40 con 0x7f66a8103960 2026-03-09T14:58:06.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.147+0000 7f669e7fc700 1 -- 192.168.123.105:0/3442185120 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f66a4022a50 con 0x7f66a8103960 2026-03-09T14:58:06.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.147+0000 
7f669e7fc700 1 --2- 192.168.123.105:0/3442185120 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f669406c680 0x7f669406eb30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:06.149 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.147+0000 7f669e7fc700 1 -- 192.168.123.105:0/3442185120 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f66a408cb70 con 0x7f66a8103960 2026-03-09T14:58:06.149 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.148+0000 7f66ad624700 1 --2- 192.168.123.105:0/3442185120 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f669406c680 0x7f669406eb30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:06.149 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.148+0000 7f66ad624700 1 --2- 192.168.123.105:0/3442185120 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f669406c680 0x7f669406eb30 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f6698009ce0 tx=0x7f6698009430 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:06.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.150+0000 7f669e7fc700 1 -- 192.168.123.105:0/3442185120 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f66a405b170 con 0x7f66a8103960 2026-03-09T14:58:06.252 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.251+0000 7f66af888700 1 -- 192.168.123.105:0/3442185120 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) v1 -- 0x7f66a819e090 con 0x7f66a8103960 2026-03-09T14:58:06.253 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.252+0000 7f669e7fc700 1 -- 192.168.123.105:0/3442185120 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v33) v1 ==== 74+0+130 (secure 0 0 0) 0x7f66a405ad00 con 0x7f66a8103960 2026-03-09T14:58:06.253 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:06.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.255+0000 7f66af888700 1 -- 192.168.123.105:0/3442185120 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f669406c680 msgr2=0x7f669406eb30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:06.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.255+0000 7f66af888700 1 --2- 192.168.123.105:0/3442185120 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f669406c680 0x7f669406eb30 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f6698009ce0 tx=0x7f6698009430 comp rx=0 tx=0).stop 2026-03-09T14:58:06.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.255+0000 7f66af888700 1 -- 192.168.123.105:0/3442185120 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f66a8103960 msgr2=0x7f66a8198620 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:06.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.255+0000 7f66af888700 1 --2- 192.168.123.105:0/3442185120 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f66a8103960 0x7f66a8198620 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f66a4000c00 tx=0x7f66a4004a00 comp rx=0 tx=0).stop 2026-03-09T14:58:06.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.255+0000 7f66af888700 1 -- 192.168.123.105:0/3442185120 shutdown_connections 2026-03-09T14:58:06.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.255+0000 7f66af888700 1 --2- 192.168.123.105:0/3442185120 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f669406c680 0x7f669406eb30 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:06.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.256+0000 7f66af888700 1 --2- 192.168.123.105:0/3442185120 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f66a8102760 0x7f66a81980e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:06.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.256+0000 7f66af888700 1 --2- 192.168.123.105:0/3442185120 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f66a8103960 0x7f66a8198620 unknown :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:06.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.256+0000 7f66af888700 1 -- 192.168.123.105:0/3442185120 >> 192.168.123.105:0/3442185120 conn(0x7f66a80fdcf0 msgr2=0x7f66a8106b90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:06.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.256+0000 7f66af888700 1 -- 192.168.123.105:0/3442185120 shutdown_connections 2026-03-09T14:58:06.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:06.256+0000 7f66af888700 1 -- 192.168.123.105:0/3442185120 wait complete. 
2026-03-09T14:58:06.326 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":33,"num_osds":6,"num_up_osds":6,"osd_up_since":1773068284,"num_in_osds":6,"osd_in_since":1773068273,"num_remapped_pgs":0} 2026-03-09T14:58:06.327 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd dump --format=json 2026-03-09T14:58:06.487 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:07.098 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:06 vm05 ceph-mon[50611]: osdmap e33: 6 total, 6 up, 6 in 2026-03-09T14:58:07.098 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:06 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/3442185120' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:58:07.115 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:06 vm09 ceph-mon[59673]: osdmap e33: 6 total, 6 up, 6 in 2026-03-09T14:58:07.115 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:06 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/3442185120' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:58:07.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.164+0000 7f6ea99b9700 1 -- 192.168.123.105:0/1364550802 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6ea4073a00 msgr2=0x7f6ea4111020 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:07.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.164+0000 7f6ea99b9700 1 --2- 192.168.123.105:0/1364550802 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6ea4073a00 0x7f6ea4111020 secure :-1 s=READY pgs=227 cs=0 l=1 rev1=1 crypto rx=0x7f6e94009b00 tx=0x7f6e94009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:07.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.165+0000 7f6ea99b9700 1 -- 192.168.123.105:0/1364550802 shutdown_connections 2026-03-09T14:58:07.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.165+0000 7f6ea99b9700 1 --2- 192.168.123.105:0/1364550802 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6ea4073a00 0x7f6ea4111020 unknown :-1 s=CLOSED pgs=227 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:07.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.165+0000 7f6ea99b9700 1 --2- 192.168.123.105:0/1364550802 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6ea40730f0 0x7f6ea40734c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:07.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.165+0000 7f6ea99b9700 1 -- 192.168.123.105:0/1364550802 >> 192.168.123.105:0/1364550802 conn(0x7f6ea40fc010 msgr2=0x7f6ea40fe420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:07.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.165+0000 7f6ea99b9700 1 -- 192.168.123.105:0/1364550802 shutdown_connections 2026-03-09T14:58:07.166 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.165+0000 7f6ea99b9700 1 -- 192.168.123.105:0/1364550802 wait complete. 2026-03-09T14:58:07.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.166+0000 7f6ea99b9700 1 Processor -- start 2026-03-09T14:58:07.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.166+0000 7f6ea99b9700 1 -- start start 2026-03-09T14:58:07.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.166+0000 7f6ea99b9700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6ea40730f0 0x7f6ea41a25b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:07.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.166+0000 7f6ea99b9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6ea4073a00 0x7f6ea41a2af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:07.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.166+0000 7f6ea99b9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6ea41a3180 con 0x7f6ea4073a00 2026-03-09T14:58:07.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.166+0000 7f6ea99b9700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6ea419c630 con 0x7f6ea40730f0 2026-03-09T14:58:07.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.167+0000 7f6ea27fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6ea4073a00 0x7f6ea41a2af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:07.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.167+0000 7f6ea27fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6ea4073a00 0x7f6ea41a2af0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:46186/0 (socket says 192.168.123.105:46186) 2026-03-09T14:58:07.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.167+0000 7f6ea27fc700 1 -- 192.168.123.105:0/3904611640 learned_addr learned my addr 192.168.123.105:0/3904611640 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:07.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.167+0000 7f6ea27fc700 1 -- 192.168.123.105:0/3904611640 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6ea40730f0 msgr2=0x7f6ea41a25b0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T14:58:07.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.167+0000 7f6ea2ffd700 1 --2- 192.168.123.105:0/3904611640 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6ea40730f0 0x7f6ea41a25b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:07.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.167+0000 7f6ea27fc700 1 --2- 192.168.123.105:0/3904611640 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6ea40730f0 0x7f6ea41a25b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:07.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.167+0000 7f6ea27fc700 1 -- 192.168.123.105:0/3904611640 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6e940097e0 con 0x7f6ea4073a00 2026-03-09T14:58:07.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.167+0000 7f6ea27fc700 1 --2- 192.168.123.105:0/3904611640 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6ea4073a00 0x7f6ea41a2af0 secure :-1 s=READY pgs=228 cs=0 l=1 rev1=1 crypto 
rx=0x7f6e94000c00 tx=0x7f6e94004a20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:07.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.167+0000 7f6ea89b7700 1 -- 192.168.123.105:0/3904611640 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6e9401d070 con 0x7f6ea4073a00 2026-03-09T14:58:07.169 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.167+0000 7f6ea99b9700 1 -- 192.168.123.105:0/3904611640 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6ea419c8b0 con 0x7f6ea4073a00 2026-03-09T14:58:07.169 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.168+0000 7f6ea99b9700 1 -- 192.168.123.105:0/3904611640 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6ea419ce00 con 0x7f6ea4073a00 2026-03-09T14:58:07.170 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.169+0000 7f6ea89b7700 1 -- 192.168.123.105:0/3904611640 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f6e94004500 con 0x7f6ea4073a00 2026-03-09T14:58:07.170 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.169+0000 7f6ea99b9700 1 -- 192.168.123.105:0/3904611640 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6ea410e7a0 con 0x7f6ea4073a00 2026-03-09T14:58:07.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.170+0000 7f6ea89b7700 1 -- 192.168.123.105:0/3904611640 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6e94022470 con 0x7f6ea4073a00 2026-03-09T14:58:07.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.170+0000 7f6ea89b7700 1 -- 192.168.123.105:0/3904611640 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 
0x7f6e94022610 con 0x7f6ea4073a00 2026-03-09T14:58:07.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.170+0000 7f6ea89b7700 1 --2- 192.168.123.105:0/3904611640 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6e8c06c520 0x7f6e8c06e9d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:07.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.170+0000 7f6ea89b7700 1 -- 192.168.123.105:0/3904611640 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f6e9408e110 con 0x7f6ea4073a00 2026-03-09T14:58:07.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.172+0000 7f6ea2ffd700 1 --2- 192.168.123.105:0/3904611640 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6e8c06c520 0x7f6e8c06e9d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:07.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.173+0000 7f6ea2ffd700 1 --2- 192.168.123.105:0/3904611640 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6e8c06c520 0x7f6e8c06e9d0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f6e9800a850 tx=0x7f6e98008040 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:07.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.173+0000 7f6ea89b7700 1 -- 192.168.123.105:0/3904611640 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f6e9405c6e0 con 0x7f6ea4073a00 2026-03-09T14:58:07.295 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.294+0000 7f6ea99b9700 1 -- 192.168.123.105:0/3904611640 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) v1 
-- 0x7f6ea404ea50 con 0x7f6ea4073a00 2026-03-09T14:58:07.296 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.295+0000 7f6ea89b7700 1 -- 192.168.123.105:0/3904611640 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v33) v1 ==== 74+0+11285 (secure 0 0 0) 0x7f6e9405c270 con 0x7f6ea4073a00 2026-03-09T14:58:07.296 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:07.296 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":33,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","created":"2026-03-09T14:55:10.534285+0000","modified":"2026-03-09T14:58:05.734328+0000","last_up_change":"2026-03-09T14:58:04.726665+0000","last_in_change":"2026-03-09T14:57:53.533741+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":14,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"reef","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T14:57:35.266161+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"p
ool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_acting":6,"score_stable":6,"optimal_score":0.5,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"1022e816-4ad0-4a27-9052-07d4015a684e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":3444724580},{"type":"v1","addr":"192.168.123.105:6803","nonce":3444724580}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":3444724580},{"type":"v1","addr":"192.168.123.105:6805","nonce":3444724580}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":3444724580},{"type":"v1","addr":"192.168.123.105:6809","nonce":3444724580}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":3444724580},{"type":"v1","addr":"192.168.123.105:6807","nonce":3444724580}]},"public_addr":"192.168.123.105:6803/3444724580","cluster_addr":"192.168.123.105:6805/3444724580","heartbeat_back_addr":"192.168.123.105:6809/3444724580","heartbeat_front_addr":"192.168.123.10
5:6807/3444724580","state":["exists","up"]},{"osd":1,"uuid":"73421280-9a73-4092-8e8f-854babd94f13","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":24,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":1642565416},{"type":"v1","addr":"192.168.123.105:6811","nonce":1642565416}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":1642565416},{"type":"v1","addr":"192.168.123.105:6813","nonce":1642565416}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":1642565416},{"type":"v1","addr":"192.168.123.105:6817","nonce":1642565416}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":1642565416},{"type":"v1","addr":"192.168.123.105:6815","nonce":1642565416}]},"public_addr":"192.168.123.105:6811/1642565416","cluster_addr":"192.168.123.105:6813/1642565416","heartbeat_back_addr":"192.168.123.105:6817/1642565416","heartbeat_front_addr":"192.168.123.105:6815/1642565416","state":["exists","up"]},{"osd":2,"uuid":"14d84b3a-06be-48f8-89a7-0e9c83f76e3c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":4063272520},{"type":"v1","addr":"192.168.123.105:6819","nonce":4063272520}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":4063272520},{"type":"v1","addr":"192.168.123.105:6821","nonce":4063272520}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":4063272520},{"type":"v1","addr":"192.168.123.105:6825","nonce":4063272520}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":4063272520},{"type":"v1","addr":"192.168.123.105:6823","nonce":4063272520}]},"public_addr":"192.168.123.105:6819/4063272520","cluster_addr":"
192.168.123.105:6821/4063272520","heartbeat_back_addr":"192.168.123.105:6825/4063272520","heartbeat_front_addr":"192.168.123.105:6823/4063272520","state":["exists","up"]},{"osd":3,"uuid":"24dfa5ad-72df-4f80-bf60-0507508104f2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":27,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":1968723815},{"type":"v1","addr":"192.168.123.109:6801","nonce":1968723815}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":1968723815},{"type":"v1","addr":"192.168.123.109:6803","nonce":1968723815}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":1968723815},{"type":"v1","addr":"192.168.123.109:6807","nonce":1968723815}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":1968723815},{"type":"v1","addr":"192.168.123.109:6805","nonce":1968723815}]},"public_addr":"192.168.123.109:6801/1968723815","cluster_addr":"192.168.123.109:6803/1968723815","heartbeat_back_addr":"192.168.123.109:6807/1968723815","heartbeat_front_addr":"192.168.123.109:6805/1968723815","state":["exists","up"]},{"osd":4,"uuid":"c4ddfd7f-8055-4c53-a70a-131428da743a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":1714619602},{"type":"v1","addr":"192.168.123.109:6809","nonce":1714619602}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":1714619602},{"type":"v1","addr":"192.168.123.109:6811","nonce":1714619602}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":1714619602},{"type":"v1","addr":"192.168.123.109:6815","nonce":1714619602}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":1714619602},{"
type":"v1","addr":"192.168.123.109:6813","nonce":1714619602}]},"public_addr":"192.168.123.109:6809/1714619602","cluster_addr":"192.168.123.109:6811/1714619602","heartbeat_back_addr":"192.168.123.109:6815/1714619602","heartbeat_front_addr":"192.168.123.109:6813/1714619602","state":["exists","up"]},{"osd":5,"uuid":"0ff88b04-4b34-4df6-bf77-2132a823172e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":2797312478},{"type":"v1","addr":"192.168.123.109:6817","nonce":2797312478}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":2797312478},{"type":"v1","addr":"192.168.123.109:6819","nonce":2797312478}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":2797312478},{"type":"v1","addr":"192.168.123.109:6823","nonce":2797312478}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":2797312478},{"type":"v1","addr":"192.168.123.109:6821","nonce":2797312478}]},"public_addr":"192.168.123.109:6817/2797312478","cluster_addr":"192.168.123.109:6819/2797312478","heartbeat_back_addr":"192.168.123.109:6823/2797312478","heartbeat_front_addr":"192.168.123.109:6821/2797312478","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:11.388791+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:23.267691+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:33.929621+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","lag
gy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:43.737026+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:54.082613+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:58:02.883518+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:0/2507132558":"2026-03-10T14:56:37.039546+0000","192.168.123.105:0/3823132518":"2026-03-10T14:56:37.039546+0000","192.168.123.105:0/3432926156":"2026-03-10T14:55:39.031351+0000","192.168.123.105:0/2312625867":"2026-03-10T14:55:39.031351+0000","192.168.123.105:6801/2":"2026-03-10T14:55:24.676889+0000","192.168.123.105:6800/2":"2026-03-10T14:55:24.676889+0000","192.168.123.105:0/3215275472":"2026-03-10T14:55:24.676889+0000","192.168.123.105:0/116363292":"2026-03-10T14:55:39.031351+0000","192.168.123.105:0/4048863196":"2026-03-10T14:55:24.676889+0000","192.168.123.105:0/2546376349":"2026-03-10T14:56:37.039546+0000","192.168.123.105:0/3523930106":"2026-03-10T14:55:24.676889+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T14:58:07.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.298+0000 7f6ea99b9700 1 -- 192.168.123.105:0/3904611640 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6e8c06c520 
msgr2=0x7f6e8c06e9d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:07.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.298+0000 7f6ea99b9700 1 --2- 192.168.123.105:0/3904611640 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6e8c06c520 0x7f6e8c06e9d0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f6e9800a850 tx=0x7f6e98008040 comp rx=0 tx=0).stop 2026-03-09T14:58:07.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.298+0000 7f6ea99b9700 1 -- 192.168.123.105:0/3904611640 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6ea4073a00 msgr2=0x7f6ea41a2af0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:07.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.298+0000 7f6ea99b9700 1 --2- 192.168.123.105:0/3904611640 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6ea4073a00 0x7f6ea41a2af0 secure :-1 s=READY pgs=228 cs=0 l=1 rev1=1 crypto rx=0x7f6e94000c00 tx=0x7f6e94004a20 comp rx=0 tx=0).stop 2026-03-09T14:58:07.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.298+0000 7f6ea99b9700 1 -- 192.168.123.105:0/3904611640 shutdown_connections 2026-03-09T14:58:07.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.298+0000 7f6ea99b9700 1 --2- 192.168.123.105:0/3904611640 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6e8c06c520 0x7f6e8c06e9d0 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:07.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.298+0000 7f6ea99b9700 1 --2- 192.168.123.105:0/3904611640 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6ea40730f0 0x7f6ea41a25b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:07.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.298+0000 7f6ea99b9700 1 --2- 
192.168.123.105:0/3904611640 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6ea4073a00 0x7f6ea41a2af0 unknown :-1 s=CLOSED pgs=228 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:07.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.298+0000 7f6ea99b9700 1 -- 192.168.123.105:0/3904611640 >> 192.168.123.105:0/3904611640 conn(0x7f6ea40fc010 msgr2=0x7f6ea4102b30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:07.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.298+0000 7f6ea99b9700 1 -- 192.168.123.105:0/3904611640 shutdown_connections 2026-03-09T14:58:07.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.298+0000 7f6ea99b9700 1 -- 192.168.123.105:0/3904611640 wait complete. 2026-03-09T14:58:07.489 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T14:57:35.266161+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 
600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6, 'score_stable': 6, 'optimal_score': 0.5, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-09T14:58:07.489 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd pool get .mgr pg_num 2026-03-09T14:58:07.660 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:07.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.948+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1713076793 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7c0100540 msgr2=0x7fd7c01009b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:07.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.948+0000 7fd7c68d4700 1 --2- 192.168.123.105:0/1713076793 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7c0100540 0x7fd7c01009b0 secure :-1 s=READY pgs=229 cs=0 l=1 rev1=1 crypto rx=0x7fd7b0009b00 tx=0x7fd7b0009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:07.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.949+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1713076793 shutdown_connections 2026-03-09T14:58:07.950 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.949+0000 7fd7c68d4700 1 --2- 192.168.123.105:0/1713076793 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7c0100540 0x7fd7c01009b0 unknown :-1 s=CLOSED pgs=229 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:07.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.949+0000 7fd7c68d4700 1 --2- 192.168.123.105:0/1713076793 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd7c0106560 0x7fd7c0106930 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:07.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.949+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1713076793 >> 192.168.123.105:0/1713076793 conn(0x7fd7c00fbfc0 msgr2=0x7fd7c00fe3d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:07.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.949+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1713076793 shutdown_connections 2026-03-09T14:58:07.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.949+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1713076793 wait complete. 
2026-03-09T14:58:07.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.950+0000 7fd7c68d4700 1 Processor -- start 2026-03-09T14:58:07.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.950+0000 7fd7c68d4700 1 -- start start 2026-03-09T14:58:07.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.950+0000 7fd7c68d4700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd7c0100540 0x7fd7c0073050 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:07.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.950+0000 7fd7c68d4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7c0106560 0x7fd7c0073590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:07.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.950+0000 7fd7c68d4700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd7c0074e00 con 0x7fd7c0106560 2026-03-09T14:58:07.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.951+0000 7fd7bf7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7c0106560 0x7fd7c0073590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:07.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.950+0000 7fd7c68d4700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd7c0074f70 con 0x7fd7c0100540 2026-03-09T14:58:07.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.951+0000 7fd7bf7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7c0106560 0x7fd7c0073590 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:46206/0 (socket says 192.168.123.105:46206) 2026-03-09T14:58:07.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.951+0000 7fd7bf7fe700 1 -- 192.168.123.105:0/1686890089 learned_addr learned my addr 192.168.123.105:0/1686890089 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:07.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.951+0000 7fd7bffff700 1 --2- 192.168.123.105:0/1686890089 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd7c0100540 0x7fd7c0073050 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:07.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.951+0000 7fd7bf7fe700 1 -- 192.168.123.105:0/1686890089 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd7c0100540 msgr2=0x7fd7c0073050 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:07.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.951+0000 7fd7bf7fe700 1 --2- 192.168.123.105:0/1686890089 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd7c0100540 0x7fd7c0073050 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:07.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.951+0000 7fd7bf7fe700 1 -- 192.168.123.105:0/1686890089 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd7b00097e0 con 0x7fd7c0106560 2026-03-09T14:58:07.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.951+0000 7fd7bf7fe700 1 --2- 192.168.123.105:0/1686890089 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7c0106560 0x7fd7c0073590 secure :-1 s=READY pgs=230 cs=0 l=1 rev1=1 crypto rx=0x7fd7b00048c0 tx=0x7fd7b00048f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T14:58:07.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.952+0000 7fd7bd7fa700 1 -- 192.168.123.105:0/1686890089 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd7b001d070 con 0x7fd7c0106560 2026-03-09T14:58:07.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.952+0000 7fd7bd7fa700 1 -- 192.168.123.105:0/1686890089 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd7b0022470 con 0x7fd7c0106560 2026-03-09T14:58:07.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.952+0000 7fd7bd7fa700 1 -- 192.168.123.105:0/1686890089 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd7b000f670 con 0x7fd7c0106560 2026-03-09T14:58:07.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.952+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1686890089 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd7c0073b30 con 0x7fd7c0106560 2026-03-09T14:58:07.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.952+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1686890089 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd7c01a6b80 con 0x7fd7c0106560 2026-03-09T14:58:07.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.953+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1686890089 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd7c01089f0 con 0x7fd7c0106560 2026-03-09T14:58:07.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.956+0000 7fd7bd7fa700 1 -- 192.168.123.105:0/1686890089 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fd7b00225e0 con 0x7fd7c0106560 2026-03-09T14:58:07.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.956+0000 
7fd7bd7fa700 1 --2- 192.168.123.105:0/1686890089 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7ac06c750 0x7fd7ac06ec00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:07.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.956+0000 7fd7bd7fa700 1 -- 192.168.123.105:0/1686890089 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fd7b008d2e0 con 0x7fd7c0106560 2026-03-09T14:58:07.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.956+0000 7fd7bffff700 1 --2- 192.168.123.105:0/1686890089 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7ac06c750 0x7fd7ac06ec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:07.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.957+0000 7fd7bffff700 1 --2- 192.168.123.105:0/1686890089 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7ac06c750 0x7fd7ac06ec00 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fd7a8005950 tx=0x7fd7a800b410 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:07.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:07.957+0000 7fd7bd7fa700 1 -- 192.168.123.105:0/1686890089 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fd7b0057e20 con 0x7fd7c0106560 2026-03-09T14:58:07.994 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:07 vm05 ceph-mon[50611]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:07.994 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:07 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-09T14:58:07.994 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:07 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/3904611640' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:58:08.068 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.067+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1686890089 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"} v 0) v1 -- 0x7fd7c004ea50 con 0x7fd7c0106560 2026-03-09T14:58:08.068 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.067+0000 7fd7bd7fa700 1 -- 192.168.123.105:0/1686890089 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]=0 v33) v1 ==== 93+0+10 (secure 0 0 0) 0x7fd7b005b440 con 0x7fd7c0106560 2026-03-09T14:58:08.068 INFO:teuthology.orchestra.run.vm05.stdout:pg_num: 1 2026-03-09T14:58:08.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.070+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1686890089 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7ac06c750 msgr2=0x7fd7ac06ec00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:08.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.070+0000 7fd7c68d4700 1 --2- 192.168.123.105:0/1686890089 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7ac06c750 0x7fd7ac06ec00 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fd7a8005950 tx=0x7fd7a800b410 comp rx=0 tx=0).stop 2026-03-09T14:58:08.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.070+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1686890089 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7c0106560 msgr2=0x7fd7c0073590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:08.071 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.070+0000 7fd7c68d4700 1 --2- 192.168.123.105:0/1686890089 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7c0106560 0x7fd7c0073590 secure :-1 s=READY pgs=230 cs=0 l=1 rev1=1 crypto rx=0x7fd7b00048c0 tx=0x7fd7b00048f0 comp rx=0 tx=0).stop 2026-03-09T14:58:08.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.070+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1686890089 shutdown_connections 2026-03-09T14:58:08.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.070+0000 7fd7c68d4700 1 --2- 192.168.123.105:0/1686890089 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd7ac06c750 0x7fd7ac06ec00 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:08.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.070+0000 7fd7c68d4700 1 --2- 192.168.123.105:0/1686890089 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd7c0100540 0x7fd7c0073050 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:08.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.070+0000 7fd7c68d4700 1 --2- 192.168.123.105:0/1686890089 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7c0106560 0x7fd7c0073590 unknown :-1 s=CLOSED pgs=230 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:08.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.070+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1686890089 >> 192.168.123.105:0/1686890089 conn(0x7fd7c00fbfc0 msgr2=0x7fd7c00fd910 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:08.072 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.071+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1686890089 shutdown_connections 2026-03-09T14:58:08.072 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.071+0000 7fd7c68d4700 1 -- 192.168.123.105:0/1686890089 
wait complete. 2026-03-09T14:58:08.143 INFO:tasks.cephadm:Setting up client nodes... 2026-03-09T14:58:08.143 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T14:58:08.296 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:07 vm09 ceph-mon[59673]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:07 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:58:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:07 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/3904611640' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:58:08.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.554+0000 7fa5f627c700 1 -- 192.168.123.105:0/1462754202 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa5f0105a60 msgr2=0x7fa5f0107e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:08.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.554+0000 7fa5f627c700 1 --2- 192.168.123.105:0/1462754202 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa5f0105a60 0x7fa5f0107e40 secure :-1 s=READY pgs=231 cs=0 l=1 rev1=1 crypto rx=0x7fa5e0009b00 tx=0x7fa5e0009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:08.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.555+0000 7fa5f627c700 1 -- 192.168.123.105:0/1462754202 shutdown_connections 2026-03-09T14:58:08.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.555+0000 7fa5f627c700 1 --2- 192.168.123.105:0/1462754202 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa5f0105a60 0x7fa5f0107e40 unknown :-1 s=CLOSED pgs=231 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:08.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.555+0000 7fa5f627c700 1 --2- 192.168.123.105:0/1462754202 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa5f00691a0 0x7fa5f0105520 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:08.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.555+0000 7fa5f627c700 1 -- 192.168.123.105:0/1462754202 >> 192.168.123.105:0/1462754202 conn(0x7fa5f00faa70 msgr2=0x7fa5f00fcee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:08.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.555+0000 7fa5f627c700 1 -- 192.168.123.105:0/1462754202 shutdown_connections 2026-03-09T14:58:08.556 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.555+0000 7fa5f627c700 1 -- 192.168.123.105:0/1462754202 wait complete. 2026-03-09T14:58:08.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.556+0000 7fa5f627c700 1 Processor -- start 2026-03-09T14:58:08.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.556+0000 7fa5f627c700 1 -- start start 2026-03-09T14:58:08.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.556+0000 7fa5f627c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa5f00691a0 0x7fa5f0198030 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:08.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.556+0000 7fa5f627c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa5f0105a60 0x7fa5f0198570 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:08.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.556+0000 7fa5f627c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa5f0198b90 con 0x7fa5f0105a60 2026-03-09T14:58:08.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.556+0000 7fa5f627c700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa5f0198cd0 con 0x7fa5f00691a0 2026-03-09T14:58:08.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.556+0000 7fa5ef7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa5f0105a60 0x7fa5f0198570 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:08.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.556+0000 7fa5ef7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa5f0105a60 0x7fa5f0198570 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:46230/0 (socket says 192.168.123.105:46230) 2026-03-09T14:58:08.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.556+0000 7fa5ef7fe700 1 -- 192.168.123.105:0/615486100 learned_addr learned my addr 192.168.123.105:0/615486100 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:08.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.557+0000 7fa5ef7fe700 1 -- 192.168.123.105:0/615486100 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa5f00691a0 msgr2=0x7fa5f0198030 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T14:58:08.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.557+0000 7fa5effff700 1 --2- 192.168.123.105:0/615486100 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa5f00691a0 0x7fa5f0198030 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:08.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.557+0000 7fa5ef7fe700 1 --2- 192.168.123.105:0/615486100 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa5f00691a0 0x7fa5f0198030 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:08.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.557+0000 7fa5ef7fe700 1 -- 192.168.123.105:0/615486100 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa5e00097e0 con 0x7fa5f0105a60 2026-03-09T14:58:08.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.557+0000 7fa5effff700 1 --2- 192.168.123.105:0/615486100 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa5f00691a0 0x7fa5f0198030 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).send_auth_request state changed! 2026-03-09T14:58:08.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.557+0000 7fa5ef7fe700 1 --2- 192.168.123.105:0/615486100 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa5f0105a60 0x7fa5f0198570 secure :-1 s=READY pgs=232 cs=0 l=1 rev1=1 crypto rx=0x7fa5e0009ad0 tx=0x7fa5e0004970 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:08.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.557+0000 7fa5ed7fa700 1 -- 192.168.123.105:0/615486100 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa5e001d070 con 0x7fa5f0105a60 2026-03-09T14:58:08.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.557+0000 7fa5ed7fa700 1 -- 192.168.123.105:0/615486100 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa5e000bb40 con 0x7fa5f0105a60 2026-03-09T14:58:08.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.557+0000 7fa5ed7fa700 1 -- 192.168.123.105:0/615486100 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa5e000f670 con 0x7fa5f0105a60 2026-03-09T14:58:08.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.557+0000 7fa5f627c700 1 -- 192.168.123.105:0/615486100 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa5f019d720 con 0x7fa5f0105a60 2026-03-09T14:58:08.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.557+0000 7fa5f627c700 1 -- 192.168.123.105:0/615486100 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa5f019dc10 con 0x7fa5f0105a60 2026-03-09T14:58:08.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.559+0000 7fa5f627c700 1 -- 192.168.123.105:0/615486100 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa5f01921e0 con 0x7fa5f0105a60 2026-03-09T14:58:08.563 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.562+0000 7fa5ed7fa700 1 -- 192.168.123.105:0/615486100 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fa5e0004d20 con 0x7fa5f0105a60 2026-03-09T14:58:08.564 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.563+0000 7fa5ed7fa700 1 --2- 192.168.123.105:0/615486100 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa5dc06c7a0 0x7fa5dc06ec50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:08.564 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.563+0000 7fa5ed7fa700 1 -- 192.168.123.105:0/615486100 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fa5e008d530 con 0x7fa5f0105a60 2026-03-09T14:58:08.564 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.563+0000 7fa5ed7fa700 1 -- 192.168.123.105:0/615486100 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fa5e008d9b0 con 0x7fa5f0105a60 2026-03-09T14:58:08.564 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.563+0000 7fa5effff700 1 --2- 192.168.123.105:0/615486100 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa5dc06c7a0 0x7fa5dc06ec50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:08.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.564+0000 7fa5effff700 1 --2- 192.168.123.105:0/615486100 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa5dc06c7a0 0x7fa5dc06ec50 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7fa5f0069570 tx=0x7fa5d8009450 comp rx=0 tx=0).ready 
entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:08.722 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.719+0000 7fa5f627c700 1 -- 192.168.123.105:0/615486100 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) v1 -- 0x7fa5f0066e40 con 0x7fa5f0105a60 2026-03-09T14:58:08.726 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.725+0000 7fa5ed7fa700 1 -- 192.168.123.105:0/615486100 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v16) v1 ==== 170+0+59 (secure 0 0 0) 0x7fa5e005bb00 con 0x7fa5f0105a60 2026-03-09T14:58:08.726 INFO:teuthology.orchestra.run.vm05.stdout:[client.0] 2026-03-09T14:58:08.726 INFO:teuthology.orchestra.run.vm05.stdout: key = AQAA4K5peu0SKxAAf1dYWfL+p94EdA+8kpiFWA== 2026-03-09T14:58:08.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.728+0000 7fa5f627c700 1 -- 192.168.123.105:0/615486100 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa5dc06c7a0 msgr2=0x7fa5dc06ec50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:08.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.728+0000 7fa5f627c700 1 --2- 192.168.123.105:0/615486100 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa5dc06c7a0 0x7fa5dc06ec50 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7fa5f0069570 tx=0x7fa5d8009450 comp rx=0 tx=0).stop 2026-03-09T14:58:08.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.728+0000 7fa5f627c700 1 -- 192.168.123.105:0/615486100 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa5f0105a60 msgr2=0x7fa5f0198570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T14:58:08.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.728+0000 7fa5f627c700 1 --2- 192.168.123.105:0/615486100 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa5f0105a60 0x7fa5f0198570 secure :-1 s=READY pgs=232 cs=0 l=1 rev1=1 crypto rx=0x7fa5e0009ad0 tx=0x7fa5e0004970 comp rx=0 tx=0).stop 2026-03-09T14:58:08.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.728+0000 7fa5f627c700 1 -- 192.168.123.105:0/615486100 shutdown_connections 2026-03-09T14:58:08.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.728+0000 7fa5f627c700 1 --2- 192.168.123.105:0/615486100 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa5dc06c7a0 0x7fa5dc06ec50 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:08.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.728+0000 7fa5f627c700 1 --2- 192.168.123.105:0/615486100 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa5f00691a0 0x7fa5f0198030 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:08.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.728+0000 7fa5f627c700 1 --2- 192.168.123.105:0/615486100 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa5f0105a60 0x7fa5f0198570 unknown :-1 s=CLOSED pgs=232 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:08.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.728+0000 7fa5f627c700 1 -- 192.168.123.105:0/615486100 >> 192.168.123.105:0/615486100 conn(0x7fa5f00faa70 msgr2=0x7fa5f00fcee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:08.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.728+0000 7fa5f627c700 1 -- 192.168.123.105:0/615486100 shutdown_connections 2026-03-09T14:58:08.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:08.729+0000 7fa5f627c700 1 -- 
192.168.123.105:0/615486100 wait complete. 2026-03-09T14:58:08.780 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T14:58:08.780 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-09T14:58:08.780 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-09T14:58:08.820 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T14:58:08.994 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm09/config 2026-03-09T14:58:09.000 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:08 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/1686890089' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T14:58:09.000 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:08 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/615486100' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:58:09.000 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:08 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/615486100' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:58:09.096 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:09 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/1686890089' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T14:58:09.096 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:09 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/615486100' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:58:09.096 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:09 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/615486100' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:58:09.277 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.276+0000 7f0143f9e700 1 -- 192.168.123.109:0/3943628893 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f013c101010 msgr2=0x7f013c1033f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:09.277 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.276+0000 7f0143f9e700 1 --2- 192.168.123.109:0/3943628893 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f013c101010 0x7f013c1033f0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f012c009b00 tx=0x7f012c009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:09.277 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.277+0000 7f0143f9e700 1 -- 192.168.123.109:0/3943628893 shutdown_connections 2026-03-09T14:58:09.277 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.277+0000 7f0143f9e700 1 --2- 192.168.123.109:0/3943628893 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f013c103930 0x7f013c105d10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:09.277 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.277+0000 
7f0143f9e700 1 --2- 192.168.123.109:0/3943628893 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f013c101010 0x7f013c1033f0 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:09.277 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.277+0000 7f0143f9e700 1 -- 192.168.123.109:0/3943628893 >> 192.168.123.109:0/3943628893 conn(0x7f013c0faa10 msgr2=0x7f013c0fce60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:09.277 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.277+0000 7f0143f9e700 1 -- 192.168.123.109:0/3943628893 shutdown_connections 2026-03-09T14:58:09.277 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.277+0000 7f0143f9e700 1 -- 192.168.123.109:0/3943628893 wait complete. 2026-03-09T14:58:09.277 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.278+0000 7f0143f9e700 1 Processor -- start 2026-03-09T14:58:09.277 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.278+0000 7f0143f9e700 1 -- start start 2026-03-09T14:58:09.278 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.278+0000 7f0143f9e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f013c101010 0x7f013c195df0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:09.278 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.278+0000 7f0141d3a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f013c101010 0x7f013c195df0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:09.278 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.278+0000 7f0141d3a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f013c101010 0x7f013c195df0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer 
v2:192.168.123.109:3300/0 says I am v2:192.168.123.109:42066/0 (socket says 192.168.123.109:42066) 2026-03-09T14:58:09.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.278+0000 7f0143f9e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f013c103930 0x7f013c196330 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:09.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.278+0000 7f0143f9e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f013c196950 con 0x7f013c103930 2026-03-09T14:58:09.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.278+0000 7f0143f9e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f013c196a90 con 0x7f013c101010 2026-03-09T14:58:09.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.278+0000 7f0141d3a700 1 -- 192.168.123.109:0/1625331578 learned_addr learned my addr 192.168.123.109:0/1625331578 (peer_addr_for_me v2:192.168.123.109:0/0) 2026-03-09T14:58:09.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.279+0000 7f0141d3a700 1 -- 192.168.123.109:0/1625331578 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f013c103930 msgr2=0x7f013c196330 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T14:58:09.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.279+0000 7f0141d3a700 1 --2- 192.168.123.109:0/1625331578 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f013c103930 0x7f013c196330 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:09.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.279+0000 7f0141d3a700 1 -- 192.168.123.109:0/1625331578 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f012c0097e0 con 0x7f013c101010 
2026-03-09T14:58:09.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.279+0000 7f0141d3a700 1 --2- 192.168.123.109:0/1625331578 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f013c101010 0x7f013c195df0 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f012c004990 tx=0x7f012c004a70 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:09.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.279+0000 7f0132ffd700 1 -- 192.168.123.109:0/1625331578 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f012c01d070 con 0x7f013c101010 2026-03-09T14:58:09.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.279+0000 7f0143f9e700 1 -- 192.168.123.109:0/1625331578 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f013c19b4e0 con 0x7f013c101010 2026-03-09T14:58:09.279 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.279+0000 7f0143f9e700 1 -- 192.168.123.109:0/1625331578 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f013c19ba50 con 0x7f013c101010 2026-03-09T14:58:09.280 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.280+0000 7f0132ffd700 1 -- 192.168.123.109:0/1625331578 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f012c00bcd0 con 0x7f013c101010 2026-03-09T14:58:09.280 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.280+0000 7f0132ffd700 1 -- 192.168.123.109:0/1625331578 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f012c00f8d0 con 0x7f013c101010 2026-03-09T14:58:09.280 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.281+0000 7f0132ffd700 1 -- 192.168.123.109:0/1625331578 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f012c00fa30 con 
0x7f013c101010 2026-03-09T14:58:09.280 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.281+0000 7f0132ffd700 1 --2- 192.168.123.109:0/1625331578 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f012806c750 0x7f012806ec00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:09.281 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.281+0000 7f0132ffd700 1 -- 192.168.123.109:0/1625331578 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f012c08dce0 con 0x7f013c101010 2026-03-09T14:58:09.281 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.281+0000 7f0141539700 1 --2- 192.168.123.109:0/1625331578 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f012806c750 0x7f012806ec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:09.284 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.282+0000 7f0143f9e700 1 -- 192.168.123.109:0/1625331578 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0120005320 con 0x7f013c101010 2026-03-09T14:58:09.284 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.285+0000 7f0141539700 1 --2- 192.168.123.109:0/1625331578 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f012806c750 0x7f012806ec00 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f0138009730 tx=0x7f0138006cb0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:09.284 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.285+0000 7f0132ffd700 1 -- 192.168.123.109:0/1625331578 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f012c05c360 con 
0x7f013c101010 2026-03-09T14:58:09.428 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.428+0000 7f0143f9e700 1 -- 192.168.123.109:0/1625331578 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) v1 -- 0x7f0120005190 con 0x7f013c101010 2026-03-09T14:58:09.434 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.434+0000 7f0132ffd700 1 -- 192.168.123.109:0/1625331578 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v17) v1 ==== 170+0+59 (secure 0 0 0) 0x7f012c027070 con 0x7f013c101010 2026-03-09T14:58:09.434 INFO:teuthology.orchestra.run.vm09.stdout:[client.1] 2026-03-09T14:58:09.434 INFO:teuthology.orchestra.run.vm09.stdout: key = AQAB4K5p0/SdGRAAfu9bkuanRMYzsxIz5A2xjw== 2026-03-09T14:58:09.436 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.437+0000 7f0143f9e700 1 -- 192.168.123.109:0/1625331578 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f012806c750 msgr2=0x7f012806ec00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:09.436 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.437+0000 7f0143f9e700 1 --2- 192.168.123.109:0/1625331578 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f012806c750 0x7f012806ec00 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f0138009730 tx=0x7f0138006cb0 comp rx=0 tx=0).stop 2026-03-09T14:58:09.437 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.437+0000 7f0143f9e700 1 -- 192.168.123.109:0/1625331578 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f013c101010 msgr2=0x7f013c195df0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:09.437 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.437+0000 7f0143f9e700 1 --2- 192.168.123.109:0/1625331578 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f013c101010 0x7f013c195df0 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f012c004990 tx=0x7f012c004a70 comp rx=0 tx=0).stop 2026-03-09T14:58:09.437 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.437+0000 7f0143f9e700 1 -- 192.168.123.109:0/1625331578 shutdown_connections 2026-03-09T14:58:09.437 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.437+0000 7f0143f9e700 1 --2- 192.168.123.109:0/1625331578 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f012806c750 0x7f012806ec00 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:09.437 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.437+0000 7f0143f9e700 1 --2- 192.168.123.109:0/1625331578 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f013c101010 0x7f013c195df0 unknown :-1 s=CLOSED pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:09.437 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.437+0000 7f0143f9e700 1 --2- 192.168.123.109:0/1625331578 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f013c103930 0x7f013c196330 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:09.437 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.437+0000 7f0143f9e700 1 -- 192.168.123.109:0/1625331578 >> 192.168.123.109:0/1625331578 conn(0x7f013c0faa10 msgr2=0x7f013c0fce60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:09.437 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.438+0000 7f0143f9e700 1 -- 192.168.123.109:0/1625331578 shutdown_connections 2026-03-09T14:58:09.437 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:09.438+0000 7f0143f9e700 1 -- 192.168.123.109:0/1625331578 wait 
complete. 2026-03-09T14:58:09.506 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T14:58:09.506 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-09T14:58:09.506 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-09T14:58:09.546 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-09T14:58:09.546 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T14:58:09.546 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mgr dump --format=json 2026-03-09T14:58:09.718 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:10.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.003+0000 7fb01c5f5700 1 -- 192.168.123.105:0/4205920189 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb014102780 msgr2=0x7fb014102bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:10.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.003+0000 7fb01c5f5700 1 --2- 192.168.123.105:0/4205920189 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb014102780 0x7fb014102bf0 secure :-1 s=READY pgs=233 cs=0 l=1 rev1=1 crypto rx=0x7fb010009b00 tx=0x7fb010009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:10.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.003+0000 7fb01c5f5700 1 -- 192.168.123.105:0/4205920189 shutdown_connections 2026-03-09T14:58:10.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.003+0000 7fb01c5f5700 1 --2- 192.168.123.105:0/4205920189 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb014102780 0x7fb014102bf0 unknown :-1 s=CLOSED pgs=233 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.006 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.003+0000 7fb01c5f5700 1 --2- 192.168.123.105:0/4205920189 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb014108780 0x7fb014108b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.003+0000 7fb01c5f5700 1 -- 192.168.123.105:0/4205920189 >> 192.168.123.105:0/4205920189 conn(0x7fb0140fe280 msgr2=0x7fb014100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:10.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.003+0000 7fb01c5f5700 1 -- 192.168.123.105:0/4205920189 shutdown_connections 2026-03-09T14:58:10.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.003+0000 7fb01c5f5700 1 -- 192.168.123.105:0/4205920189 wait complete. 2026-03-09T14:58:10.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.004+0000 7fb01c5f5700 1 Processor -- start 2026-03-09T14:58:10.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.004+0000 7fb01c5f5700 1 -- start start 2026-03-09T14:58:10.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.004+0000 7fb01c5f5700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb014102780 0x7fb014075260 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:10.007 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.005+0000 7fb01c5f5700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb014108780 0x7fb0140757a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:10.007 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.005+0000 7fb01c5f5700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb014079310 con 0x7fb014108780 2026-03-09T14:58:10.007 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.005+0000 7fb01c5f5700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb014075ce0 con 0x7fb014102780 2026-03-09T14:58:10.010 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.007+0000 7fb019b90700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb014108780 0x7fb0140757a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:10.010 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.007+0000 7fb019b90700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb014108780 0x7fb0140757a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:51020/0 (socket says 192.168.123.105:51020) 2026-03-09T14:58:10.010 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.007+0000 7fb019b90700 1 -- 192.168.123.105:0/1885684094 learned_addr learned my addr 192.168.123.105:0/1885684094 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:10.010 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.007+0000 7fb019b90700 1 -- 192.168.123.105:0/1885684094 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb014102780 msgr2=0x7fb014075260 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:10.010 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.007+0000 7fb019b90700 1 --2- 192.168.123.105:0/1885684094 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb014102780 0x7fb014075260 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.010 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.007+0000 7fb019b90700 1 -- 192.168.123.105:0/1885684094 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb0100097e0 con 0x7fb014108780 2026-03-09T14:58:10.010 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.007+0000 7fb019b90700 1 --2- 192.168.123.105:0/1885684094 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb014108780 0x7fb0140757a0 secure :-1 s=READY pgs=234 cs=0 l=1 rev1=1 crypto rx=0x7fb010009fd0 tx=0x7fb010004a40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:10.010 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.009+0000 7fb0077fe700 1 -- 192.168.123.105:0/1885684094 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb01001d070 con 0x7fb014108780 2026-03-09T14:58:10.010 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.010+0000 7fb01c5f5700 1 -- 192.168.123.105:0/1885684094 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb014075f60 con 0x7fb014108780 2026-03-09T14:58:10.011 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.010+0000 7fb01c5f5700 1 -- 192.168.123.105:0/1885684094 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb014071a10 con 0x7fb014108780 2026-03-09T14:58:10.011 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.010+0000 7fb0077fe700 1 -- 192.168.123.105:0/1885684094 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb010004510 con 0x7fb014108780 2026-03-09T14:58:10.011 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.010+0000 7fb0077fe700 1 -- 192.168.123.105:0/1885684094 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb010003e90 con 0x7fb014108780 2026-03-09T14:58:10.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.011+0000 7fb0077fe700 1 -- 
192.168.123.105:0/1885684094 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fb01000f540 con 0x7fb014108780 2026-03-09T14:58:10.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.012+0000 7fb0077fe700 1 --2- 192.168.123.105:0/1885684094 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb00006c750 0x7fb00006ec00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:10.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.012+0000 7fb0077fe700 1 -- 192.168.123.105:0/1885684094 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fb01008d7c0 con 0x7fb014108780 2026-03-09T14:58:10.013 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.012+0000 7fb01a391700 1 --2- 192.168.123.105:0/1885684094 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb00006c750 0x7fb00006ec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:10.016 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.012+0000 7fb01c5f5700 1 -- 192.168.123.105:0/1885684094 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7faff8005320 con 0x7fb014108780 2026-03-09T14:58:10.016 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.014+0000 7fb01a391700 1 --2- 192.168.123.105:0/1885684094 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb00006c750 0x7fb00006ec00 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7fb0141038c0 tx=0x7fb008005d20 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:10.016 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.015+0000 7fb0077fe700 1 -- 192.168.123.105:0/1885684094 <== mon.0 
v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fb0100583b0 con 0x7fb014108780 2026-03-09T14:58:10.157 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:10 vm05 ceph-mon[50611]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:10.157 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:10 vm05 ceph-mon[50611]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:58:10.157 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:10 vm05 ceph-mon[50611]: from='client.? 192.168.123.109:0/1625331578' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:58:10.157 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:10 vm05 ceph-mon[50611]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:58:10.159 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.156+0000 7fb01c5f5700 1 -- 192.168.123.105:0/1885684094 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "mgr dump", "format": "json"} v 0) v1 -- 0x7faff8005190 con 0x7fb014108780 2026-03-09T14:58:10.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.159+0000 7fb0077fe700 1 -- 192.168.123.105:0/1885684094 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "mgr dump", "format": "json"}]=0 v19) v1 ==== 74+0+172845 (secure 0 0 0) 0x7fb010026020 con 0x7fb014108780 2026-03-09T14:58:10.160 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:10.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.163+0000 7fb01c5f5700 1 -- 192.168.123.105:0/1885684094 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb00006c750 msgr2=0x7fb00006ec00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:10.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.163+0000 7fb01c5f5700 1 --2- 192.168.123.105:0/1885684094 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb00006c750 0x7fb00006ec00 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7fb0141038c0 tx=0x7fb008005d20 comp rx=0 tx=0).stop 2026-03-09T14:58:10.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.164+0000 7fb01c5f5700 1 -- 192.168.123.105:0/1885684094 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb014108780 msgr2=0x7fb0140757a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:10.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.164+0000 7fb01c5f5700 1 --2- 192.168.123.105:0/1885684094 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb014108780 0x7fb0140757a0 secure :-1 s=READY pgs=234 cs=0 l=1 rev1=1 crypto rx=0x7fb010009fd0 tx=0x7fb010004a40 comp rx=0 tx=0).stop 2026-03-09T14:58:10.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.164+0000 7fb01c5f5700 1 -- 192.168.123.105:0/1885684094 shutdown_connections 2026-03-09T14:58:10.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.164+0000 7fb01c5f5700 1 --2- 192.168.123.105:0/1885684094 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb00006c750 0x7fb00006ec00 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.164+0000 7fb01c5f5700 1 --2- 192.168.123.105:0/1885684094 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb014102780 0x7fb014075260 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.164+0000 7fb01c5f5700 1 --2- 192.168.123.105:0/1885684094 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb014108780 0x7fb0140757a0 unknown :-1 s=CLOSED pgs=234 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.164+0000 7fb01c5f5700 1 -- 192.168.123.105:0/1885684094 >> 192.168.123.105:0/1885684094 conn(0x7fb0140fe280 msgr2=0x7fb0140ffc00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:10.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.164+0000 7fb01c5f5700 1 -- 192.168.123.105:0/1885684094 shutdown_connections 2026-03-09T14:58:10.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.164+0000 7fb01c5f5700 1 -- 192.168.123.105:0/1885684094 wait complete. 
2026-03-09T14:58:10.211 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":19,"active_gid":14249,"active_name":"vm05.lhsexd","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":2},{"type":"v1","addr":"192.168.123.105:6801","nonce":2}]},"active_addr":"192.168.123.105:6801/2","active_change":"2026-03-09T14:56:37.039680+0000","active_mgr_features":4540138322906710015,"available":true,"standbys":[{"gid":14274,"name":"vm09.cfuwdz","mgr_features":4540138322906710015,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts 
to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format 
HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which 
no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"alertmanager_web_password":{"name":"alertmanager_web_password","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"Alertmanager web password","long_desc":"","tags":[],"see_also":[]},"alertmanager_web_user":{"name":"alertmanager_web_user","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"Alertmanager web user","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on 
ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph:v18","min":"","max":"","enum_allowed":[],"desc":"Container image 
name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:9.4.7","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:2.4.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.5.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.43.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:2.4.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) 
inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"secs","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"docker.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host 
check","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"prometheus_web_password":{"name":"prometheus_web_password","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"Prometheus web password","long_desc":"","tags":[],"see_also":[]},"prometheus_web_user":{"name":"prometheus_web_user","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"Prometheus web user","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no 
TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","fl
ags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":
[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level 
for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also
":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds 
upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"
log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"noautoscale":{"name":"noautoscale","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"global autoscale flag","long_desc":"Option to turn on/off the autoscaler for all pools","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","m
in":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"serv
er_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied 
drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"",
"min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","mi
n":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_t
o_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options"
:{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay 
seconds","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"defau
lt_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP 
server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which 
no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"alertmanager_web_password":{"name":"alertmanager_web_password","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"Alertmanager web password","long_desc":"","tags":[],"see_also":[]},"alertmanager_web_user":{"name":"alertmanager_web_user","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"Alertmanager web user","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on 
ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this option can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph:v18","min":"","max":"","enum_allowed":[],"desc":"Container image 
name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:9.4.7","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:2.4.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.5.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.43.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:2.4.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) 
inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"secs","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"docker.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host 
check","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"prometheus_web_password":{"name":"prometheus_web_password","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"Prometheus web password","long_desc":"","tags":[],"see_also":[]},"prometheus_web_user":{"name":"prometheus_web_user","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"Prometheus web user","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no 
TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","fl
ags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":
[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level 
for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also
":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds 
upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"
log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"noautoscale":{"name":"noautoscale","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"global autoscale flag","long_desc":"Option to turn on/off the autoscaler for all pools","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","m
in":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"serv
er_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied 
drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"",
"min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","mi
n":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_t
o_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options"
:{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay 
seconds","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"defau
lt_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.105:8443/","prometheus":"http://192.168.123.105:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"last_failure_osd_epoch":5,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":2983840049}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":2534607224}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":3660404579}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":3749740799}]}]} 2026-03-09T14:58:10.212 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T14:58:10.212 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T14:58:10.212 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd dump --format=json 2026-03-09T14:58:10.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:10 vm09 ceph-mon[59673]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:10.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:10 vm09 ceph-mon[59673]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:58:10.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:10 vm09 ceph-mon[59673]: from='client.? 192.168.123.109:0/1625331578' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:58:10.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:10 vm09 ceph-mon[59673]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:58:10.372 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.654+0000 7ff0b5491700 1 -- 192.168.123.105:0/2800912416 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0b01065b0 msgr2=0x7ff0b0106980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.654+0000 7ff0b5491700 1 --2- 192.168.123.105:0/2800912416 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0b01065b0 0x7ff0b0106980 secure :-1 s=READY pgs=235 cs=0 l=1 rev1=1 crypto rx=0x7ff098009b00 tx=0x7ff098009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.655+0000 7ff0b5491700 1 -- 192.168.123.105:0/2800912416 shutdown_connections 2026-03-09T14:58:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.655+0000 7ff0b5491700 1 --2- 192.168.123.105:0/2800912416 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff0b0100590 0x7ff0b0100a00 unknown :-1 
s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.655+0000 7ff0b5491700 1 --2- 192.168.123.105:0/2800912416 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0b01065b0 0x7ff0b0106980 unknown :-1 s=CLOSED pgs=235 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.655+0000 7ff0b5491700 1 -- 192.168.123.105:0/2800912416 >> 192.168.123.105:0/2800912416 conn(0x7ff0b00fc090 msgr2=0x7ff0b00fe4a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.655+0000 7ff0b5491700 1 -- 192.168.123.105:0/2800912416 shutdown_connections 2026-03-09T14:58:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.655+0000 7ff0b5491700 1 -- 192.168.123.105:0/2800912416 wait complete. 2026-03-09T14:58:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0b5491700 1 Processor -- start 2026-03-09T14:58:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0b5491700 1 -- start start 2026-03-09T14:58:10.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0b5491700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0b0100590 0x7ff0b0073050 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:10.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0b5491700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff0b01065b0 0x7ff0b0073590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:10.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0b5491700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 
0x7ff0b0073ad0 con 0x7ff0b0100590 2026-03-09T14:58:10.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0b5491700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff0b0073c10 con 0x7ff0b01065b0 2026-03-09T14:58:10.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0ae7fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff0b01065b0 0x7ff0b0073590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:10.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0ae7fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff0b01065b0 0x7ff0b0073590 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:46702/0 (socket says 192.168.123.105:46702) 2026-03-09T14:58:10.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0ae7fc700 1 -- 192.168.123.105:0/1217665920 learned_addr learned my addr 192.168.123.105:0/1217665920 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:10.658 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0aeffd700 1 --2- 192.168.123.105:0/1217665920 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0b0100590 0x7ff0b0073050 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:10.658 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0aeffd700 1 -- 192.168.123.105:0/1217665920 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff0b01065b0 msgr2=0x7ff0b0073590 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:10.658 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0aeffd700 1 --2- 192.168.123.105:0/1217665920 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff0b01065b0 0x7ff0b0073590 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.658 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.656+0000 7ff0aeffd700 1 -- 192.168.123.105:0/1217665920 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff0980097e0 con 0x7ff0b0100590 2026-03-09T14:58:10.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.657+0000 7ff0aeffd700 1 --2- 192.168.123.105:0/1217665920 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0b0100590 0x7ff0b0073050 secure :-1 s=READY pgs=236 cs=0 l=1 rev1=1 crypto rx=0x7ff098009fd0 tx=0x7ff0980049b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:10.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.657+0000 7ff0a7fff700 1 -- 192.168.123.105:0/1217665920 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff09801d070 con 0x7ff0b0100590 2026-03-09T14:58:10.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.657+0000 7ff0b5491700 1 -- 192.168.123.105:0/1217665920 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff0b0073e90 con 0x7ff0b0100590 2026-03-09T14:58:10.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.657+0000 7ff0b5491700 1 -- 192.168.123.105:0/1217665920 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff0b01a2540 con 0x7ff0b0100590 2026-03-09T14:58:10.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.657+0000 7ff0a7fff700 1 -- 192.168.123.105:0/1217665920 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 
==== 1139+0+0 (secure 0 0 0) 0x7ff09800bc50 con 0x7ff0b0100590 2026-03-09T14:58:10.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.657+0000 7ff0a7fff700 1 -- 192.168.123.105:0/1217665920 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff09802b650 con 0x7ff0b0100590 2026-03-09T14:58:10.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.658+0000 7ff0b5491700 1 -- 192.168.123.105:0/1217665920 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff090005320 con 0x7ff0b0100590 2026-03-09T14:58:10.663 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.659+0000 7ff0a7fff700 1 -- 192.168.123.105:0/1217665920 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7ff09800f460 con 0x7ff0b0100590 2026-03-09T14:58:10.663 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.659+0000 7ff0a7fff700 1 --2- 192.168.123.105:0/1217665920 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff09c074ee0 0x7ff09c077390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:10.663 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.659+0000 7ff0a7fff700 1 -- 192.168.123.105:0/1217665920 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7ff09808cc10 con 0x7ff0b0100590 2026-03-09T14:58:10.663 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.660+0000 7ff0ae7fc700 1 --2- 192.168.123.105:0/1217665920 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff09c074ee0 0x7ff09c077390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:10.663 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.660+0000 7ff0ae7fc700 1 --2- 
192.168.123.105:0/1217665920 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff09c074ee0 0x7ff09c077390 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7ff0b0074840 tx=0x7ff0a0005ef0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:10.663 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.662+0000 7ff0a7fff700 1 -- 192.168.123.105:0/1217665920 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7ff098057630 con 0x7ff0b0100590 2026-03-09T14:58:10.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.767+0000 7ff0b5491700 1 -- 192.168.123.105:0/1217665920 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) v1 -- 0x7ff090005190 con 0x7ff0b0100590 2026-03-09T14:58:10.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.770+0000 7ff0a7fff700 1 -- 192.168.123.105:0/1217665920 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v33) v1 ==== 74+0+11285 (secure 0 0 0) 0x7ff09805ac50 con 0x7ff0b0100590 2026-03-09T14:58:10.771 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:10.772 
INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":33,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","created":"2026-03-09T14:55:10.534285+0000","modified":"2026-03-09T14:58:05.734328+0000","last_up_change":"2026-03-09T14:58:04.726665+0000","last_in_change":"2026-03-09T14:57:53.533741+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":14,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"reef","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T14:57:35.266161+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"
hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_acting":6,"score_stable":6,"optimal_score":0.5,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"1022e816-4ad0-4a27-9052-07d4015a684e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":3444724580},{"type":"v1","addr":"192.168.123.105:6803","nonce":3444724580}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":3444724580},{"type":"v1","addr":"192.168.123.105:6805","nonce":3444724580}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":3444724580},{"type":"v1","addr":"192.168.123.105:6809","nonce":3444724580}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":3444724580},{"type":"v1","addr":"192.168.123.105:6807","nonce":3444724580}]},"public_addr":"192.168.123.105:6803/3444724580","cluster_addr":"192.168.123.105:6805/3444724580","heartbeat_back_addr":"192.168.123.105:6809/3444724580","heartbeat_front_addr":"192.168.123.105:6807/3444724580","state":["exists","up"]},{"osd":1,"uuid":"73421280-9a73-4092-8e8f-854babd94f13","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":24,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":1642565416},{"type":"v1","addr":"192.168.123.105:6811","nonce":1642565416}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.
123.105:6812","nonce":1642565416},{"type":"v1","addr":"192.168.123.105:6813","nonce":1642565416}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":1642565416},{"type":"v1","addr":"192.168.123.105:6817","nonce":1642565416}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":1642565416},{"type":"v1","addr":"192.168.123.105:6815","nonce":1642565416}]},"public_addr":"192.168.123.105:6811/1642565416","cluster_addr":"192.168.123.105:6813/1642565416","heartbeat_back_addr":"192.168.123.105:6817/1642565416","heartbeat_front_addr":"192.168.123.105:6815/1642565416","state":["exists","up"]},{"osd":2,"uuid":"14d84b3a-06be-48f8-89a7-0e9c83f76e3c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":4063272520},{"type":"v1","addr":"192.168.123.105:6819","nonce":4063272520}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":4063272520},{"type":"v1","addr":"192.168.123.105:6821","nonce":4063272520}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":4063272520},{"type":"v1","addr":"192.168.123.105:6825","nonce":4063272520}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":4063272520},{"type":"v1","addr":"192.168.123.105:6823","nonce":4063272520}]},"public_addr":"192.168.123.105:6819/4063272520","cluster_addr":"192.168.123.105:6821/4063272520","heartbeat_back_addr":"192.168.123.105:6825/4063272520","heartbeat_front_addr":"192.168.123.105:6823/4063272520","state":["exists","up"]},{"osd":3,"uuid":"24dfa5ad-72df-4f80-bf60-0507508104f2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":27,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":1968723
815},{"type":"v1","addr":"192.168.123.109:6801","nonce":1968723815}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":1968723815},{"type":"v1","addr":"192.168.123.109:6803","nonce":1968723815}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":1968723815},{"type":"v1","addr":"192.168.123.109:6807","nonce":1968723815}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":1968723815},{"type":"v1","addr":"192.168.123.109:6805","nonce":1968723815}]},"public_addr":"192.168.123.109:6801/1968723815","cluster_addr":"192.168.123.109:6803/1968723815","heartbeat_back_addr":"192.168.123.109:6807/1968723815","heartbeat_front_addr":"192.168.123.109:6805/1968723815","state":["exists","up"]},{"osd":4,"uuid":"c4ddfd7f-8055-4c53-a70a-131428da743a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":1714619602},{"type":"v1","addr":"192.168.123.109:6809","nonce":1714619602}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":1714619602},{"type":"v1","addr":"192.168.123.109:6811","nonce":1714619602}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":1714619602},{"type":"v1","addr":"192.168.123.109:6815","nonce":1714619602}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":1714619602},{"type":"v1","addr":"192.168.123.109:6813","nonce":1714619602}]},"public_addr":"192.168.123.109:6809/1714619602","cluster_addr":"192.168.123.109:6811/1714619602","heartbeat_back_addr":"192.168.123.109:6815/1714619602","heartbeat_front_addr":"192.168.123.109:6813/1714619602","state":["exists","up"]},{"osd":5,"uuid":"0ff88b04-4b34-4df6-bf77-2132a823172e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from
":32,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":2797312478},{"type":"v1","addr":"192.168.123.109:6817","nonce":2797312478}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":2797312478},{"type":"v1","addr":"192.168.123.109:6819","nonce":2797312478}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":2797312478},{"type":"v1","addr":"192.168.123.109:6823","nonce":2797312478}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":2797312478},{"type":"v1","addr":"192.168.123.109:6821","nonce":2797312478}]},"public_addr":"192.168.123.109:6817/2797312478","cluster_addr":"192.168.123.109:6819/2797312478","heartbeat_back_addr":"192.168.123.109:6823/2797312478","heartbeat_front_addr":"192.168.123.109:6821/2797312478","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:11.388791+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:23.267691+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:33.929621+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:43.737026+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:54.082613+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138
322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:58:02.883518+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:0/2507132558":"2026-03-10T14:56:37.039546+0000","192.168.123.105:0/3823132518":"2026-03-10T14:56:37.039546+0000","192.168.123.105:0/3432926156":"2026-03-10T14:55:39.031351+0000","192.168.123.105:0/2312625867":"2026-03-10T14:55:39.031351+0000","192.168.123.105:6801/2":"2026-03-10T14:55:24.676889+0000","192.168.123.105:6800/2":"2026-03-10T14:55:24.676889+0000","192.168.123.105:0/3215275472":"2026-03-10T14:55:24.676889+0000","192.168.123.105:0/116363292":"2026-03-10T14:55:39.031351+0000","192.168.123.105:0/4048863196":"2026-03-10T14:55:24.676889+0000","192.168.123.105:0/2546376349":"2026-03-10T14:56:37.039546+0000","192.168.123.105:0/3523930106":"2026-03-10T14:55:24.676889+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T14:58:10.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.773+0000 7ff0b5491700 1 -- 192.168.123.105:0/1217665920 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff09c074ee0 msgr2=0x7ff09c077390 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:10.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.773+0000 7ff0b5491700 1 --2- 192.168.123.105:0/1217665920 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff09c074ee0 0x7ff09c077390 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7ff0b0074840 tx=0x7ff0a0005ef0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.775 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.774+0000 7ff0b5491700 1 -- 192.168.123.105:0/1217665920 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0b0100590 msgr2=0x7ff0b0073050 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:10.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.774+0000 7ff0b5491700 1 --2- 192.168.123.105:0/1217665920 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0b0100590 0x7ff0b0073050 secure :-1 s=READY pgs=236 cs=0 l=1 rev1=1 crypto rx=0x7ff098009fd0 tx=0x7ff0980049b0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.774+0000 7ff0b5491700 1 -- 192.168.123.105:0/1217665920 shutdown_connections 2026-03-09T14:58:10.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.774+0000 7ff0b5491700 1 --2- 192.168.123.105:0/1217665920 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff09c074ee0 0x7ff09c077390 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.774+0000 7ff0b5491700 1 --2- 192.168.123.105:0/1217665920 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff0b0100590 0x7ff0b0073050 unknown :-1 s=CLOSED pgs=236 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.774+0000 7ff0b5491700 1 --2- 192.168.123.105:0/1217665920 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff0b01065b0 0x7ff0b0073590 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:10.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.774+0000 7ff0b5491700 1 -- 192.168.123.105:0/1217665920 >> 192.168.123.105:0/1217665920 conn(0x7ff0b00fc090 msgr2=0x7ff0b00fd890 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-09T14:58:10.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.774+0000 7ff0b5491700 1 -- 192.168.123.105:0/1217665920 shutdown_connections 2026-03-09T14:58:10.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:10.774+0000 7ff0b5491700 1 -- 192.168.123.105:0/1217665920 wait complete. 2026-03-09T14:58:10.846 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-09T14:58:10.846 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd dump --format=json 2026-03-09T14:58:11.015 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:11.043 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:11 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/1885684094' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:58:11.043 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:11 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/1217665920' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:58:11.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.289+0000 7fb1e44f5700 1 -- 192.168.123.105:0/2798017012 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1dc073090 msgr2=0x7fb1dc073460 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:11.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.289+0000 7fb1e44f5700 1 --2- 192.168.123.105:0/2798017012 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1dc073090 0x7fb1dc073460 secure :-1 s=READY pgs=237 cs=0 l=1 rev1=1 crypto rx=0x7fb1cc009b30 tx=0x7fb1cc009e40 comp rx=0 tx=0).stop 2026-03-09T14:58:11.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.290+0000 7fb1e44f5700 1 -- 192.168.123.105:0/2798017012 shutdown_connections 2026-03-09T14:58:11.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.290+0000 7fb1e44f5700 1 --2- 192.168.123.105:0/2798017012 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1dc0739a0 0x7fb1dc10c880 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:11.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.290+0000 7fb1e44f5700 1 --2- 192.168.123.105:0/2798017012 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1dc073090 0x7fb1dc073460 unknown :-1 s=CLOSED pgs=237 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:11.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.290+0000 7fb1e44f5700 1 -- 192.168.123.105:0/2798017012 >> 192.168.123.105:0/2798017012 conn(0x7fb1dc0fbfc0 msgr2=0x7fb1dc0fe3d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:11.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.291+0000 7fb1e44f5700 1 -- 192.168.123.105:0/2798017012 shutdown_connections 2026-03-09T14:58:11.292 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.291+0000 7fb1e44f5700 1 -- 192.168.123.105:0/2798017012 wait complete. 2026-03-09T14:58:11.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.291+0000 7fb1e44f5700 1 Processor -- start 2026-03-09T14:58:11.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.292+0000 7fb1e44f5700 1 -- start start 2026-03-09T14:58:11.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.292+0000 7fb1e44f5700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1dc0739a0 0x7fb1dc1985e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:11.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.292+0000 7fb1e44f5700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1dc198b20 0x7fb1dc19cf90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:11.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.292+0000 7fb1e44f5700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb1dc199130 con 0x7fb1dc0739a0 2026-03-09T14:58:11.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.292+0000 7fb1e44f5700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb1dc1992a0 con 0x7fb1dc198b20 2026-03-09T14:58:11.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.292+0000 7fb1e2291700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1dc0739a0 0x7fb1dc1985e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:11.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.292+0000 7fb1e2291700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1dc0739a0 0x7fb1dc1985e0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:51042/0 (socket says 192.168.123.105:51042) 2026-03-09T14:58:11.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.292+0000 7fb1e2291700 1 -- 192.168.123.105:0/3323844288 learned_addr learned my addr 192.168.123.105:0/3323844288 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:11.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.293+0000 7fb1e1a90700 1 --2- 192.168.123.105:0/3323844288 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1dc198b20 0x7fb1dc19cf90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:11.294 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.293+0000 7fb1e2291700 1 -- 192.168.123.105:0/3323844288 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1dc198b20 msgr2=0x7fb1dc19cf90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:11.294 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.293+0000 7fb1e2291700 1 --2- 192.168.123.105:0/3323844288 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1dc198b20 0x7fb1dc19cf90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:11.294 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.293+0000 7fb1e2291700 1 -- 192.168.123.105:0/3323844288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb1cc0097e0 con 0x7fb1dc0739a0 2026-03-09T14:58:11.294 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.293+0000 7fb1e2291700 1 --2- 192.168.123.105:0/3323844288 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1dc0739a0 0x7fb1dc1985e0 secure :-1 s=READY pgs=238 cs=0 l=1 rev1=1 crypto 
rx=0x7fb1cc000c00 tx=0x7fb1cc00be50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:11.294 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.294+0000 7fb1d37fe700 1 -- 192.168.123.105:0/3323844288 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb1cc01d070 con 0x7fb1dc0739a0 2026-03-09T14:58:11.295 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.294+0000 7fb1d37fe700 1 -- 192.168.123.105:0/3323844288 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb1cc00f460 con 0x7fb1dc0739a0 2026-03-09T14:58:11.295 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.294+0000 7fb1d37fe700 1 -- 192.168.123.105:0/3323844288 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb1cc021620 con 0x7fb1dc0739a0 2026-03-09T14:58:11.295 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.294+0000 7fb1e44f5700 1 -- 192.168.123.105:0/3323844288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb1dc19d530 con 0x7fb1dc0739a0 2026-03-09T14:58:11.295 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.294+0000 7fb1e44f5700 1 -- 192.168.123.105:0/3323844288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb1dc19da80 con 0x7fb1dc0739a0 2026-03-09T14:58:11.296 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.296+0000 7fb1d37fe700 1 -- 192.168.123.105:0/3323844288 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fb1cc00fab0 con 0x7fb1dc0739a0 2026-03-09T14:58:11.297 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.296+0000 7fb1d37fe700 1 --2- 192.168.123.105:0/3323844288 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb1c806c630 0x7fb1c806eae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 
rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:11.297 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.296+0000 7fb1d37fe700 1 -- 192.168.123.105:0/3323844288 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fb1cc08c5c0 con 0x7fb1dc0739a0 2026-03-09T14:58:11.297 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.296+0000 7fb1e1a90700 1 --2- 192.168.123.105:0/3323844288 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb1c806c630 0x7fb1c806eae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:11.297 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.296+0000 7fb1e44f5700 1 -- 192.168.123.105:0/3323844288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb1dc19d6c0 con 0x7fb1dc0739a0 2026-03-09T14:58:11.297 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.297+0000 7fb1e1a90700 1 --2- 192.168.123.105:0/3323844288 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb1c806c630 0x7fb1c806eae0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7fb1d8009de0 tx=0x7fb1d8009450 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:11.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.301+0000 7fb1d37fe700 1 -- 192.168.123.105:0/3323844288 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fb1dc19d6c0 con 0x7fb1dc0739a0 2026-03-09T14:58:11.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:11 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/1885684094' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:58:11.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:11 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/1217665920' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:58:11.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.413+0000 7fb1e44f5700 1 -- 192.168.123.105:0/3323844288 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) v1 -- 0x7fb1dc066e40 con 0x7fb1dc0739a0 2026-03-09T14:58:11.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.413+0000 7fb1d37fe700 1 -- 192.168.123.105:0/3323844288 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v33) v1 ==== 74+0+11285 (secure 0 0 0) 0x7fb1cc057080 con 0x7fb1dc0739a0 2026-03-09T14:58:11.415 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:11.415 
INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":33,"fsid":"d952ca1a-1bc7-11f1-a184-f9dcb7ee7000","created":"2026-03-09T14:55:10.534285+0000","modified":"2026-03-09T14:58:05.734328+0000","last_up_change":"2026-03-09T14:58:04.726665+0000","last_in_change":"2026-03-09T14:57:53.533741+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":14,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"reef","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T14:57:35.266161+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"
hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_acting":6,"score_stable":6,"optimal_score":0.5,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"1022e816-4ad0-4a27-9052-07d4015a684e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":3444724580},{"type":"v1","addr":"192.168.123.105:6803","nonce":3444724580}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":3444724580},{"type":"v1","addr":"192.168.123.105:6805","nonce":3444724580}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":3444724580},{"type":"v1","addr":"192.168.123.105:6809","nonce":3444724580}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":3444724580},{"type":"v1","addr":"192.168.123.105:6807","nonce":3444724580}]},"public_addr":"192.168.123.105:6803/3444724580","cluster_addr":"192.168.123.105:6805/3444724580","heartbeat_back_addr":"192.168.123.105:6809/3444724580","heartbeat_front_addr":"192.168.123.105:6807/3444724580","state":["exists","up"]},{"osd":1,"uuid":"73421280-9a73-4092-8e8f-854babd94f13","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":24,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":1642565416},{"type":"v1","addr":"192.168.123.105:6811","nonce":1642565416}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.
123.105:6812","nonce":1642565416},{"type":"v1","addr":"192.168.123.105:6813","nonce":1642565416}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":1642565416},{"type":"v1","addr":"192.168.123.105:6817","nonce":1642565416}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":1642565416},{"type":"v1","addr":"192.168.123.105:6815","nonce":1642565416}]},"public_addr":"192.168.123.105:6811/1642565416","cluster_addr":"192.168.123.105:6813/1642565416","heartbeat_back_addr":"192.168.123.105:6817/1642565416","heartbeat_front_addr":"192.168.123.105:6815/1642565416","state":["exists","up"]},{"osd":2,"uuid":"14d84b3a-06be-48f8-89a7-0e9c83f76e3c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":4063272520},{"type":"v1","addr":"192.168.123.105:6819","nonce":4063272520}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":4063272520},{"type":"v1","addr":"192.168.123.105:6821","nonce":4063272520}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":4063272520},{"type":"v1","addr":"192.168.123.105:6825","nonce":4063272520}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":4063272520},{"type":"v1","addr":"192.168.123.105:6823","nonce":4063272520}]},"public_addr":"192.168.123.105:6819/4063272520","cluster_addr":"192.168.123.105:6821/4063272520","heartbeat_back_addr":"192.168.123.105:6825/4063272520","heartbeat_front_addr":"192.168.123.105:6823/4063272520","state":["exists","up"]},{"osd":3,"uuid":"24dfa5ad-72df-4f80-bf60-0507508104f2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":27,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":1968723
815},{"type":"v1","addr":"192.168.123.109:6801","nonce":1968723815}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":1968723815},{"type":"v1","addr":"192.168.123.109:6803","nonce":1968723815}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":1968723815},{"type":"v1","addr":"192.168.123.109:6807","nonce":1968723815}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":1968723815},{"type":"v1","addr":"192.168.123.109:6805","nonce":1968723815}]},"public_addr":"192.168.123.109:6801/1968723815","cluster_addr":"192.168.123.109:6803/1968723815","heartbeat_back_addr":"192.168.123.109:6807/1968723815","heartbeat_front_addr":"192.168.123.109:6805/1968723815","state":["exists","up"]},{"osd":4,"uuid":"c4ddfd7f-8055-4c53-a70a-131428da743a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":1714619602},{"type":"v1","addr":"192.168.123.109:6809","nonce":1714619602}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":1714619602},{"type":"v1","addr":"192.168.123.109:6811","nonce":1714619602}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":1714619602},{"type":"v1","addr":"192.168.123.109:6815","nonce":1714619602}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":1714619602},{"type":"v1","addr":"192.168.123.109:6813","nonce":1714619602}]},"public_addr":"192.168.123.109:6809/1714619602","cluster_addr":"192.168.123.109:6811/1714619602","heartbeat_back_addr":"192.168.123.109:6815/1714619602","heartbeat_front_addr":"192.168.123.109:6813/1714619602","state":["exists","up"]},{"osd":5,"uuid":"0ff88b04-4b34-4df6-bf77-2132a823172e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from
":32,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":2797312478},{"type":"v1","addr":"192.168.123.109:6817","nonce":2797312478}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":2797312478},{"type":"v1","addr":"192.168.123.109:6819","nonce":2797312478}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":2797312478},{"type":"v1","addr":"192.168.123.109:6823","nonce":2797312478}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":2797312478},{"type":"v1","addr":"192.168.123.109:6821","nonce":2797312478}]},"public_addr":"192.168.123.109:6817/2797312478","cluster_addr":"192.168.123.109:6819/2797312478","heartbeat_back_addr":"192.168.123.109:6823/2797312478","heartbeat_front_addr":"192.168.123.109:6821/2797312478","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:11.388791+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:23.267691+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:33.929621+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:43.737026+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:57:54.082613+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138
322906710015,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:58:02.883518+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:0/2507132558":"2026-03-10T14:56:37.039546+0000","192.168.123.105:0/3823132518":"2026-03-10T14:56:37.039546+0000","192.168.123.105:0/3432926156":"2026-03-10T14:55:39.031351+0000","192.168.123.105:0/2312625867":"2026-03-10T14:55:39.031351+0000","192.168.123.105:6801/2":"2026-03-10T14:55:24.676889+0000","192.168.123.105:6800/2":"2026-03-10T14:55:24.676889+0000","192.168.123.105:0/3215275472":"2026-03-10T14:55:24.676889+0000","192.168.123.105:0/116363292":"2026-03-10T14:55:39.031351+0000","192.168.123.105:0/4048863196":"2026-03-10T14:55:24.676889+0000","192.168.123.105:0/2546376349":"2026-03-10T14:56:37.039546+0000","192.168.123.105:0/3523930106":"2026-03-10T14:55:24.676889+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T14:58:11.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.416+0000 7fb1e44f5700 1 -- 192.168.123.105:0/3323844288 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb1c806c630 msgr2=0x7fb1c806eae0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:11.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.416+0000 7fb1e44f5700 1 --2- 192.168.123.105:0/3323844288 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb1c806c630 0x7fb1c806eae0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7fb1d8009de0 tx=0x7fb1d8009450 comp rx=0 tx=0).stop 2026-03-09T14:58:11.417 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.416+0000 7fb1e44f5700 1 -- 192.168.123.105:0/3323844288 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1dc0739a0 msgr2=0x7fb1dc1985e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:11.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.417+0000 7fb1e44f5700 1 --2- 192.168.123.105:0/3323844288 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1dc0739a0 0x7fb1dc1985e0 secure :-1 s=READY pgs=238 cs=0 l=1 rev1=1 crypto rx=0x7fb1cc000c00 tx=0x7fb1cc00be50 comp rx=0 tx=0).stop 2026-03-09T14:58:11.417 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.417+0000 7fb1e44f5700 1 -- 192.168.123.105:0/3323844288 shutdown_connections 2026-03-09T14:58:11.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.417+0000 7fb1e44f5700 1 --2- 192.168.123.105:0/3323844288 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb1c806c630 0x7fb1c806eae0 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:11.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.417+0000 7fb1e44f5700 1 --2- 192.168.123.105:0/3323844288 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1dc0739a0 0x7fb1dc1985e0 unknown :-1 s=CLOSED pgs=238 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:11.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.417+0000 7fb1e44f5700 1 --2- 192.168.123.105:0/3323844288 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1dc198b20 0x7fb1dc19cf90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:11.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.417+0000 7fb1e44f5700 1 -- 192.168.123.105:0/3323844288 >> 192.168.123.105:0/3323844288 conn(0x7fb1dc0fbfc0 msgr2=0x7fb1dc1070c0 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-09T14:58:11.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.417+0000 7fb1e44f5700 1 -- 192.168.123.105:0/3323844288 shutdown_connections 2026-03-09T14:58:11.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:11.417+0000 7fb1e44f5700 1 -- 192.168.123.105:0/3323844288 wait complete. 2026-03-09T14:58:11.497 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph tell osd.0 flush_pg_stats 2026-03-09T14:58:11.497 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph tell osd.1 flush_pg_stats 2026-03-09T14:58:11.497 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph tell osd.2 flush_pg_stats 2026-03-09T14:58:11.497 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph tell osd.3 flush_pg_stats 2026-03-09T14:58:11.497 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph tell osd.4 flush_pg_stats 2026-03-09T14:58:11.497 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph tell osd.5 flush_pg_stats 2026-03-09T14:58:12.009 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:12.013 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:12.106 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config 
/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:12.122 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:12.193 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:12 vm05 ceph-mon[50611]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:12.193 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:12 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/3323844288' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:58:12.258 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:12.266 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:12.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:12 vm09 ceph-mon[59673]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:12.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:12 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/3323844288' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:58:12.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.927+0000 7fd3a9e94700 1 -- 192.168.123.105:0/2116372338 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd3a4072aa0 msgr2=0x7fd3a4107d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:12.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.927+0000 7fd3a9e94700 1 --2- 192.168.123.105:0/2116372338 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd3a4072aa0 0x7fd3a4107d50 secure :-1 s=READY pgs=239 cs=0 l=1 rev1=1 crypto rx=0x7fd394009b00 tx=0x7fd394009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:12.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.938+0000 7fd3a9e94700 1 -- 192.168.123.105:0/2116372338 shutdown_connections 2026-03-09T14:58:12.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.938+0000 7fd3a9e94700 1 --2- 192.168.123.105:0/2116372338 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd3a4108290 0x7fd3a4108700 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:12.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.938+0000 7fd3a9e94700 1 --2- 192.168.123.105:0/2116372338 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd3a4072aa0 0x7fd3a4107d50 unknown :-1 s=CLOSED pgs=239 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:12.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.938+0000 7fd3a9e94700 1 -- 192.168.123.105:0/2116372338 >> 192.168.123.105:0/2116372338 conn(0x7fd3a406c6c0 msgr2=0x7fd3a406cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:12.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.943+0000 7fd3a9e94700 1 -- 192.168.123.105:0/2116372338 shutdown_connections 2026-03-09T14:58:12.944 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.943+0000 7fd3a9e94700 1 -- 192.168.123.105:0/2116372338 wait complete. 2026-03-09T14:58:12.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.943+0000 7fd3a9e94700 1 Processor -- start 2026-03-09T14:58:12.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.943+0000 7fd3a9e94700 1 -- start start 2026-03-09T14:58:12.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.944+0000 7fd3a9e94700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd3a4072aa0 0x7fd3a41a6660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:12.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.944+0000 7fd3a9e94700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd3a4108290 0x7fd3a41a6ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:12.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.944+0000 7fd3a9e94700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd3a41a7230 con 0x7fd3a4072aa0 2026-03-09T14:58:12.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.944+0000 7fd3a9e94700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd3a41a0750 con 0x7fd3a4108290 2026-03-09T14:58:12.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.944+0000 7fd3a3fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd3a4108290 0x7fd3a41a6ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:12.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.944+0000 7fd3a3fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd3a4108290 0x7fd3a41a6ba0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:46744/0 (socket says 192.168.123.105:46744) 2026-03-09T14:58:12.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.944+0000 7fd3a3fff700 1 -- 192.168.123.105:0/1729827790 learned_addr learned my addr 192.168.123.105:0/1729827790 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:12.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.944+0000 7fd3a8e92700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd3a4072aa0 0x7fd3a41a6660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:12.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.944+0000 7fd3a3fff700 1 -- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd3a4072aa0 msgr2=0x7fd3a41a6660 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:12.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.944+0000 7fd3a3fff700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd3a4072aa0 0x7fd3a41a6660 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:12.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.944+0000 7fd3a3fff700 1 -- 192.168.123.105:0/1729827790 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd3940097e0 con 0x7fd3a4108290 2026-03-09T14:58:12.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.945+0000 7fd3a3fff700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd3a4108290 0x7fd3a41a6ba0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto 
rx=0x7fd39800d8d0 tx=0x7fd39800dbe0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:12.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.945+0000 7fd3a1ffb700 1 -- 192.168.123.105:0/1729827790 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd398009880 con 0x7fd3a4108290 2026-03-09T14:58:12.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.945+0000 7fd3a1ffb700 1 -- 192.168.123.105:0/1729827790 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd398010460 con 0x7fd3a4108290 2026-03-09T14:58:12.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.946+0000 7fd3a1ffb700 1 -- 192.168.123.105:0/1729827790 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd39800f5d0 con 0x7fd3a4108290 2026-03-09T14:58:12.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.948+0000 7fd3a9e94700 1 -- 192.168.123.105:0/1729827790 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd3a41a0a30 con 0x7fd3a4108290 2026-03-09T14:58:12.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.948+0000 7fd3a9e94700 1 -- 192.168.123.105:0/1729827790 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd3a41a0f00 con 0x7fd3a4108290 2026-03-09T14:58:12.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.949+0000 7fd3a1ffb700 1 -- 192.168.123.105:0/1729827790 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fd3980099e0 con 0x7fd3a4108290 2026-03-09T14:58:12.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.949+0000 7fd3a1ffb700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd38c06c480 0x7fd38c06e930 unknown :-1 s=NONE pgs=0 cs=0 l=1 
rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:12.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.949+0000 7fd3a1ffb700 1 -- 192.168.123.105:0/1729827790 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fd39808b110 con 0x7fd3a4108290 2026-03-09T14:58:12.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.950+0000 7fd3a9e94700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580] conn(0x7fd3a4061190 0x7fd3a4061560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:12.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.950+0000 7fd3a9e94700 1 -- 192.168.123.105:0/1729827790 --> [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7fd3a404f2a0 con 0x7fd3a4061190 2026-03-09T14:58:12.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.950+0000 7fd3a8e92700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd38c06c480 0x7fd38c06e930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:12.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.950+0000 7fd3a9693700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580] conn(0x7fd3a4061190 0x7fd3a4061560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:12.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.951+0000 7fd3a9693700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580] conn(0x7fd3a4061190 
0x7fd3a4061560 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:12.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.951+0000 7fd3a8e92700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd38c06c480 0x7fd38c06e930 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fd394006010 tx=0x7fd39400b540 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:12.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.951+0000 7fd3a1ffb700 1 -- 192.168.123.105:0/1729827790 <== osd.0 v2:192.168.123.105:6802/3444724580 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+25106 (crc 0 0 0) 0x7fd3a404f2a0 con 0x7fd3a4061190 2026-03-09T14:58:12.979 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.975+0000 7fe2a989d700 1 -- 192.168.123.105:0/124858448 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a410e9e0 msgr2=0x7fe2a410edb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:12.979 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.975+0000 7fe2a989d700 1 --2- 192.168.123.105:0/124858448 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a410e9e0 0x7fe2a410edb0 secure :-1 s=READY pgs=240 cs=0 l=1 rev1=1 crypto rx=0x7fe298009b00 tx=0x7fe298009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:12.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.989+0000 7fe2a989d700 1 -- 192.168.123.105:0/124858448 shutdown_connections 2026-03-09T14:58:12.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.989+0000 7fe2a989d700 1 --2- 192.168.123.105:0/124858448 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe2a4071b60 0x7fe2a4071fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:12.990 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.989+0000 7fe2a989d700 1 --2- 192.168.123.105:0/124858448 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a410e9e0 0x7fe2a410edb0 unknown :-1 s=CLOSED pgs=240 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:12.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.989+0000 7fe2a989d700 1 -- 192.168.123.105:0/124858448 >> 192.168.123.105:0/124858448 conn(0x7fe2a406c6c0 msgr2=0x7fe2a406cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:12.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.989+0000 7fe2a989d700 1 -- 192.168.123.105:0/124858448 shutdown_connections 2026-03-09T14:58:12.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.989+0000 7fe2a989d700 1 -- 192.168.123.105:0/124858448 wait complete. 2026-03-09T14:58:12.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.990+0000 7fe2a989d700 1 Processor -- start 2026-03-09T14:58:12.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.990+0000 7fe2a989d700 1 -- start start 2026-03-09T14:58:12.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.990+0000 7fe2a989d700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe2a4071b60 0x7fe2a41a4ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:12.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.990+0000 7fe2a989d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a410e9e0 0x7fe2a41a51e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:12.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.990+0000 7fe2a989d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe2a41a5870 con 0x7fe2a410e9e0 2026-03-09T14:58:12.991 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.990+0000 7fe2a989d700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe2a41a85a0 con 0x7fe2a4071b60 2026-03-09T14:58:12.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.991+0000 7fe2a3fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a410e9e0 0x7fe2a41a51e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:12.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.991+0000 7fe2a3fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a410e9e0 0x7fe2a41a51e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:51080/0 (socket says 192.168.123.105:51080) 2026-03-09T14:58:12.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.991+0000 7fe2a3fff700 1 -- 192.168.123.105:0/2321126206 learned_addr learned my addr 192.168.123.105:0/2321126206 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:12.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.991+0000 7fe2a889b700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe2a4071b60 0x7fe2a41a4ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:12.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.991+0000 7fe2a889b700 1 -- 192.168.123.105:0/2321126206 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a410e9e0 msgr2=0x7fe2a41a51e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:12.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.991+0000 7fe2a889b700 1 --2- 
192.168.123.105:0/2321126206 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a410e9e0 0x7fe2a41a51e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:12.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.991+0000 7fe2a889b700 1 -- 192.168.123.105:0/2321126206 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe2980097e0 con 0x7fe2a4071b60 2026-03-09T14:58:12.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.991+0000 7fe2a889b700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe2a4071b60 0x7fe2a41a4ca0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fe298005fd0 tx=0x7fe298004ca0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:12.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.991+0000 7fe2a1ffb700 1 -- 192.168.123.105:0/2321126206 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe29801d070 con 0x7fe2a4071b60 2026-03-09T14:58:12.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.992+0000 7fe2a1ffb700 1 -- 192.168.123.105:0/2321126206 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fe29800bd80 con 0x7fe2a4071b60 2026-03-09T14:58:12.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.992+0000 7fe2a1ffb700 1 -- 192.168.123.105:0/2321126206 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe2980219f0 con 0x7fe2a4071b60 2026-03-09T14:58:12.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.995+0000 7fe2a989d700 1 -- 192.168.123.105:0/2321126206 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe2a41a8820 con 0x7fe2a4071b60 2026-03-09T14:58:12.996 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.995+0000 7fe2a989d700 1 -- 192.168.123.105:0/2321126206 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe2a41a8c90 con 0x7fe2a4071b60 2026-03-09T14:58:12.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.996+0000 7fe2a1ffb700 1 -- 192.168.123.105:0/2321126206 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fe2980053a0 con 0x7fe2a4071b60 2026-03-09T14:58:12.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.996+0000 7fe2a1ffb700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe29406c4d0 0x7fe29406e980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:12.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.996+0000 7fe2a1ffb700 1 -- 192.168.123.105:0/2321126206 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fe29808c700 con 0x7fe2a4071b60 2026-03-09T14:58:12.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.996+0000 7fe2a3fff700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe29406c4d0 0x7fe29406e980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:12.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.997+0000 7fe2a989d700 1 -- 192.168.123.105:0/2321126206 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_get_version(what=osdmap handle=1) v1 -- 0x7fe2a411c670 con 0x7fe2a4071b60 2026-03-09T14:58:12.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.997+0000 7fe2a3fff700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe29406c4d0 0x7fe29406e980 
secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7fe2a41a6280 tx=0x7fe29000b500 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:12.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.997+0000 7fe2a1ffb700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478] conn(0x7fe294071ef0 0x7fe294074300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:12.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.997+0000 7fe2a1ffb700 1 -- 192.168.123.105:0/2321126206 --> [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7fe2940749b0 con 0x7fe294071ef0 2026-03-09T14:58:12.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.997+0000 7fe2a1ffb700 1 -- 192.168.123.105:0/2321126206 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_get_version_reply(handle=1 version=33) v2 ==== 24+0+0 (secure 0 0 0) 0x7fe2980571c0 con 0x7fe2a4071b60 2026-03-09T14:58:12.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.998+0000 7fe2a909c700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478] conn(0x7fe294071ef0 0x7fe294074300 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.003 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.999+0000 7fe2a909c700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478] conn(0x7fe294071ef0 0x7fe294074300 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.5 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.003 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:12.999+0000 7fe2a1ffb700 1 -- 192.168.123.105:0/2321126206 <== osd.5 v2:192.168.123.109:6816/2797312478 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+25106 (crc 0 0 0) 0x7fe2940749b0 con 0x7fe294071ef0 2026-03-09T14:58:13.009 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.008+0000 7fd3a9e94700 1 -- 192.168.123.105:0/1729827790 --> [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7fd3a4061d50 con 0x7fd3a4061190 2026-03-09T14:58:13.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.011+0000 7fd3a1ffb700 1 -- 192.168.123.105:0/1729827790 <== osd.0 v2:192.168.123.105:6802/3444724580 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+11 (crc 0 0 0) 0x7fd3a4061d50 con 0x7fd3a4061190 2026-03-09T14:58:13.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.011+0000 7fd3a9e94700 1 -- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580] conn(0x7fd3a4061190 msgr2=0x7fd3a4061560 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.011+0000 7fd3a9e94700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580] conn(0x7fd3a4061190 0x7fd3a4061560 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.012+0000 7fd3a9e94700 1 -- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd38c06c480 msgr2=0x7fd38c06e930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.012+0000 7fd3a9e94700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] 
conn(0x7fd38c06c480 0x7fd38c06e930 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fd394006010 tx=0x7fd39400b540 comp rx=0 tx=0).stop 2026-03-09T14:58:13.013 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.012+0000 7fd3a9e94700 1 -- 192.168.123.105:0/1729827790 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd3a4108290 msgr2=0x7fd3a41a6ba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.013 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.012+0000 7fd3a9e94700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd3a4108290 0x7fd3a41a6ba0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fd39800d8d0 tx=0x7fd39800dbe0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.014 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.012+0000 7fd3a9e94700 1 -- 192.168.123.105:0/1729827790 shutdown_connections 2026-03-09T14:58:13.014 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.014+0000 7fd3a9e94700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:6802/3444724580,v1:192.168.123.105:6803/3444724580] conn(0x7fd3a4061190 0x7fd3a4061560 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.015 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.014+0000 7fd3a9e94700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd38c06c480 0x7fd38c06e930 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.015 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.014+0000 7fd3a9e94700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd3a4072aa0 0x7fd3a41a6660 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.015 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.014+0000 7fd3a9e94700 1 --2- 192.168.123.105:0/1729827790 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd3a4108290 0x7fd3a41a6ba0 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.015 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.014+0000 7fd3a9e94700 1 -- 192.168.123.105:0/1729827790 >> 192.168.123.105:0/1729827790 conn(0x7fd3a406c6c0 msgr2=0x7fd3a4070390 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:13.015 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.014+0000 7fd3a9e94700 1 -- 192.168.123.105:0/1729827790 shutdown_connections 2026-03-09T14:58:13.015 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.014+0000 7fd3a9e94700 1 -- 192.168.123.105:0/1729827790 wait complete. 2026-03-09T14:58:13.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.031+0000 7fe2a989d700 1 -- 192.168.123.105:0/2321126206 --> [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7fe2a404f2a0 con 0x7fe294071ef0 2026-03-09T14:58:13.037 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.034+0000 7fe2a1ffb700 1 -- 192.168.123.105:0/2321126206 <== osd.5 v2:192.168.123.109:6816/2797312478 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+12 (crc 0 0 0) 0x7fe2a404f2a0 con 0x7fe294071ef0 2026-03-09T14:58:13.041 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.040+0000 7fe2a989d700 1 -- 192.168.123.105:0/2321126206 >> [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478] conn(0x7fe294071ef0 msgr2=0x7fe294074300 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.041 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.040+0000 7fe2a989d700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478] 
conn(0x7fe294071ef0 0x7fe294074300 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.043+0000 7fe2a989d700 1 -- 192.168.123.105:0/2321126206 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe29406c4d0 msgr2=0x7fe29406e980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.043+0000 7fe2a989d700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe29406c4d0 0x7fe29406e980 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7fe2a41a6280 tx=0x7fe29000b500 comp rx=0 tx=0).stop 2026-03-09T14:58:13.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.043+0000 7fe2a989d700 1 -- 192.168.123.105:0/2321126206 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe2a4071b60 msgr2=0x7fe2a41a4ca0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.043+0000 7fe2a989d700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe2a4071b60 0x7fe2a41a4ca0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fe298005fd0 tx=0x7fe298004ca0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.045 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.044+0000 7fe2a989d700 1 -- 192.168.123.105:0/2321126206 shutdown_connections 2026-03-09T14:58:13.045 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.044+0000 7fe2a989d700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe29406c4d0 0x7fe29406e980 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.045 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.044+0000 7fe2a989d700 1 --2- 
192.168.123.105:0/2321126206 >> [v2:192.168.123.109:6816/2797312478,v1:192.168.123.109:6817/2797312478] conn(0x7fe294071ef0 0x7fe294074300 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.045 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.044+0000 7fe2a989d700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe2a4071b60 0x7fe2a41a4ca0 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.045 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.044+0000 7fe2a989d700 1 --2- 192.168.123.105:0/2321126206 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe2a410e9e0 0x7fe2a41a51e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.045 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.044+0000 7fe2a989d700 1 -- 192.168.123.105:0/2321126206 >> 192.168.123.105:0/2321126206 conn(0x7fe2a406c6c0 msgr2=0x7fe2a406cf10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:13.047 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.046+0000 7fe2a989d700 1 -- 192.168.123.105:0/2321126206 shutdown_connections 2026-03-09T14:58:13.047 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.046+0000 7fe2a989d700 1 -- 192.168.123.105:0/2321126206 wait complete. 
2026-03-09T14:58:13.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.049+0000 7fea7bfff700 1 -- 192.168.123.105:0/2631570073 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea7c071b60 msgr2=0x7fea7c071fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.049+0000 7fea7bfff700 1 --2- 192.168.123.105:0/2631570073 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea7c071b60 0x7fea7c071fd0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fea70009a60 tx=0x7fea70009d70 comp rx=0 tx=0).stop 2026-03-09T14:58:13.051 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.050+0000 7f6208aa5700 1 -- 192.168.123.105:0/3453202137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f620410eab0 msgr2=0x7f620410ee80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.055 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.054+0000 7fea7bfff700 1 -- 192.168.123.105:0/2631570073 shutdown_connections 2026-03-09T14:58:13.055 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.054+0000 7fea7bfff700 1 --2- 192.168.123.105:0/2631570073 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea7c071b60 0x7fea7c071fd0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.055 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.054+0000 7fea7bfff700 1 --2- 192.168.123.105:0/2631570073 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea7c10e9e0 0x7fea7c10edb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.055 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.054+0000 7fea7bfff700 1 -- 192.168.123.105:0/2631570073 >> 192.168.123.105:0/2631570073 conn(0x7fea7c06c6c0 msgr2=0x7fea7c06cac0 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-09T14:58:13.056 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.055+0000 7fea7bfff700 1 -- 192.168.123.105:0/2631570073 shutdown_connections 2026-03-09T14:58:13.056 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.050+0000 7f6208aa5700 1 --2- 192.168.123.105:0/3453202137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f620410eab0 0x7f620410ee80 secure :-1 s=READY pgs=241 cs=0 l=1 rev1=1 crypto rx=0x7f61f4009b00 tx=0x7f61f4009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:13.057 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.057+0000 7f6208aa5700 1 -- 192.168.123.105:0/3453202137 shutdown_connections 2026-03-09T14:58:13.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.055+0000 7fea7bfff700 1 -- 192.168.123.105:0/2631570073 wait complete. 2026-03-09T14:58:13.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.057+0000 7fea7bfff700 1 Processor -- start 2026-03-09T14:58:13.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.057+0000 7fea7bfff700 1 -- start start 2026-03-09T14:58:13.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.057+0000 7fea7bfff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea7c10e9e0 0x7fea7c119570 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.057+0000 7fea7bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea7c114570 0x7fea7c1149e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.057+0000 7fea7bfff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fea7c114f20 con 0x7fea7c114570 2026-03-09T14:58:13.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.057+0000 7fea7bfff700 
1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fea7c115090 con 0x7fea7c10e9e0 2026-03-09T14:58:13.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.057+0000 7f6208aa5700 1 --2- 192.168.123.105:0/3453202137 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6204071b60 0x7f6204071fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.057+0000 7f6208aa5700 1 --2- 192.168.123.105:0/3453202137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f620410eab0 0x7f620410ee80 unknown :-1 s=CLOSED pgs=241 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.057+0000 7f6208aa5700 1 -- 192.168.123.105:0/3453202137 >> 192.168.123.105:0/3453202137 conn(0x7f620406c6c0 msgr2=0x7f620406cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:13.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.058+0000 7f6208aa5700 1 -- 192.168.123.105:0/3453202137 shutdown_connections 2026-03-09T14:58:13.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.058+0000 7f6208aa5700 1 -- 192.168.123.105:0/3453202137 wait complete. 
2026-03-09T14:58:13.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.061+0000 7fea7affd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea7c10e9e0 0x7fea7c119570 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.061+0000 7fea7affd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea7c10e9e0 0x7fea7c119570 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:46794/0 (socket says 192.168.123.105:46794) 2026-03-09T14:58:13.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.061+0000 7fea7affd700 1 -- 192.168.123.105:0/809089099 learned_addr learned my addr 192.168.123.105:0/809089099 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:13.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.061+0000 7fea7a7fc700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea7c114570 0x7fea7c1149e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.061+0000 7fea7affd700 1 -- 192.168.123.105:0/809089099 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea7c114570 msgr2=0x7fea7c1149e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.061+0000 7fea7affd700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea7c114570 0x7fea7c1149e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T14:58:13.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.061+0000 7fea7affd700 1 -- 192.168.123.105:0/809089099 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fea70009710 con 0x7fea7c10e9e0 2026-03-09T14:58:13.063 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.060+0000 7f6208aa5700 1 Processor -- start 2026-03-09T14:58:13.063 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.060+0000 7f6208aa5700 1 -- start start 2026-03-09T14:58:13.063 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.060+0000 7f6208aa5700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6204071b60 0x7f6204119510 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.063 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.060+0000 7f6208aa5700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6204114510 0x7f6204114980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.060+0000 7f6208aa5700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6204114ec0 con 0x7f6204071b60 2026-03-09T14:58:13.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.060+0000 7f6208aa5700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6204115030 con 0x7f6204114510 2026-03-09T14:58:13.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.063+0000 7f620259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6204071b60 0x7f6204119510 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.065 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.063+0000 7f620259c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6204071b60 0x7f6204119510 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:51132/0 (socket says 192.168.123.105:51132) 2026-03-09T14:58:13.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.063+0000 7f620259c700 1 -- 192.168.123.105:0/44675673 learned_addr learned my addr 192.168.123.105:0/44675673 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:13.066 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.064+0000 7f6201d9b700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6204114510 0x7f6204114980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.066 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.065+0000 7fea7affd700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea7c10e9e0 0x7fea7c119570 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fea6c00ea30 tx=0x7fea6c00edf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.066 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.065+0000 7f6201d9b700 1 -- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6204071b60 msgr2=0x7f6204119510 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.066 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.065+0000 7fea80913700 1 -- 192.168.123.105:0/809089099 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fea6c00cc40 con 0x7fea7c10e9e0 2026-03-09T14:58:13.066 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.065+0000 7f6201d9b700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6204071b60 0x7f6204119510 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.067 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.065+0000 7f6201d9b700 1 -- 192.168.123.105:0/44675673 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f61f40097e0 con 0x7f6204114510 2026-03-09T14:58:13.067 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.065+0000 7fea80913700 1 -- 192.168.123.105:0/809089099 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fea6c00cda0 con 0x7fea7c10e9e0 2026-03-09T14:58:13.068 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.066+0000 7fea80913700 1 -- 192.168.123.105:0/809089099 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fea6c010430 con 0x7fea7c10e9e0 2026-03-09T14:58:13.068 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.067+0000 7fea7bfff700 1 -- 192.168.123.105:0/809089099 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fea7c115370 con 0x7fea7c10e9e0 2026-03-09T14:58:13.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.068+0000 7fea7bfff700 1 -- 192.168.123.105:0/809089099 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fea7c1b7c50 con 0x7fea7c10e9e0 2026-03-09T14:58:13.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.069+0000 7f6201d9b700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6204114510 0x7f6204114980 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f61f800eab0 tx=0x7f61f800edc0 comp rx=0 tx=0).ready 
entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.070+0000 7f61f37fe700 1 -- 192.168.123.105:0/44675673 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f61f800cb20 con 0x7f6204114510 2026-03-09T14:58:13.072 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.071+0000 7f6208aa5700 1 -- 192.168.123.105:0/44675673 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6204115310 con 0x7f6204114510 2026-03-09T14:58:13.072 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.071+0000 7f6208aa5700 1 -- 192.168.123.105:0/44675673 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6204077170 con 0x7f6204114510 2026-03-09T14:58:13.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.072+0000 7f61f37fe700 1 -- 192.168.123.105:0/44675673 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f61f800cc80 con 0x7f6204114510 2026-03-09T14:58:13.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.072+0000 7f61f37fe700 1 -- 192.168.123.105:0/44675673 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f61f8018860 con 0x7f6204114510 2026-03-09T14:58:13.074 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.074+0000 7f61f37fe700 1 -- 192.168.123.105:0/44675673 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f61f80189c0 con 0x7f6204114510 2026-03-09T14:58:13.077 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.075+0000 7fea80913700 1 -- 192.168.123.105:0/809089099 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fea6c004710 con 0x7fea7c10e9e0 2026-03-09T14:58:13.077 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.075+0000 7fea7bfff700 1 -- 192.168.123.105:0/809089099 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_get_version(what=osdmap handle=1) v1 -- 0x7fea7c11c670 con 0x7fea7c10e9e0 2026-03-09T14:58:13.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.076+0000 7fea80913700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fea6406c4d0 0x7fea6406e980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.076+0000 7fea80913700 1 -- 192.168.123.105:0/809089099 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fea6c014070 con 0x7fea7c10e9e0 2026-03-09T14:58:13.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.076+0000 7fea80913700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815] conn(0x7fea64071ef0 0x7fea64074300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.076+0000 7fea80913700 1 -- 192.168.123.105:0/809089099 --> [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7fea640749b0 con 0x7fea64071ef0 2026-03-09T14:58:13.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.080+0000 7f61f37fe700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f61ec06c7a0 0x7f61ec06ec50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.083 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.081+0000 7fea80913700 1 -- 192.168.123.105:0/809089099 <== mon.1 v2:192.168.123.109:3300/0 6 
==== mon_get_version_reply(handle=1 version=33) v2 ==== 24+0+0 (secure 0 0 0) 0x7fea6c0565c0 con 0x7fea7c10e9e0 2026-03-09T14:58:13.083 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.082+0000 7fea7b7fe700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815] conn(0x7fea64071ef0 0x7fea64074300 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.083 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.082+0000 7fea7a7fc700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fea6406c4d0 0x7fea6406e980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.083 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.081+0000 7f61f17fa700 1 -- 192.168.123.105:0/44675673 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_get_version(what=osdmap handle=1) v1 -- 0x7f61e4000ff0 con 0x7f6204114510 2026-03-09T14:58:13.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.083+0000 7f620259c700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f61ec06c7a0 0x7f61ec06ec50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.083+0000 7fea7b7fe700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815] conn(0x7fea64071ef0 0x7fea64074300 crc :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.3 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.084 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.083+0000 7f620259c700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f61ec06c7a0 0x7f61ec06ec50 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f61f4000c00 tx=0x7f61f4019040 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.084+0000 7f61f37fe700 1 -- 192.168.123.105:0/44675673 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f61f8014070 con 0x7f6204114510 2026-03-09T14:58:13.085 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.084+0000 7fea80913700 1 -- 192.168.123.105:0/809089099 <== osd.3 v2:192.168.123.109:6800/1968723815 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+25106 (crc 0 0 0) 0x7fea640749b0 con 0x7fea64071ef0 2026-03-09T14:58:13.085 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.084+0000 7fea7a7fc700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fea6406c4d0 0x7fea6406e980 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7fea70000c00 tx=0x7fea70011040 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.086+0000 7f61f37fe700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520] conn(0x7f61ec072380 0x7f61ec074790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.086+0000 7f6202d9d700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520] conn(0x7f61ec072380 0x7f61ec074790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 
crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.086+0000 7f61f37fe700 1 -- 192.168.123.105:0/44675673 --> [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7f61ec074e40 con 0x7f61ec072380 2026-03-09T14:58:13.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.087+0000 7f6202d9d700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520] conn(0x7f61ec072380 0x7f61ec074790 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.089+0000 7f61f37fe700 1 -- 192.168.123.105:0/44675673 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_get_version_reply(handle=1 version=33) v2 ==== 24+0+0 (secure 0 0 0) 0x7f61f800cdf0 con 0x7f6204114510 2026-03-09T14:58:13.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.103+0000 7f61f37fe700 1 -- 192.168.123.105:0/44675673 <== osd.2 v2:192.168.123.105:6818/4063272520 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+25106 (crc 0 0 0) 0x7f61ec074e40 con 0x7f61ec072380 2026-03-09T14:58:13.131 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.111+0000 7fea7bfff700 1 -- 192.168.123.105:0/809089099 --> [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7fea7c04f2a0 con 0x7fea64071ef0 2026-03-09T14:58:13.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 -- 192.168.123.105:0/269537882 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f123810c8b0 msgr2=0x7f123810cc80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.132 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 --2- 192.168.123.105:0/269537882 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f123810c8b0 0x7f123810cc80 secure :-1 s=READY pgs=242 cs=0 l=1 rev1=1 crypto rx=0x7f1228009ab0 tx=0x7f1228009dc0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 -- 192.168.123.105:0/269537882 shutdown_connections 2026-03-09T14:58:13.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 --2- 192.168.123.105:0/269537882 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1238071e40 0x7f12380722b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 --2- 192.168.123.105:0/269537882 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f123810c8b0 0x7f123810cc80 unknown :-1 s=CLOSED pgs=242 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 -- 192.168.123.105:0/269537882 >> 192.168.123.105:0/269537882 conn(0x7f123806c6c0 msgr2=0x7f123806cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:13.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.127+0000 7fea80913700 1 -- 192.168.123.105:0/809089099 <== osd.3 v2:192.168.123.109:6800/1968723815 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+11 (crc 0 0 0) 0x7fea7c04f2a0 con 0x7fea64071ef0 2026-03-09T14:58:13.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.135+0000 7fea627fc700 1 -- 192.168.123.105:0/809089099 >> [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815] conn(0x7fea64071ef0 msgr2=0x7fea64074300 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.137 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.135+0000 7fea627fc700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815] conn(0x7fea64071ef0 0x7fea64074300 crc :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.136+0000 7fea627fc700 1 -- 192.168.123.105:0/809089099 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fea6406c4d0 msgr2=0x7fea6406e980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.136+0000 7fea627fc700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fea6406c4d0 0x7fea6406e980 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7fea70000c00 tx=0x7fea70011040 comp rx=0 tx=0).stop 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.136+0000 7fea627fc700 1 -- 192.168.123.105:0/809089099 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea7c10e9e0 msgr2=0x7fea7c119570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.136+0000 7fea627fc700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea7c10e9e0 0x7fea7c119570 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fea6c00ea30 tx=0x7fea6c00edf0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.137+0000 7fea627fc700 1 -- 192.168.123.105:0/809089099 shutdown_connections 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.137+0000 7fea627fc700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.109:6800/1968723815,v1:192.168.123.109:6801/1968723815] conn(0x7fea64071ef0 0x7fea64074300 
unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.137+0000 7fea627fc700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fea6406c4d0 0x7fea6406e980 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.137+0000 7fea627fc700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea7c10e9e0 0x7fea7c119570 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.137+0000 7fea627fc700 1 --2- 192.168.123.105:0/809089099 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea7c114570 0x7fea7c1149e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.137+0000 7fea627fc700 1 -- 192.168.123.105:0/809089099 >> 192.168.123.105:0/809089099 conn(0x7fea7c06c6c0 msgr2=0x7fea7c06cca0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.137+0000 7fea627fc700 1 -- 192.168.123.105:0/809089099 shutdown_connections 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.137+0000 7fea627fc700 1 -- 192.168.123.105:0/809089099 wait complete. 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 -- 192.168.123.105:0/269537882 shutdown_connections 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 -- 192.168.123.105:0/269537882 wait complete. 
2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 Processor -- start 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 -- start start 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1238071e40 0x7f123807ceb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f123807d3f0 0x7f123807d860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1238081a30 con 0x7f1238071e40 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.128+0000 7f123dda8700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1238081ba0 con 0x7f123807d3f0 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.135+0000 7f1236ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f123807d3f0 0x7f123807d860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.135+0000 7f1236ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f123807d3f0 0x7f123807d860 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:46834/0 (socket says 192.168.123.105:46834) 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.135+0000 7f1236ffd700 1 -- 192.168.123.105:0/3995862138 learned_addr learned my addr 192.168.123.105:0/3995862138 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.135+0000 7f12377fe700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1238071e40 0x7f123807ceb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.135+0000 7f1236ffd700 1 -- 192.168.123.105:0/3995862138 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1238071e40 msgr2=0x7f123807ceb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.135+0000 7f1236ffd700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1238071e40 0x7f123807ceb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.135+0000 7f1236ffd700 1 -- 192.168.123.105:0/3995862138 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1228009710 con 0x7f123807d3f0 2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.135+0000 7f12377fe700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1238071e40 0x7f123807ceb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T14:58:13.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.136+0000 7f1236ffd700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f123807d3f0 0x7f123807d860 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f123000e3f0 tx=0x7f123000e700 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.136+0000 7f1234ff9700 1 -- 192.168.123.105:0/3995862138 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f123000eea0 con 0x7f123807d3f0 2026-03-09T14:58:13.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.136+0000 7f123dda8700 1 -- 192.168.123.105:0/3995862138 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1238081e20 con 0x7f123807d3f0 2026-03-09T14:58:13.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.136+0000 7f123dda8700 1 -- 192.168.123.105:0/3995862138 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1238082370 con 0x7f123807d3f0 2026-03-09T14:58:13.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.137+0000 7f1234ff9700 1 -- 192.168.123.105:0/3995862138 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f123000f040 con 0x7f123807d3f0 2026-03-09T14:58:13.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.137+0000 7f1234ff9700 1 -- 192.168.123.105:0/3995862138 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1230014720 con 0x7f123807d3f0 2026-03-09T14:58:13.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.142+0000 7f1234ff9700 1 -- 192.168.123.105:0/3995862138 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f1230014880 con 
0x7f123807d3f0 2026-03-09T14:58:13.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.142+0000 7f1234ff9700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f122006e9c0 0x7f1220070e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.147 INFO:teuthology.orchestra.run.vm05.stdout:38654705677 2026-03-09T14:58:13.147 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd last-stat-seq osd.0 2026-03-09T14:58:13.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.144+0000 7f12377fe700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f122006e9c0 0x7f1220070e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.145+0000 7f12377fe700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f122006e9c0 0x7f1220070e70 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f122800b5c0 tx=0x7f1228019040 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.145+0000 7f1234ff9700 1 -- 192.168.123.105:0/3995862138 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f123008c810 con 0x7f123807d3f0 2026-03-09T14:58:13.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.146+0000 7f123dda8700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602] conn(0x7f1224001610 0x7f1224003ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 
tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.146+0000 7f123dda8700 1 -- 192.168.123.105:0/3995862138 --> [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7f1224006bf0 con 0x7f1224001610 2026-03-09T14:58:13.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.175+0000 7f61f17fa700 1 -- 192.168.123.105:0/44675673 --> [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f61e4002ce0 con 0x7f61ec072380 2026-03-09T14:58:13.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.176+0000 7f61f37fe700 1 -- 192.168.123.105:0/44675673 <== osd.2 v2:192.168.123.105:6818/4063272520 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+11 (crc 0 0 0) 0x7f61e4002ce0 con 0x7f61ec072380 2026-03-09T14:58:13.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.170+0000 7f1237fff700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602] conn(0x7f1224001610 0x7f1224003ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.186 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.179+0000 7f1237fff700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602] conn(0x7f1224001610 0x7f1224003ac0 crc :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.4 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.186 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.182+0000 7f1234ff9700 1 -- 192.168.123.105:0/3995862138 <== osd.4 v2:192.168.123.109:6808/1714619602 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+25106 (crc 0 0 0) 0x7f1224006bf0 con 0x7f1224001610 
2026-03-09T14:58:13.186 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.185+0000 7f61f17fa700 1 -- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520] conn(0x7f61ec072380 msgr2=0x7f61ec074790 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.186 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.185+0000 7f61f17fa700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520] conn(0x7f61ec072380 0x7f61ec074790 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.187+0000 7f61f17fa700 1 -- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f61ec06c7a0 msgr2=0x7f61ec06ec50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.187+0000 7f61f17fa700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f61ec06c7a0 0x7f61ec06ec50 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f61f4000c00 tx=0x7f61f4019040 comp rx=0 tx=0).stop 2026-03-09T14:58:13.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.187+0000 7f61f17fa700 1 -- 192.168.123.105:0/44675673 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6204114510 msgr2=0x7f6204114980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.187+0000 7f61f17fa700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6204114510 0x7f6204114980 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f61f800eab0 tx=0x7f61f800edc0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.189 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.188+0000 7f61f17fa700 1 -- 192.168.123.105:0/44675673 shutdown_connections 2026-03-09T14:58:13.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.188+0000 7f61f17fa700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:6818/4063272520,v1:192.168.123.105:6819/4063272520] conn(0x7f61ec072380 0x7f61ec074790 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.188+0000 7f61f17fa700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f61ec06c7a0 0x7f61ec06ec50 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.188+0000 7f61f17fa700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6204071b60 0x7f6204119510 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.188+0000 7f61f17fa700 1 --2- 192.168.123.105:0/44675673 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6204114510 0x7f6204114980 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.188+0000 7f61f17fa700 1 -- 192.168.123.105:0/44675673 >> 192.168.123.105:0/44675673 conn(0x7f620406c6c0 msgr2=0x7f6204070070 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:13.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.193+0000 7f61f17fa700 1 -- 192.168.123.105:0/44675673 shutdown_connections 2026-03-09T14:58:13.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.193+0000 7f61f17fa700 1 -- 192.168.123.105:0/44675673 wait complete. 
2026-03-09T14:58:13.219 INFO:teuthology.orchestra.run.vm05.stdout:137438953477 2026-03-09T14:58:13.219 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd last-stat-seq osd.5 2026-03-09T14:58:13.239 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.228+0000 7f123dda8700 1 -- 192.168.123.105:0/3995862138 --> [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7f1224005cd0 con 0x7f1224001610 2026-03-09T14:58:13.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.246+0000 7f1234ff9700 1 -- 192.168.123.105:0/3995862138 <== osd.4 v2:192.168.123.109:6808/1714619602 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+12 (crc 0 0 0) 0x7f1224005cd0 con 0x7f1224001610 2026-03-09T14:58:13.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.247+0000 7f121e7fc700 1 -- 192.168.123.105:0/3995862138 >> [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602] conn(0x7f1224001610 msgr2=0x7f1224003ac0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.247+0000 7f121e7fc700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602] conn(0x7f1224001610 0x7f1224003ac0 crc :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.247+0000 7f121e7fc700 1 -- 192.168.123.105:0/3995862138 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f122006e9c0 msgr2=0x7f1220070e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.247+0000 7f121e7fc700 1 --2- 192.168.123.105:0/3995862138 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f122006e9c0 0x7f1220070e70 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f122800b5c0 tx=0x7f1228019040 comp rx=0 tx=0).stop 2026-03-09T14:58:13.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.248+0000 7f121e7fc700 1 -- 192.168.123.105:0/3995862138 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f123807d3f0 msgr2=0x7f123807d860 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.248+0000 7f121e7fc700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f123807d3f0 0x7f123807d860 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f123000e3f0 tx=0x7f123000e700 comp rx=0 tx=0).stop 2026-03-09T14:58:13.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.252+0000 7f121e7fc700 1 -- 192.168.123.105:0/3995862138 shutdown_connections 2026-03-09T14:58:13.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.252+0000 7f121e7fc700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f122006e9c0 0x7f1220070e70 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.252+0000 7f121e7fc700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.109:6808/1714619602,v1:192.168.123.109:6809/1714619602] conn(0x7f1224001610 0x7f1224003ac0 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.252+0000 7f121e7fc700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1238071e40 0x7f123807ceb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.255 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.252+0000 7f121e7fc700 1 --2- 192.168.123.105:0/3995862138 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f123807d3f0 0x7f123807d860 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.252+0000 7f121e7fc700 1 -- 192.168.123.105:0/3995862138 >> 192.168.123.105:0/3995862138 conn(0x7f123806c6c0 msgr2=0x7f123806ffd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:13.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.252+0000 7f121e7fc700 1 -- 192.168.123.105:0/3995862138 shutdown_connections 2026-03-09T14:58:13.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.252+0000 7f121e7fc700 1 -- 192.168.123.105:0/3995862138 wait complete. 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.281+0000 7fe6a9da9700 1 -- 192.168.123.105:0/58238533 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe69c0ac550 msgr2=0x7fe69c0a4e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.281+0000 7fe6a9da9700 1 --2- 192.168.123.105:0/58238533 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe69c0ac550 0x7fe69c0a4e60 secure :-1 s=READY pgs=243 cs=0 l=1 rev1=1 crypto rx=0x7fe690009b50 tx=0x7fe690009e60 comp rx=0 tx=0).stop 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.281+0000 7fe6a9da9700 1 -- 192.168.123.105:0/58238533 shutdown_connections 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.281+0000 7fe6a9da9700 1 --2- 192.168.123.105:0/58238533 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe69c0ac920 0x7fe69c0a53a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.281+0000 7fe6a9da9700 1 --2- 192.168.123.105:0/58238533 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe69c0ac550 0x7fe69c0a4e60 unknown :-1 s=CLOSED pgs=243 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.281+0000 7fe6a9da9700 1 -- 192.168.123.105:0/58238533 >> 192.168.123.105:0/58238533 conn(0x7fe69c01a420 msgr2=0x7fe69c01a820 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.281+0000 7fe6a9da9700 1 -- 192.168.123.105:0/58238533 shutdown_connections 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.281+0000 7fe6a9da9700 1 -- 192.168.123.105:0/58238533 wait complete. 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a9da9700 1 Processor -- start 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a9da9700 1 -- start start 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a9da9700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe69c0ac920 0x7fe69c0b6f20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a9da9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe69c0b1f20 0x7fe69c0b2390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a9da9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe69c0b28d0 con 0x7fe69c0b1f20 2026-03-09T14:58:13.284 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a9da9700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe69c0b2a40 con 0x7fe69c0ac920 2026-03-09T14:58:13.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a27fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe69c0b1f20 0x7fe69c0b2390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.285 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a27fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe69c0b1f20 0x7fe69c0b2390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:51174/0 (socket says 192.168.123.105:51174) 2026-03-09T14:58:13.285 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a27fc700 1 -- 192.168.123.105:0/20777077 learned_addr learned my addr 192.168.123.105:0/20777077 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:13.285 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a27fc700 1 -- 192.168.123.105:0/20777077 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe69c0ac920 msgr2=0x7fe69c0b6f20 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T14:58:13.285 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a27fc700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe69c0ac920 0x7fe69c0b6f20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.285 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a27fc700 1 -- 192.168.123.105:0/20777077 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe6900097e0 con 0x7fe69c0b1f20 2026-03-09T14:58:13.285 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.282+0000 7fe6a27fc700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe69c0b1f20 0x7fe69c0b2390 secure :-1 s=READY pgs=244 cs=0 l=1 rev1=1 crypto rx=0x7fe69400dc40 tx=0x7fe69400be10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.285 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.283+0000 7fe687fff700 1 -- 192.168.123.105:0/20777077 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe6940099a0 con 0x7fe69c0b1f20 2026-03-09T14:58:13.287 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.283+0000 7fe6a9da9700 1 -- 192.168.123.105:0/20777077 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe69c0b2d20 con 0x7fe69c0b1f20 2026-03-09T14:58:13.287 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.283+0000 7fe6a9da9700 1 -- 192.168.123.105:0/20777077 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe69c155760 con 0x7fe69c0b1f20 2026-03-09T14:58:13.287 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.285+0000 7fe687fff700 1 -- 192.168.123.105:0/20777077 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fe694010460 con 0x7fe69c0b1f20 2026-03-09T14:58:13.287 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.285+0000 7fe687fff700 1 -- 192.168.123.105:0/20777077 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe69400f6f0 con 0x7fe69c0b1f20 2026-03-09T14:58:13.288 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.285+0000 7fe687fff700 1 -- 
192.168.123.105:0/20777077 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fe69400f850 con 0x7fe69c0b1f20 2026-03-09T14:58:13.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.286+0000 7fe6a9da9700 1 -- 192.168.123.105:0/20777077 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_get_version(what=osdmap handle=1) v1 -- 0x7fe6a404fa60 con 0x7fe69c0b1f20 2026-03-09T14:58:13.304 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.298+0000 7fe687fff700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe68806c870 0x7fe68806ed20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.304 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.298+0000 7fe6a2ffd700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe68806c870 0x7fe68806ed20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.298+0000 7fe6a2ffd700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe68806c870 0x7fe68806ed20 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fe6900097b0 tx=0x7fe690009700 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.298+0000 7fe687fff700 1 -- 192.168.123.105:0/20777077 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fe69408c710 con 0x7fe69c0b1f20 2026-03-09T14:58:13.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.298+0000 7fe687fff700 1 --2- 192.168.123.105:0/20777077 >> 
[v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416] conn(0x7fe688072400 0x7fe688074810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:13.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.298+0000 7fe687fff700 1 -- 192.168.123.105:0/20777077 --> [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416] -- command(tid 1: {"prefix": "get_command_descriptions"}) v1 -- 0x7fe688074ec0 con 0x7fe688072400 2026-03-09T14:58:13.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.298+0000 7fe687fff700 1 -- 192.168.123.105:0/20777077 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_get_version_reply(handle=1 version=33) v2 ==== 24+0+0 (secure 0 0 0) 0x7fe694010a40 con 0x7fe69c0b1f20 2026-03-09T14:58:13.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.298+0000 7fe6a37fe700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416] conn(0x7fe688072400 0x7fe688074810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:13.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.303+0000 7fe6a37fe700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416] conn(0x7fe688072400 0x7fe688074810 crc :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:13.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.306+0000 7fe687fff700 1 -- 192.168.123.105:0/20777077 <== osd.1 v2:192.168.123.105:6810/1642565416 1 ==== command_reply(tid 1: 0 ) v1 ==== 8+0+25106 (crc 0 0 0) 0x7fe688074ec0 con 0x7fe688072400 2026-03-09T14:58:13.374 INFO:teuthology.orchestra.run.vm05.stdout:98784247815 2026-03-09T14:58:13.374 DEBUG:teuthology.orchestra.run.vm05:> 
sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd last-stat-seq osd.3 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.380+0000 7fe6a9da9700 1 -- 192.168.123.105:0/20777077 --> [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416] -- command(tid 2: {"prefix": "flush_pg_stats"}) v1 -- 0x7fe6a4062380 con 0x7fe688072400 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.380+0000 7fe687fff700 1 -- 192.168.123.105:0/20777077 <== osd.1 v2:192.168.123.105:6810/1642565416 2 ==== command_reply(tid 2: 0 ) v1 ==== 8+0+11 (crc 0 0 0) 0x7fe6a4062380 con 0x7fe688072400 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.381+0000 7fe6a8b47700 1 -- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416] conn(0x7fe688072400 msgr2=0x7fe688074810 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.381+0000 7fe6a8b47700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416] conn(0x7fe688072400 0x7fe688074810 crc :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.381+0000 7fe6a8b47700 1 -- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe68806c870 msgr2=0x7fe68806ed20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.381+0000 7fe6a8b47700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe68806c870 0x7fe68806ed20 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fe6900097b0 tx=0x7fe690009700 comp rx=0 
tx=0).stop 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.381+0000 7fe6a8b47700 1 -- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe69c0b1f20 msgr2=0x7fe69c0b2390 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.381+0000 7fe6a8b47700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe69c0b1f20 0x7fe69c0b2390 secure :-1 s=READY pgs=244 cs=0 l=1 rev1=1 crypto rx=0x7fe69400dc40 tx=0x7fe69400be10 comp rx=0 tx=0).stop 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.382+0000 7fe6a8b47700 1 -- 192.168.123.105:0/20777077 shutdown_connections 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.382+0000 7fe6a8b47700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:6810/1642565416,v1:192.168.123.105:6811/1642565416] conn(0x7fe688072400 0x7fe688074810 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.382+0000 7fe6a8b47700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe68806c870 0x7fe68806ed20 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.382+0000 7fe6a8b47700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe69c0ac920 0x7fe69c0b6f20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.382+0000 7fe6a8b47700 1 --2- 192.168.123.105:0/20777077 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe69c0b1f20 
0x7fe69c0b2390 unknown :-1 s=CLOSED pgs=244 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:13.386 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.382+0000 7fe6a8b47700 1 -- 192.168.123.105:0/20777077 >> 192.168.123.105:0/20777077 conn(0x7fe69c01a420 msgr2=0x7fe69c0a22b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:13.386 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.383+0000 7fe6a8b47700 1 -- 192.168.123.105:0/20777077 shutdown_connections 2026-03-09T14:58:13.389 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:13.386+0000 7fe6a8b47700 1 -- 192.168.123.105:0/20777077 wait complete. 2026-03-09T14:58:13.417 INFO:teuthology.orchestra.run.vm05.stdout:73014444041 2026-03-09T14:58:13.417 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd last-stat-seq osd.2 2026-03-09T14:58:13.423 INFO:teuthology.orchestra.run.vm05.stdout:120259084293 2026-03-09T14:58:13.423 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd last-stat-seq osd.4 2026-03-09T14:58:13.468 INFO:teuthology.orchestra.run.vm05.stdout:55834574859 2026-03-09T14:58:13.468 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd last-stat-seq osd.1 2026-03-09T14:58:13.593 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:13.870 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:14.015 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 
2026-03-09T14:58:14.174 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:14.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.179+0000 7fc7d56be700 1 -- 192.168.123.105:0/2149238861 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7d0101110 msgr2=0x7fc7d01014e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:14.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.179+0000 7fc7d56be700 1 --2- 192.168.123.105:0/2149238861 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7d0101110 0x7fc7d01014e0 secure :-1 s=READY pgs=245 cs=0 l=1 rev1=1 crypto rx=0x7fc7b8009b00 tx=0x7fc7b8009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:14.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.181+0000 7fc7d56be700 1 -- 192.168.123.105:0/2149238861 shutdown_connections 2026-03-09T14:58:14.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.181+0000 7fc7d56be700 1 --2- 192.168.123.105:0/2149238861 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc7d0068490 0x7fc7d0068900 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.181+0000 7fc7d56be700 1 --2- 192.168.123.105:0/2149238861 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7d0101110 0x7fc7d01014e0 unknown :-1 s=CLOSED pgs=245 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.181+0000 7fc7d56be700 1 -- 192.168.123.105:0/2149238861 >> 192.168.123.105:0/2149238861 conn(0x7fc7d0075240 msgr2=0x7fc7d0075640 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:14.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.181+0000 7fc7d56be700 1 -- 192.168.123.105:0/2149238861 
shutdown_connections 2026-03-09T14:58:14.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.183+0000 7fc7d56be700 1 -- 192.168.123.105:0/2149238861 wait complete. 2026-03-09T14:58:14.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.183+0000 7fc7d56be700 1 Processor -- start 2026-03-09T14:58:14.186 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.185+0000 7fc7d56be700 1 -- start start 2026-03-09T14:58:14.186 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.185+0000 7fc7d56be700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc7d0068490 0x7fc7d0101f20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:14.186 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.185+0000 7fc7d56be700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7d0101110 0x7fc7d0102460 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:14.186 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.185+0000 7fc7d56be700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc7d01060b0 con 0x7fc7d0101110 2026-03-09T14:58:14.187 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.185+0000 7fc7d56be700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc7d01029a0 con 0x7fc7d0068490 2026-03-09T14:58:14.187 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.185+0000 7fc7cffff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc7d0068490 0x7fc7d0101f20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:14.187 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.185+0000 7fc7cffff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 
conn(0x7fc7d0068490 0x7fc7d0101f20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:46878/0 (socket says 192.168.123.105:46878) 2026-03-09T14:58:14.187 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.185+0000 7fc7cffff700 1 -- 192.168.123.105:0/991099628 learned_addr learned my addr 192.168.123.105:0/991099628 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:14.187 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.185+0000 7fc7cf7fe700 1 --2- 192.168.123.105:0/991099628 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7d0101110 0x7fc7d0102460 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:14.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.186+0000 7fc7cffff700 1 -- 192.168.123.105:0/991099628 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7d0101110 msgr2=0x7fc7d0102460 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:14.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.186+0000 7fc7cffff700 1 --2- 192.168.123.105:0/991099628 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7d0101110 0x7fc7d0102460 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.186+0000 7fc7cffff700 1 -- 192.168.123.105:0/991099628 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc7b80097e0 con 0x7fc7d0068490 2026-03-09T14:58:14.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.186+0000 7fc7cffff700 1 --2- 192.168.123.105:0/991099628 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc7d0068490 0x7fc7d0101f20 secure :-1 
s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fc7b8005e50 tx=0x7fc7b8004970 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:14.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.188+0000 7fc7cf7fe700 1 --2- 192.168.123.105:0/991099628 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7d0101110 0x7fc7d0102460 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T14:58:14.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.188+0000 7fc7cd7fa700 1 -- 192.168.123.105:0/991099628 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc7b801d070 con 0x7fc7d0068490 2026-03-09T14:58:14.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.188+0000 7fc7d56be700 1 -- 192.168.123.105:0/991099628 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc7d0102c20 con 0x7fc7d0068490 2026-03-09T14:58:14.191 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.188+0000 7fc7d56be700 1 -- 192.168.123.105:0/991099628 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc7d01a68f0 con 0x7fc7d0068490 2026-03-09T14:58:14.191 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.188+0000 7fc7cd7fa700 1 -- 192.168.123.105:0/991099628 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc7b800f460 con 0x7fc7d0068490 2026-03-09T14:58:14.191 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.188+0000 7fc7cd7fa700 1 -- 192.168.123.105:0/991099628 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc7b80216b0 con 0x7fc7d0068490 2026-03-09T14:58:14.191 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.190+0000 7fc7d56be700 1 -- 192.168.123.105:0/991099628 --> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc7b0005320 con 0x7fc7d0068490 2026-03-09T14:58:14.191 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.190+0000 7fc7cd7fa700 1 -- 192.168.123.105:0/991099628 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fc7b800baa0 con 0x7fc7d0068490 2026-03-09T14:58:14.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.191+0000 7fc7cd7fa700 1 --2- 192.168.123.105:0/991099628 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc7bc06c7a0 0x7fc7bc06ec50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:14.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.191+0000 7fc7cf7fe700 1 --2- 192.168.123.105:0/991099628 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc7bc06c7a0 0x7fc7bc06ec50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:14.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.191+0000 7fc7cd7fa700 1 -- 192.168.123.105:0/991099628 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fc7b808c760 con 0x7fc7d0068490 2026-03-09T14:58:14.193 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.192+0000 7fc7cf7fe700 1 --2- 192.168.123.105:0/991099628 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc7bc06c7a0 0x7fc7bc06ec50 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fc7c0005950 tx=0x7fc7c00058e0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:14.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.194+0000 7fc7cd7fa700 1 -- 192.168.123.105:0/991099628 <== mon.1 v2:192.168.123.109:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fc7b80572a0 con 0x7fc7d0068490 2026-03-09T14:58:14.244 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:14 vm05 ceph-mon[50611]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:14.297 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:14.381 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.380+0000 7fc7d56be700 1 -- 192.168.123.105:0/991099628 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 0} v 0) v1 -- 0x7fc7b0005190 con 0x7fc7d0068490 2026-03-09T14:58:14.383 INFO:teuthology.orchestra.run.vm05.stdout:38654705677 2026-03-09T14:58:14.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.381+0000 7fc7cd7fa700 1 -- 192.168.123.105:0/991099628 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 0}]=0 v0) v1 ==== 74+0+12 (secure 0 0 0) 0x7fc7b805a8c0 con 0x7fc7d0068490 2026-03-09T14:58:14.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.387+0000 7fc7c6ffd700 1 -- 192.168.123.105:0/991099628 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc7bc06c7a0 msgr2=0x7fc7bc06ec50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:14.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.390+0000 7fc7c6ffd700 1 --2- 192.168.123.105:0/991099628 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc7bc06c7a0 0x7fc7bc06ec50 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fc7c0005950 tx=0x7fc7c00058e0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.390+0000 7fc7c6ffd700 1 -- 192.168.123.105:0/991099628 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 
conn(0x7fc7d0068490 msgr2=0x7fc7d0101f20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:14.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.390+0000 7fc7c6ffd700 1 --2- 192.168.123.105:0/991099628 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc7d0068490 0x7fc7d0101f20 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fc7b8005e50 tx=0x7fc7b8004970 comp rx=0 tx=0).stop 2026-03-09T14:58:14.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.390+0000 7fc7c6ffd700 1 -- 192.168.123.105:0/991099628 shutdown_connections 2026-03-09T14:58:14.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.390+0000 7fc7c6ffd700 1 --2- 192.168.123.105:0/991099628 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc7bc06c7a0 0x7fc7bc06ec50 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.390+0000 7fc7c6ffd700 1 --2- 192.168.123.105:0/991099628 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc7d0068490 0x7fc7d0101f20 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.390+0000 7fc7c6ffd700 1 --2- 192.168.123.105:0/991099628 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc7d0101110 0x7fc7d0102460 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.390+0000 7fc7c6ffd700 1 -- 192.168.123.105:0/991099628 >> 192.168.123.105:0/991099628 conn(0x7fc7d0075240 msgr2=0x7fc7d00fdb50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:14.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.391+0000 7fc7c6ffd700 1 -- 192.168.123.105:0/991099628 shutdown_connections 2026-03-09T14:58:14.392 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.391+0000 7fc7c6ffd700 1 -- 192.168.123.105:0/991099628 wait complete. 2026-03-09T14:58:14.518 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:14.571 INFO:tasks.cephadm.ceph_manager.ceph:need seq 38654705677 got 38654705677 for osd.0 2026-03-09T14:58:14.571 DEBUG:teuthology.parallel:result is None 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.589+0000 7fcb465ea700 1 -- 192.168.123.105:0/307063841 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcb4010c8b0 msgr2=0x7fcb4010cc80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.589+0000 7fcb465ea700 1 --2- 192.168.123.105:0/307063841 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcb4010c8b0 0x7fcb4010cc80 secure :-1 s=READY pgs=246 cs=0 l=1 rev1=1 crypto rx=0x7fcb30008790 tx=0x7fcb30008aa0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.591+0000 7fcb465ea700 1 -- 192.168.123.105:0/307063841 shutdown_connections 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.591+0000 7fcb465ea700 1 --2- 192.168.123.105:0/307063841 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcb40071e40 0x7fcb400722b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.591+0000 7fcb465ea700 1 --2- 192.168.123.105:0/307063841 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcb4010c8b0 0x7fcb4010cc80 unknown :-1 s=CLOSED pgs=246 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.592+0000 7fcb465ea700 1 
-- 192.168.123.105:0/307063841 >> 192.168.123.105:0/307063841 conn(0x7fcb4006c6c0 msgr2=0x7fcb4006cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.592+0000 7fcb465ea700 1 -- 192.168.123.105:0/307063841 shutdown_connections 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.592+0000 7fcb465ea700 1 -- 192.168.123.105:0/307063841 wait complete. 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.592+0000 7fcb465ea700 1 Processor -- start 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.592+0000 7fcb465ea700 1 -- start start 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.592+0000 7fcb465ea700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcb40071e40 0x7fcb4007cfb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.592+0000 7fcb465ea700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcb4007d4f0 0x7fcb4007d960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.592+0000 7fcb465ea700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcb40081bc0 con 0x7fcb40071e40 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.592+0000 7fcb465ea700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcb40081d30 con 0x7fcb4007d4f0 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.593+0000 7fcb3f7fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcb4007d4f0 0x7fcb4007d960 unknown :-1 s=BANNER_CONNECTING 
pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.593+0000 7fcb3f7fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcb4007d4f0 0x7fcb4007d960 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:46894/0 (socket says 192.168.123.105:46894) 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.593+0000 7fcb3f7fe700 1 -- 192.168.123.105:0/1039026571 learned_addr learned my addr 192.168.123.105:0/1039026571 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.593+0000 7fcb3ffff700 1 --2- 192.168.123.105:0/1039026571 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcb40071e40 0x7fcb4007cfb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.593+0000 7fcb3f7fe700 1 -- 192.168.123.105:0/1039026571 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcb40071e40 msgr2=0x7fcb4007cfb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.593+0000 7fcb3f7fe700 1 --2- 192.168.123.105:0/1039026571 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcb40071e40 0x7fcb4007cfb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.593+0000 7fcb3f7fe700 1 -- 192.168.123.105:0/1039026571 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 
-- 0x7fcb30008440 con 0x7fcb4007d4f0 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.593+0000 7fcb3f7fe700 1 --2- 192.168.123.105:0/1039026571 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcb4007d4f0 0x7fcb4007d960 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fcb3800f4d0 tx=0x7fcb3800f890 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.594+0000 7fcb3d7fa700 1 -- 192.168.123.105:0/1039026571 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcb38010040 con 0x7fcb4007d4f0 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.594+0000 7fcb465ea700 1 -- 192.168.123.105:0/1039026571 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcb40081fb0 con 0x7fcb4007d4f0 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.594+0000 7fcb465ea700 1 -- 192.168.123.105:0/1039026571 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcb40082500 con 0x7fcb4007d4f0 2026-03-09T14:58:14.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.594+0000 7fcb3d7fa700 1 -- 192.168.123.105:0/1039026571 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fcb38009bf0 con 0x7fcb4007d4f0 2026-03-09T14:58:14.597 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.594+0000 7fcb3d7fa700 1 -- 192.168.123.105:0/1039026571 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcb38015980 con 0x7fcb4007d4f0 2026-03-09T14:58:14.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.597+0000 7fcb3d7fa700 1 -- 192.168.123.105:0/1039026571 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 
(secure 0 0 0) 0x7fcb38015c10 con 0x7fcb4007d4f0 2026-03-09T14:58:14.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.598+0000 7fcb3d7fa700 1 --2- 192.168.123.105:0/1039026571 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcb2806c6d0 0x7fcb2806eb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:14.602 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.599+0000 7fcb3ffff700 1 --2- 192.168.123.105:0/1039026571 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcb2806c6d0 0x7fcb2806eb80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:14.602 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.601+0000 7fcb3ffff700 1 --2- 192.168.123.105:0/1039026571 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcb2806c6d0 0x7fcb2806eb80 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7fcb3000f7b0 tx=0x7fcb30011040 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:14.602 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.601+0000 7fcb3d7fa700 1 -- 192.168.123.105:0/1039026571 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fcb3808cc50 con 0x7fcb4007d4f0 2026-03-09T14:58:14.602 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.601+0000 7fcb26ffd700 1 -- 192.168.123.105:0/1039026571 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcb2c005320 con 0x7fcb4007d4f0 2026-03-09T14:58:14.613 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.605+0000 7fcb3d7fa700 1 -- 192.168.123.105:0/1039026571 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 
(secure 0 0 0) 0x7fcb38057710 con 0x7fcb4007d4f0 2026-03-09T14:58:14.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:14 vm09 ceph-mon[59673]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:14.904 INFO:teuthology.orchestra.run.vm05.stdout:137438953477 2026-03-09T14:58:14.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.879+0000 7fcb26ffd700 1 -- 192.168.123.105:0/1039026571 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 5} v 0) v1 -- 0x7fcb2c005190 con 0x7fcb4007d4f0 2026-03-09T14:58:14.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.885+0000 7fcb3d7fa700 1 -- 192.168.123.105:0/1039026571 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 5}]=0 v0) v1 ==== 74+0+13 (secure 0 0 0) 0x7fcb3805ad30 con 0x7fcb4007d4f0 2026-03-09T14:58:14.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.904+0000 7fcb26ffd700 1 -- 192.168.123.105:0/1039026571 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcb2806c6d0 msgr2=0x7fcb2806eb80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:14.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.904+0000 7fcb26ffd700 1 --2- 192.168.123.105:0/1039026571 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcb2806c6d0 0x7fcb2806eb80 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7fcb3000f7b0 tx=0x7fcb30011040 comp rx=0 tx=0).stop 2026-03-09T14:58:14.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.904+0000 7fcb26ffd700 1 -- 192.168.123.105:0/1039026571 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcb4007d4f0 msgr2=0x7fcb4007d960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:14.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.905+0000 7fcb26ffd700 1 --2- 
192.168.123.105:0/1039026571 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcb4007d4f0 0x7fcb4007d960 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fcb3800f4d0 tx=0x7fcb3800f890 comp rx=0 tx=0).stop 2026-03-09T14:58:14.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.905+0000 7fcb26ffd700 1 -- 192.168.123.105:0/1039026571 shutdown_connections 2026-03-09T14:58:14.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.905+0000 7fcb26ffd700 1 --2- 192.168.123.105:0/1039026571 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcb2806c6d0 0x7fcb2806eb80 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.905+0000 7fcb26ffd700 1 --2- 192.168.123.105:0/1039026571 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcb40071e40 0x7fcb4007cfb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.905+0000 7fcb26ffd700 1 --2- 192.168.123.105:0/1039026571 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcb4007d4f0 0x7fcb4007d960 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.905+0000 7fcb26ffd700 1 -- 192.168.123.105:0/1039026571 >> 192.168.123.105:0/1039026571 conn(0x7fcb4006c6c0 msgr2=0x7fcb4006ff70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:14.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.905+0000 7fcb26ffd700 1 -- 192.168.123.105:0/1039026571 shutdown_connections 2026-03-09T14:58:14.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.906+0000 7fcb26ffd700 1 -- 192.168.123.105:0/1039026571 wait complete. 
2026-03-09T14:58:14.926 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.923+0000 7f4abec4b700 1 -- 192.168.123.105:0/741552740 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4ab8071b60 msgr2=0x7f4ab8071fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:14.926 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.923+0000 7f4abec4b700 1 --2- 192.168.123.105:0/741552740 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4ab8071b60 0x7f4ab8071fd0 secure :-1 s=READY pgs=247 cs=0 l=1 rev1=1 crypto rx=0x7f4aa8009b00 tx=0x7f4aa8009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:14.926 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.924+0000 7f4abec4b700 1 -- 192.168.123.105:0/741552740 shutdown_connections 2026-03-09T14:58:14.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.924+0000 7f4abec4b700 1 --2- 192.168.123.105:0/741552740 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4ab8071b60 0x7f4ab8071fd0 unknown :-1 s=CLOSED pgs=247 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.924+0000 7f4abec4b700 1 --2- 192.168.123.105:0/741552740 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4ab810e9e0 0x7f4ab810edb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.924+0000 7f4abec4b700 1 -- 192.168.123.105:0/741552740 >> 192.168.123.105:0/741552740 conn(0x7f4ab806c6c0 msgr2=0x7f4ab806cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:14.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.924+0000 7f4abec4b700 1 -- 192.168.123.105:0/741552740 shutdown_connections 2026-03-09T14:58:14.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.924+0000 7f4abec4b700 1 -- 192.168.123.105:0/741552740 wait 
complete. 2026-03-09T14:58:14.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.925+0000 7f4abec4b700 1 Processor -- start 2026-03-09T14:58:14.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.925+0000 7f4abec4b700 1 -- start start 2026-03-09T14:58:14.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.925+0000 7f4abec4b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4ab8071b60 0x7f4ab81a4cb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:14.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.925+0000 7f4abec4b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4ab810e9e0 0x7f4ab81a51f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:14.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.925+0000 7f4abec4b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4ab81a5880 con 0x7f4ab810e9e0 2026-03-09T14:58:14.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.925+0000 7f4abec4b700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4ab81a85b0 con 0x7f4ab8071b60 2026-03-09T14:58:14.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.928+0000 7f4abd448700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4ab810e9e0 0x7f4ab81a51f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:14.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.928+0000 7f4abd448700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4ab810e9e0 0x7f4ab81a51f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I 
am v2:192.168.123.105:51222/0 (socket says 192.168.123.105:51222) 2026-03-09T14:58:14.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.928+0000 7f4abd448700 1 -- 192.168.123.105:0/3122087309 learned_addr learned my addr 192.168.123.105:0/3122087309 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:14.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.927+0000 7f4abdc49700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4ab8071b60 0x7f4ab81a4cb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:14.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.929+0000 7f4abd448700 1 -- 192.168.123.105:0/3122087309 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4ab8071b60 msgr2=0x7f4ab81a4cb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:14.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.929+0000 7f4abd448700 1 --2- 192.168.123.105:0/3122087309 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4ab8071b60 0x7f4ab81a4cb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:14.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.929+0000 7f4abd448700 1 -- 192.168.123.105:0/3122087309 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4aa80097e0 con 0x7f4ab810e9e0 2026-03-09T14:58:14.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.929+0000 7f4abd448700 1 --2- 192.168.123.105:0/3122087309 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4ab810e9e0 0x7f4ab81a51f0 secure :-1 s=READY pgs=248 cs=0 l=1 rev1=1 crypto rx=0x7f4aa8000c00 tx=0x7f4aa800bfd0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:14.937 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.929+0000 7f4aaeffd700 1 -- 192.168.123.105:0/3122087309 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4aa801d070 con 0x7f4ab810e9e0 2026-03-09T14:58:14.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.929+0000 7f4aaeffd700 1 -- 192.168.123.105:0/3122087309 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f4aa8003d50 con 0x7f4ab810e9e0 2026-03-09T14:58:14.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.931+0000 7f4aaeffd700 1 -- 192.168.123.105:0/3122087309 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4aa8017910 con 0x7f4ab810e9e0 2026-03-09T14:58:14.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.931+0000 7f4abec4b700 1 -- 192.168.123.105:0/3122087309 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4ab81a8750 con 0x7f4ab810e9e0 2026-03-09T14:58:14.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.931+0000 7f4abec4b700 1 -- 192.168.123.105:0/3122087309 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4ab81a8bc0 con 0x7f4ab810e9e0 2026-03-09T14:58:14.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.933+0000 7f4abec4b700 1 -- 192.168.123.105:0/3122087309 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4ab804f2a0 con 0x7f4ab810e9e0 2026-03-09T14:58:14.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.936+0000 7f4aaeffd700 1 -- 192.168.123.105:0/3122087309 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f4aa8017a70 con 0x7f4ab810e9e0 2026-03-09T14:58:14.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.936+0000 7f4aaeffd700 1 --2- 
192.168.123.105:0/3122087309 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4aa406c5a0 0x7f4aa406ea50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:14.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.936+0000 7f4aaeffd700 1 -- 192.168.123.105:0/3122087309 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f4aa808c8e0 con 0x7f4ab810e9e0 2026-03-09T14:58:14.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.936+0000 7f4abdc49700 1 --2- 192.168.123.105:0/3122087309 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4aa406c5a0 0x7f4aa406ea50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:14.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.937+0000 7f4aaeffd700 1 -- 192.168.123.105:0/3122087309 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f4aa80574e0 con 0x7f4ab810e9e0 2026-03-09T14:58:14.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:14.937+0000 7f4abdc49700 1 --2- 192.168.123.105:0/3122087309 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4aa406c5a0 0x7f4aa406ea50 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f4ab4009e50 tx=0x7f4ab4009450 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:15.101 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.100+0000 7f9b6277a700 1 -- 192.168.123.105:0/3574767832 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9b5c071e40 msgr2=0x7f9b5c0722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.100+0000 7f9b6277a700 1 --2- 
192.168.123.105:0/3574767832 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9b5c071e40 0x7f9b5c0722b0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f9b50009a60 tx=0x7f9b50009d70 comp rx=0 tx=0).stop 2026-03-09T14:58:15.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.101+0000 7f9b6277a700 1 -- 192.168.123.105:0/3574767832 shutdown_connections 2026-03-09T14:58:15.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.101+0000 7f9b6277a700 1 --2- 192.168.123.105:0/3574767832 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9b5c071e40 0x7f9b5c0722b0 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.101+0000 7f9b6277a700 1 --2- 192.168.123.105:0/3574767832 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b5c10c8b0 0x7f9b5c10cc80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.101+0000 7f9b6277a700 1 -- 192.168.123.105:0/3574767832 >> 192.168.123.105:0/3574767832 conn(0x7f9b5c06c6c0 msgr2=0x7f9b5c06cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:15.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.101+0000 7f9b6277a700 1 -- 192.168.123.105:0/3574767832 shutdown_connections 2026-03-09T14:58:15.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.101+0000 7f9b6277a700 1 -- 192.168.123.105:0/3574767832 wait complete. 
2026-03-09T14:58:15.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.102+0000 7f9b6277a700 1 Processor -- start 2026-03-09T14:58:15.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.102+0000 7f9b6277a700 1 -- start start 2026-03-09T14:58:15.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.102+0000 7f9b6277a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9b5c10c8b0 0x7f9b5c1b74c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:15.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.102+0000 7f9b6277a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b5c1b7a00 0x7f9b5c1b7e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:15.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.102+0000 7f9b6277a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9b5c07ee70 con 0x7f9b5c1b7a00 2026-03-09T14:58:15.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.102+0000 7f9b6277a700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9b5c07efe0 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.105+0000 7f9b5bfff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9b5c10c8b0 0x7f9b5c1b74c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:15.106 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.105+0000 7f9b5bfff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9b5c10c8b0 0x7f9b5c1b74c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:46944/0 (socket says 192.168.123.105:46944) 2026-03-09T14:58:15.106 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.105+0000 7f9b5bfff700 1 -- 192.168.123.105:0/2441088863 learned_addr learned my addr 192.168.123.105:0/2441088863 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:15.115 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.105+0000 7f9b5bfff700 1 -- 192.168.123.105:0/2441088863 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b5c1b7a00 msgr2=0x7f9b5c1b7e70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.115 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.105+0000 7f9b5bfff700 1 --2- 192.168.123.105:0/2441088863 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b5c1b7a00 0x7f9b5c1b7e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.115 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.105+0000 7f9b5bfff700 1 -- 192.168.123.105:0/2441088863 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9b50009710 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.117 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.116+0000 7f9b5bfff700 1 --2- 192.168.123.105:0/2441088863 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9b5c10c8b0 0x7f9b5c1b74c0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f9b4c009d00 tx=0x7f9b4c00e3b0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:15.119 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.118+0000 7f9b597fa700 1 -- 192.168.123.105:0/2441088863 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9b4c00a4f0 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.118+0000 7f9b6277a700 1 -- 
192.168.123.105:0/2441088863 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9b5c07f260 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.118+0000 7f9b6277a700 1 -- 192.168.123.105:0/2441088863 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9b5c07f7b0 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.127+0000 7f9b597fa700 1 -- 192.168.123.105:0/2441088863 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9b4c010040 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.127+0000 7f9b597fa700 1 -- 192.168.123.105:0/2441088863 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9b4c0136a0 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.128+0000 7f9b597fa700 1 -- 192.168.123.105:0/2441088863 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f9b4c013800 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.128+0000 7f9b597fa700 1 --2- 192.168.123.105:0/2441088863 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9b4406c7a0 0x7f9b4406ec50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:15.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.129+0000 7f9b597fa700 1 -- 192.168.123.105:0/2441088863 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f9b4c08c8e0 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.134+0000 7f9b42ffd700 1 -- 192.168.123.105:0/2441088863 --> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9b5c04f2a0 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.142 INFO:tasks.cephadm.ceph_manager.ceph:need seq 137438953477 got 137438953477 for osd.5 2026-03-09T14:58:15.142 DEBUG:teuthology.parallel:result is None 2026-03-09T14:58:15.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.135+0000 7f9b5b7fe700 1 --2- 192.168.123.105:0/2441088863 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9b4406c7a0 0x7f9b4406ec50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:15.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.135+0000 7f9b5b7fe700 1 --2- 192.168.123.105:0/2441088863 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9b4406c7a0 0x7f9b4406ec50 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f9b50009a60 tx=0x7f9b5000b540 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:15.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.144+0000 7f9b597fa700 1 -- 192.168.123.105:0/2441088863 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f9b4c05aeb0 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.149 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.148+0000 7f736ffff700 1 -- 192.168.123.105:0/2919484104 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7370071b60 msgr2=0x7f7370071fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.149 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.148+0000 7f736ffff700 1 --2- 192.168.123.105:0/2919484104 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7370071b60 0x7f7370071fd0 secure :-1 s=READY pgs=249 cs=0 l=1 
rev1=1 crypto rx=0x7f7364009b00 tx=0x7f7364009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:15.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.148+0000 7f736ffff700 1 -- 192.168.123.105:0/2919484104 shutdown_connections 2026-03-09T14:58:15.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.148+0000 7f736ffff700 1 --2- 192.168.123.105:0/2919484104 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7370071b60 0x7f7370071fd0 unknown :-1 s=CLOSED pgs=249 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.148+0000 7f736ffff700 1 --2- 192.168.123.105:0/2919484104 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f737010e9e0 0x7f737010edb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.148+0000 7f736ffff700 1 -- 192.168.123.105:0/2919484104 >> 192.168.123.105:0/2919484104 conn(0x7f737006c6c0 msgr2=0x7f737006cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:15.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.150+0000 7f736ffff700 1 -- 192.168.123.105:0/2919484104 shutdown_connections 2026-03-09T14:58:15.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.150+0000 7f736ffff700 1 -- 192.168.123.105:0/2919484104 wait complete. 
2026-03-09T14:58:15.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.150+0000 7f736ffff700 1 Processor -- start 2026-03-09T14:58:15.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.150+0000 7f736ffff700 1 -- start start 2026-03-09T14:58:15.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.150+0000 7f736ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7370071b60 0x7f7370119570 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:15.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.150+0000 7f736ffff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f737010e9e0 0x7f7370114570 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:15.152 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.150+0000 7f736ffff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7370114ab0 con 0x7f7370071b60 2026-03-09T14:58:15.152 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.150+0000 7f736ffff700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7370114c20 con 0x7f737010e9e0 2026-03-09T14:58:15.152 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.151+0000 7f736e7fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f737010e9e0 0x7f7370114570 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:15.152 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.151+0000 7f736effd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7370071b60 0x7f7370119570 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T14:58:15.152 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.151+0000 7f736effd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7370071b60 0x7f7370119570 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:51262/0 (socket says 192.168.123.105:51262) 2026-03-09T14:58:15.152 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.151+0000 7f736effd700 1 -- 192.168.123.105:0/2910530 learned_addr learned my addr 192.168.123.105:0/2910530 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:15.152 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.151+0000 7f736effd700 1 -- 192.168.123.105:0/2910530 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f737010e9e0 msgr2=0x7f7370114570 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.152 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.151+0000 7f736effd700 1 --2- 192.168.123.105:0/2910530 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f737010e9e0 0x7f7370114570 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.152 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.151+0000 7f736effd700 1 -- 192.168.123.105:0/2910530 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f73640097e0 con 0x7f7370071b60 2026-03-09T14:58:15.153 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.151+0000 7f736effd700 1 --2- 192.168.123.105:0/2910530 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7370071b60 0x7f7370119570 secure :-1 s=READY pgs=250 cs=0 l=1 rev1=1 crypto rx=0x7f736000d8d0 tx=0x7f736000dbe0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:15.153 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.152+0000 7f7357fff700 1 -- 192.168.123.105:0/2910530 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7360009880 con 0x7f7370071b60 2026-03-09T14:58:15.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.152+0000 7f7357fff700 1 -- 192.168.123.105:0/2910530 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f7360010460 con 0x7f7370071b60 2026-03-09T14:58:15.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.152+0000 7f7357fff700 1 -- 192.168.123.105:0/2910530 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f736000f5d0 con 0x7f7370071b60 2026-03-09T14:58:15.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.152+0000 7f736ffff700 1 -- 192.168.123.105:0/2910530 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7370114eb0 con 0x7f7370071b60 2026-03-09T14:58:15.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.153+0000 7f736ffff700 1 -- 192.168.123.105:0/2910530 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7370115380 con 0x7f7370071b60 2026-03-09T14:58:15.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.153+0000 7f7357fff700 1 -- 192.168.123.105:0/2910530 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f73600099e0 con 0x7f7370071b60 2026-03-09T14:58:15.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.154+0000 7f7357fff700 1 --2- 192.168.123.105:0/2910530 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f735806c430 0x7f735806e8e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:15.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.154+0000 7f736e7fc700 1 --2- 
192.168.123.105:0/2910530 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f735806c430 0x7f735806e8e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:15.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.154+0000 7f736e7fc700 1 --2- 192.168.123.105:0/2910530 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f735806c430 0x7f735806e8e0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f7370115d60 tx=0x7f736400b540 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:15.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.155+0000 7f7357fff700 1 -- 192.168.123.105:0/2910530 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f736008cad0 con 0x7f7370071b60 2026-03-09T14:58:15.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.155+0000 7f736ffff700 1 -- 192.168.123.105:0/2910530 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f737004f2a0 con 0x7f7370071b60 2026-03-09T14:58:15.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.157+0000 7f7357fff700 1 -- 192.168.123.105:0/2910530 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f7360090050 con 0x7f7370071b60 2026-03-09T14:58:15.194 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.193+0000 7fcffb77e700 1 -- 192.168.123.105:0/2824850002 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcff00a58d0 msgr2=0x7fcff00a8e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.194 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.193+0000 7fcffb77e700 1 --2- 192.168.123.105:0/2824850002 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcff00a58d0 0x7fcff00a8e90 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7fcff40669f0 tx=0x7fcff40699f0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.194 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.193+0000 7fcffb77e700 1 -- 192.168.123.105:0/2824850002 shutdown_connections 2026-03-09T14:58:15.194 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.193+0000 7fcffb77e700 1 --2- 192.168.123.105:0/2824850002 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcff00a58d0 0x7fcff00a8e90 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.194 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.193+0000 7fcffb77e700 1 --2- 192.168.123.105:0/2824850002 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcff00a4f30 0x7fcff00a5300 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.194 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.193+0000 7fcffb77e700 1 -- 192.168.123.105:0/2824850002 >> 192.168.123.105:0/2824850002 conn(0x7fcff001a290 msgr2=0x7fcff001a690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:15.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.193+0000 7fcffb77e700 1 -- 192.168.123.105:0/2824850002 shutdown_connections 2026-03-09T14:58:15.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.193+0000 7fcffb77e700 1 -- 192.168.123.105:0/2824850002 wait complete. 
2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.193+0000 7fcffb77e700 1 Processor -- start 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcffb77e700 1 -- start start 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcffb77e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcff00a4f30 0x7fcff000f730 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcffb77e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcff000fc70 0x7fcff00100e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcffb77e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcff00167a0 con 0x7fcff00a4f30 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcffb77e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcff00142b0 con 0x7fcff000fc70 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcff9f7b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcff000fc70 0x7fcff00100e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcff9f7b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcff000fc70 0x7fcff00100e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:46970/0 (socket says 192.168.123.105:46970) 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcff9f7b700 1 -- 192.168.123.105:0/2299952818 learned_addr learned my addr 192.168.123.105:0/2299952818 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcffa77c700 1 --2- 192.168.123.105:0/2299952818 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcff00a4f30 0x7fcff000f730 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcff9f7b700 1 -- 192.168.123.105:0/2299952818 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcff00a4f30 msgr2=0x7fcff000f730 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcff9f7b700 1 --2- 192.168.123.105:0/2299952818 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcff00a4f30 0x7fcff000f730 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcff9f7b700 1 -- 192.168.123.105:0/2299952818 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcfe8009710 con 0x7fcff000fc70 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcffa77c700 1 --2- 192.168.123.105:0/2299952818 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcff00a4f30 0x7fcff000f730 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.194+0000 7fcff9f7b700 1 --2- 192.168.123.105:0/2299952818 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcff000fc70 0x7fcff00100e0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fcff404ed20 tx=0x7fcff4072e60 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.195+0000 7fcfe77fe700 1 -- 192.168.123.105:0/2299952818 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcff4078070 con 0x7fcff000fc70 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.195+0000 7fcffb77e700 1 -- 192.168.123.105:0/2299952818 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcff4067050 con 0x7fcff000fc70 2026-03-09T14:58:15.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.195+0000 7fcffb77e700 1 -- 192.168.123.105:0/2299952818 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcff0014800 con 0x7fcff000fc70 2026-03-09T14:58:15.197 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.195+0000 7fcfe77fe700 1 -- 192.168.123.105:0/2299952818 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fcff4073650 con 0x7fcff000fc70 2026-03-09T14:58:15.197 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.195+0000 7fcfe77fe700 1 -- 192.168.123.105:0/2299952818 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcff407e630 con 0x7fcff000fc70 2026-03-09T14:58:15.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.196+0000 7fcffb77e700 1 -- 192.168.123.105:0/2299952818 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7fcfd8005320 con 0x7fcff000fc70 2026-03-09T14:58:15.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.198+0000 7fcfe77fe700 1 -- 192.168.123.105:0/2299952818 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fcff40737d0 con 0x7fcff000fc70 2026-03-09T14:58:15.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.198+0000 7fcfe77fe700 1 --2- 192.168.123.105:0/2299952818 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcfe006e9a0 0x7fcfe0070e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:15.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.198+0000 7fcffa77c700 1 --2- 192.168.123.105:0/2299952818 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcfe006e9a0 0x7fcfe0070e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:15.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.199+0000 7fcffa77c700 1 --2- 192.168.123.105:0/2299952818 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcfe006e9a0 0x7fcfe0070e50 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7fcfe800f790 tx=0x7fcfe8009450 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:15.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.199+0000 7fcfe77fe700 1 -- 192.168.123.105:0/2299952818 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fcff40efd30 con 0x7fcff000fc70 2026-03-09T14:58:15.211 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.207+0000 7fcfe77fe700 1 -- 192.168.123.105:0/2299952818 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 
0 0) 0x7fcff40ba870 con 0x7fcff000fc70 2026-03-09T14:58:15.266 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.264+0000 7f4abec4b700 1 -- 192.168.123.105:0/3122087309 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 3} v 0) v1 -- 0x7f4ab804ea50 con 0x7f4ab810e9e0 2026-03-09T14:58:15.266 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.265+0000 7f4aaeffd700 1 -- 192.168.123.105:0/3122087309 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 3}]=0 v0) v1 ==== 74+0+12 (secure 0 0 0) 0x7f4aa8005c00 con 0x7f4ab810e9e0 2026-03-09T14:58:15.266 INFO:teuthology.orchestra.run.vm05.stdout:98784247816 2026-03-09T14:58:15.270 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.269+0000 7f4aacff9700 1 -- 192.168.123.105:0/3122087309 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4aa406c5a0 msgr2=0x7f4aa406ea50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.270 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.269+0000 7f4aacff9700 1 --2- 192.168.123.105:0/3122087309 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4aa406c5a0 0x7f4aa406ea50 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f4ab4009e50 tx=0x7f4ab4009450 comp rx=0 tx=0).stop 2026-03-09T14:58:15.270 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.269+0000 7f4aacff9700 1 -- 192.168.123.105:0/3122087309 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4ab810e9e0 msgr2=0x7f4ab81a51f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.270 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.269+0000 7f4aacff9700 1 --2- 192.168.123.105:0/3122087309 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4ab810e9e0 0x7f4ab81a51f0 secure :-1 s=READY pgs=248 cs=0 l=1 rev1=1 crypto rx=0x7f4aa8000c00 tx=0x7f4aa800bfd0 comp 
rx=0 tx=0).stop 2026-03-09T14:58:15.270 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.269+0000 7f4aacff9700 1 -- 192.168.123.105:0/3122087309 shutdown_connections 2026-03-09T14:58:15.270 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.269+0000 7f4aacff9700 1 --2- 192.168.123.105:0/3122087309 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4aa406c5a0 0x7f4aa406ea50 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.269+0000 7f4aacff9700 1 --2- 192.168.123.105:0/3122087309 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4ab8071b60 0x7f4ab81a4cb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.269+0000 7f4aacff9700 1 --2- 192.168.123.105:0/3122087309 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4ab810e9e0 0x7f4ab81a51f0 unknown :-1 s=CLOSED pgs=248 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.269+0000 7f4aacff9700 1 -- 192.168.123.105:0/3122087309 >> 192.168.123.105:0/3122087309 conn(0x7f4ab806c6c0 msgr2=0x7f4ab806cf80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:15.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.269+0000 7f4aacff9700 1 -- 192.168.123.105:0/3122087309 shutdown_connections 2026-03-09T14:58:15.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.270+0000 7f4aacff9700 1 -- 192.168.123.105:0/3122087309 wait complete. 2026-03-09T14:58:15.284 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:15 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/991099628' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:58:15.285 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:15 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/1039026571' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T14:58:15.387 INFO:tasks.cephadm.ceph_manager.ceph:need seq 98784247815 got 98784247816 for osd.3 2026-03-09T14:58:15.387 DEBUG:teuthology.parallel:result is None 2026-03-09T14:58:15.425 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.424+0000 7fcffb77e700 1 -- 192.168.123.105:0/2299952818 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) v1 -- 0x7fcfd8005cc0 con 0x7fcff000fc70 2026-03-09T14:58:15.426 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.425+0000 7fcfe77fe700 1 -- 192.168.123.105:0/2299952818 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) v1 ==== 74+0+12 (secure 0 0 0) 0x7fcff40bde90 con 0x7fcff000fc70 2026-03-09T14:58:15.426 INFO:teuthology.orchestra.run.vm05.stdout:55834574860 2026-03-09T14:58:15.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.428+0000 7fcfe57fa700 1 -- 192.168.123.105:0/2299952818 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcfe006e9a0 msgr2=0x7fcfe0070e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.428+0000 7fcfe57fa700 1 --2- 192.168.123.105:0/2299952818 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcfe006e9a0 0x7fcfe0070e50 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7fcfe800f790 tx=0x7fcfe8009450 comp rx=0 tx=0).stop 2026-03-09T14:58:15.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.428+0000 7fcfe57fa700 1 -- 192.168.123.105:0/2299952818 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcff000fc70 msgr2=0x7fcff00100e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.428+0000 7fcfe57fa700 1 --2- 192.168.123.105:0/2299952818 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcff000fc70 0x7fcff00100e0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fcff404ed20 tx=0x7fcff4072e60 comp rx=0 tx=0).stop 2026-03-09T14:58:15.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.428+0000 7fcfe57fa700 1 -- 192.168.123.105:0/2299952818 shutdown_connections 2026-03-09T14:58:15.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.428+0000 7fcfe57fa700 1 --2- 192.168.123.105:0/2299952818 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fcfe006e9a0 0x7fcfe0070e50 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.428+0000 7fcfe57fa700 1 --2- 192.168.123.105:0/2299952818 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcff00a4f30 0x7fcff000f730 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.429+0000 7fcfe57fa700 1 --2- 192.168.123.105:0/2299952818 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcff000fc70 0x7fcff00100e0 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.429 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.429+0000 7fcfe57fa700 1 -- 192.168.123.105:0/2299952818 >> 192.168.123.105:0/2299952818 conn(0x7fcff001a290 msgr2=0x7fcff00a2ba0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:15.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.429+0000 7fcfe57fa700 1 -- 
192.168.123.105:0/2299952818 shutdown_connections 2026-03-09T14:58:15.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.429+0000 7fcfe57fa700 1 -- 192.168.123.105:0/2299952818 wait complete. 2026-03-09T14:58:15.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.435+0000 7f9b42ffd700 1 -- 192.168.123.105:0/2441088863 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) v1 -- 0x7f9b5c04ea50 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.433+0000 7f736ffff700 1 -- 192.168.123.105:0/2910530 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 4} v 0) v1 -- 0x7f737004ea50 con 0x7f7370071b60 2026-03-09T14:58:15.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.435+0000 7f7357fff700 1 -- 192.168.123.105:0/2910530 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 4}]=0 v0) v1 ==== 74+0+13 (secure 0 0 0) 0x7f7360016020 con 0x7f7370071b60 2026-03-09T14:58:15.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.435+0000 7f9b597fa700 1 -- 192.168.123.105:0/2441088863 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) v1 ==== 74+0+12 (secure 0 0 0) 0x7f9b4c018070 con 0x7f9b5c10c8b0 2026-03-09T14:58:15.437 INFO:teuthology.orchestra.run.vm05.stdout:120259084293 2026-03-09T14:58:15.438 INFO:teuthology.orchestra.run.vm05.stdout:73014444042 2026-03-09T14:58:15.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.440+0000 7f9b6277a700 1 -- 192.168.123.105:0/2441088863 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9b4406c7a0 msgr2=0x7f9b4406ec50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.440+0000 7f9b6277a700 1 
--2- 192.168.123.105:0/2441088863 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9b4406c7a0 0x7f9b4406ec50 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f9b50009a60 tx=0x7f9b5000b540 comp rx=0 tx=0).stop 2026-03-09T14:58:15.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.440+0000 7f9b6277a700 1 -- 192.168.123.105:0/2441088863 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9b5c10c8b0 msgr2=0x7f9b5c1b74c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.440+0000 7f9b6277a700 1 --2- 192.168.123.105:0/2441088863 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9b5c10c8b0 0x7f9b5c1b74c0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f9b4c009d00 tx=0x7f9b4c00e3b0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f736ffff700 1 -- 192.168.123.105:0/2910530 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f735806c430 msgr2=0x7f735806e8e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f736ffff700 1 --2- 192.168.123.105:0/2910530 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f735806c430 0x7f735806e8e0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f7370115d60 tx=0x7f736400b540 comp rx=0 tx=0).stop 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f736ffff700 1 -- 192.168.123.105:0/2910530 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7370071b60 msgr2=0x7f7370119570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f736ffff700 1 --2- 192.168.123.105:0/2910530 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7f7370071b60 0x7f7370119570 secure :-1 s=READY pgs=250 cs=0 l=1 rev1=1 crypto rx=0x7f736000d8d0 tx=0x7f736000dbe0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f736ffff700 1 -- 192.168.123.105:0/2910530 shutdown_connections 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f736ffff700 1 --2- 192.168.123.105:0/2910530 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f735806c430 0x7f735806e8e0 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f736ffff700 1 --2- 192.168.123.105:0/2910530 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7370071b60 0x7f7370119570 unknown :-1 s=CLOSED pgs=250 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f736ffff700 1 --2- 192.168.123.105:0/2910530 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f737010e9e0 0x7f7370114570 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f736ffff700 1 -- 192.168.123.105:0/2910530 >> 192.168.123.105:0/2910530 conn(0x7f737006c6c0 msgr2=0x7f737006cfb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f9b6277a700 1 -- 192.168.123.105:0/2441088863 shutdown_connections 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f9b6277a700 1 --2- 192.168.123.105:0/2441088863 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f9b4406c7a0 0x7f9b4406ec50 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f9b6277a700 1 --2- 192.168.123.105:0/2441088863 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9b5c10c8b0 0x7f9b5c1b74c0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.441+0000 7f736ffff700 1 -- 192.168.123.105:0/2910530 shutdown_connections 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.442+0000 7f9b6277a700 1 --2- 192.168.123.105:0/2441088863 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9b5c1b7a00 0x7f9b5c1b7e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:15.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.442+0000 7f736ffff700 1 -- 192.168.123.105:0/2910530 wait complete. 2026-03-09T14:58:15.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.442+0000 7f9b6277a700 1 -- 192.168.123.105:0/2441088863 >> 192.168.123.105:0/2441088863 conn(0x7f9b5c06c6c0 msgr2=0x7f9b5c06ff50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:15.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.442+0000 7f9b6277a700 1 -- 192.168.123.105:0/2441088863 shutdown_connections 2026-03-09T14:58:15.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:15.442+0000 7f9b6277a700 1 -- 192.168.123.105:0/2441088863 wait complete. 
2026-03-09T14:58:15.543 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444041 got 73014444042 for osd.2 2026-03-09T14:58:15.543 DEBUG:teuthology.parallel:result is None 2026-03-09T14:58:15.569 INFO:tasks.cephadm.ceph_manager.ceph:need seq 120259084293 got 120259084293 for osd.4 2026-03-09T14:58:15.569 DEBUG:teuthology.parallel:result is None 2026-03-09T14:58:15.575 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574859 got 55834574860 for osd.1 2026-03-09T14:58:15.575 DEBUG:teuthology.parallel:result is None 2026-03-09T14:58:15.575 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T14:58:15.575 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph pg dump --format=json 2026-03-09T14:58:15.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:15 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/991099628' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:58:15.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:15 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/1039026571' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T14:58:15.774 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:16.035 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.033+0000 7f5d38f41700 1 -- 192.168.123.105:0/2220731296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5d34068490 msgr2=0x7f5d34068900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:16.035 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.033+0000 7f5d38f41700 1 --2- 192.168.123.105:0/2220731296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5d34068490 0x7f5d34068900 secure :-1 s=READY pgs=251 cs=0 l=1 rev1=1 crypto rx=0x7f5d28009b00 tx=0x7f5d28009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:16.035 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.034+0000 7f5d38f41700 1 -- 192.168.123.105:0/2220731296 shutdown_connections 2026-03-09T14:58:16.035 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.034+0000 7f5d38f41700 1 --2- 192.168.123.105:0/2220731296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5d34068490 0x7f5d34068900 unknown :-1 s=CLOSED pgs=251 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.035 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.034+0000 7f5d38f41700 1 --2- 192.168.123.105:0/2220731296 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5d341066c0 0x7f5d34106a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.035 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.034+0000 7f5d38f41700 1 -- 192.168.123.105:0/2220731296 >> 192.168.123.105:0/2220731296 conn(0x7f5d340754a0 msgr2=0x7f5d340758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:16.035 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.034+0000 7f5d38f41700 1 -- 192.168.123.105:0/2220731296 shutdown_connections 2026-03-09T14:58:16.035 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.034+0000 7f5d38f41700 1 -- 192.168.123.105:0/2220731296 wait complete. 2026-03-09T14:58:16.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.035+0000 7f5d38f41700 1 Processor -- start 2026-03-09T14:58:16.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.035+0000 7f5d38f41700 1 -- start start 2026-03-09T14:58:16.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.035+0000 7f5d38f41700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5d34068490 0x7f5d34196180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:16.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.035+0000 7f5d38f41700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5d341066c0 0x7f5d341966c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:16.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.035+0000 7f5d38f41700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5d34196da0 con 0x7f5d341066c0 2026-03-09T14:58:16.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.035+0000 7f5d38f41700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5d3419ab30 con 0x7f5d34068490 2026-03-09T14:58:16.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.036+0000 7f5d31d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5d341066c0 0x7f5d341966c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:16.036 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.036+0000 7f5d31d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5d341066c0 0x7f5d341966c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:51290/0 (socket says 192.168.123.105:51290) 2026-03-09T14:58:16.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.036+0000 7f5d31d9b700 1 -- 192.168.123.105:0/1598214527 learned_addr learned my addr 192.168.123.105:0/1598214527 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:16.037 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.036+0000 7f5d31d9b700 1 -- 192.168.123.105:0/1598214527 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5d34068490 msgr2=0x7f5d34196180 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T14:58:16.037 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.036+0000 7f5d31d9b700 1 --2- 192.168.123.105:0/1598214527 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5d34068490 0x7f5d34196180 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.037 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.036+0000 7f5d31d9b700 1 -- 192.168.123.105:0/1598214527 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5d280097e0 con 0x7f5d341066c0 2026-03-09T14:58:16.037 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.036+0000 7f5d31d9b700 1 --2- 192.168.123.105:0/1598214527 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5d341066c0 0x7f5d341966c0 secure :-1 s=READY pgs=252 cs=0 l=1 rev1=1 crypto rx=0x7f5d28004a80 tx=0x7f5d28004b60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:16.038 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.036+0000 7f5d237fe700 1 -- 192.168.123.105:0/1598214527 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5d2801d070 con 0x7f5d341066c0 2026-03-09T14:58:16.038 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.037+0000 7f5d237fe700 1 -- 192.168.123.105:0/1598214527 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5d2800bb80 con 0x7f5d341066c0 2026-03-09T14:58:16.038 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.037+0000 7f5d237fe700 1 -- 192.168.123.105:0/1598214527 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5d2800f790 con 0x7f5d341066c0 2026-03-09T14:58:16.038 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.037+0000 7f5d38f41700 1 -- 192.168.123.105:0/1598214527 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5d3419adb0 con 0x7f5d341066c0 2026-03-09T14:58:16.038 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.037+0000 7f5d38f41700 1 -- 192.168.123.105:0/1598214527 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5d3419b2a0 con 0x7f5d341066c0 2026-03-09T14:58:16.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.038+0000 7f5d237fe700 1 -- 192.168.123.105:0/1598214527 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f5d2800f8f0 con 0x7f5d341066c0 2026-03-09T14:58:16.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.038+0000 7f5d38f41700 1 -- 192.168.123.105:0/1598214527 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5d3404ea50 con 0x7f5d341066c0 2026-03-09T14:58:16.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.039+0000 7f5d237fe700 1 --2- 
192.168.123.105:0/1598214527 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5d1c06c680 0x7f5d1c06eb30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:16.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.039+0000 7f5d237fe700 1 -- 192.168.123.105:0/1598214527 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f5d2808ca20 con 0x7f5d341066c0 2026-03-09T14:58:16.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.043+0000 7f5d3259c700 1 --2- 192.168.123.105:0/1598214527 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5d1c06c680 0x7f5d1c06eb30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:16.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.043+0000 7f5d237fe700 1 -- 192.168.123.105:0/1598214527 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f5d28057590 con 0x7f5d341066c0 2026-03-09T14:58:16.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.043+0000 7f5d3259c700 1 --2- 192.168.123.105:0/1598214527 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5d1c06c680 0x7f5d1c06eb30 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f5d2400ba10 tx=0x7f5d2400b3f0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:16.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.150+0000 7f5d38f41700 1 -- 192.168.123.105:0/1598214527 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) v1 -- 0x7f5d3419b580 con 0x7f5d1c06c680 2026-03-09T14:58:16.152 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.151+0000 7f5d237fe700 1 -- 192.168.123.105:0/1598214527 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 dumped all) v1 ==== 18+0+19151 (secure 0 0 0) 0x7f5d3419b580 con 0x7f5d1c06c680 2026-03-09T14:58:16.152 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:16.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.154+0000 7f5d38f41700 1 -- 192.168.123.105:0/1598214527 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5d1c06c680 msgr2=0x7f5d1c06eb30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:16.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.154+0000 7f5d38f41700 1 --2- 192.168.123.105:0/1598214527 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f5d1c06c680 0x7f5d1c06eb30 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f5d2400ba10 tx=0x7f5d2400b3f0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.154+0000 7f5d38f41700 1 -- 192.168.123.105:0/1598214527 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5d341066c0 msgr2=0x7f5d341966c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:16.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.154+0000 7f5d38f41700 1 --2- 192.168.123.105:0/1598214527 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5d341066c0 0x7f5d341966c0 secure :-1 s=READY pgs=252 cs=0 l=1 rev1=1 crypto rx=0x7f5d28004a80 tx=0x7f5d28004b60 comp rx=0 tx=0).stop 2026-03-09T14:58:16.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.154+0000 7f5d38f41700 1 -- 192.168.123.105:0/1598214527 shutdown_connections 2026-03-09T14:58:16.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.154+0000 7f5d38f41700 1 --2- 192.168.123.105:0/1598214527 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] 
conn(0x7f5d1c06c680 0x7f5d1c06eb30 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.154+0000 7f5d38f41700 1 --2- 192.168.123.105:0/1598214527 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5d34068490 0x7f5d34196180 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.154+0000 7f5d38f41700 1 --2- 192.168.123.105:0/1598214527 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5d341066c0 0x7f5d341966c0 unknown :-1 s=CLOSED pgs=252 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.154+0000 7f5d38f41700 1 -- 192.168.123.105:0/1598214527 >> 192.168.123.105:0/1598214527 conn(0x7f5d340754a0 msgr2=0x7f5d340fec90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:16.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.154+0000 7f5d38f41700 1 -- 192.168.123.105:0/1598214527 shutdown_connections 2026-03-09T14:58:16.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.154+0000 7f5d38f41700 1 -- 192.168.123.105:0/1598214527 wait complete. 
2026-03-09T14:58:16.156 INFO:teuthology.orchestra.run.vm05.stderr:dumped all 2026-03-09T14:58:16.218 INFO:teuthology.orchestra.run.vm05.stdout:{"pg_ready":true,"pg_map":{"version":70,"stamp":"2026-03-09T14:58:15.063990+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":96,"num_read_kb":82,"num_write":113,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":76,"ondisk_log_size":76,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":6,"num_per_pool_osds":6,"num_per_pool_omap_osds":3,"kb":125804544,"kb_used":163640,"kb_used_data":3080,"kb_used_omap":0,"kb_used_meta":160512,"kb_avail":125640904,"statfs":{"total":128823853056,"available":128656285696,"internally_reserved":0,"allocated":3153920,"data_stored":2041406,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":164364288},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0
,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"10.001439"},"pg_stats":[{"pgid":"1.0","version":"20'76","reported_seq":135,"reported_epoch":31,"state":"active+clean","last_fresh":"2026-03-09T14:58:03.716925+0000","last_change":"2026-03-09T14:57:55.549226+0000","last_active":"2026-03-09T14:58:03.716925+0000","last_peered":"2026-03-09T14:58:03.716925+0000","last_clean":"2026-03-09T14:58:03.716925+0000","last_became_active":"2026-03-09T14:57:55.549069+0000","last_became_peered":"2026-03-09T14:57:55.549069+0000","last_unstale":"2026-03-09T14:58:03.716925+0000","last_undegraded":"2026-03-09T14:58:03.716925+0000","last_fullsiz
ed":"2026-03-09T14:58:03.716925+0000","mapping_epoch":27,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":28,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:57:35.907716+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:57:35.907716+0000","last_clean_scrub_stamp":"2026-03-09T14:57:35.907716+0000","objects_scrubbed":0,"log_size":76,"log_dups_size":0,"ondisk_log_size":76,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:52:39.407343+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":96,"num_read_kb":82,"num_write":113,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_ob
ject_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":96,"num_read_kb":82,"num_write":113,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":76,"ondisk_log_size":76,"up":3,"acting":3,"num_store_stats":4}],"osd_stats":[{"osd":5,"up_from":32,"seq":137438953477,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":288,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":294912,"data_stored":110698,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.52900000000000003}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47299999999999998}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.64800000000000002}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.50900000000000001}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.499}]}]},{"osd":4,"up_from":28,"seq":120259084293,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":288,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":294912,"data_stored":110698,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.44600000000000001}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.72799999999999998}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.56699999999999995}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.40000000000000002}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.51300000000000001}]}]},{"osd":3,"up_from":23,"seq":98784247816,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":740,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":757760,"data_stored":569978,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.35999999999999999}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.497}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.65600000000000003}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.34399999999999997}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.26400000000000001}]}]},{"osd":2,"up_from":17,"seq":73014444042,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":288,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":294912,"data_stored":110698,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.41799999999999998}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.53200000000000003}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.54300000000000004}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.55900000000000005}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.55100000000000005}]}]},{"osd":0,"up_from":9,"seq":38654705678,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":740,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":757760,"data_stored":569978,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.621}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.57399999999999995}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.65000000000000002}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.58899999999999997}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.66500000000000004}]}]},{"osd":1,"up_from":13,"seq":55834574860,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27496,"kb_used_data":736,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20939928,"statfs":{"total":21470642176,"available":21442486272,"internally_reserved":0,"allocated":753664,"data_stored":569356,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.49299999999999999}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.53100000000000003}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.60699999999999998}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.56999999999999995}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.61699999999999999}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadat
a":0}]}} 2026-03-09T14:58:16.218 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph pg dump --format=json 2026-03-09T14:58:16.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:16 vm09 ceph-mon[59673]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:16.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:16 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/3122087309' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T14:58:16.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:16 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/2299952818' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:58:16.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:16 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/2910530' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T14:58:16.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:16 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/2441088863' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T14:58:16.387 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:16.416 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:16 vm05 ceph-mon[50611]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:16.416 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:16 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/3122087309' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T14:58:16.416 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:16 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/2299952818' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:58:16.416 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:16 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/2910530' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T14:58:16.416 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:16 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/2441088863' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T14:58:16.651 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.649+0000 7f116920d700 1 -- 192.168.123.105:0/2626604867 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f11640ff7c0 msgr2=0x7f11640ffc30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:16.651 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.649+0000 7f116920d700 1 --2- 192.168.123.105:0/2626604867 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f11640ff7c0 0x7f11640ffc30 secure :-1 s=READY pgs=253 cs=0 l=1 rev1=1 crypto rx=0x7f115c00b3a0 tx=0x7f115c00b6b0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.652 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.650+0000 7f116920d700 1 -- 192.168.123.105:0/2626604867 shutdown_connections 2026-03-09T14:58:16.652 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.650+0000 7f116920d700 1 --2- 192.168.123.105:0/2626604867 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f11640ff7c0 0x7f11640ffc30 unknown :-1 s=CLOSED pgs=253 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.652 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.650+0000 7f116920d700 1 --2- 192.168.123.105:0/2626604867 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f11640ff3f0 0x7f1164106280 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.652 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.650+0000 7f116920d700 1 -- 192.168.123.105:0/2626604867 >> 192.168.123.105:0/2626604867 conn(0x7f11640747e0 msgr2=0x7f1164074be0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:16.652 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.651+0000 7f116920d700 1 -- 192.168.123.105:0/2626604867 shutdown_connections 2026-03-09T14:58:16.652 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.651+0000 7f116920d700 1 -- 192.168.123.105:0/2626604867 wait complete. 2026-03-09T14:58:16.652 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.651+0000 7f116920d700 1 Processor -- start 2026-03-09T14:58:16.653 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.652+0000 7f116920d700 1 -- start start 2026-03-09T14:58:16.653 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.652+0000 7f116920d700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f11640ff3f0 0x7f1164072870 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:16.653 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.652+0000 7f116920d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f11640ff7c0 0x7f116406d870 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:16.653 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.652+0000 7f116920d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f116406ddb0 con 0x7f11640ff7c0 2026-03-09T14:58:16.653 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.652+0000 7f116920d700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f116406def0 con 0x7f11640ff3f0 2026-03-09T14:58:16.653 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.652+0000 7f1162d9d700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f11640ff3f0 0x7f1164072870 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:16.653 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.652+0000 7f1162d9d700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f11640ff3f0 0x7f1164072870 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:47000/0 (socket says 192.168.123.105:47000) 2026-03-09T14:58:16.653 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.652+0000 7f1162d9d700 1 -- 192.168.123.105:0/1163423823 learned_addr learned my addr 192.168.123.105:0/1163423823 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:16.653 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.653+0000 7f116259c700 1 --2- 192.168.123.105:0/1163423823 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f11640ff7c0 0x7f116406d870 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:16.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.653+0000 7f1162d9d700 1 -- 192.168.123.105:0/1163423823 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f11640ff7c0 msgr2=0x7f116406d870 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:16.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.653+0000 7f1162d9d700 1 --2- 
192.168.123.105:0/1163423823 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f11640ff7c0 0x7f116406d870 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.653+0000 7f1162d9d700 1 -- 192.168.123.105:0/1163423823 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f115c00b050 con 0x7f11640ff3f0 2026-03-09T14:58:16.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.653+0000 7f116259c700 1 --2- 192.168.123.105:0/1163423823 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f11640ff7c0 0x7f116406d870 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T14:58:16.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.653+0000 7f1162d9d700 1 --2- 192.168.123.105:0/1163423823 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f11640ff3f0 0x7f1164072870 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f115400d8d0 tx=0x7f115400dc90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:16.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.654+0000 7f114bfff700 1 -- 192.168.123.105:0/1163423823 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1154009940 con 0x7f11640ff3f0 2026-03-09T14:58:16.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.654+0000 7f116920d700 1 -- 192.168.123.105:0/1163423823 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f116406e1d0 con 0x7f11640ff3f0 2026-03-09T14:58:16.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.654+0000 7f114bfff700 1 -- 192.168.123.105:0/1163423823 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 
(secure 0 0 0) 0x7f1154010460 con 0x7f11640ff3f0 2026-03-09T14:58:16.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.654+0000 7f116920d700 1 -- 192.168.123.105:0/1163423823 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f11641af430 con 0x7f11640ff3f0 2026-03-09T14:58:16.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.654+0000 7f114bfff700 1 -- 192.168.123.105:0/1163423823 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f115400f5d0 con 0x7f11640ff3f0 2026-03-09T14:58:16.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.655+0000 7f114bfff700 1 -- 192.168.123.105:0/1163423823 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f115400f7c0 con 0x7f11640ff3f0 2026-03-09T14:58:16.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.655+0000 7f116920d700 1 -- 192.168.123.105:0/1163423823 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f11641039e0 con 0x7f11640ff3f0 2026-03-09T14:58:16.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.656+0000 7f114bfff700 1 --2- 192.168.123.105:0/1163423823 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f114c06c6f0 0x7f114c06eba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:16.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.656+0000 7f114bfff700 1 -- 192.168.123.105:0/1163423823 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f115408c470 con 0x7f11640ff3f0 2026-03-09T14:58:16.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.656+0000 7f116259c700 1 --2- 192.168.123.105:0/1163423823 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f114c06c6f0 0x7f114c06eba0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:16.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.657+0000 7f116259c700 1 --2- 192.168.123.105:0/1163423823 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f114c06c6f0 0x7f114c06eba0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f115c009250 tx=0x7f115c00bf90 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:16.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.659+0000 7f114bfff700 1 -- 192.168.123.105:0/1163423823 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f115405a9c0 con 0x7f11640ff3f0 2026-03-09T14:58:16.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.762+0000 7f116920d700 1 -- 192.168.123.105:0/1163423823 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) v1 -- 0x7f116406ee10 con 0x7f114c06c6f0 2026-03-09T14:58:16.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.765+0000 7f114bfff700 1 -- 192.168.123.105:0/1163423823 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 dumped all) v1 ==== 18+0+19151 (secure 0 0 0) 0x7f116406ee10 con 0x7f114c06c6f0 2026-03-09T14:58:16.766 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:16.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.768+0000 7f116920d700 1 -- 192.168.123.105:0/1163423823 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f114c06c6f0 msgr2=0x7f114c06eba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:16.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.768+0000 7f116920d700 1 --2- 192.168.123.105:0/1163423823 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f114c06c6f0 0x7f114c06eba0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f115c009250 tx=0x7f115c00bf90 comp rx=0 tx=0).stop 2026-03-09T14:58:16.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.768+0000 7f116920d700 1 -- 192.168.123.105:0/1163423823 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f11640ff3f0 msgr2=0x7f1164072870 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:16.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.768+0000 7f116920d700 1 --2- 192.168.123.105:0/1163423823 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f11640ff3f0 0x7f1164072870 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f115400d8d0 tx=0x7f115400dc90 comp rx=0 tx=0).stop 2026-03-09T14:58:16.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.768+0000 7f116920d700 1 -- 192.168.123.105:0/1163423823 shutdown_connections 2026-03-09T14:58:16.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.768+0000 7f116920d700 1 --2- 192.168.123.105:0/1163423823 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f114c06c6f0 0x7f114c06eba0 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.768+0000 7f116920d700 1 --2- 192.168.123.105:0/1163423823 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f11640ff3f0 0x7f1164072870 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.768+0000 7f116920d700 1 --2- 192.168.123.105:0/1163423823 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f11640ff7c0 0x7f116406d870 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:16.769 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.768+0000 7f116920d700 1 -- 192.168.123.105:0/1163423823 >> 192.168.123.105:0/1163423823 conn(0x7f11640747e0 msgr2=0x7f11640fe360 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:16.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.768+0000 7f116920d700 1 -- 192.168.123.105:0/1163423823 shutdown_connections 2026-03-09T14:58:16.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:16.768+0000 7f116920d700 1 -- 192.168.123.105:0/1163423823 wait complete. 2026-03-09T14:58:16.770 INFO:teuthology.orchestra.run.vm05.stderr:dumped all 2026-03-09T14:58:16.821 INFO:teuthology.orchestra.run.vm05.stdout:{"pg_ready":true,"pg_map":{"version":70,"stamp":"2026-03-09T14:58:15.063990+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":96,"num_read_kb":82,"num_write":113,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":76,"ondisk_log_size":76,"up":3,"acting":3,"nu
m_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":6,"num_per_pool_osds":6,"num_per_pool_omap_osds":3,"kb":125804544,"kb_used":163640,"kb_used_data":3080,"kb_used_omap":0,"kb_used_meta":160512,"kb_avail":125640904,"statfs":{"total":128823853056,"available":128656285696,"internally_reserved":0,"allocated":3153920,"data_stored":2041406,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":164364288},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"10.001439"},"pg_stats":[{"pgi
d":"1.0","version":"20'76","reported_seq":135,"reported_epoch":31,"state":"active+clean","last_fresh":"2026-03-09T14:58:03.716925+0000","last_change":"2026-03-09T14:57:55.549226+0000","last_active":"2026-03-09T14:58:03.716925+0000","last_peered":"2026-03-09T14:58:03.716925+0000","last_clean":"2026-03-09T14:58:03.716925+0000","last_became_active":"2026-03-09T14:57:55.549069+0000","last_became_peered":"2026-03-09T14:57:55.549069+0000","last_unstale":"2026-03-09T14:58:03.716925+0000","last_undegraded":"2026-03-09T14:58:03.716925+0000","last_fullsized":"2026-03-09T14:58:03.716925+0000","mapping_epoch":27,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":28,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:57:35.907716+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:57:35.907716+0000","last_clean_scrub_stamp":"2026-03-09T14:57:35.907716+0000","objects_scrubbed":0,"log_size":76,"log_dups_size":0,"ondisk_log_size":76,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:52:39.407343+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":96,"num_read_kb":82,"num_write":113,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":96,"num_read_kb":82,"num_write":113,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_obje
cts_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":76,"ondisk_log_size":76,"up":3,"acting":3,"num_store_stats":4}],"osd_stats":[{"osd":5,"up_from":32,"seq":137438953477,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":288,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":294912,"data_stored":110698,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.52900000000000003}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47299999999999998}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.64800000000000002}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.50900000000000001}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.499}]}]},{"osd":4,"up_from":28,"seq":120259084293,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":288,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":294912,"data_stored":110698,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.44600000000000001}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.72799999999999998}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.56699999999999995}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.40000000000000002}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.51300000000000001}]}]},{"osd":3,"up_from":23,"seq":98784247816,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":740,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":757760,"data_stored":569978,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.35999999999999999}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.497}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.65600000000000003}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.34399999999999997}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.26400000000000001}]}]},{"osd":2,"up_from":17,"seq":73014444042,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27048,"kb_used_data":288,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20940376,"statfs":{"total":21470642176,"available":21442945024,"internally_reserved":0,"allocated":294912,"data_stored":110698,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,1,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.41799999999999998}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.53200000000000003}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.54300000000000004}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.55900000000000005}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.55100000000000005}]}]},{"osd":0,"up_from":9,"seq":38654705678,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27500,"kb_used_data":740,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20939924,"statfs":{"total":21470642176,"available":21442482176,"internally_reserved":0,"allocated":757760,"data_stored":569978,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.621}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.57399999999999995}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.65000000000000002}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.58899999999999997}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.66500000000000004}]}]},{"osd":1,"up_from":13,"seq":55834574860,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27496,"kb_used_data":736,"kb_used_omap":0,"kb_used_meta":26752,"kb_avail":20939928,"statfs":{"total":21470642176,"available":21442486272,"internally_reserved":0,"allocated":753664,"data_stored":569356,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":27394048},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.49299999999999999}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.53100000000000003}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.60699999999999998}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.56999999999999995}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.61699999999999999}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T14:58:16.821 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T14:58:16.821 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-09T14:58:16.822 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T14:58:16.822 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph health --format=json 2026-03-09T14:58:16.994 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:17.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.250+0000 7f0596e13700 1 -- 192.168.123.105:0/491396939 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0590102780 msgr2=0x7f0590102bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:17.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.250+0000 7f0596e13700 1 --2- 192.168.123.105:0/491396939 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0590102780 0x7f0590102bf0 secure :-1 s=READY pgs=254 cs=0 l=1 rev1=1 crypto rx=0x7f0584009b50 tx=0x7f0584009e60 comp rx=0 tx=0).stop 2026-03-09T14:58:17.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.252+0000 7f0596e13700 1 -- 192.168.123.105:0/491396939 shutdown_connections 2026-03-09T14:58:17.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.252+0000 7f0596e13700 1 --2- 192.168.123.105:0/491396939 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0590102780 0x7f0590102bf0 unknown :-1 s=CLOSED pgs=254 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:17.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.252+0000 7f0596e13700 1 --2- 192.168.123.105:0/491396939 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0590108780 0x7f0590108b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:17.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.252+0000 7f0596e13700 1 -- 
192.168.123.105:0/491396939 >> 192.168.123.105:0/491396939 conn(0x7f05900fe280 msgr2=0x7f0590100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:17.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.252+0000 7f0596e13700 1 -- 192.168.123.105:0/491396939 shutdown_connections 2026-03-09T14:58:17.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.252+0000 7f0596e13700 1 -- 192.168.123.105:0/491396939 wait complete. 2026-03-09T14:58:17.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.253+0000 7f0596e13700 1 Processor -- start 2026-03-09T14:58:17.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.253+0000 7f0596e13700 1 -- start start 2026-03-09T14:58:17.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.253+0000 7f0596e13700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0590102780 0x7f0590198390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:17.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.253+0000 7f0596e13700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0590108780 0x7f05901988d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:17.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.253+0000 7f0596e13700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0590198fb0 con 0x7f0590108780 2026-03-09T14:58:17.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.253+0000 7f0596e13700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f059019ccf0 con 0x7f0590102780 2026-03-09T14:58:17.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.253+0000 7f0594baf700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0590102780 0x7f0590198390 unknown :-1 s=BANNER_CONNECTING 
pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:17.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.253+0000 7f0594baf700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0590102780 0x7f0590198390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:47024/0 (socket says 192.168.123.105:47024) 2026-03-09T14:58:17.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.253+0000 7f0594baf700 1 -- 192.168.123.105:0/3523422558 learned_addr learned my addr 192.168.123.105:0/3523422558 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:17.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.253+0000 7f058ffff700 1 --2- 192.168.123.105:0/3523422558 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0590108780 0x7f05901988d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:17.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.254+0000 7f058ffff700 1 -- 192.168.123.105:0/3523422558 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0590102780 msgr2=0x7f0590198390 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:17.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.254+0000 7f058ffff700 1 --2- 192.168.123.105:0/3523422558 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0590102780 0x7f0590198390 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:17.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.254+0000 7f058ffff700 1 -- 192.168.123.105:0/3523422558 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 
-- 0x7f05840097e0 con 0x7f0590108780 2026-03-09T14:58:17.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.254+0000 7f058ffff700 1 --2- 192.168.123.105:0/3523422558 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0590108780 0x7f05901988d0 secure :-1 s=READY pgs=255 cs=0 l=1 rev1=1 crypto rx=0x7f0584009b50 tx=0x7f0584004c30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:17.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.254+0000 7f058dffb700 1 -- 192.168.123.105:0/3523422558 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f058401d070 con 0x7f0590108780 2026-03-09T14:58:17.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.254+0000 7f058dffb700 1 -- 192.168.123.105:0/3523422558 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f0584022470 con 0x7f0590108780 2026-03-09T14:58:17.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.254+0000 7f0596e13700 1 -- 192.168.123.105:0/3523422558 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f059019cf70 con 0x7f0590108780 2026-03-09T14:58:17.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.254+0000 7f058dffb700 1 -- 192.168.123.105:0/3523422558 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f058400f670 con 0x7f0590108780 2026-03-09T14:58:17.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.254+0000 7f0596e13700 1 -- 192.168.123.105:0/3523422558 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f059019d460 con 0x7f0590108780 2026-03-09T14:58:17.261 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.257+0000 7f0596e13700 1 -- 192.168.123.105:0/3523422558 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f059004ea50 con 0x7f0590108780 2026-03-09T14:58:17.261 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.260+0000 7f058dffb700 1 -- 192.168.123.105:0/3523422558 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f0584022a50 con 0x7f0590108780 2026-03-09T14:58:17.261 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.260+0000 7f058dffb700 1 --2- 192.168.123.105:0/3523422558 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f057806c7a0 0x7f057806ec50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:17.261 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.260+0000 7f058dffb700 1 -- 192.168.123.105:0/3523422558 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f058408c840 con 0x7f0590108780 2026-03-09T14:58:17.261 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.261+0000 7f0594baf700 1 --2- 192.168.123.105:0/3523422558 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f057806c7a0 0x7f057806ec50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:17.262 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.261+0000 7f058dffb700 1 -- 192.168.123.105:0/3523422558 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f058405ae40 con 0x7f0590108780 2026-03-09T14:58:17.262 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.261+0000 7f0594baf700 1 --2- 192.168.123.105:0/3523422558 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f057806c7a0 0x7f057806ec50 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f0580005950 tx=0x7f0580009450 comp rx=0 tx=0).ready 
entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:17.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:17 vm05 ceph-mon[50611]: from='client.14466 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:58:17.387 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.386+0000 7f0596e13700 1 -- 192.168.123.105:0/3523422558 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "format": "json"} v 0) v1 -- 0x7f0590066e40 con 0x7f0590108780 2026-03-09T14:58:17.387 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.386+0000 7f058dffb700 1 -- 192.168.123.105:0/3523422558 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "format": "json"}]=0 v0) v1 ==== 72+0+46 (secure 0 0 0) 0x7f0584031050 con 0x7f0590108780 2026-03-09T14:58:17.387 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:17.387 INFO:teuthology.orchestra.run.vm05.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T14:58:17.390 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.389+0000 7f0596e13700 1 -- 192.168.123.105:0/3523422558 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f057806c7a0 msgr2=0x7f057806ec50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:17.390 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.389+0000 7f0596e13700 1 --2- 192.168.123.105:0/3523422558 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f057806c7a0 0x7f057806ec50 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f0580005950 tx=0x7f0580009450 comp rx=0 tx=0).stop 2026-03-09T14:58:17.390 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.389+0000 7f0596e13700 1 -- 192.168.123.105:0/3523422558 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0590108780 msgr2=0x7f05901988d0 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:17.390 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.389+0000 7f0596e13700 1 --2- 192.168.123.105:0/3523422558 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0590108780 0x7f05901988d0 secure :-1 s=READY pgs=255 cs=0 l=1 rev1=1 crypto rx=0x7f0584009b50 tx=0x7f0584004c30 comp rx=0 tx=0).stop 2026-03-09T14:58:17.390 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.389+0000 7f0596e13700 1 -- 192.168.123.105:0/3523422558 shutdown_connections 2026-03-09T14:58:17.390 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.389+0000 7f0596e13700 1 --2- 192.168.123.105:0/3523422558 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f057806c7a0 0x7f057806ec50 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:17.390 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.389+0000 7f0596e13700 1 --2- 192.168.123.105:0/3523422558 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0590102780 0x7f0590198390 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:17.390 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.389+0000 7f0596e13700 1 --2- 192.168.123.105:0/3523422558 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0590108780 0x7f05901988d0 unknown :-1 s=CLOSED pgs=255 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:17.390 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.389+0000 7f0596e13700 1 -- 192.168.123.105:0/3523422558 >> 192.168.123.105:0/3523422558 conn(0x7f05900fe280 msgr2=0x7f05900ffbd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:17.390 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.390+0000 7f0596e13700 1 -- 192.168.123.105:0/3523422558 shutdown_connections 2026-03-09T14:58:17.391 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.390+0000 7f0596e13700 1 -- 192.168.123.105:0/3523422558 wait complete. 2026-03-09T14:58:17.442 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T14:58:17.442 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T14:58:17.442 INFO:teuthology.run_tasks:Running task print... 2026-03-09T14:58:17.444 INFO:teuthology.task.print:**** done end installing v18.2.0 cephadm ... 2026-03-09T14:58:17.444 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T14:58:17.446 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-09T14:58:17.446 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph config set mgr mgr/cephadm/use_repo_digest true --force' 2026-03-09T14:58:17.553 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:17 vm09 ceph-mon[59673]: from='client.14466 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:58:17.606 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:17.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.879+0000 7f0c40155700 1 -- 192.168.123.105:0/2700297404 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c380fedc0 msgr2=0x7f0c381011e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:17.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.879+0000 7f0c40155700 1 --2- 192.168.123.105:0/2700297404 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c380fedc0 0x7f0c381011e0 secure :-1 s=READY pgs=256 cs=0 l=1 rev1=1 crypto rx=0x7f0c28009b00 tx=0x7f0c28009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:17.881 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.880+0000 7f0c40155700 1 -- 192.168.123.105:0/2700297404 shutdown_connections 2026-03-09T14:58:17.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.880+0000 7f0c40155700 1 --2- 192.168.123.105:0/2700297404 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0c38101720 0x7f0c38103ba0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:17.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.880+0000 7f0c40155700 1 --2- 192.168.123.105:0/2700297404 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c380fedc0 0x7f0c381011e0 unknown :-1 s=CLOSED pgs=256 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:17.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.880+0000 7f0c40155700 1 -- 192.168.123.105:0/2700297404 >> 192.168.123.105:0/2700297404 conn(0x7f0c380fa9b0 msgr2=0x7f0c380fce20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:17.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.880+0000 7f0c40155700 1 -- 192.168.123.105:0/2700297404 shutdown_connections 2026-03-09T14:58:17.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.880+0000 7f0c40155700 1 -- 192.168.123.105:0/2700297404 wait complete. 
2026-03-09T14:58:17.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.881+0000 7f0c40155700 1 Processor -- start 2026-03-09T14:58:17.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.881+0000 7f0c40155700 1 -- start start 2026-03-09T14:58:17.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.881+0000 7f0c40155700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c380fedc0 0x7f0c3819c470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:17.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.881+0000 7f0c40155700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0c38101720 0x7f0c3819c9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:17.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.881+0000 7f0c40155700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0c3819cfd0 con 0x7f0c380fedc0 2026-03-09T14:58:17.883 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.881+0000 7f0c40155700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0c3819d110 con 0x7f0c38101720 2026-03-09T14:58:17.883 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.881+0000 7f0c3def1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c380fedc0 0x7f0c3819c470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:17.883 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.881+0000 7f0c3def1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c380fedc0 0x7f0c3819c470 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:51356/0 (socket says 192.168.123.105:51356) 2026-03-09T14:58:17.884 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.881+0000 7f0c3def1700 1 -- 192.168.123.105:0/1942544858 learned_addr learned my addr 192.168.123.105:0/1942544858 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:17.884 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.881+0000 7f0c3d6f0700 1 --2- 192.168.123.105:0/1942544858 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0c38101720 0x7f0c3819c9b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:17.884 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.882+0000 7f0c3def1700 1 -- 192.168.123.105:0/1942544858 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0c38101720 msgr2=0x7f0c3819c9b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:17.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.882+0000 7f0c3def1700 1 --2- 192.168.123.105:0/1942544858 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0c38101720 0x7f0c3819c9b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:17.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.882+0000 7f0c3def1700 1 -- 192.168.123.105:0/1942544858 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0c280097e0 con 0x7f0c380fedc0 2026-03-09T14:58:17.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.882+0000 7f0c3def1700 1 --2- 192.168.123.105:0/1942544858 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c380fedc0 0x7f0c3819c470 secure :-1 s=READY pgs=257 cs=0 l=1 rev1=1 crypto rx=0x7f0c2800bb70 tx=0x7f0c2800bba0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T14:58:17.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.882+0000 7f0c2effd700 1 -- 192.168.123.105:0/1942544858 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0c2801d070 con 0x7f0c380fedc0 2026-03-09T14:58:17.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.882+0000 7f0c40155700 1 -- 192.168.123.105:0/1942544858 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0c381a1b10 con 0x7f0c380fedc0 2026-03-09T14:58:17.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.882+0000 7f0c40155700 1 -- 192.168.123.105:0/1942544858 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0c381a2000 con 0x7f0c380fedc0 2026-03-09T14:58:17.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.882+0000 7f0c2effd700 1 -- 192.168.123.105:0/1942544858 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f0c28022470 con 0x7f0c380fedc0 2026-03-09T14:58:17.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.882+0000 7f0c2effd700 1 -- 192.168.123.105:0/1942544858 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0c2800f670 con 0x7f0c380fedc0 2026-03-09T14:58:17.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.883+0000 7f0c2effd700 1 -- 192.168.123.105:0/1942544858 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f0c2800f7d0 con 0x7f0c380fedc0 2026-03-09T14:58:17.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.884+0000 7f0c2effd700 1 --2- 192.168.123.105:0/1942544858 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f0c2406c750 0x7f0c2406ec00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:17.885 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.884+0000 7f0c3d6f0700 1 --2- 192.168.123.105:0/1942544858 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f0c2406c750 0x7f0c2406ec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:17.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.885+0000 7f0c2effd700 1 -- 192.168.123.105:0/1942544858 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f0c2808da50 con 0x7f0c380fedc0 2026-03-09T14:58:17.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.885+0000 7f0c3d6f0700 1 --2- 192.168.123.105:0/1942544858 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f0c2406c750 0x7f0c2406ec00 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f0c34009e50 tx=0x7f0c34009450 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:17.888 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.885+0000 7f0c40155700 1 -- 192.168.123.105:0/1942544858 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0c1c005320 con 0x7f0c380fedc0 2026-03-09T14:58:17.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.888+0000 7f0c2effd700 1 -- 192.168.123.105:0/1942544858 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f0c2805c0d0 con 0x7f0c380fedc0 2026-03-09T14:58:17.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.991+0000 7f0c40155700 1 -- 192.168.123.105:0/1942544858 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1 -- 0x7f0c1c0059f0 con 0x7f0c380fedc0 2026-03-09T14:58:17.998 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:17.997+0000 7f0c2effd700 1 -- 192.168.123.105:0/1942544858 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/use_repo_digest}]=0 v14) v1 ==== 143+0+0 (secure 0 0 0) 0x7f0c28027070 con 0x7f0c380fedc0 2026-03-09T14:58:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.004+0000 7f0c40155700 1 -- 192.168.123.105:0/1942544858 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f0c2406c750 msgr2=0x7f0c2406ec00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.004+0000 7f0c40155700 1 --2- 192.168.123.105:0/1942544858 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f0c2406c750 0x7f0c2406ec00 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f0c34009e50 tx=0x7f0c34009450 comp rx=0 tx=0).stop 2026-03-09T14:58:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.005+0000 7f0c40155700 1 -- 192.168.123.105:0/1942544858 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c380fedc0 msgr2=0x7f0c3819c470 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.005+0000 7f0c40155700 1 --2- 192.168.123.105:0/1942544858 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c380fedc0 0x7f0c3819c470 secure :-1 s=READY pgs=257 cs=0 l=1 rev1=1 crypto rx=0x7f0c2800bb70 tx=0x7f0c2800bba0 comp rx=0 tx=0).stop 2026-03-09T14:58:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.005+0000 7f0c40155700 1 -- 192.168.123.105:0/1942544858 shutdown_connections 2026-03-09T14:58:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.005+0000 7f0c40155700 1 --2- 192.168.123.105:0/1942544858 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f0c2406c750 0x7f0c2406ec00 unknown 
:-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.005+0000 7f0c40155700 1 --2- 192.168.123.105:0/1942544858 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0c380fedc0 0x7f0c3819c470 unknown :-1 s=CLOSED pgs=257 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.005+0000 7f0c40155700 1 --2- 192.168.123.105:0/1942544858 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0c38101720 0x7f0c3819c9b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.005+0000 7f0c40155700 1 -- 192.168.123.105:0/1942544858 >> 192.168.123.105:0/1942544858 conn(0x7f0c380fa9b0 msgr2=0x7f0c380fce20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.006+0000 7f0c40155700 1 -- 192.168.123.105:0/1942544858 shutdown_connections 2026-03-09T14:58:18.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.006+0000 7f0c40155700 1 -- 192.168.123.105:0/1942544858 wait complete. 2026-03-09T14:58:18.050 INFO:teuthology.run_tasks:Running task print... 2026-03-09T14:58:18.052 INFO:teuthology.task.print:**** done cephadm.shell ceph config set mgr... 2026-03-09T14:58:18.052 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-09T14:58:18.054 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-09T14:58:18.054 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph orch status' 2026-03-09T14:58:18.233 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:18.261 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:18 vm05 ceph-mon[50611]: from='client.24263 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:58:18.261 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:18 vm05 ceph-mon[50611]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:18.261 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:18 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/3523422558' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T14:58:18.261 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:18 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/1942544858' entity='client.admin' 2026-03-09T14:58:18.261 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:58:18.261 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:18.261 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:58:18.261 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:18 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:18.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.516+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/3350844011 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6b0100560 msgr2=0x7fc6b0100970 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:18.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.516+0000 7fc6b5f9b700 1 --2- 192.168.123.105:0/3350844011 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6b0100560 0x7fc6b0100970 secure :-1 s=READY pgs=258 cs=0 l=1 rev1=1 crypto rx=0x7fc698009b00 tx=0x7fc698009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:18.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.518+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/3350844011 shutdown_connections 2026-03-09T14:58:18.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.518+0000 7fc6b5f9b700 1 --2- 192.168.123.105:0/3350844011 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc6b0101760 0x7fc6b0101bb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-09T14:58:18.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.518+0000 7fc6b5f9b700 1 --2- 192.168.123.105:0/3350844011 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6b0100560 0x7fc6b0100970 unknown :-1 s=CLOSED pgs=258 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:18.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.518+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/3350844011 >> 192.168.123.105:0/3350844011 conn(0x7fc6b00fbb10 msgr2=0x7fc6b00fdf40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:18.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.519+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/3350844011 shutdown_connections 2026-03-09T14:58:18.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.519+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/3350844011 wait complete. 2026-03-09T14:58:18.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.519+0000 7fc6b5f9b700 1 Processor -- start 2026-03-09T14:58:18.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.519+0000 7fc6b5f9b700 1 -- start start 2026-03-09T14:58:18.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.520+0000 7fc6b5f9b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc6b0100560 0x7fc6b0195e30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:18.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.520+0000 7fc6b5f9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6b0101760 0x7fc6b0196370 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:18.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.520+0000 7fc6b5f9b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc6b0196990 con 0x7fc6b0101760 
2026-03-09T14:58:18.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.520+0000 7fc6b5f9b700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc6b0196ad0 con 0x7fc6b0100560 2026-03-09T14:58:18.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.520+0000 7fc6aeffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6b0101760 0x7fc6b0196370 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:18.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.520+0000 7fc6aeffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6b0101760 0x7fc6b0196370 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:51376/0 (socket says 192.168.123.105:51376) 2026-03-09T14:58:18.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.520+0000 7fc6aeffd700 1 -- 192.168.123.105:0/478615343 learned_addr learned my addr 192.168.123.105:0/478615343 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:18.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.520+0000 7fc6aeffd700 1 -- 192.168.123.105:0/478615343 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc6b0100560 msgr2=0x7fc6b0195e30 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T14:58:18.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.520+0000 7fc6af7fe700 1 --2- 192.168.123.105:0/478615343 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc6b0100560 0x7fc6b0195e30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:18.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.520+0000 7fc6aeffd700 1 
--2- 192.168.123.105:0/478615343 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc6b0100560 0x7fc6b0195e30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:18.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.520+0000 7fc6aeffd700 1 -- 192.168.123.105:0/478615343 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc6980097e0 con 0x7fc6b0101760 2026-03-09T14:58:18.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.520+0000 7fc6af7fe700 1 --2- 192.168.123.105:0/478615343 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc6b0100560 0x7fc6b0195e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T14:58:18.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.521+0000 7fc6aeffd700 1 --2- 192.168.123.105:0/478615343 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6b0101760 0x7fc6b0196370 secure :-1 s=READY pgs=259 cs=0 l=1 rev1=1 crypto rx=0x7fc6a000d8d0 tx=0x7fc6a000dc90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:18.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.521+0000 7fc6acff9700 1 -- 192.168.123.105:0/478615343 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc6a0009940 con 0x7fc6b0101760 2026-03-09T14:58:18.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.521+0000 7fc6acff9700 1 -- 192.168.123.105:0/478615343 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc6a0010460 con 0x7fc6b0101760 2026-03-09T14:58:18.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.521+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/478615343 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({mgrmap=0+}) v3 -- 0x7fc6b019b580 con 0x7fc6b0101760 2026-03-09T14:58:18.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.521+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/478615343 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc6b019bad0 con 0x7fc6b0101760 2026-03-09T14:58:18.524 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.522+0000 7fc6acff9700 1 -- 192.168.123.105:0/478615343 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc6a000f5b0 con 0x7fc6b0101760 2026-03-09T14:58:18.524 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.523+0000 7fc6acff9700 1 -- 192.168.123.105:0/478615343 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fc6a00105d0 con 0x7fc6b0101760 2026-03-09T14:58:18.525 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.523+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/478615343 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc6b0066e40 con 0x7fc6b0101760 2026-03-09T14:58:18.525 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.523+0000 7fc6acff9700 1 --2- 192.168.123.105:0/478615343 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc69c06c680 0x7fc69c06eb30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:18.525 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.524+0000 7fc6af7fe700 1 --2- 192.168.123.105:0/478615343 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc69c06c680 0x7fc69c06eb30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:18.525 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.524+0000 7fc6acff9700 1 -- 192.168.123.105:0/478615343 <== 
mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fc6a008b4c0 con 0x7fc6b0101760 2026-03-09T14:58:18.527 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.527+0000 7fc6af7fe700 1 --2- 192.168.123.105:0/478615343 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc69c06c680 0x7fc69c06eb30 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7fc698009fd0 tx=0x7fc698005fd0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:18.528 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.527+0000 7fc6acff9700 1 -- 192.168.123.105:0/478615343 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fc6a0059700 con 0x7fc6b0101760 2026-03-09T14:58:18.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:18 vm09 ceph-mon[59673]: from='client.24263 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:58:18.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:18 vm09 ceph-mon[59673]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:18.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:18 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/3523422558' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T14:58:18.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:18 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/1942544858' entity='client.admin' 2026-03-09T14:58:18.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:58:18.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:18.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:58:18.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:18 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:18.640 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.639+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/478615343 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch status", "target": ["mon-mgr", ""]}) v1 -- 0x7fc6b019b710 con 0x7fc69c06c680 2026-03-09T14:58:18.641 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.640+0000 7fc6acff9700 1 -- 192.168.123.105:0/478615343 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+43 (secure 0 0 0) 0x7fc6b019b710 con 0x7fc69c06c680 2026-03-09T14:58:18.641 INFO:teuthology.orchestra.run.vm05.stdout:Backend: cephadm 2026-03-09T14:58:18.641 INFO:teuthology.orchestra.run.vm05.stdout:Available: Yes 2026-03-09T14:58:18.641 INFO:teuthology.orchestra.run.vm05.stdout:Paused: No 2026-03-09T14:58:18.643 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.642+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/478615343 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc69c06c680 msgr2=0x7fc69c06eb30 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:18.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.642+0000 7fc6b5f9b700 1 --2- 192.168.123.105:0/478615343 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc69c06c680 0x7fc69c06eb30 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7fc698009fd0 tx=0x7fc698005fd0 comp rx=0 tx=0).stop 2026-03-09T14:58:18.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.643+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/478615343 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6b0101760 msgr2=0x7fc6b0196370 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:18.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.643+0000 7fc6b5f9b700 1 --2- 192.168.123.105:0/478615343 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6b0101760 0x7fc6b0196370 secure :-1 s=READY pgs=259 cs=0 l=1 rev1=1 crypto rx=0x7fc6a000d8d0 tx=0x7fc6a000dc90 comp rx=0 tx=0).stop 2026-03-09T14:58:18.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.643+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/478615343 shutdown_connections 2026-03-09T14:58:18.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.643+0000 7fc6b5f9b700 1 --2- 192.168.123.105:0/478615343 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc69c06c680 0x7fc69c06eb30 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:18.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.643+0000 7fc6b5f9b700 1 --2- 192.168.123.105:0/478615343 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc6b0100560 0x7fc6b0195e30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:18.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.643+0000 7fc6b5f9b700 1 --2- 192.168.123.105:0/478615343 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc6b0101760 0x7fc6b0196370 unknown :-1 s=CLOSED pgs=259 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:18.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.643+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/478615343 >> 192.168.123.105:0/478615343 conn(0x7fc6b00fbb10 msgr2=0x7fc6b0104990 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:18.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.643+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/478615343 shutdown_connections 2026-03-09T14:58:18.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:18.643+0000 7fc6b5f9b700 1 -- 192.168.123.105:0/478615343 wait complete. 2026-03-09T14:58:18.691 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph orch ps' 2026-03-09T14:58:18.855 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:19.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.136+0000 7fae8759e700 1 -- 192.168.123.105:0/3330691748 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae88072440 msgr2=0x7fae8810be90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:19.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.136+0000 7fae8759e700 1 --2- 192.168.123.105:0/3330691748 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae88072440 0x7fae8810be90 secure :-1 s=READY pgs=260 cs=0 l=1 rev1=1 crypto rx=0x7fae70009b00 tx=0x7fae70009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:19.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.137+0000 7fae8759e700 1 -- 192.168.123.105:0/3330691748 shutdown_connections 2026-03-09T14:58:19.138 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.137+0000 7fae8759e700 1 --2- 192.168.123.105:0/3330691748 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae88072440 0x7fae8810be90 unknown :-1 s=CLOSED pgs=260 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.137+0000 7fae8759e700 1 --2- 192.168.123.105:0/3330691748 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fae88071a60 0x7fae88071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.137+0000 7fae8759e700 1 -- 192.168.123.105:0/3330691748 >> 192.168.123.105:0/3330691748 conn(0x7fae8806d1a0 msgr2=0x7fae8806f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:19.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.137+0000 7fae8759e700 1 -- 192.168.123.105:0/3330691748 shutdown_connections 2026-03-09T14:58:19.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.138+0000 7fae8759e700 1 -- 192.168.123.105:0/3330691748 wait complete. 
2026-03-09T14:58:19.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.138+0000 7fae8759e700 1 Processor -- start 2026-03-09T14:58:19.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.140+0000 7fae8759e700 1 -- start start 2026-03-09T14:58:19.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.141+0000 7fae8759e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fae88071a60 0x7fae881a49f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:19.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.141+0000 7fae8759e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae88072440 0x7fae881a4f30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:19.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.141+0000 7fae8759e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fae881a5550 con 0x7fae88072440 2026-03-09T14:58:19.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.141+0000 7fae8759e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fae881a5690 con 0x7fae88071a60 2026-03-09T14:58:19.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.141+0000 7fae7ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae88072440 0x7fae881a4f30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:19.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.141+0000 7fae7ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae88072440 0x7fae881a4f30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:41646/0 (socket says 192.168.123.105:41646) 2026-03-09T14:58:19.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.141+0000 7fae7ffff700 1 -- 192.168.123.105:0/3378204711 learned_addr learned my addr 192.168.123.105:0/3378204711 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:19.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.141+0000 7fae8659c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fae88071a60 0x7fae881a49f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:19.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.141+0000 7fae7ffff700 1 -- 192.168.123.105:0/3378204711 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fae88071a60 msgr2=0x7fae881a49f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:19.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.141+0000 7fae7ffff700 1 --2- 192.168.123.105:0/3378204711 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fae88071a60 0x7fae881a49f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.141+0000 7fae7ffff700 1 -- 192.168.123.105:0/3378204711 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fae700097e0 con 0x7fae88072440 2026-03-09T14:58:19.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.142+0000 7fae7ffff700 1 --2- 192.168.123.105:0/3378204711 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae88072440 0x7fae881a4f30 secure :-1 s=READY pgs=261 cs=0 l=1 rev1=1 crypto rx=0x7fae70006010 tx=0x7fae70004930 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:19.143 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.142+0000 7fae7f7fe700 1 -- 192.168.123.105:0/3378204711 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fae7001d070 con 0x7fae88072440 2026-03-09T14:58:19.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.142+0000 7fae7f7fe700 1 -- 192.168.123.105:0/3378204711 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fae7000bc50 con 0x7fae88072440 2026-03-09T14:58:19.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.142+0000 7fae8759e700 1 -- 192.168.123.105:0/3378204711 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fae881aa0e0 con 0x7fae88072440 2026-03-09T14:58:19.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.142+0000 7fae8759e700 1 -- 192.168.123.105:0/3378204711 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fae881aa5d0 con 0x7fae88072440 2026-03-09T14:58:19.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.143+0000 7fae7f7fe700 1 -- 192.168.123.105:0/3378204711 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fae7000f780 con 0x7fae88072440 2026-03-09T14:58:19.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.143+0000 7fae8759e700 1 -- 192.168.123.105:0/3378204711 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fae8819ec00 con 0x7fae88072440 2026-03-09T14:58:19.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.145+0000 7fae7f7fe700 1 -- 192.168.123.105:0/3378204711 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fae70022470 con 0x7fae88072440 2026-03-09T14:58:19.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.145+0000 7fae7f7fe700 1 --2- 
192.168.123.105:0/3378204711 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fae7406c680 0x7fae7406eb30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:19.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.145+0000 7fae7f7fe700 1 -- 192.168.123.105:0/3378204711 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7fae7008d030 con 0x7fae88072440 2026-03-09T14:58:19.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.146+0000 7fae8659c700 1 --2- 192.168.123.105:0/3378204711 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fae7406c680 0x7fae7406eb30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:19.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.146+0000 7fae8659c700 1 --2- 192.168.123.105:0/3378204711 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fae7406c680 0x7fae7406eb30 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7fae88072f50 tx=0x7fae7800b410 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:19.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.147+0000 7fae7f7fe700 1 -- 192.168.123.105:0/3378204711 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fae7005b630 con 0x7fae88072440 2026-03-09T14:58:19.265 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.261+0000 7fae8759e700 1 -- 192.168.123.105:0/3378204711 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fae88061190 con 0x7fae7406c680 2026-03-09T14:58:19.268 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.267+0000 
7fae7f7fe700 1 -- 192.168.123.105:0/3378204711 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+2640 (secure 0 0 0) 0x7fae88061190 con 0x7fae7406c680 2026-03-09T14:58:19.268 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T14:58:19.268 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (80s) 47s ago 2m 22.6M - 0.25.0 c8568f914cd2 35e160b8d1de 2026-03-09T14:58:19.268 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (2m) 47s ago 2m 7708k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T14:58:19.268 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (96s) 18s ago 96s 7952k - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T14:58:19.268 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (2m) 47s ago 2m 7411k - 18.2.0 dc2bc1663786 1c577d7a0de0 2026-03-09T14:58:19.268 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (95s) 18s ago 95s 7402k - 18.2.0 dc2bc1663786 9e4961442551 2026-03-09T14:58:19.268 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (79s) 47s ago 113s 76.1M - 9.4.7 954c08fa6188 46e00e5e5b38 2026-03-09T14:58:19.268 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:9283,8765,8443 running (3m) 47s ago 3m 486M - 18.2.0 dc2bc1663786 528c75e7c581 2026-03-09T14:58:19.268 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (91s) 18s ago 91s 444M - 18.2.0 dc2bc1663786 b7db289ecc14 2026-03-09T14:58:19.268 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (3m) 47s ago 3m 43.4M 2048M 18.2.0 dc2bc1663786 c83e96b62251 2026-03-09T14:58:19.269 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (90s) 18s ago 90s 41.9M 2048M 18.2.0 dc2bc1663786 7963792b5376 2026-03-09T14:58:19.269 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 
*:9100 running (2m) 47s ago 2m 13.9M - 1.5.0 0da6a335fe13 925d94d1da6f 2026-03-09T14:58:19.269 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (92s) 18s ago 92s 14.0M - 1.5.0 0da6a335fe13 e0b25e3a046e 2026-03-09T14:58:19.269 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (70s) 47s ago 70s 37.0M 4096M 18.2.0 dc2bc1663786 50f3ca995318 2026-03-09T14:58:19.269 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (59s) 47s ago 59s 37.5M 4096M 18.2.0 dc2bc1663786 23e35bdafe50 2026-03-09T14:58:19.269 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (48s) 47s ago 48s 13.4M 4096M 18.2.0 dc2bc1663786 75097dc12979 2026-03-09T14:58:19.269 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (39s) 18s ago 39s 40.9M 4096M 18.2.0 dc2bc1663786 e79644a0564f 2026-03-09T14:58:19.269 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (29s) 18s ago 29s 39.2M 4096M 18.2.0 dc2bc1663786 4239752204df 2026-03-09T14:58:19.269 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (19s) 18s ago 19s 13.3M 4096M 18.2.0 dc2bc1663786 85fde149396e 2026-03-09T14:58:19.269 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (73s) 47s ago 108s 29.9M - 2.43.0 a07b618ecd1d c36363ff6641 2026-03-09T14:58:19.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.270+0000 7fae8759e700 1 -- 192.168.123.105:0/3378204711 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fae7406c680 msgr2=0x7fae7406eb30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:19.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.270+0000 7fae8759e700 1 --2- 192.168.123.105:0/3378204711 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fae7406c680 0x7fae7406eb30 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7fae88072f50 tx=0x7fae7800b410 comp rx=0 tx=0).stop 2026-03-09T14:58:19.271 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.270+0000 7fae8759e700 1 -- 192.168.123.105:0/3378204711 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae88072440 msgr2=0x7fae881a4f30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:19.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.270+0000 7fae8759e700 1 --2- 192.168.123.105:0/3378204711 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae88072440 0x7fae881a4f30 secure :-1 s=READY pgs=261 cs=0 l=1 rev1=1 crypto rx=0x7fae70006010 tx=0x7fae70004930 comp rx=0 tx=0).stop 2026-03-09T14:58:19.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.270+0000 7fae8759e700 1 -- 192.168.123.105:0/3378204711 shutdown_connections 2026-03-09T14:58:19.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.270+0000 7fae8759e700 1 --2- 192.168.123.105:0/3378204711 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fae7406c680 0x7fae7406eb30 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.270+0000 7fae8759e700 1 --2- 192.168.123.105:0/3378204711 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fae88071a60 0x7fae881a49f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.270+0000 7fae8759e700 1 --2- 192.168.123.105:0/3378204711 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae88072440 0x7fae881a4f30 unknown :-1 s=CLOSED pgs=261 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.270+0000 7fae8759e700 1 -- 192.168.123.105:0/3378204711 >> 192.168.123.105:0/3378204711 conn(0x7fae8806d1a0 msgr2=0x7fae8810a6d0 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-09T14:58:19.272 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.271+0000 7fae8759e700 1 -- 192.168.123.105:0/3378204711 shutdown_connections 2026-03-09T14:58:19.272 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.271+0000 7fae8759e700 1 -- 192.168.123.105:0/3378204711 wait complete. 2026-03-09T14:58:19.334 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph orch ls' 2026-03-09T14:58:19.499 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:19.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.771+0000 7f7a4932c700 1 -- 192.168.123.105:0/1208373905 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7a44101760 msgr2=0x7f7a44101bb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:19.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.771+0000 7f7a4932c700 1 --2- 192.168.123.105:0/1208373905 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7a44101760 0x7f7a44101bb0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f7a34009b00 tx=0x7f7a34009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:19.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.772+0000 7f7a4932c700 1 -- 192.168.123.105:0/1208373905 shutdown_connections 2026-03-09T14:58:19.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.772+0000 7f7a4932c700 1 --2- 192.168.123.105:0/1208373905 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7a44101760 0x7f7a44101bb0 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.772+0000 7f7a4932c700 1 --2- 
192.168.123.105:0/1208373905 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7a44100560 0x7f7a44100970 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.772+0000 7f7a4932c700 1 -- 192.168.123.105:0/1208373905 >> 192.168.123.105:0/1208373905 conn(0x7f7a440fbb10 msgr2=0x7f7a440fdf40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:19.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.772+0000 7f7a4932c700 1 -- 192.168.123.105:0/1208373905 shutdown_connections 2026-03-09T14:58:19.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.772+0000 7f7a4932c700 1 -- 192.168.123.105:0/1208373905 wait complete. 2026-03-09T14:58:19.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.773+0000 7f7a4932c700 1 Processor -- start 2026-03-09T14:58:19.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.773+0000 7f7a4932c700 1 -- start start 2026-03-09T14:58:19.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.773+0000 7f7a4932c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7a44100560 0x7f7a44198020 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:19.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.773+0000 7f7a4932c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7a44101760 0x7f7a44198560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:19.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.773+0000 7f7a4932c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7a44198b80 con 0x7f7a44100560 2026-03-09T14:58:19.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.773+0000 7f7a4932c700 1 -- --> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7a44198cc0 con 0x7f7a44101760 2026-03-09T14:58:19.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.773+0000 7f7a427fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7a44101760 0x7f7a44198560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:19.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.773+0000 7f7a427fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7a44101760 0x7f7a44198560 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:49930/0 (socket says 192.168.123.105:49930) 2026-03-09T14:58:19.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.773+0000 7f7a427fc700 1 -- 192.168.123.105:0/89268456 learned_addr learned my addr 192.168.123.105:0/89268456 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:19.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.774+0000 7f7a42ffd700 1 --2- 192.168.123.105:0/89268456 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7a44100560 0x7f7a44198020 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:19.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.774+0000 7f7a427fc700 1 -- 192.168.123.105:0/89268456 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7a44100560 msgr2=0x7f7a44198020 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:19.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.774+0000 7f7a427fc700 1 --2- 192.168.123.105:0/89268456 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7a44100560 
0x7f7a44198020 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.774+0000 7f7a427fc700 1 -- 192.168.123.105:0/89268456 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7a2c009710 con 0x7f7a44101760 2026-03-09T14:58:19.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.774+0000 7f7a42ffd700 1 --2- 192.168.123.105:0/89268456 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7a44100560 0x7f7a44198020 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T14:58:19.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.774+0000 7f7a427fc700 1 --2- 192.168.123.105:0/89268456 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7a44101760 0x7f7a44198560 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f7a34009fd0 tx=0x7f7a3400bbf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:19.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.774+0000 7f7a3bfff700 1 -- 192.168.123.105:0/89268456 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7a3401d070 con 0x7f7a44101760 2026-03-09T14:58:19.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.774+0000 7f7a3bfff700 1 -- 192.168.123.105:0/89268456 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f7a34022470 con 0x7f7a44101760 2026-03-09T14:58:19.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.774+0000 7f7a4932c700 1 -- 192.168.123.105:0/89268456 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7a340097e0 con 0x7f7a44101760 2026-03-09T14:58:19.776 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.775+0000 7f7a3bfff700 1 -- 192.168.123.105:0/89268456 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7a3400f670 con 0x7f7a44101760 2026-03-09T14:58:19.776 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.775+0000 7f7a4932c700 1 -- 192.168.123.105:0/89268456 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7a4419da70 con 0x7f7a44101760 2026-03-09T14:58:19.777 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.776+0000 7f7a3bfff700 1 -- 192.168.123.105:0/89268456 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f7a340225e0 con 0x7f7a44101760 2026-03-09T14:58:19.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.777+0000 7f7a4932c700 1 -- 192.168.123.105:0/89268456 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7a44105840 con 0x7f7a44101760 2026-03-09T14:58:19.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.777+0000 7f7a3bfff700 1 --2- 192.168.123.105:0/89268456 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7a3006c630 0x7f7a3006eae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:19.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.777+0000 7f7a3bfff700 1 -- 192.168.123.105:0/89268456 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f7a3408ebe0 con 0x7f7a44101760 2026-03-09T14:58:19.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.777+0000 7f7a42ffd700 1 --2- 192.168.123.105:0/89268456 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7a3006c630 0x7f7a3006eae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:19.779 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.778+0000 7f7a42ffd700 1 --2- 192.168.123.105:0/89268456 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7a3006c630 0x7f7a3006eae0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f7a441015c0 tx=0x7f7a2c009450 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:19.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.780+0000 7f7a3bfff700 1 -- 192.168.123.105:0/89268456 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f7a34092050 con 0x7f7a44101760 2026-03-09T14:58:19.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.894+0000 7f7a4932c700 1 -- 192.168.123.105:0/89268456 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch ls", "target": ["mon-mgr", ""]}) v1 -- 0x7f7a44061190 con 0x7f7a3006c630 2026-03-09T14:58:19.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.897+0000 7f7a3bfff700 1 -- 192.168.123.105:0/89268456 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+1150 (secure 0 0 0) 0x7f7a44061190 con 0x7f7a3006c630 2026-03-09T14:58:19.898 INFO:teuthology.orchestra.run.vm05.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT 2026-03-09T14:58:19.898 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager ?:9093,9094 1/1 47s ago 2m count:1 2026-03-09T14:58:19.898 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter 2/2 47s ago 2m * 2026-03-09T14:58:19.898 INFO:teuthology.orchestra.run.vm05.stdout:crash 2/2 47s ago 2m * 2026-03-09T14:58:19.898 INFO:teuthology.orchestra.run.vm05.stdout:grafana ?:3000 1/1 47s ago 2m count:1 2026-03-09T14:58:19.898 INFO:teuthology.orchestra.run.vm05.stdout:mgr 2/2 47s ago 2m count:2 
2026-03-09T14:58:19.898 INFO:teuthology.orchestra.run.vm05.stdout:mon 2/2 47s ago 2m vm05:192.168.123.105=vm05;vm09:192.168.123.109=vm09;count:2 2026-03-09T14:58:19.898 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter ?:9100 2/2 47s ago 2m * 2026-03-09T14:58:19.898 INFO:teuthology.orchestra.run.vm05.stdout:osd 6 47s ago - 2026-03-09T14:58:19.898 INFO:teuthology.orchestra.run.vm05.stdout:prometheus ?:9095 1/1 47s ago 2m count:1 2026-03-09T14:58:19.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.899+0000 7f7a4932c700 1 -- 192.168.123.105:0/89268456 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7a3006c630 msgr2=0x7f7a3006eae0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:19.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.899+0000 7f7a4932c700 1 --2- 192.168.123.105:0/89268456 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7a3006c630 0x7f7a3006eae0 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f7a441015c0 tx=0x7f7a2c009450 comp rx=0 tx=0).stop 2026-03-09T14:58:19.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.900+0000 7f7a4932c700 1 -- 192.168.123.105:0/89268456 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7a44101760 msgr2=0x7f7a44198560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:19.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.900+0000 7f7a4932c700 1 --2- 192.168.123.105:0/89268456 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7a44101760 0x7f7a44198560 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f7a34009fd0 tx=0x7f7a3400bbf0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.900+0000 7f7a4932c700 1 -- 192.168.123.105:0/89268456 shutdown_connections 2026-03-09T14:58:19.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.900+0000 7f7a4932c700 1 --2- 
192.168.123.105:0/89268456 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f7a3006c630 0x7f7a3006eae0 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.900+0000 7f7a4932c700 1 --2- 192.168.123.105:0/89268456 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7a44100560 0x7f7a44198020 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.900+0000 7f7a4932c700 1 --2- 192.168.123.105:0/89268456 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7a44101760 0x7f7a44198560 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:19.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.900+0000 7f7a4932c700 1 -- 192.168.123.105:0/89268456 >> 192.168.123.105:0/89268456 conn(0x7f7a440fbb10 msgr2=0x7f7a44102980 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:19.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.900+0000 7f7a4932c700 1 -- 192.168.123.105:0/89268456 shutdown_connections 2026-03-09T14:58:19.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:19.900+0000 7f7a4932c700 1 -- 192.168.123.105:0/89268456 wait complete. 
2026-03-09T14:58:19.962 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph orch host ls' 2026-03-09T14:58:20.125 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:20.404 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:20 vm05 ceph-mon[50611]: from='client.14480 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:20.404 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:20 vm05 ceph-mon[50611]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:20.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.401+0000 7f31237fd700 1 -- 192.168.123.105:0/4293018612 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f311c107d50 msgr2=0x7f311c1081c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:20.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.401+0000 7f31237fd700 1 --2- 192.168.123.105:0/4293018612 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f311c107d50 0x7f311c1081c0 secure :-1 s=READY pgs=262 cs=0 l=1 rev1=1 crypto rx=0x7f3110009b00 tx=0x7f3110009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:20.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.403+0000 7f31237fd700 1 -- 192.168.123.105:0/4293018612 shutdown_connections 2026-03-09T14:58:20.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.403+0000 7f31237fd700 1 --2- 192.168.123.105:0/4293018612 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f311c107d50 0x7f311c1081c0 unknown :-1 s=CLOSED pgs=262 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:20.404 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.403+0000 7f31237fd700 1 --2- 192.168.123.105:0/4293018612 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f311c071db0 0x7f311c0721c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:20.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.403+0000 7f31237fd700 1 -- 192.168.123.105:0/4293018612 >> 192.168.123.105:0/4293018612 conn(0x7f311c06d3e0 msgr2=0x7f311c06f830 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:20.405 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.403+0000 7f31237fd700 1 -- 192.168.123.105:0/4293018612 shutdown_connections 2026-03-09T14:58:20.405 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.403+0000 7f31237fd700 1 -- 192.168.123.105:0/4293018612 wait complete. 2026-03-09T14:58:20.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.404+0000 7f31237fd700 1 Processor -- start 2026-03-09T14:58:20.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.404+0000 7f31237fd700 1 -- start start 2026-03-09T14:58:20.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.404+0000 7f31237fd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f311c071db0 0x7f311c1169f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:20.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.404+0000 7f31237fd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f311c107d50 0x7f311c116f30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:20.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.404+0000 7f31237fd700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f311c1174c0 con 0x7f311c071db0 2026-03-09T14:58:20.406 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.404+0000 7f31237fd700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f311c117630 con 0x7f311c107d50 2026-03-09T14:58:20.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.404+0000 7f3121599700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f311c071db0 0x7f311c1169f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:20.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.404+0000 7f3120d98700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f311c107d50 0x7f311c116f30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:20.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.405+0000 7f3121599700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f311c071db0 0x7f311c1169f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:41674/0 (socket says 192.168.123.105:41674) 2026-03-09T14:58:20.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.405+0000 7f3121599700 1 -- 192.168.123.105:0/949768607 learned_addr learned my addr 192.168.123.105:0/949768607 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:20.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.405+0000 7f3121599700 1 -- 192.168.123.105:0/949768607 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f311c107d50 msgr2=0x7f311c116f30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:20.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.405+0000 7f3121599700 1 --2- 192.168.123.105:0/949768607 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f311c107d50 0x7f311c116f30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:20.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.405+0000 7f3121599700 1 -- 192.168.123.105:0/949768607 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f31100097e0 con 0x7f311c071db0 2026-03-09T14:58:20.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.405+0000 7f3121599700 1 --2- 192.168.123.105:0/949768607 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f311c071db0 0x7f311c1169f0 secure :-1 s=READY pgs=263 cs=0 l=1 rev1=1 crypto rx=0x7f311800cc60 tx=0x7f311800cf70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:20.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.406+0000 7f310e7fc700 1 -- 192.168.123.105:0/949768607 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f31180049e0 con 0x7f311c071db0 2026-03-09T14:58:20.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.406+0000 7f310e7fc700 1 -- 192.168.123.105:0/949768607 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3118007cf0 con 0x7f311c071db0 2026-03-09T14:58:20.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.406+0000 7f310e7fc700 1 -- 192.168.123.105:0/949768607 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f311800f450 con 0x7f311c071db0 2026-03-09T14:58:20.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.406+0000 7f31237fd700 1 -- 192.168.123.105:0/949768607 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f311c1b2a60 con 0x7f311c071db0 2026-03-09T14:58:20.407 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.406+0000 7f31237fd700 1 -- 192.168.123.105:0/949768607 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f311c1b2fb0 con 0x7f311c071db0 2026-03-09T14:58:20.409 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.407+0000 7f31237fd700 1 -- 192.168.123.105:0/949768607 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f311c066e40 con 0x7f311c071db0 2026-03-09T14:58:20.413 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.412+0000 7f310e7fc700 1 -- 192.168.123.105:0/949768607 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f3118004b40 con 0x7f311c071db0 2026-03-09T14:58:20.413 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.412+0000 7f310e7fc700 1 --2- 192.168.123.105:0/949768607 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f310806c750 0x7f310806ec00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:20.413 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.412+0000 7f310e7fc700 1 -- 192.168.123.105:0/949768607 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f311808b8e0 con 0x7f311c071db0 2026-03-09T14:58:20.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.412+0000 7f310e7fc700 1 -- 192.168.123.105:0/949768607 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f31180b77c0 con 0x7f311c071db0 2026-03-09T14:58:20.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.413+0000 7f3120d98700 1 --2- 192.168.123.105:0/949768607 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f310806c750 0x7f310806ec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 
l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:20.414 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.413+0000 7f3120d98700 1 --2- 192.168.123.105:0/949768607 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f310806c750 0x7f310806ec00 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f31100052d0 tx=0x7f311000b540 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:20.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.518+0000 7f31237fd700 1 -- 192.168.123.105:0/949768607 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch host ls", "target": ["mon-mgr", ""]}) v1 -- 0x7f311c1b3290 con 0x7f310806c750 2026-03-09T14:58:20.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.519+0000 7f310e7fc700 1 -- 192.168.123.105:0/949768607 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+139 (secure 0 0 0) 0x7f311c1b3290 con 0x7f310806c750 2026-03-09T14:58:20.520 INFO:teuthology.orchestra.run.vm05.stdout:HOST ADDR LABELS STATUS 2026-03-09T14:58:20.520 INFO:teuthology.orchestra.run.vm05.stdout:vm05 192.168.123.105 2026-03-09T14:58:20.520 INFO:teuthology.orchestra.run.vm05.stdout:vm09 192.168.123.109 2026-03-09T14:58:20.520 INFO:teuthology.orchestra.run.vm05.stdout:2 hosts in cluster 2026-03-09T14:58:20.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.522+0000 7f31237fd700 1 -- 192.168.123.105:0/949768607 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f310806c750 msgr2=0x7f310806ec00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:20.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.522+0000 7f31237fd700 1 --2- 192.168.123.105:0/949768607 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f310806c750 0x7f310806ec00 secure :-1 
s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f31100052d0 tx=0x7f311000b540 comp rx=0 tx=0).stop 2026-03-09T14:58:20.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.522+0000 7f31237fd700 1 -- 192.168.123.105:0/949768607 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f311c071db0 msgr2=0x7f311c1169f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:20.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.522+0000 7f31237fd700 1 --2- 192.168.123.105:0/949768607 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f311c071db0 0x7f311c1169f0 secure :-1 s=READY pgs=263 cs=0 l=1 rev1=1 crypto rx=0x7f311800cc60 tx=0x7f311800cf70 comp rx=0 tx=0).stop 2026-03-09T14:58:20.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.522+0000 7f31237fd700 1 -- 192.168.123.105:0/949768607 shutdown_connections 2026-03-09T14:58:20.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.522+0000 7f31237fd700 1 --2- 192.168.123.105:0/949768607 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f310806c750 0x7f310806ec00 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:20.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.522+0000 7f31237fd700 1 --2- 192.168.123.105:0/949768607 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f311c071db0 0x7f311c1169f0 unknown :-1 s=CLOSED pgs=263 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:20.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.522+0000 7f31237fd700 1 --2- 192.168.123.105:0/949768607 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f311c107d50 0x7f311c116f30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:20.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.522+0000 7f31237fd700 1 -- 192.168.123.105:0/949768607 >> 
192.168.123.105:0/949768607 conn(0x7f311c06d3e0 msgr2=0x7f311c10af80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:20.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.522+0000 7f31237fd700 1 -- 192.168.123.105:0/949768607 shutdown_connections 2026-03-09T14:58:20.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.522+0000 7f31237fd700 1 -- 192.168.123.105:0/949768607 wait complete. 2026-03-09T14:58:20.568 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph orch device ls' 2026-03-09T14:58:20.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:20 vm09 ceph-mon[59673]: from='client.14480 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:20.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:20 vm09 ceph-mon[59673]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:20.728 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:20.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.989+0000 7f8b98b34700 1 -- 192.168.123.105:0/464214624 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b94105a40 msgr2=0x7f8b94107e20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:20.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.989+0000 7f8b98b34700 1 --2- 192.168.123.105:0/464214624 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b94105a40 0x7f8b94107e20 secure :-1 s=READY pgs=264 cs=0 l=1 rev1=1 crypto rx=0x7f8b84009b00 tx=0x7f8b84009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:20.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.989+0000 
7f8b98b34700 1 -- 192.168.123.105:0/464214624 shutdown_connections 2026-03-09T14:58:20.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.989+0000 7f8b98b34700 1 --2- 192.168.123.105:0/464214624 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b94105a40 0x7f8b94107e20 unknown :-1 s=CLOSED pgs=264 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:20.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.989+0000 7f8b98b34700 1 --2- 192.168.123.105:0/464214624 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b940691c0 0x7f8b94105500 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:20.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.989+0000 7f8b98b34700 1 -- 192.168.123.105:0/464214624 >> 192.168.123.105:0/464214624 conn(0x7f8b940faa70 msgr2=0x7f8b940fcee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:20.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.989+0000 7f8b98b34700 1 -- 192.168.123.105:0/464214624 shutdown_connections 2026-03-09T14:58:20.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.990+0000 7f8b98b34700 1 -- 192.168.123.105:0/464214624 wait complete. 
2026-03-09T14:58:20.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.990+0000 7f8b98b34700 1 Processor -- start 2026-03-09T14:58:20.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.990+0000 7f8b98b34700 1 -- start start 2026-03-09T14:58:20.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.990+0000 7f8b98b34700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b94105a40 0x7f8b94197fb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:20.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.990+0000 7f8b98b34700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b941984f0 0x7f8b9419d560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:20.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.990+0000 7f8b98b34700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8b94198960 con 0x7f8b941984f0 2026-03-09T14:58:20.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.990+0000 7f8b98b34700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8b94198ad0 con 0x7f8b94105a40 2026-03-09T14:58:20.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.991+0000 7f8b91d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b941984f0 0x7f8b9419d560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:20.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.991+0000 7f8b91d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b941984f0 0x7f8b9419d560 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:41696/0 (socket says 192.168.123.105:41696) 2026-03-09T14:58:20.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.991+0000 7f8b91d9b700 1 -- 192.168.123.105:0/51444174 learned_addr learned my addr 192.168.123.105:0/51444174 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:20.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.991+0000 7f8b9259c700 1 --2- 192.168.123.105:0/51444174 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b94105a40 0x7f8b94197fb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:20.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.991+0000 7f8b91d9b700 1 -- 192.168.123.105:0/51444174 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b94105a40 msgr2=0x7f8b94197fb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:20.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.991+0000 7f8b91d9b700 1 --2- 192.168.123.105:0/51444174 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b94105a40 0x7f8b94197fb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:20.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.991+0000 7f8b91d9b700 1 -- 192.168.123.105:0/51444174 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8b840097e0 con 0x7f8b941984f0 2026-03-09T14:58:20.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.991+0000 7f8b91d9b700 1 --2- 192.168.123.105:0/51444174 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b941984f0 0x7f8b9419d560 secure :-1 s=READY pgs=265 cs=0 l=1 rev1=1 crypto rx=0x7f8b84004a60 tx=0x7f8b84004b40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T14:58:20.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.991+0000 7f8b8b7fe700 1 -- 192.168.123.105:0/51444174 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8b8401d070 con 0x7f8b941984f0 2026-03-09T14:58:20.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.991+0000 7f8b98b34700 1 -- 192.168.123.105:0/51444174 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8b9419daa0 con 0x7f8b941984f0 2026-03-09T14:58:20.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.992+0000 7f8b98b34700 1 -- 192.168.123.105:0/51444174 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8b9419dec0 con 0x7f8b941984f0 2026-03-09T14:58:20.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.991+0000 7f8b8b7fe700 1 -- 192.168.123.105:0/51444174 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8b8400bcd0 con 0x7f8b941984f0 2026-03-09T14:58:20.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.992+0000 7f8b8b7fe700 1 -- 192.168.123.105:0/51444174 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8b8400f950 con 0x7f8b941984f0 2026-03-09T14:58:20.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.994+0000 7f8b8b7fe700 1 -- 192.168.123.105:0/51444174 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f8b8400fab0 con 0x7f8b941984f0 2026-03-09T14:58:20.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.994+0000 7f8b98b34700 1 -- 192.168.123.105:0/51444174 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8b941921c0 con 0x7f8b941984f0 2026-03-09T14:58:20.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.994+0000 7f8b8b7fe700 1 
--2- 192.168.123.105:0/51444174 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f8b8006c680 0x7f8b8006eb30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:20.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.995+0000 7f8b8b7fe700 1 -- 192.168.123.105:0/51444174 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f8b8408ccd0 con 0x7f8b941984f0 2026-03-09T14:58:20.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.998+0000 7f8b9259c700 1 --2- 192.168.123.105:0/51444174 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f8b8006c680 0x7f8b8006eb30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:20.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.998+0000 7f8b8b7fe700 1 -- 192.168.123.105:0/51444174 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f8b8405b2d0 con 0x7f8b941984f0 2026-03-09T14:58:20.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:20.998+0000 7f8b9259c700 1 --2- 192.168.123.105:0/51444174 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f8b8006c680 0x7f8b8006eb30 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f8b7c005ea0 tx=0x7f8b7c005e30 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:21.118 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.117+0000 7f8b98b34700 1 -- 192.168.123.105:0/51444174 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch device ls", "target": ["mon-mgr", ""]}) v1 -- 0x7f8b94061190 con 0x7f8b8006c680 2026-03-09T14:58:21.120 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.119+0000 
7f8b8b7fe700 1 -- 192.168.123.105:0/51444174 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+1188 (secure 0 0 0) 0x7f8b94061190 con 0x7f8b8006c680 2026-03-09T14:58:21.120 INFO:teuthology.orchestra.run.vm05.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS 2026-03-09T14:58:21.120 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/vdb hdd DWNBRSTVMM05001 20.0G Yes 47s ago 2026-03-09T14:58:21.120 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/vdc hdd DWNBRSTVMM05002 20.0G No 47s ago Insufficient space (<10 extents) on vgs, LVM detected, locked 2026-03-09T14:58:21.120 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/vdd hdd DWNBRSTVMM05003 20.0G No 47s ago Insufficient space (<10 extents) on vgs, LVM detected, locked 2026-03-09T14:58:21.120 INFO:teuthology.orchestra.run.vm05.stdout:vm05 /dev/vde hdd DWNBRSTVMM05004 20.0G No 47s ago Insufficient space (<10 extents) on vgs, LVM detected, locked 2026-03-09T14:58:21.120 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/vdb hdd DWNBRSTVMM09001 20.0G Yes 18s ago 2026-03-09T14:58:21.120 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/vdc hdd DWNBRSTVMM09002 20.0G No 18s ago Insufficient space (<10 extents) on vgs, LVM detected, locked 2026-03-09T14:58:21.120 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/vdd hdd DWNBRSTVMM09003 20.0G No 18s ago Insufficient space (<10 extents) on vgs, LVM detected, locked 2026-03-09T14:58:21.120 INFO:teuthology.orchestra.run.vm05.stdout:vm09 /dev/vde hdd DWNBRSTVMM09004 20.0G No 18s ago Insufficient space (<10 extents) on vgs, LVM detected, locked 2026-03-09T14:58:21.122 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.121+0000 7f8b98b34700 1 -- 192.168.123.105:0/51444174 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f8b8006c680 msgr2=0x7f8b8006eb30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:21.122 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.122+0000 7f8b98b34700 1 --2- 192.168.123.105:0/51444174 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f8b8006c680 0x7f8b8006eb30 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f8b7c005ea0 tx=0x7f8b7c005e30 comp rx=0 tx=0).stop 2026-03-09T14:58:21.123 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.122+0000 7f8b98b34700 1 -- 192.168.123.105:0/51444174 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b941984f0 msgr2=0x7f8b9419d560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:21.123 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.122+0000 7f8b98b34700 1 --2- 192.168.123.105:0/51444174 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b941984f0 0x7f8b9419d560 secure :-1 s=READY pgs=265 cs=0 l=1 rev1=1 crypto rx=0x7f8b84004a60 tx=0x7f8b84004b40 comp rx=0 tx=0).stop 2026-03-09T14:58:21.123 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.122+0000 7f8b98b34700 1 -- 192.168.123.105:0/51444174 shutdown_connections 2026-03-09T14:58:21.123 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.122+0000 7f8b98b34700 1 --2- 192.168.123.105:0/51444174 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f8b8006c680 0x7f8b8006eb30 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:21.123 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.122+0000 7f8b98b34700 1 --2- 192.168.123.105:0/51444174 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b94105a40 0x7f8b94197fb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:21.123 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.122+0000 7f8b98b34700 1 --2- 192.168.123.105:0/51444174 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b941984f0 0x7f8b9419d560 unknown :-1 s=CLOSED 
pgs=265 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:21.123 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.122+0000 7f8b98b34700 1 -- 192.168.123.105:0/51444174 >> 192.168.123.105:0/51444174 conn(0x7f8b940faa70 msgr2=0x7f8b940fcee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:21.123 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.122+0000 7f8b98b34700 1 -- 192.168.123.105:0/51444174 shutdown_connections 2026-03-09T14:58:21.123 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.123+0000 7f8b98b34700 1 -- 192.168.123.105:0/51444174 wait complete. 2026-03-09T14:58:21.184 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T14:58:21.186 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-09T14:58:21.187 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph fs volume create cephfs --placement=4' 2026-03-09T14:58:21.348 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:21.378 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:21 vm05 ceph-mon[50611]: from='client.14484 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:21.378 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:21 vm05 ceph-mon[50611]: from='client.24281 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:21.615 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.614+0000 7f108830e700 1 -- 192.168.123.105:0/75556931 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1080102760 msgr2=0x7f1080102b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T14:58:21.615 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.614+0000 7f108830e700 1 --2- 192.168.123.105:0/75556931 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1080102760 0x7f1080102b70 secure :-1 s=READY pgs=266 cs=0 l=1 rev1=1 crypto rx=0x7f1070009b50 tx=0x7f1070009e60 comp rx=0 tx=0).stop 2026-03-09T14:58:21.615 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.614+0000 7f108830e700 1 -- 192.168.123.105:0/75556931 shutdown_connections 2026-03-09T14:58:21.615 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.614+0000 7f108830e700 1 --2- 192.168.123.105:0/75556931 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1080103a00 0x7f1080103e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:21.615 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.614+0000 7f108830e700 1 --2- 192.168.123.105:0/75556931 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1080102760 0x7f1080102b70 unknown :-1 s=CLOSED pgs=266 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:21.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.614+0000 7f108830e700 1 -- 192.168.123.105:0/75556931 >> 192.168.123.105:0/75556931 conn(0x7f10800fddb0 msgr2=0x7f10801001e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:21 vm09 ceph-mon[59673]: from='client.14484 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:21 vm09 ceph-mon[59673]: from='client.24281 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:21.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.615+0000 7f108830e700 1 -- 192.168.123.105:0/75556931 shutdown_connections 2026-03-09T14:58:21.616 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.615+0000 7f108830e700 1 -- 192.168.123.105:0/75556931 wait complete. 2026-03-09T14:58:21.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.615+0000 7f108830e700 1 Processor -- start 2026-03-09T14:58:21.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.615+0000 7f108830e700 1 -- start start 2026-03-09T14:58:21.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.615+0000 7f108830e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1080103a00 0x7f1080198270 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:21.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.615+0000 7f108830e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f10801987b0 0x7f108019d820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:21.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.615+0000 7f108830e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1080198cb0 con 0x7f10801987b0 2026-03-09T14:58:21.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.615+0000 7f108830e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1080198e20 con 0x7f1080103a00 2026-03-09T14:58:21.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.615+0000 7f10858a9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f10801987b0 0x7f108019d820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:21.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.616+0000 7f10858a9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f10801987b0 0x7f108019d820 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:41726/0 (socket says 192.168.123.105:41726) 2026-03-09T14:58:21.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.616+0000 7f10858a9700 1 -- 192.168.123.105:0/1366378726 learned_addr learned my addr 192.168.123.105:0/1366378726 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:21.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.616+0000 7f10860aa700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1080103a00 0x7f1080198270 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:21.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.616+0000 7f10858a9700 1 -- 192.168.123.105:0/1366378726 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1080103a00 msgr2=0x7f1080198270 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:21.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.616+0000 7f10858a9700 1 --2- 192.168.123.105:0/1366378726 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1080103a00 0x7f1080198270 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:21.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.616+0000 7f10858a9700 1 -- 192.168.123.105:0/1366378726 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f10700097e0 con 0x7f10801987b0 2026-03-09T14:58:21.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.616+0000 7f10858a9700 1 --2- 192.168.123.105:0/1366378726 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f10801987b0 0x7f108019d820 secure :-1 s=READY pgs=267 cs=0 l=1 rev1=1 crypto rx=0x7f107c00d8d0 
tx=0x7f107c00dc90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:21.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.616+0000 7f10777fe700 1 -- 192.168.123.105:0/1366378726 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f107c009940 con 0x7f10801987b0 2026-03-09T14:58:21.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.616+0000 7f108830e700 1 -- 192.168.123.105:0/1366378726 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f108019ddc0 con 0x7f10801987b0 2026-03-09T14:58:21.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.616+0000 7f10777fe700 1 -- 192.168.123.105:0/1366378726 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f107c010460 con 0x7f10801987b0 2026-03-09T14:58:21.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.616+0000 7f10777fe700 1 -- 192.168.123.105:0/1366378726 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f107c00f5d0 con 0x7f10801987b0 2026-03-09T14:58:21.618 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.617+0000 7f108830e700 1 -- 192.168.123.105:0/1366378726 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f108019e2e0 con 0x7f10801987b0 2026-03-09T14:58:21.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.618+0000 7f108830e700 1 -- 192.168.123.105:0/1366378726 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1080066e40 con 0x7f10801987b0 2026-03-09T14:58:21.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.618+0000 7f10777fe700 1 -- 192.168.123.105:0/1366378726 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f107c0105d0 con 
0x7f10801987b0 2026-03-09T14:58:21.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.619+0000 7f10777fe700 1 --2- 192.168.123.105:0/1366378726 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f106c06c680 0x7f106c06eb30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:21.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.619+0000 7f10777fe700 1 -- 192.168.123.105:0/1366378726 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(33..33 src has 1..33) v4 ==== 4446+0+0 (secure 0 0 0) 0x7f107c08b2b0 con 0x7f10801987b0 2026-03-09T14:58:21.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.621+0000 7f10860aa700 1 --2- 192.168.123.105:0/1366378726 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f106c06c680 0x7f106c06eb30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:21.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.621+0000 7f10777fe700 1 -- 192.168.123.105:0/1366378726 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f107c059800 con 0x7f10801987b0 2026-03-09T14:58:21.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.622+0000 7f10860aa700 1 --2- 192.168.123.105:0/1366378726 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f106c06c680 0x7f106c06eb30 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f1070009b20 tx=0x7f1070005e00 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:21.739 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:21.738+0000 7f108830e700 1 -- 192.168.123.105:0/1366378726 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "fs volume create", "name": "cephfs", "placement": 
"4", "target": ["mon-mgr", ""]}) v1 -- 0x7f1080108350 con 0x7f106c06c680 2026-03-09T14:58:22.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:22 vm09 ceph-mon[59673]: from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:22.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:22 vm09 ceph-mon[59673]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:22.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:22 vm09 ceph-mon[59673]: from='client.14494 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:22.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:22 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch 2026-03-09T14:58:22.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:22 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:58:22.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:22 vm05 ceph-mon[50611]: from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:22.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:22 vm05 ceph-mon[50611]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:22.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:22 vm05 ceph-mon[50611]: from='client.14494 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:22.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:22 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pool 
create", "pool": "cephfs.cephfs.meta"}]: dispatch 2026-03-09T14:58:22.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:22 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:58:23.206 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.204+0000 7f10777fe700 1 -- 192.168.123.105:0/1366378726 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+0 (secure 0 0 0) 0x7f1080108350 con 0x7f106c06c680 2026-03-09T14:58:23.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.207+0000 7f108830e700 1 -- 192.168.123.105:0/1366378726 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f106c06c680 msgr2=0x7f106c06eb30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:23.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.207+0000 7f108830e700 1 --2- 192.168.123.105:0/1366378726 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f106c06c680 0x7f106c06eb30 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f1070009b20 tx=0x7f1070005e00 comp rx=0 tx=0).stop 2026-03-09T14:58:23.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.207+0000 7f108830e700 1 -- 192.168.123.105:0/1366378726 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f10801987b0 msgr2=0x7f108019d820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:23.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.207+0000 7f108830e700 1 --2- 192.168.123.105:0/1366378726 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f10801987b0 0x7f108019d820 secure :-1 s=READY pgs=267 cs=0 l=1 rev1=1 crypto rx=0x7f107c00d8d0 tx=0x7f107c00dc90 comp rx=0 tx=0).stop 2026-03-09T14:58:23.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.207+0000 7f108830e700 1 -- 192.168.123.105:0/1366378726 
shutdown_connections 2026-03-09T14:58:23.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.207+0000 7f108830e700 1 --2- 192.168.123.105:0/1366378726 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f106c06c680 0x7f106c06eb30 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:23.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.207+0000 7f108830e700 1 --2- 192.168.123.105:0/1366378726 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1080103a00 0x7f1080198270 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:23.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.207+0000 7f108830e700 1 --2- 192.168.123.105:0/1366378726 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f10801987b0 0x7f108019d820 unknown :-1 s=CLOSED pgs=267 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:23.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.207+0000 7f108830e700 1 -- 192.168.123.105:0/1366378726 >> 192.168.123.105:0/1366378726 conn(0x7f10800fddb0 msgr2=0x7f1080106c30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:23.209 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.208+0000 7f108830e700 1 -- 192.168.123.105:0/1366378726 shutdown_connections 2026-03-09T14:58:23.209 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.208+0000 7f108830e700 1 -- 192.168.123.105:0/1366378726 wait complete. 
2026-03-09T14:58:23.269 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph fs dump' 2026-03-09T14:58:23.469 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05[50607]: 2026-03-09T14:58:23.181+0000 7f3e551a4700 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='client.14498 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "4", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]': finished 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: osdmap e34: 6 total, 6 up, 6 in 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: pgmap v75: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pool create", "pool": 
"cephfs.cephfs.data"}]': finished 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: osdmap e35: 6 total, 6 up, 6 in 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX) 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: osdmap e36: 6 total, 6 up, 6 in 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: fsmap cephfs:0 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: Saving service mds.cephfs spec with placement count:4 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 
vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.nrocqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.nrocqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-09T14:58:23.502 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:23 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='client.14498 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "4", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]': finished 2026-03-09T14:58:23.616 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: osdmap e34: 6 total, 6 up, 6 in 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: pgmap v75: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]': finished 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: osdmap e35: 6 total, 6 up, 6 in 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX) 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: osdmap e36: 6 total, 6 up, 6 in 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": 
"cephfs.cephfs.data"}]': finished 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: fsmap cephfs:0 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: Saving service mds.cephfs spec with placement count:4 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.nrocqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.nrocqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 
2026-03-09T14:58:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:23 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:23.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.790+0000 7f1d09cc9700 1 -- 192.168.123.105:0/3496288353 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d04071a60 msgr2=0x7f1d04071e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:23.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.790+0000 7f1d09cc9700 1 --2- 192.168.123.105:0/3496288353 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d04071a60 0x7f1d04071e70 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f1cf8009b00 tx=0x7f1cf8009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:23.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.791+0000 7f1d09cc9700 1 -- 192.168.123.105:0/3496288353 shutdown_connections 2026-03-09T14:58:23.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.791+0000 7f1d09cc9700 1 --2- 192.168.123.105:0/3496288353 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d04072440 0x7f1d0410be90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:23.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.791+0000 7f1d09cc9700 1 --2- 192.168.123.105:0/3496288353 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d04071a60 0x7f1d04071e70 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:23.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.791+0000 7f1d09cc9700 1 -- 192.168.123.105:0/3496288353 >> 192.168.123.105:0/3496288353 conn(0x7f1d0406d1a0 msgr2=0x7f1d0406f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:23.792 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.791+0000 7f1d09cc9700 1 -- 192.168.123.105:0/3496288353 shutdown_connections 2026-03-09T14:58:23.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.791+0000 7f1d09cc9700 1 -- 192.168.123.105:0/3496288353 wait complete. 2026-03-09T14:58:23.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.792+0000 7f1d09cc9700 1 Processor -- start 2026-03-09T14:58:23.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.792+0000 7f1d09cc9700 1 -- start start 2026-03-09T14:58:23.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.792+0000 7f1d09cc9700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d04071a60 0x7f1d041af9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:23.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.792+0000 7f1d09cc9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d04072440 0x7f1d041b1f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:23.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.792+0000 7f1d09cc9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1d041b24e0 con 0x7f1d04072440 2026-03-09T14:58:23.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.792+0000 7f1d09cc9700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1d041b2650 con 0x7f1d04071a60 2026-03-09T14:58:23.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.792+0000 7f1d037fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d04071a60 0x7f1d041af9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:23.793 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.792+0000 7f1d02ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d04072440 0x7f1d041b1f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:23.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.792+0000 7f1d02ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d04072440 0x7f1d041b1f10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:41728/0 (socket says 192.168.123.105:41728) 2026-03-09T14:58:23.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.792+0000 7f1d02ffd700 1 -- 192.168.123.105:0/833316436 learned_addr learned my addr 192.168.123.105:0/833316436 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:23.794 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.793+0000 7f1d02ffd700 1 -- 192.168.123.105:0/833316436 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d04071a60 msgr2=0x7f1d041af9c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:23.794 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.793+0000 7f1d02ffd700 1 --2- 192.168.123.105:0/833316436 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d04071a60 0x7f1d041af9c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:23.794 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.793+0000 7f1d02ffd700 1 -- 192.168.123.105:0/833316436 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1cf80097e0 con 0x7f1d04072440 2026-03-09T14:58:23.794 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.793+0000 7f1d02ffd700 1 --2- 
192.168.123.105:0/833316436 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d04072440 0x7f1d041b1f10 secure :-1 s=READY pgs=268 cs=0 l=1 rev1=1 crypto rx=0x7f1cf000b700 tx=0x7f1cf000bac0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:23.794 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.793+0000 7f1d00ff9700 1 -- 192.168.123.105:0/833316436 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1cf0010820 con 0x7f1d04072440 2026-03-09T14:58:23.794 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.793+0000 7f1d00ff9700 1 -- 192.168.123.105:0/833316436 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1cf0010e60 con 0x7f1d04072440 2026-03-09T14:58:23.794 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.793+0000 7f1d00ff9700 1 -- 192.168.123.105:0/833316436 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1cf0017360 con 0x7f1d04072440 2026-03-09T14:58:23.794 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.793+0000 7f1d09cc9700 1 -- 192.168.123.105:0/833316436 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1d041b2930 con 0x7f1d04072440 2026-03-09T14:58:23.794 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.793+0000 7f1d09cc9700 1 -- 192.168.123.105:0/833316436 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1d041b2e80 con 0x7f1d04072440 2026-03-09T14:58:23.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.796+0000 7f1d00ff9700 1 -- 192.168.123.105:0/833316436 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f1cf000f7c0 con 0x7f1d04072440 2026-03-09T14:58:23.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.796+0000 7f1d00ff9700 
1 --2- 192.168.123.105:0/833316436 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1cf406c7a0 0x7f1cf406ec50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:23.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.796+0000 7f1d00ff9700 1 -- 192.168.123.105:0/833316436 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(36..36 src has 1..36) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f1cf008b1a0 con 0x7f1d04072440 2026-03-09T14:58:23.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.796+0000 7f1d037fe700 1 --2- 192.168.123.105:0/833316436 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1cf406c7a0 0x7f1cf406ec50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:23.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.797+0000 7f1d037fe700 1 --2- 192.168.123.105:0/833316436 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1cf406c7a0 0x7f1cf406ec50 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f1cf8000c00 tx=0x7f1cf8005c80 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:23.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.797+0000 7f1d09cc9700 1 -- 192.168.123.105:0/833316436 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1d04111700 con 0x7f1d04072440 2026-03-09T14:58:23.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.808+0000 7f1d00ff9700 1 -- 192.168.123.105:0/833316436 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f1cf0055920 con 0x7f1d04072440 2026-03-09T14:58:23.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.938+0000 7f1d09cc9700 1 -- 
192.168.123.105:0/833316436 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f1d0404ea50 con 0x7f1d04072440 2026-03-09T14:58:23.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.939+0000 7f1d00ff9700 1 -- 192.168.123.105:0/833316436 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 2 v2) v1 ==== 75+0+1093 (secure 0 0 0) 0x7f1cf0014460 con 0x7f1d04072440 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:e2 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:legacy client fscid: 1 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:Filesystem 'cephfs' (1) 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:fs_name cephfs 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:epoch 2 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:flags 12 joinable allow_snaps allow_multimds_snaps 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:created 2026-03-09T14:58:23.182447+0000 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:modified 2026-03-09T14:58:23.182492+0000 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:tableserver 0 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:root 0 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:session_timeout 60 2026-03-09T14:58:23.941 
INFO:teuthology.orchestra.run.vm05.stdout:session_autoclose 300 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:max_file_size 1099511627776 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:required_client_features {} 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:last_failure 0 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:last_failure_osd_epoch 0 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:max_mds 1 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:in 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:up {} 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:failed 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:damaged 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:stopped 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:data_pools [3] 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:metadata_pool 2 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:inline_data disabled 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:balancer 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:bal_rank_mask -1 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout:standby_count_wanted 0 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:23.941 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:23.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.942+0000 7f1cee7fc700 1 -- 192.168.123.105:0/833316436 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1cf406c7a0 msgr2=0x7f1cf406ec50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:23.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.942+0000 7f1cee7fc700 1 --2- 192.168.123.105:0/833316436 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1cf406c7a0 0x7f1cf406ec50 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f1cf8000c00 tx=0x7f1cf8005c80 comp rx=0 tx=0).stop 2026-03-09T14:58:23.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.942+0000 7f1cee7fc700 1 -- 192.168.123.105:0/833316436 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d04072440 msgr2=0x7f1d041b1f10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:23.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.942+0000 7f1cee7fc700 1 --2- 192.168.123.105:0/833316436 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d04072440 0x7f1d041b1f10 secure :-1 s=READY pgs=268 cs=0 l=1 rev1=1 crypto rx=0x7f1cf000b700 tx=0x7f1cf000bac0 comp rx=0 tx=0).stop 2026-03-09T14:58:23.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.942+0000 7f1cee7fc700 1 -- 192.168.123.105:0/833316436 shutdown_connections 2026-03-09T14:58:23.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.942+0000 7f1cee7fc700 1 --2- 192.168.123.105:0/833316436 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1cf406c7a0 0x7f1cf406ec50 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:23.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.942+0000 7f1cee7fc700 1 --2- 192.168.123.105:0/833316436 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d04071a60 0x7f1d041af9c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:23.943 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.942+0000 7f1cee7fc700 1 --2- 192.168.123.105:0/833316436 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d04072440 0x7f1d041b1f10 secure :-1 s=CLOSED pgs=268 cs=0 l=1 rev1=1 crypto rx=0x7f1cf000b700 tx=0x7f1cf000bac0 comp rx=0 tx=0).stop 2026-03-09T14:58:23.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.942+0000 7f1cee7fc700 1 -- 192.168.123.105:0/833316436 >> 192.168.123.105:0/833316436 conn(0x7f1d0406d1a0 msgr2=0x7f1d0410b4a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:23.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.942+0000 7f1cee7fc700 1 -- 192.168.123.105:0/833316436 shutdown_connections 2026-03-09T14:58:23.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:23.942+0000 7f1cee7fc700 1 -- 192.168.123.105:0/833316436 wait complete. 2026-03-09T14:58:23.946 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 2 2026-03-09T14:58:24.032 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T14:58:24.035 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-09T14:58:24.035 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph fs set cephfs max_mds 1' 2026-03-09T14:58:24.393 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:24.591 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:24 vm05 ceph-mon[50611]: Deploying daemon mds.cephfs.vm05.nrocqt on vm05 2026-03-09T14:58:24.591 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:24 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/833316436' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T14:58:24.591 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:24 vm05 ceph-mon[50611]: osdmap e37: 6 total, 6 up, 6 in 2026-03-09T14:58:24.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:24 vm09 ceph-mon[59673]: Deploying daemon mds.cephfs.vm05.nrocqt on vm05 2026-03-09T14:58:24.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:24 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/833316436' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T14:58:24.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:24 vm09 ceph-mon[59673]: osdmap e37: 6 total, 6 up, 6 in 2026-03-09T14:58:24.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.688+0000 7f8d388df700 1 -- 192.168.123.105:0/2336378699 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d30102760 msgr2=0x7f8d30102b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:24.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.688+0000 7f8d388df700 1 --2- 192.168.123.105:0/2336378699 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d30102760 0x7f8d30102b70 secure :-1 s=READY pgs=270 cs=0 l=1 rev1=1 crypto rx=0x7f8d1c009b50 tx=0x7f8d1c009e60 comp rx=0 tx=0).stop 2026-03-09T14:58:24.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.688+0000 7f8d388df700 1 -- 192.168.123.105:0/2336378699 shutdown_connections 2026-03-09T14:58:24.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.688+0000 7f8d388df700 1 --2- 192.168.123.105:0/2336378699 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8d30103960 0x7f8d30103db0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:24.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.688+0000 7f8d388df700 1 --2- 192.168.123.105:0/2336378699 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d30102760 0x7f8d30102b70 unknown :-1 s=CLOSED pgs=270 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:24.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.688+0000 7f8d388df700 1 -- 192.168.123.105:0/2336378699 >> 192.168.123.105:0/2336378699 conn(0x7f8d300fdcf0 msgr2=0x7f8d30100140 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:24.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.688+0000 7f8d388df700 1 -- 192.168.123.105:0/2336378699 shutdown_connections 2026-03-09T14:58:24.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.688+0000 7f8d388df700 1 -- 192.168.123.105:0/2336378699 wait complete. 2026-03-09T14:58:24.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.689+0000 7f8d388df700 1 Processor -- start 2026-03-09T14:58:24.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.689+0000 7f8d388df700 1 -- start start 2026-03-09T14:58:24.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.689+0000 7f8d388df700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d30102760 0x7f8d30198070 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:24.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.689+0000 7f8d388df700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8d30103960 0x7f8d301985b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:24.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.689+0000 7f8d388df700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8d30198bd0 con 0x7f8d30102760 2026-03-09T14:58:24.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.689+0000 7f8d388df700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f8d30198d10 con 0x7f8d30103960 2026-03-09T14:58:24.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.690+0000 7f8d35e7a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8d30103960 0x7f8d301985b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:24.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.690+0000 7f8d35e7a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8d30103960 0x7f8d301985b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:50052/0 (socket says 192.168.123.105:50052) 2026-03-09T14:58:24.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.690+0000 7f8d35e7a700 1 -- 192.168.123.105:0/3350597820 learned_addr learned my addr 192.168.123.105:0/3350597820 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:24.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.690+0000 7f8d3667b700 1 --2- 192.168.123.105:0/3350597820 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d30102760 0x7f8d30198070 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:24.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.690+0000 7f8d35e7a700 1 -- 192.168.123.105:0/3350597820 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d30102760 msgr2=0x7f8d30198070 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:24.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.690+0000 7f8d35e7a700 1 --2- 192.168.123.105:0/3350597820 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d30102760 0x7f8d30198070 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:24.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.690+0000 7f8d35e7a700 1 -- 192.168.123.105:0/3350597820 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8d1c0097e0 con 0x7f8d30103960 2026-03-09T14:58:24.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.690+0000 7f8d3667b700 1 --2- 192.168.123.105:0/3350597820 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d30102760 0x7f8d30198070 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T14:58:24.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.690+0000 7f8d35e7a700 1 --2- 192.168.123.105:0/3350597820 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8d30103960 0x7f8d301985b0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f8d2400eb10 tx=0x7f8d2400ee20 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:24.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.691+0000 7f8d2b7fe700 1 -- 192.168.123.105:0/3350597820 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8d2400cc40 con 0x7f8d30103960 2026-03-09T14:58:24.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.691+0000 7f8d388df700 1 -- 192.168.123.105:0/3350597820 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8d3019d7c0 con 0x7f8d30103960 2026-03-09T14:58:24.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.691+0000 7f8d388df700 1 -- 192.168.123.105:0/3350597820 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8d3019dd10 con 0x7f8d30103960 2026-03-09T14:58:24.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.691+0000 7f8d2b7fe700 1 -- 
192.168.123.105:0/3350597820 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f8d2400cda0 con 0x7f8d30103960 2026-03-09T14:58:24.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.691+0000 7f8d2b7fe700 1 -- 192.168.123.105:0/3350597820 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8d24018870 con 0x7f8d30103960 2026-03-09T14:58:24.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.692+0000 7f8d388df700 1 -- 192.168.123.105:0/3350597820 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8d14005320 con 0x7f8d30103960 2026-03-09T14:58:24.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.693+0000 7f8d2b7fe700 1 -- 192.168.123.105:0/3350597820 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f8d240189d0 con 0x7f8d30103960 2026-03-09T14:58:24.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.694+0000 7f8d2b7fe700 1 --2- 192.168.123.105:0/3350597820 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f8d2006c750 0x7f8d2006ec00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:24.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.694+0000 7f8d3667b700 1 --2- 192.168.123.105:0/3350597820 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f8d2006c750 0x7f8d2006ec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:24.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.694+0000 7f8d2b7fe700 1 -- 192.168.123.105:0/3350597820 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f8d24014070 con 0x7f8d30103960 2026-03-09T14:58:24.695 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.694+0000 7f8d3667b700 1 --2- 192.168.123.105:0/3350597820 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f8d2006c750 0x7f8d2006ec00 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f8d1c009b20 tx=0x7f8d1c005a90 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:24.697 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.696+0000 7f8d2b7fe700 1 -- 192.168.123.105:0/3350597820 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f8d2405a7c0 con 0x7f8d30103960 2026-03-09T14:58:24.831 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:24.829+0000 7f8d388df700 1 -- 192.168.123.105:0/3350597820 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"} v 0) v1 -- 0x7f8d14005f70 con 0x7f8d30103960 2026-03-09T14:58:25.341 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.339+0000 7f8d2b7fe700 1 -- 192.168.123.105:0/3350597820 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]=0 v3) v1 ==== 105+0+0 (secure 0 0 0) 0x7f8d2405a350 con 0x7f8d30103960 2026-03-09T14:58:25.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.345+0000 7f8d388df700 1 -- 192.168.123.105:0/3350597820 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f8d2006c750 msgr2=0x7f8d2006ec00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:25.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.345+0000 7f8d388df700 1 --2- 192.168.123.105:0/3350597820 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f8d2006c750 0x7f8d2006ec00 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f8d1c009b20 tx=0x7f8d1c005a90 
comp rx=0 tx=0).stop 2026-03-09T14:58:25.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.345+0000 7f8d388df700 1 -- 192.168.123.105:0/3350597820 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8d30103960 msgr2=0x7f8d301985b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:25.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.345+0000 7f8d388df700 1 --2- 192.168.123.105:0/3350597820 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8d30103960 0x7f8d301985b0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f8d2400eb10 tx=0x7f8d2400ee20 comp rx=0 tx=0).stop 2026-03-09T14:58:25.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.346+0000 7f8d388df700 1 -- 192.168.123.105:0/3350597820 shutdown_connections 2026-03-09T14:58:25.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.346+0000 7f8d388df700 1 --2- 192.168.123.105:0/3350597820 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f8d2006c750 0x7f8d2006ec00 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:25.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.346+0000 7f8d388df700 1 --2- 192.168.123.105:0/3350597820 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8d30102760 0x7f8d30198070 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:25.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.346+0000 7f8d388df700 1 --2- 192.168.123.105:0/3350597820 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8d30103960 0x7f8d301985b0 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:25.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.346+0000 7f8d388df700 1 -- 192.168.123.105:0/3350597820 >> 192.168.123.105:0/3350597820 conn(0x7f8d300fdcf0 msgr2=0x7f8d30106b90 unknown 
:-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:25.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.346+0000 7f8d388df700 1 -- 192.168.123.105:0/3350597820 shutdown_connections 2026-03-09T14:58:25.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.346+0000 7f8d388df700 1 -- 192.168.123.105:0/3350597820 wait complete. 2026-03-09T14:58:25.424 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T14:58:25.427 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-09T14:58:25.427 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph fs set cephfs allow_standby_replay true' 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.ohmitn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.ohmitn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 
2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: Deploying daemon mds.cephfs.vm09.ohmitn on vm09 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: from='client.? ' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]: dispatch 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/3350597820' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]: dispatch 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: pgmap v79: 65 pgs: 3 creating+activating, 43 active+clean, 19 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: mds.? [v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] up:boot 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]': finished 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: daemon mds.cephfs.vm05.nrocqt assigned to filesystem cephfs as rank 0 (now has 1 ranks) 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds) 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: Cluster is now healthy 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: fsmap cephfs:0 1 up:standby 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.nrocqt"}]: dispatch 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:creating} 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: daemon mds.cephfs.vm05.nrocqt is now active in filesystem cephfs as rank 0 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:25 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:25.647 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 
2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.ohmitn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.ohmitn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: Deploying daemon mds.cephfs.vm09.ohmitn on vm09 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: from='client.? ' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]: dispatch 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/3350597820' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]: dispatch 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: pgmap v79: 65 pgs: 3 creating+activating, 43 active+clean, 19 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: mds.? [v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] up:boot 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: from='client.? ' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "max_mds", "val": "1"}]': finished 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: daemon mds.cephfs.vm05.nrocqt assigned to filesystem cephfs as rank 0 (now has 1 ranks) 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds) 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: Cluster is now healthy 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: fsmap cephfs:0 1 up:standby 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.nrocqt"}]: dispatch 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: fsmap cephfs:1 
{0=cephfs.vm05.nrocqt=up:creating} 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: daemon mds.cephfs.vm05.nrocqt is now active in filesystem cephfs as rank 0 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:25.678 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:25 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:25.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.987+0000 7f27d6672700 1 -- 192.168.123.105:0/3562643817 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f27c80a5430 msgr2=0x7f27c80a58a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:25.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.987+0000 7f27d6672700 1 --2- 192.168.123.105:0/3562643817 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f27c80a5430 0x7f27c80a58a0 secure :-1 s=READY pgs=272 cs=0 l=1 rev1=1 crypto rx=0x7f27c0009b50 tx=0x7f27c0009e60 comp rx=0 tx=0).stop 2026-03-09T14:58:25.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.988+0000 7f27d6672700 1 -- 192.168.123.105:0/3562643817 shutdown_connections 2026-03-09T14:58:25.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.988+0000 7f27d6672700 1 --2- 192.168.123.105:0/3562643817 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f27c80a5430 0x7f27c80a58a0 unknown :-1 s=CLOSED pgs=272 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:25.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.988+0000 7f27d6672700 1 --2- 192.168.123.105:0/3562643817 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f27c80a42f0 0x7f27c80a4700 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T14:58:25.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.988+0000 7f27d6672700 1 -- 192.168.123.105:0/3562643817 >> 192.168.123.105:0/3562643817 conn(0x7f27c809f7c0 msgr2=0x7f27c80a1c10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:25.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.988+0000 7f27d6672700 1 -- 192.168.123.105:0/3562643817 shutdown_connections 2026-03-09T14:58:25.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.989+0000 7f27d6672700 1 -- 192.168.123.105:0/3562643817 wait complete. 2026-03-09T14:58:25.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.989+0000 7f27d6672700 1 Processor -- start 2026-03-09T14:58:25.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.989+0000 7f27d6672700 1 -- start start 2026-03-09T14:58:25.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.989+0000 7f27d6672700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f27c80a42f0 0x7f27c80b3460 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:25.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.989+0000 7f27d6672700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f27c80a5430 0x7f27c80b39a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:25.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.989+0000 7f27d6672700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f27c80b3fc0 con 0x7f27c80a5430 2026-03-09T14:58:25.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.989+0000 7f27d6672700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f27c80b4100 con 0x7f27c80a42f0 2026-03-09T14:58:25.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.990+0000 7f27d4e6f700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f27c80a5430 0x7f27c80b39a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.991+0000 7f27d5670700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f27c80a42f0 0x7f27c80b3460 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.991+0000 7f27d4e6f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f27c80a5430 0x7f27c80b39a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:41772/0 (socket says 192.168.123.105:41772) 2026-03-09T14:58:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.991+0000 7f27d4e6f700 1 -- 192.168.123.105:0/1357109774 learned_addr learned my addr 192.168.123.105:0/1357109774 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.991+0000 7f27d5670700 1 -- 192.168.123.105:0/1357109774 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f27c80a5430 msgr2=0x7f27c80b39a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.991+0000 7f27d5670700 1 --2- 192.168.123.105:0/1357109774 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f27c80a5430 0x7f27c80b39a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.991+0000 7f27d5670700 1 -- 192.168.123.105:0/1357109774 --> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f27c00097e0 con 0x7f27c80a42f0 2026-03-09T14:58:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.992+0000 7f27d5670700 1 --2- 192.168.123.105:0/1357109774 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f27c80a42f0 0x7f27c80b3460 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f27cc00b6d0 tx=0x7f27cc00ba90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.992+0000 7f27c67fc700 1 -- 192.168.123.105:0/1357109774 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f27cc011840 con 0x7f27c80a42f0 2026-03-09T14:58:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.992+0000 7f27c67fc700 1 -- 192.168.123.105:0/1357109774 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f27cc011e80 con 0x7f27c80a42f0 2026-03-09T14:58:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.992+0000 7f27c67fc700 1 -- 192.168.123.105:0/1357109774 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f27cc00f550 con 0x7f27c80a42f0 2026-03-09T14:58:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.995+0000 7f27d6672700 1 -- 192.168.123.105:0/1357109774 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f27c81502a0 con 0x7f27c80a42f0 2026-03-09T14:58:25.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.995+0000 7f27d6672700 1 -- 192.168.123.105:0/1357109774 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f27c81507c0 con 0x7f27c80a42f0 2026-03-09T14:58:25.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.996+0000 7f27c67fc700 1 -- 
192.168.123.105:0/1357109774 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f27cc0103e0 con 0x7f27c80a42f0 2026-03-09T14:58:25.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.997+0000 7f27c67fc700 1 --2- 192.168.123.105:0/1357109774 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f27bc06c6e0 0x7f27bc06eb90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:25.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.997+0000 7f27c67fc700 1 -- 192.168.123.105:0/1357109774 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f27cc08b560 con 0x7f27c80a42f0 2026-03-09T14:58:25.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.998+0000 7f27d4e6f700 1 --2- 192.168.123.105:0/1357109774 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f27bc06c6e0 0x7f27bc06eb90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:26.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:25.998+0000 7f27d4e6f700 1 --2- 192.168.123.105:0/1357109774 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f27bc06c6e0 0x7f27bc06eb90 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f27c000b5c0 tx=0x7f27c00055a0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:26.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.000+0000 7f27d6672700 1 -- 192.168.123.105:0/1357109774 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f27c8004510 con 0x7f27c80a42f0 2026-03-09T14:58:26.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.003+0000 7f27c67fc700 1 -- 192.168.123.105:0/1357109774 <== mon.1 
v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f27cc059770 con 0x7f27c80a42f0 2026-03-09T14:58:26.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.149+0000 7f27d6672700 1 -- 192.168.123.105:0/1357109774 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"} v 0) v1 -- 0x7f27c8150af0 con 0x7f27c80a42f0 2026-03-09T14:58:26.353 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.349+0000 7f27c67fc700 1 -- 192.168.123.105:0/1357109774 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]=0 v5) v1 ==== 121+0+0 (secure 0 0 0) 0x7f27cc015020 con 0x7f27c80a42f0 2026-03-09T14:58:26.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.351+0000 7f27bbfff700 1 -- 192.168.123.105:0/1357109774 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f27bc06c6e0 msgr2=0x7f27bc06eb90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:26.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.352+0000 7f27bbfff700 1 --2- 192.168.123.105:0/1357109774 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f27bc06c6e0 0x7f27bc06eb90 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f27c000b5c0 tx=0x7f27c00055a0 comp rx=0 tx=0).stop 2026-03-09T14:58:26.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.352+0000 7f27bbfff700 1 -- 192.168.123.105:0/1357109774 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f27c80a42f0 msgr2=0x7f27c80b3460 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:26.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.352+0000 7f27bbfff700 1 --2- 192.168.123.105:0/1357109774 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f27c80a42f0 0x7f27c80b3460 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f27cc00b6d0 tx=0x7f27cc00ba90 comp rx=0 tx=0).stop 2026-03-09T14:58:26.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.352+0000 7f27bbfff700 1 -- 192.168.123.105:0/1357109774 shutdown_connections 2026-03-09T14:58:26.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.352+0000 7f27bbfff700 1 --2- 192.168.123.105:0/1357109774 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f27bc06c6e0 0x7f27bc06eb90 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:26.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.352+0000 7f27bbfff700 1 --2- 192.168.123.105:0/1357109774 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f27c80a42f0 0x7f27c80b3460 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:26.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.352+0000 7f27bbfff700 1 --2- 192.168.123.105:0/1357109774 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f27c80a5430 0x7f27c80b39a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:26.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.352+0000 7f27bbfff700 1 -- 192.168.123.105:0/1357109774 >> 192.168.123.105:0/1357109774 conn(0x7f27c809f7c0 msgr2=0x7f27c80a8660 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:26.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.352+0000 7f27bbfff700 1 -- 192.168.123.105:0/1357109774 shutdown_connections 2026-03-09T14:58:26.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.352+0000 7f27bbfff700 1 -- 192.168.123.105:0/1357109774 wait complete. 2026-03-09T14:58:26.429 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-09T14:58:26.431 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-09T14:58:26.431 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph fs set cephfs inline_data true --yes-i-really-really-mean-it' 2026-03-09T14:58:26.654 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.rrcyql", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.rrcyql", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: Deploying daemon mds.cephfs.vm05.rrcyql on vm05 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]: dispatch 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/1357109774' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]: dispatch 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: mds.? [v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] up:active 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: mds.? [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] up:boot 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: from='client.? ' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]': finished 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch 2026-03-09T14:58:26.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:26 vm05 ceph-mon[50611]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": 
"mds.cephfs.vm05.rrcyql", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.rrcyql", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: Deploying daemon mds.cephfs.vm05.rrcyql on vm05 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: from='client.? ' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]: dispatch 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/1357109774' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]: dispatch 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: mds.? [v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] up:active 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: mds.? [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] up:boot 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "true"}]': finished 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch 2026-03-09T14:58:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:26 vm09 ceph-mon[59673]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2026-03-09T14:58:26.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.992+0000 7f2181213700 1 -- 192.168.123.105:0/2646151712 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f217c102760 msgr2=0x7f217c102b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:26.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.992+0000 7f2181213700 1 --2- 192.168.123.105:0/2646151712 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f217c102760 0x7f217c102b70 secure :-1 s=READY pgs=275 cs=0 l=1 rev1=1 crypto rx=0x7f2164009b00 tx=0x7f2164009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:26.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.994+0000 7f2181213700 1 -- 192.168.123.105:0/2646151712 shutdown_connections 2026-03-09T14:58:26.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.994+0000 7f2181213700 1 --2- 192.168.123.105:0/2646151712 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f217c103960 0x7f217c103db0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:26.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.994+0000 7f2181213700 1 --2- 192.168.123.105:0/2646151712 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f217c102760 0x7f217c102b70 unknown :-1 s=CLOSED pgs=275 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:26.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.994+0000 7f2181213700 1 -- 192.168.123.105:0/2646151712 >> 192.168.123.105:0/2646151712 conn(0x7f217c0fdcf0 msgr2=0x7f217c100140 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:26.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.994+0000 7f2181213700 1 -- 192.168.123.105:0/2646151712 shutdown_connections 2026-03-09T14:58:26.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.994+0000 7f2181213700 1 -- 192.168.123.105:0/2646151712 wait complete. 2026-03-09T14:58:26.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.995+0000 7f2181213700 1 Processor -- start 2026-03-09T14:58:26.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.995+0000 7f2181213700 1 -- start start 2026-03-09T14:58:26.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.995+0000 7f2181213700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f217c102760 0x7f217c1980c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:26.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.996+0000 7f217ad9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f217c102760 0x7f217c1980c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:26.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.996+0000 7f217ad9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f217c102760 0x7f217c1980c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:41802/0 (socket says 192.168.123.105:41802) 2026-03-09T14:58:26.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.996+0000 7f2181213700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f217c103960 0x7f217c198600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:26.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.996+0000 7f2181213700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f217c198c20 con 0x7f217c102760 2026-03-09T14:58:26.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.996+0000 7f2181213700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f217c198d60 con 0x7f217c103960 2026-03-09T14:58:26.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.996+0000 7f217ad9d700 1 -- 192.168.123.105:0/2934848972 learned_addr learned my addr 192.168.123.105:0/2934848972 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:26.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.996+0000 7f217a59c700 1 --2- 192.168.123.105:0/2934848972 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f217c103960 0x7f217c198600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:26.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.996+0000 7f217ad9d700 1 -- 192.168.123.105:0/2934848972 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f217c103960 msgr2=0x7f217c198600 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:26.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.996+0000 7f217ad9d700 1 --2- 192.168.123.105:0/2934848972 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f217c103960 0x7f217c198600 unknown :-1 s=AUTH_CONNECTING 
pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:26.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.997+0000 7f217ad9d700 1 -- 192.168.123.105:0/2934848972 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f21640097e0 con 0x7f217c102760 2026-03-09T14:58:26.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.997+0000 7f217ad9d700 1 --2- 192.168.123.105:0/2934848972 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f217c102760 0x7f217c1980c0 secure :-1 s=READY pgs=276 cs=0 l=1 rev1=1 crypto rx=0x7f2164009fd0 tx=0x7f2164004ab0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:26.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.997+0000 7f2173fff700 1 -- 192.168.123.105:0/2934848972 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f216401d070 con 0x7f217c102760 2026-03-09T14:58:26.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.997+0000 7f2173fff700 1 -- 192.168.123.105:0/2934848972 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f216400bc50 con 0x7f217c102760 2026-03-09T14:58:26.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.997+0000 7f2181213700 1 -- 192.168.123.105:0/2934848972 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f217c19d7b0 con 0x7f217c102760 2026-03-09T14:58:27.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.997+0000 7f2173fff700 1 -- 192.168.123.105:0/2934848972 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f216400f7e0 con 0x7f217c102760 2026-03-09T14:58:27.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.997+0000 7f2181213700 1 -- 192.168.123.105:0/2934848972 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f217c19dca0 con 0x7f217c102760 2026-03-09T14:58:27.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.999+0000 7f2173fff700 1 -- 192.168.123.105:0/2934848972 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f216400f940 con 0x7f217c102760 2026-03-09T14:58:27.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.999+0000 7f2173fff700 1 --2- 192.168.123.105:0/2934848972 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f216806c680 0x7f216806eb30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:27.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.999+0000 7f2173fff700 1 -- 192.168.123.105:0/2934848972 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f216408e090 con 0x7f217c102760 2026-03-09T14:58:27.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.999+0000 7f217a59c700 1 --2- 192.168.123.105:0/2934848972 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f216806c680 0x7f216806eb30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:27.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:26.999+0000 7f2181213700 1 -- 192.168.123.105:0/2934848972 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f217c066e40 con 0x7f217c102760 2026-03-09T14:58:27.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.003+0000 7f217a59c700 1 --2- 192.168.123.105:0/2934848972 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f216806c680 0x7f216806eb30 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f216c006fd0 tx=0x7f216c008040 comp rx=0 
tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:27.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.003+0000 7f2173fff700 1 -- 192.168.123.105:0/2934848972 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f216405c2d0 con 0x7f217c102760 2026-03-09T14:58:27.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.135+0000 7f2181213700 1 -- 192.168.123.105:0/2934848972 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "true", "yes_i_really_really_mean_it": true} v 0) v1 -- 0x7f217c19e390 con 0x7f217c102760 2026-03-09T14:58:27.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.451+0000 7f2173fff700 1 -- 192.168.123.105:0/2934848972 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "true", "yes_i_really_really_mean_it": true}]=0 inline data enabled v7) v1 ==== 168+0+0 (secure 0 0 0) 0x7f2164027030 con 0x7f217c102760 2026-03-09T14:58:27.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.453+0000 7f2181213700 1 -- 192.168.123.105:0/2934848972 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f216806c680 msgr2=0x7f216806eb30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:27.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.453+0000 7f2181213700 1 --2- 192.168.123.105:0/2934848972 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f216806c680 0x7f216806eb30 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f216c006fd0 tx=0x7f216c008040 comp rx=0 tx=0).stop 2026-03-09T14:58:27.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.453+0000 7f2181213700 1 -- 192.168.123.105:0/2934848972 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f217c102760 msgr2=0x7f217c1980c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:27.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.453+0000 7f2181213700 1 --2- 192.168.123.105:0/2934848972 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f217c102760 0x7f217c1980c0 secure :-1 s=READY pgs=276 cs=0 l=1 rev1=1 crypto rx=0x7f2164009fd0 tx=0x7f2164004ab0 comp rx=0 tx=0).stop 2026-03-09T14:58:27.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.453+0000 7f2181213700 1 -- 192.168.123.105:0/2934848972 shutdown_connections 2026-03-09T14:58:27.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.453+0000 7f2181213700 1 --2- 192.168.123.105:0/2934848972 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f216806c680 0x7f216806eb30 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:27.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.453+0000 7f2181213700 1 --2- 192.168.123.105:0/2934848972 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f217c102760 0x7f217c1980c0 unknown :-1 s=CLOSED pgs=276 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:27.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.453+0000 7f2181213700 1 --2- 192.168.123.105:0/2934848972 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f217c103960 0x7f217c198600 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:27.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.453+0000 7f2181213700 1 -- 192.168.123.105:0/2934848972 >> 192.168.123.105:0/2934848972 conn(0x7f217c0fdcf0 msgr2=0x7f217c106b90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:27.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.454+0000 7f2181213700 1 -- 
192.168.123.105:0/2934848972 shutdown_connections 2026-03-09T14:58:27.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.454+0000 7f2181213700 1 -- 192.168.123.105:0/2934848972 wait complete. 2026-03-09T14:58:27.455 INFO:teuthology.orchestra.run.vm05.stderr:inline data enabled 2026-03-09T14:58:27.503 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T14:58:27.506 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-09T14:58:27.506 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph fs dump' 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.jrhwzz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.jrhwzz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 
ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: Deploying daemon mds.cephfs.vm09.jrhwzz on vm09 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: pgmap v80: 65 pgs: 3 creating+activating, 62 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 208 B/s wr, 0 op/s 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/2934848972' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "true", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: Health check failed: 1 filesystem with deprecated feature inline_data (FS_INLINE_DATA_DEPRECATED) 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: mds.? [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] up:boot 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/2934848972' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "true", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 1 up:standby 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.rrcyql"}]: dispatch 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:27 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.697 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
14:58:27 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.jrhwzz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.jrhwzz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: Deploying daemon mds.cephfs.vm09.jrhwzz on vm09 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: pgmap v80: 65 pgs: 3 creating+activating, 62 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 208 B/s wr, 0 op/s 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/2934848972' entity='client.admin' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "true", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: Health check failed: 1 filesystem with deprecated feature inline_data (FS_INLINE_DATA_DEPRECATED) 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: mds.? 
[v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] up:boot 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/2934848972' entity='client.admin' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "inline_data", "val": "true", "yes_i_really_really_mean_it": true}]': finished 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 1 up:standby 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.rrcyql"}]: dispatch 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.722 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:27 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:27.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.989+0000 7f1fef59e700 1 -- 192.168.123.105:0/338018195 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1ff0071950 msgr2=0x7f1ff0071d60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:27.991 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.989+0000 7f1fef59e700 1 --2- 192.168.123.105:0/338018195 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1ff0071950 0x7f1ff0071d60 secure :-1 s=READY pgs=277 cs=0 l=1 rev1=1 crypto rx=0x7f1fe0009b00 tx=0x7f1fe0009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:27.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.989+0000 7f1fef59e700 1 -- 192.168.123.105:0/338018195 shutdown_connections 2026-03-09T14:58:27.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.989+0000 7f1fef59e700 1 --2- 192.168.123.105:0/338018195 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1ff0072330 0x7f1ff00770b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:27.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.989+0000 7f1fef59e700 1 --2- 192.168.123.105:0/338018195 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1ff0071950 0x7f1ff0071d60 unknown :-1 s=CLOSED pgs=277 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:27.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.989+0000 7f1fef59e700 1 -- 192.168.123.105:0/338018195 >> 192.168.123.105:0/338018195 conn(0x7f1ff006d1a0 msgr2=0x7f1ff006f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:27.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.989+0000 7f1fef59e700 1 -- 192.168.123.105:0/338018195 shutdown_connections 2026-03-09T14:58:27.991 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.989+0000 7f1fef59e700 1 -- 192.168.123.105:0/338018195 wait complete. 
2026-03-09T14:58:27.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.991+0000 7f1fef59e700 1 Processor -- start 2026-03-09T14:58:27.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.991+0000 7f1fef59e700 1 -- start start 2026-03-09T14:58:27.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.991+0000 7f1fef59e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1ff0072330 0x7f1ff0082500 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:27.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.991+0000 7f1fef59e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1ff0082a40 0x7f1ff0082eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:27.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.992+0000 7f1fef59e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1ff012dd80 con 0x7f1ff0082a40 2026-03-09T14:58:27.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.992+0000 7f1fef59e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1ff012def0 con 0x7f1ff0072330 2026-03-09T14:58:27.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.992+0000 7f1fedd9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1ff0082a40 0x7f1ff0082eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:27.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.992+0000 7f1fee59c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1ff0072330 0x7f1ff0082500 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T14:58:27.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.992+0000 7f1fee59c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1ff0072330 0x7f1ff0082500 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:50132/0 (socket says 192.168.123.105:50132) 2026-03-09T14:58:27.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.992+0000 7f1fee59c700 1 -- 192.168.123.105:0/1207495835 learned_addr learned my addr 192.168.123.105:0/1207495835 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:27.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.992+0000 7f1fee59c700 1 -- 192.168.123.105:0/1207495835 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1ff0082a40 msgr2=0x7f1ff0082eb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:27.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.992+0000 7f1fee59c700 1 --2- 192.168.123.105:0/1207495835 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1ff0082a40 0x7f1ff0082eb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:27.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.992+0000 7f1fee59c700 1 -- 192.168.123.105:0/1207495835 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1fe00097e0 con 0x7f1ff0072330 2026-03-09T14:58:27.994 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.993+0000 7f1fee59c700 1 --2- 192.168.123.105:0/1207495835 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1ff0072330 0x7f1ff0082500 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f1fe0009ad0 tx=0x7f1fe000f710 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:27.994 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.994+0000 7f1fdf7fe700 1 -- 192.168.123.105:0/1207495835 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1fe001c070 con 0x7f1ff0072330 2026-03-09T14:58:27.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.994+0000 7f1fef59e700 1 -- 192.168.123.105:0/1207495835 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1ff012e110 con 0x7f1ff0072330 2026-03-09T14:58:27.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.994+0000 7f1fef59e700 1 -- 192.168.123.105:0/1207495835 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1ff012e600 con 0x7f1ff0072330 2026-03-09T14:58:27.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.994+0000 7f1fdf7fe700 1 -- 192.168.123.105:0/1207495835 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1fe000fe90 con 0x7f1ff0072330 2026-03-09T14:58:27.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.994+0000 7f1fdf7fe700 1 -- 192.168.123.105:0/1207495835 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1fe0017630 con 0x7f1ff0072330 2026-03-09T14:58:27.996 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.996+0000 7f1fdf7fe700 1 -- 192.168.123.105:0/1207495835 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f1fe0017790 con 0x7f1ff0072330 2026-03-09T14:58:27.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.996+0000 7f1fdf7fe700 1 --2- 192.168.123.105:0/1207495835 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1fd806c7a0 0x7f1fd806ec50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:27.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.996+0000 
7f1fedd9b700 1 --2- 192.168.123.105:0/1207495835 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1fd806c7a0 0x7f1fd806ec50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:27.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.996+0000 7f1fdf7fe700 1 -- 192.168.123.105:0/1207495835 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f1fe008c8c0 con 0x7f1ff0072330 2026-03-09T14:58:27.997 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.997+0000 7f1fef59e700 1 -- 192.168.123.105:0/1207495835 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1fd0005320 con 0x7f1ff0072330 2026-03-09T14:58:27.998 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:27.997+0000 7f1fedd9b700 1 --2- 192.168.123.105:0/1207495835 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1fd806c7a0 0x7f1fd806ec50 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f1fe8009510 tx=0x7f1fe80093a0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:28.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.000+0000 7f1fdf7fe700 1 -- 192.168.123.105:0/1207495835 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f1fe005ab50 con 0x7f1ff0072330 2026-03-09T14:58:28.152 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.151+0000 7f1fef59e700 1 -- 192.168.123.105:0/1207495835 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f1fd0006200 con 0x7f1ff0072330 2026-03-09T14:58:28.153 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.152+0000 7f1fdf7fe700 1 -- 
192.168.123.105:0/1207495835 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 7 v7) v1 ==== 75+0+1636 (secure 0 0 0) 0x7f1fe005a6e0 con 0x7f1ff0072330 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:e7 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:legacy client fscid: 1 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:Filesystem 'cephfs' (1) 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:fs_name cephfs 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:epoch 7 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:created 2026-03-09T14:58:23.182447+0000 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:modified 2026-03-09T14:58:27.446649+0000 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:tableserver 0 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:root 0 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:session_timeout 60 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:session_autoclose 300 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:max_file_size 1099511627776 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:required_client_features {} 2026-03-09T14:58:28.154 
INFO:teuthology.orchestra.run.vm05.stdout:last_failure 0 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:last_failure_osd_epoch 0 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:max_mds 1 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:in 0 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:up {0=14502} 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:failed 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:damaged 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:stopped 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:data_pools [3] 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:metadata_pool 2 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:inline_data enabled 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:balancer 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:bal_rank_mask -1 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:standby_count_wanted 1 2026-03-09T14:58:28.154 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.nrocqt{0:14502} state up:active seq 2 addr [v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T14:58:28.155 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.ohmitn{0:14510} state up:standby-replay seq 1 addr [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T14:58:28.155 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:28.155 
INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:28.155 INFO:teuthology.orchestra.run.vm05.stdout:Standby daemons: 2026-03-09T14:58:28.155 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:28.155 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.rrcyql{-1:14518} state up:standby seq 1 addr [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T14:58:28.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.159+0000 7f1fef59e700 1 -- 192.168.123.105:0/1207495835 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1fd806c7a0 msgr2=0x7f1fd806ec50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:28.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.159+0000 7f1fef59e700 1 --2- 192.168.123.105:0/1207495835 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1fd806c7a0 0x7f1fd806ec50 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f1fe8009510 tx=0x7f1fe80093a0 comp rx=0 tx=0).stop 2026-03-09T14:58:28.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.159+0000 7f1fef59e700 1 -- 192.168.123.105:0/1207495835 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1ff0072330 msgr2=0x7f1ff0082500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:28.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.159+0000 7f1fef59e700 1 --2- 192.168.123.105:0/1207495835 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1ff0072330 0x7f1ff0082500 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f1fe0009ad0 tx=0x7f1fe000f710 comp rx=0 tx=0).stop 2026-03-09T14:58:28.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.159+0000 7f1fef59e700 1 -- 192.168.123.105:0/1207495835 shutdown_connections 2026-03-09T14:58:28.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.159+0000 7f1fef59e700 1 --2- 
192.168.123.105:0/1207495835 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1fd806c7a0 0x7f1fd806ec50 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:28.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.159+0000 7f1fef59e700 1 --2- 192.168.123.105:0/1207495835 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1ff0072330 0x7f1ff0082500 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:28.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.159+0000 7f1fef59e700 1 --2- 192.168.123.105:0/1207495835 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1ff0082a40 0x7f1ff0082eb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:28.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.159+0000 7f1fef59e700 1 -- 192.168.123.105:0/1207495835 >> 192.168.123.105:0/1207495835 conn(0x7f1ff006d1a0 msgr2=0x7f1ff007b550 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:28.161 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.159+0000 7f1fef59e700 1 -- 192.168.123.105:0/1207495835 shutdown_connections 2026-03-09T14:58:28.161 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.159+0000 7f1fef59e700 1 -- 192.168.123.105:0/1207495835 wait complete. 
2026-03-09T14:58:28.161 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 7 2026-03-09T14:58:28.230 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph --format=json fs dump | jq -e ".filesystems | length == 1"' 2026-03-09T14:58:28.450 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:28.788 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.786+0000 7fdc3f37f700 1 -- 192.168.123.105:0/1040002084 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc380fee80 msgr2=0x7fdc381012a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:28.788 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.786+0000 7fdc3f37f700 1 --2- 192.168.123.105:0/1040002084 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc380fee80 0x7fdc381012a0 secure :-1 s=READY pgs=278 cs=0 l=1 rev1=1 crypto rx=0x7fdc2c009b00 tx=0x7fdc2c009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:28.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:28 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:58:28.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:28 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/1207495835' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T14:58:28.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:28 vm05 ceph-mon[50611]: mds.? 
[v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] up:boot 2026-03-09T14:58:28.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:28 vm05 ceph-mon[50611]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T14:58:28.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:28 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.jrhwzz"}]: dispatch 2026-03-09T14:58:28.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:28 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:28.788 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:28 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:28.790 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.790+0000 7fdc3f37f700 1 -- 192.168.123.105:0/1040002084 shutdown_connections 2026-03-09T14:58:28.790 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.790+0000 7fdc3f37f700 1 --2- 192.168.123.105:0/1040002084 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc381017e0 0x7fdc38103c60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:28.790 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.790+0000 7fdc3f37f700 1 --2- 192.168.123.105:0/1040002084 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc380fee80 0x7fdc381012a0 unknown :-1 s=CLOSED pgs=278 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:28.790 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.790+0000 7fdc3f37f700 1 -- 192.168.123.105:0/1040002084 >> 192.168.123.105:0/1040002084 conn(0x7fdc380faa70 msgr2=0x7fdc380fcee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:28.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.790+0000 7fdc3f37f700 1 
-- 192.168.123.105:0/1040002084 shutdown_connections 2026-03-09T14:58:28.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.791+0000 7fdc3f37f700 1 -- 192.168.123.105:0/1040002084 wait complete. 2026-03-09T14:58:28.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.791+0000 7fdc3f37f700 1 Processor -- start 2026-03-09T14:58:28.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.791+0000 7fdc3f37f700 1 -- start start 2026-03-09T14:58:28.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.791+0000 7fdc3f37f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc380fee80 0x7fdc3810cad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:28.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.791+0000 7fdc3f37f700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc381017e0 0x7fdc3810d010 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:28.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.791+0000 7fdc3f37f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdc3810d630 con 0x7fdc380fee80 2026-03-09T14:58:28.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.791+0000 7fdc3f37f700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdc3810d770 con 0x7fdc381017e0 2026-03-09T14:58:28.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.791+0000 7fdc3c91a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc381017e0 0x7fdc3810d010 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:28.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.792+0000 7fdc3c91a700 1 --2- >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc381017e0 0x7fdc3810d010 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45734/0 (socket says 192.168.123.105:45734) 2026-03-09T14:58:28.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.792+0000 7fdc3c91a700 1 -- 192.168.123.105:0/2118061841 learned_addr learned my addr 192.168.123.105:0/2118061841 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:28.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.792+0000 7fdc3c91a700 1 -- 192.168.123.105:0/2118061841 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc380fee80 msgr2=0x7fdc3810cad0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T14:58:28.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.792+0000 7fdc3c91a700 1 --2- 192.168.123.105:0/2118061841 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc380fee80 0x7fdc3810cad0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:28.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.792+0000 7fdc3c91a700 1 -- 192.168.123.105:0/2118061841 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fdc2c0097e0 con 0x7fdc381017e0 2026-03-09T14:58:28.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.792+0000 7fdc3c91a700 1 --2- 192.168.123.105:0/2118061841 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc381017e0 0x7fdc3810d010 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fdc3400d8d0 tx=0x7fdc3400dc90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:28.795 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.794+0000 7fdc2a7fc700 1 -- 192.168.123.105:0/2118061841 <== mon.1 
v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdc34009940 con 0x7fdc381017e0 2026-03-09T14:58:28.795 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.794+0000 7fdc2a7fc700 1 -- 192.168.123.105:0/2118061841 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fdc34010460 con 0x7fdc381017e0 2026-03-09T14:58:28.798 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.795+0000 7fdc2a7fc700 1 -- 192.168.123.105:0/2118061841 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdc3400f5d0 con 0x7fdc381017e0 2026-03-09T14:58:28.798 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.795+0000 7fdc3f37f700 1 -- 192.168.123.105:0/2118061841 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fdc3806a910 con 0x7fdc381017e0 2026-03-09T14:58:28.798 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.796+0000 7fdc3f37f700 1 -- 192.168.123.105:0/2118061841 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fdc3806ade0 con 0x7fdc381017e0 2026-03-09T14:58:28.798 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.796+0000 7fdc3f37f700 1 -- 192.168.123.105:0/2118061841 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fdc38106c40 con 0x7fdc381017e0 2026-03-09T14:58:28.798 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.797+0000 7fdc2a7fc700 1 -- 192.168.123.105:0/2118061841 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fdc34009aa0 con 0x7fdc381017e0 2026-03-09T14:58:28.798 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.797+0000 7fdc2a7fc700 1 --2- 192.168.123.105:0/2118061841 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdc2406c480 0x7fdc2406e930 unknown :-1 
s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:28.798 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.797+0000 7fdc3d11b700 1 --2- 192.168.123.105:0/2118061841 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdc2406c480 0x7fdc2406e930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:28.803 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.800+0000 7fdc2a7fc700 1 -- 192.168.123.105:0/2118061841 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fdc3408b570 con 0x7fdc381017e0 2026-03-09T14:58:28.803 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.801+0000 7fdc3d11b700 1 --2- 192.168.123.105:0/2118061841 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdc2406c480 0x7fdc2406e930 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7fdc2c000c00 tx=0x7fdc2c005fb0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:28.803 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.802+0000 7fdc2a7fc700 1 -- 192.168.123.105:0/2118061841 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fdc34059570 con 0x7fdc381017e0 2026-03-09T14:58:28.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:28 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:58:28.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:28 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/1207495835' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T14:58:28.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:28 vm09 ceph-mon[59673]: mds.? 
[v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] up:boot 2026-03-09T14:58:28.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:28 vm09 ceph-mon[59673]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T14:58:28.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:28 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.jrhwzz"}]: dispatch 2026-03-09T14:58:28.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:28 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:28.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:28 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:28.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.941+0000 7fdc3f37f700 1 -- 192.168.123.105:0/2118061841 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump", "format": "json"} v 0) v1 -- 0x7fdc38066e40 con 0x7fdc381017e0 2026-03-09T14:58:28.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.942+0000 7fdc2a7fc700 1 -- 192.168.123.105:0/2118061841 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "format": "json"}]=0 dumped fsmap epoch 8 v8) v1 ==== 93+0+4761 (secure 0 0 0) 0x7fdc34059390 con 0x7fdc381017e0 2026-03-09T14:58:28.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.945+0000 7fdc23fff700 1 -- 192.168.123.105:0/2118061841 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdc2406c480 msgr2=0x7fdc2406e930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:28.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.945+0000 7fdc23fff700 1 --2- 192.168.123.105:0/2118061841 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdc2406c480 
0x7fdc2406e930 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7fdc2c000c00 tx=0x7fdc2c005fb0 comp rx=0 tx=0).stop 2026-03-09T14:58:28.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.945+0000 7fdc23fff700 1 -- 192.168.123.105:0/2118061841 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc381017e0 msgr2=0x7fdc3810d010 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:28.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.945+0000 7fdc23fff700 1 --2- 192.168.123.105:0/2118061841 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc381017e0 0x7fdc3810d010 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fdc3400d8d0 tx=0x7fdc3400dc90 comp rx=0 tx=0).stop 2026-03-09T14:58:28.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.945+0000 7fdc23fff700 1 -- 192.168.123.105:0/2118061841 shutdown_connections 2026-03-09T14:58:28.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.945+0000 7fdc23fff700 1 --2- 192.168.123.105:0/2118061841 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fdc2406c480 0x7fdc2406e930 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:28.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.945+0000 7fdc23fff700 1 --2- 192.168.123.105:0/2118061841 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc380fee80 0x7fdc3810cad0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:28.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.945+0000 7fdc23fff700 1 --2- 192.168.123.105:0/2118061841 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc381017e0 0x7fdc3810d010 secure :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fdc3400d8d0 tx=0x7fdc3400dc90 comp rx=0 tx=0).stop 2026-03-09T14:58:28.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.945+0000 
7fdc23fff700 1 -- 192.168.123.105:0/2118061841 >> 192.168.123.105:0/2118061841 conn(0x7fdc380faa70 msgr2=0x7fdc380fcee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:28.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.945+0000 7fdc23fff700 1 -- 192.168.123.105:0/2118061841 shutdown_connections 2026-03-09T14:58:28.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:28.945+0000 7fdc23fff700 1 -- 192.168.123.105:0/2118061841 wait complete. 2026-03-09T14:58:28.947 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 8 2026-03-09T14:58:28.957 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T14:58:29.003 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'while ! ceph --format=json mds versions | jq -e ". | add == 4"; do sleep 1; done' 2026-03-09T14:58:29.188 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.488+0000 7f6dcf59e700 1 -- 192.168.123.105:0/3246725721 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6dd0072440 msgr2=0x7f6dd010be90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.488+0000 7f6dcf59e700 1 --2- 192.168.123.105:0/3246725721 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6dd0072440 0x7f6dd010be90 secure :-1 s=READY pgs=279 cs=0 l=1 rev1=1 crypto rx=0x7f6dc0009b00 tx=0x7f6dc0009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.489+0000 7f6dcf59e700 1 -- 192.168.123.105:0/3246725721 shutdown_connections 2026-03-09T14:58:29.492 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.489+0000 7f6dcf59e700 1 --2- 192.168.123.105:0/3246725721 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6dd0072440 0x7f6dd010be90 secure :-1 s=CLOSED pgs=279 cs=0 l=1 rev1=1 crypto rx=0x7f6dc0009b00 tx=0x7f6dc0009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.489+0000 7f6dcf59e700 1 --2- 192.168.123.105:0/3246725721 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6dd0071a60 0x7f6dd0071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.489+0000 7f6dcf59e700 1 -- 192.168.123.105:0/3246725721 >> 192.168.123.105:0/3246725721 conn(0x7f6dd006d1a0 msgr2=0x7f6dd006f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.489+0000 7f6dcf59e700 1 -- 192.168.123.105:0/3246725721 shutdown_connections 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.489+0000 7f6dcf59e700 1 -- 192.168.123.105:0/3246725721 wait complete. 
2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.490+0000 7f6dcf59e700 1 Processor -- start 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.490+0000 7f6dcf59e700 1 -- start start 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.490+0000 7f6dcf59e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6dd0071a60 0x7f6dd0116a70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.490+0000 7f6dcf59e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6dd0116fb0 0x7f6dd01b2800 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.490+0000 7f6dcf59e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6dd01174b0 con 0x7f6dd0116fb0 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.490+0000 7f6dcf59e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6dd01175f0 con 0x7f6dd0071a60 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.490+0000 7f6dce59c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6dd0071a60 0x7f6dd0116a70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.490+0000 7f6dcdd9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6dd0116fb0 0x7f6dd01b2800 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.490+0000 7f6dce59c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6dd0071a60 0x7f6dd0116a70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45746/0 (socket says 192.168.123.105:45746) 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.490+0000 7f6dce59c700 1 -- 192.168.123.105:0/393951613 learned_addr learned my addr 192.168.123.105:0/393951613 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.491+0000 7f6dce59c700 1 -- 192.168.123.105:0/393951613 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6dd0116fb0 msgr2=0x7f6dd01b2800 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.491+0000 7f6dce59c700 1 --2- 192.168.123.105:0/393951613 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6dd0116fb0 0x7f6dd01b2800 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.491+0000 7f6dce59c700 1 -- 192.168.123.105:0/393951613 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6dc00097e0 con 0x7f6dd0071a60 2026-03-09T14:58:29.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.491+0000 7f6dcdd9b700 1 --2- 192.168.123.105:0/393951613 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6dd0116fb0 0x7f6dd01b2800 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T14:58:29.493 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.492+0000 7f6dce59c700 1 --2- 192.168.123.105:0/393951613 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6dd0071a60 0x7f6dd0116a70 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f6dc4009fd0 tx=0x7f6dc400edf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:29.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.493+0000 7f6dbf7fe700 1 -- 192.168.123.105:0/393951613 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6dc4009980 con 0x7f6dd0071a60 2026-03-09T14:58:29.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.493+0000 7f6dbf7fe700 1 -- 192.168.123.105:0/393951613 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f6dc4004500 con 0x7f6dd0071a60 2026-03-09T14:58:29.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.493+0000 7f6dbf7fe700 1 -- 192.168.123.105:0/393951613 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6dc4010430 con 0x7f6dd0071a60 2026-03-09T14:58:29.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.493+0000 7f6dcf59e700 1 -- 192.168.123.105:0/393951613 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6dd01b2da0 con 0x7f6dd0071a60 2026-03-09T14:58:29.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.493+0000 7f6dcf59e700 1 -- 192.168.123.105:0/393951613 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6dd01b31c0 con 0x7f6dd0071a60 2026-03-09T14:58:29.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.494+0000 7f6dcf59e700 1 -- 192.168.123.105:0/393951613 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) 
v1 -- 0x7f6dd004ea50 con 0x7f6dd0071a60 2026-03-09T14:58:29.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.498+0000 7f6dbf7fe700 1 -- 192.168.123.105:0/393951613 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f6dc4003680 con 0x7f6dd0071a60 2026-03-09T14:58:29.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.498+0000 7f6dbf7fe700 1 --2- 192.168.123.105:0/393951613 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6db806c4d0 0x7f6db806e980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:29.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.498+0000 7f6dbf7fe700 1 -- 192.168.123.105:0/393951613 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f6dc4014070 con 0x7f6dd0071a60 2026-03-09T14:58:29.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.498+0000 7f6dbf7fe700 1 -- 192.168.123.105:0/393951613 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f6dc408c0c0 con 0x7f6dd0071a60 2026-03-09T14:58:29.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.499+0000 7f6dcdd9b700 1 --2- 192.168.123.105:0/393951613 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6db806c4d0 0x7f6db806e980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:29.500 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.499+0000 7f6dcdd9b700 1 --2- 192.168.123.105:0/393951613 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6db806c4d0 0x7f6db806e980 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f6dc000b5c0 tx=0x7f6dc0019040 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T14:58:29.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.680+0000 7f6dcf59e700 1 -- 192.168.123.105:0/393951613 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "mds versions", "format": "json"} v 0) v1 -- 0x7f6dd01b3350 con 0x7f6dd0071a60 2026-03-09T14:58:29.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.681+0000 7f6dbf7fe700 1 -- 192.168.123.105:0/393951613 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "mds versions", "format": "json"}]=0 v8) v1 ==== 78+0+83 (secure 0 0 0) 0x7f6dc405a140 con 0x7f6dd0071a60 2026-03-09T14:58:29.685 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.684+0000 7f6dcf59e700 1 -- 192.168.123.105:0/393951613 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6db806c4d0 msgr2=0x7f6db806e980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:29.685 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.684+0000 7f6dcf59e700 1 --2- 192.168.123.105:0/393951613 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6db806c4d0 0x7f6db806e980 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f6dc000b5c0 tx=0x7f6dc0019040 comp rx=0 tx=0).stop 2026-03-09T14:58:29.685 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.684+0000 7f6dcf59e700 1 -- 192.168.123.105:0/393951613 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6dd0071a60 msgr2=0x7f6dd0116a70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:29.685 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.684+0000 7f6dcf59e700 1 --2- 192.168.123.105:0/393951613 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6dd0071a60 0x7f6dd0116a70 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f6dc4009fd0 tx=0x7f6dc400edf0 comp rx=0 tx=0).stop 2026-03-09T14:58:29.686 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.685+0000 7f6dcf59e700 1 -- 192.168.123.105:0/393951613 shutdown_connections 2026-03-09T14:58:29.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.685+0000 7f6dcf59e700 1 --2- 192.168.123.105:0/393951613 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f6db806c4d0 0x7f6db806e980 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:29.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.685+0000 7f6dcf59e700 1 --2- 192.168.123.105:0/393951613 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6dd0071a60 0x7f6dd0116a70 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:29.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.685+0000 7f6dcf59e700 1 --2- 192.168.123.105:0/393951613 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6dd0116fb0 0x7f6dd01b2800 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:29.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.685+0000 7f6dcf59e700 1 -- 192.168.123.105:0/393951613 >> 192.168.123.105:0/393951613 conn(0x7f6dd006d1a0 msgr2=0x7f6dd0070620 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:29.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.685+0000 7f6dcf59e700 1 -- 192.168.123.105:0/393951613 shutdown_connections 2026-03-09T14:58:29.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:29.685+0000 7f6dcf59e700 1 -- 192.168.123.105:0/393951613 wait complete. 2026-03-09T14:58:29.697 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T14:58:29.745 INFO:teuthology.run_tasks:Running task fs.pre_upgrade_save... 
2026-03-09T14:58:29.748 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 2026-03-09T14:58:29.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:29 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/2118061841' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-09T14:58:29.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:29 vm09 ceph-mon[59673]: pgmap v81: 65 pgs: 65 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 341 B/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T14:58:29.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:29 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:29.939 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:29.976 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:29 vm05 ceph-mon[50611]: from='client.? 
192.168.123.105:0/2118061841' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-09T14:58:29.976 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:29 vm05 ceph-mon[50611]: pgmap v81: 65 pgs: 65 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 341 B/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-09T14:58:29.976 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:29 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:30.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.331+0000 7fb0c5e6f700 1 -- 192.168.123.105:0/4032600556 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb0b809b450 msgr2=0x7fb0b809b820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:30.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.332+0000 7fb0c5e6f700 1 --2- 192.168.123.105:0/4032600556 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb0b809b450 0x7fb0b809b820 secure :-1 s=READY pgs=280 cs=0 l=1 rev1=1 crypto rx=0x7fb0b4009a60 tx=0x7fb0b4009d70 comp rx=0 tx=0).stop 2026-03-09T14:58:30.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.332+0000 7fb0c5e6f700 1 -- 192.168.123.105:0/4032600556 shutdown_connections 2026-03-09T14:58:30.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.332+0000 7fb0c5e6f700 1 --2- 192.168.123.105:0/4032600556 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb0b80062b0 0x7fb0b8006700 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:30.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.332+0000 7fb0c5e6f700 1 --2- 192.168.123.105:0/4032600556 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb0b809b450 0x7fb0b809b820 unknown :-1 s=CLOSED pgs=280 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:30.334 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.332+0000 7fb0c5e6f700 1 -- 192.168.123.105:0/4032600556 >> 192.168.123.105:0/4032600556 conn(0x7fb0b800b150 msgr2=0x7fb0b800b550 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:30.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.333+0000 7fb0c5e6f700 1 -- 192.168.123.105:0/4032600556 shutdown_connections 2026-03-09T14:58:30.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.333+0000 7fb0c5e6f700 1 -- 192.168.123.105:0/4032600556 wait complete. 2026-03-09T14:58:30.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.334+0000 7fb0c5e6f700 1 Processor -- start 2026-03-09T14:58:30.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.334+0000 7fb0c5e6f700 1 -- start start 2026-03-09T14:58:30.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.334+0000 7fb0c5e6f700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb0b80062b0 0x7fb0b8128e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:30.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.334+0000 7fb0c5e6f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb0b809b450 0x7fb0b81293d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:30.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.334+0000 7fb0c5e6f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb0b8129ab0 con 0x7fb0b809b450 2026-03-09T14:58:30.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.335+0000 7fb0c4e6d700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb0b80062b0 0x7fb0b8128e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:30.336 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.335+0000 7fb0c4e6d700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb0b80062b0 0x7fb0b8128e90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45774/0 (socket says 192.168.123.105:45774) 2026-03-09T14:58:30.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.335+0000 7fb0c4e6d700 1 -- 192.168.123.105:0/190249001 learned_addr learned my addr 192.168.123.105:0/190249001 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:30.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.335+0000 7fb0bffff700 1 --2- 192.168.123.105:0/190249001 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb0b809b450 0x7fb0b81293d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:30.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.335+0000 7fb0c5e6f700 1 -- 192.168.123.105:0/190249001 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb0b812d630 con 0x7fb0b80062b0 2026-03-09T14:58:30.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.336+0000 7fb0bffff700 1 -- 192.168.123.105:0/190249001 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb0b80062b0 msgr2=0x7fb0b8128e90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:30.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.336+0000 7fb0bffff700 1 --2- 192.168.123.105:0/190249001 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb0b80062b0 0x7fb0b8128e90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:30.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.336+0000 7fb0bffff700 1 -- 
192.168.123.105:0/190249001 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb0b4009710 con 0x7fb0b809b450 2026-03-09T14:58:30.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.336+0000 7fb0bffff700 1 --2- 192.168.123.105:0/190249001 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb0b809b450 0x7fb0b81293d0 secure :-1 s=READY pgs=281 cs=0 l=1 rev1=1 crypto rx=0x7fb0ac009fd0 tx=0x7fb0ac00ec90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:30.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.336+0000 7fb0bdffb700 1 -- 192.168.123.105:0/190249001 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb0ac00cb80 con 0x7fb0b809b450 2026-03-09T14:58:30.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.337+0000 7fb0c5e6f700 1 -- 192.168.123.105:0/190249001 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb0b812d910 con 0x7fb0b809b450 2026-03-09T14:58:30.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.339+0000 7fb0c5e6f700 1 -- 192.168.123.105:0/190249001 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb0b812de60 con 0x7fb0b809b450 2026-03-09T14:58:30.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.339+0000 7fb0bdffb700 1 -- 192.168.123.105:0/190249001 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb0ac00eed0 con 0x7fb0b809b450 2026-03-09T14:58:30.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.341+0000 7fb0bdffb700 1 -- 192.168.123.105:0/190249001 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb0ac0186c0 con 0x7fb0b809b450 2026-03-09T14:58:30.343 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.341+0000 7fb0bdffb700 1 -- 192.168.123.105:0/190249001 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fb0ac0188a0 con 0x7fb0b809b450 2026-03-09T14:58:30.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.341+0000 7fb0bdffb700 1 --2- 192.168.123.105:0/190249001 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb0b00709b0 0x7fb0b0072e60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:30.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.342+0000 7fb0bdffb700 1 -- 192.168.123.105:0/190249001 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fb0ac014070 con 0x7fb0b809b450 2026-03-09T14:58:30.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.342+0000 7fb0c4e6d700 1 --2- 192.168.123.105:0/190249001 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb0b00709b0 0x7fb0b0072e60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:30.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.342+0000 7fb0c5e6f700 1 -- 192.168.123.105:0/190249001 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb0a4005320 con 0x7fb0b809b450 2026-03-09T14:58:30.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.342+0000 7fb0c4e6d700 1 --2- 192.168.123.105:0/190249001 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb0b00709b0 0x7fb0b0072e60 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7fb0b40039f0 tx=0x7fb0b400b540 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:30.347 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.345+0000 7fb0bdffb700 1 -- 192.168.123.105:0/190249001 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fb0ac057990 con 0x7fb0b809b450 2026-03-09T14:58:30.512 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.510+0000 7fb0c5e6f700 1 -- 192.168.123.105:0/190249001 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "format": "json"} v 0) v1 -- 0x7fb0a4005f70 con 0x7fb0b809b450 2026-03-09T14:58:30.512 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.511+0000 7fb0bdffb700 1 -- 192.168.123.105:0/190249001 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "format": "json"}]=0 dumped fsmap epoch 9 v9) v1 ==== 93+0+4759 (secure 0 0 0) 0x7fb0ac05afb0 con 0x7fb0b809b450 2026-03-09T14:58:30.513 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T14:58:30.513 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":9,"default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":14518,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6829/1321316558","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":1321316558},{"type":"v1","addr":"192.168.123.105:6829","nonce":1321316558}]},"join_fscid":-1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base 
v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":7},{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":-1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":8}],"filesystems":[{"mdsmap":{"epoch":9,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T14:58:30.215642+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor 
table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14502},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14502":{"gid":14502,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":4,"state":"up:active","state_seq":3,"addr":"192.168.123.105:6827/2659122886","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":2659122886},{"type":"v1","addr":"192.168.123.105:6827","nonce":2659122886}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}},"gid_14510":{"gid":14510,"name":"cephfs.vm09.ohmitn","rank":0,"incarnation":0,"state":"up:standby-replay","state_seq":2,"addr":"192.168.123.109:6825/1947130211","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":1947130211},{"type":"v1","addr":"192.168.123.109:6825","nonce":1947130211}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1},"id":1}]} 2026-03-09T14:58:30.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.514+0000 7fb0ab7fe700 1 -- 
192.168.123.105:0/190249001 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb0b00709b0 msgr2=0x7fb0b0072e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:30.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.514+0000 7fb0ab7fe700 1 --2- 192.168.123.105:0/190249001 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb0b00709b0 0x7fb0b0072e60 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7fb0b40039f0 tx=0x7fb0b400b540 comp rx=0 tx=0).stop 2026-03-09T14:58:30.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.514+0000 7fb0ab7fe700 1 -- 192.168.123.105:0/190249001 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb0b809b450 msgr2=0x7fb0b81293d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:30.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.515+0000 7fb0ab7fe700 1 --2- 192.168.123.105:0/190249001 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb0b809b450 0x7fb0b81293d0 secure :-1 s=READY pgs=281 cs=0 l=1 rev1=1 crypto rx=0x7fb0ac009fd0 tx=0x7fb0ac00ec90 comp rx=0 tx=0).stop 2026-03-09T14:58:30.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.515+0000 7fb0ab7fe700 1 -- 192.168.123.105:0/190249001 shutdown_connections 2026-03-09T14:58:30.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.515+0000 7fb0ab7fe700 1 --2- 192.168.123.105:0/190249001 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb0b00709b0 0x7fb0b0072e60 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:30.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.515+0000 7fb0ab7fe700 1 --2- 192.168.123.105:0/190249001 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb0b80062b0 0x7fb0b8128e90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:30.516 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.515+0000 7fb0ab7fe700 1 --2- 192.168.123.105:0/190249001 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb0b809b450 0x7fb0b81293d0 unknown :-1 s=CLOSED pgs=281 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:30.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.515+0000 7fb0ab7fe700 1 -- 192.168.123.105:0/190249001 >> 192.168.123.105:0/190249001 conn(0x7fb0b800b150 msgr2=0x7fb0b8092c20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:30.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.516+0000 7fb0ab7fe700 1 -- 192.168.123.105:0/190249001 shutdown_connections 2026-03-09T14:58:30.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:30.516+0000 7fb0ab7fe700 1 -- 192.168.123.105:0/190249001 wait complete. 2026-03-09T14:58:30.517 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 9 2026-03-09T14:58:30.579 DEBUG:tasks.fs:fs fscid=1,name=cephfs state = {'epoch': 9, 'max_mds': 1, 'flags': 50} 2026-03-09T14:58:30.579 INFO:teuthology.run_tasks:Running task ceph-fuse... 2026-03-09T14:58:30.589 INFO:tasks.ceph_fuse:Running ceph_fuse task... 
2026-03-09T14:58:30.589 INFO:tasks.ceph_fuse:config is {'client.0': {}, 'client.1': {}} 2026-03-09T14:58:30.589 INFO:tasks.ceph_fuse:client.0 config is {} 2026-03-09T14:58:30.589 INFO:tasks.cephfs.mount:cephfs_mntpt = None 2026-03-09T14:58:30.589 INFO:tasks.cephfs.mount:hostfs_mntpt = /home/ubuntu/cephtest/mnt.0 2026-03-09T14:58:30.589 INFO:tasks.ceph_fuse:client.1 config is {} 2026-03-09T14:58:30.589 INFO:tasks.cephfs.mount:cephfs_mntpt = None 2026-03-09T14:58:30.589 INFO:tasks.cephfs.mount:hostfs_mntpt = /home/ubuntu/cephtest/mnt.1 2026-03-09T14:58:30.589 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:30.589 DEBUG:teuthology.orchestra.run.vm09:> ip netns list 2026-03-09T14:58:30.620 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:30.620 DEBUG:teuthology.orchestra.run.vm09:> sudo ip link delete ceph-brx 2026-03-09T14:58:30.698 INFO:teuthology.orchestra.run.vm09.stderr:Cannot find device "ceph-brx" 2026-03-09T14:58:30.700 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:58:30.700 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:30.700 DEBUG:teuthology.orchestra.run.vm05:> ip netns list 2026-03-09T14:58:30.736 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:30.736 DEBUG:teuthology.orchestra.run.vm05:> sudo ip link delete ceph-brx 2026-03-09T14:58:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:30 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:30 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:30 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:30.804 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:30 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:58:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:30 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:30 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/393951613' entity='client.admin' cmd=[{"prefix": "mds versions", "format": "json"}]: dispatch 2026-03-09T14:58:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:30 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:58:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:30 vm05 ceph-mon[50611]: mds.? [v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] up:active 2026-03-09T14:58:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:30 vm05 ceph-mon[50611]: mds.? [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] up:standby-replay 2026-03-09T14:58:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:30 vm05 ceph-mon[50611]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T14:58:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:30 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/190249001' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-09T14:58:30.820 INFO:teuthology.orchestra.run.vm05.stderr:Cannot find device "ceph-brx" 2026-03-09T14:58:30.822 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:58:30.822 INFO:tasks.ceph_fuse:Mounting ceph-fuse clients... 
2026-03-09T14:58:30.822 DEBUG:tasks.ceph_fuse:passing mntargs=[] 2026-03-09T14:58:30.822 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs ls 2026-03-09T14:58:31.113 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:30 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:30 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:30 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:30 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:58:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:30 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:30 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/393951613' entity='client.admin' cmd=[{"prefix": "mds versions", "format": "json"}]: dispatch 2026-03-09T14:58:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:30 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:58:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:30 vm09 ceph-mon[59673]: mds.? 
[v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] up:active 2026-03-09T14:58:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:30 vm09 ceph-mon[59673]: mds.? [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] up:standby-replay 2026-03-09T14:58:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:30 vm09 ceph-mon[59673]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T14:58:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:30 vm09 ceph-mon[59673]: from='client.? 192.168.123.105:0/190249001' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-09T14:58:31.420 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.416+0000 7f906664e700 1 -- 192.168.123.105:0/2542327766 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f90600ffc00 msgr2=0x7f906010c960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:31.420 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.416+0000 7f906664e700 1 --2- 192.168.123.105:0/2542327766 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f90600ffc00 0x7f906010c960 secure :-1 s=READY pgs=282 cs=0 l=1 rev1=1 crypto rx=0x7f9050009ab0 tx=0x7f9050009dc0 comp rx=0 tx=0).stop 2026-03-09T14:58:31.420 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.419+0000 7f906664e700 1 -- 192.168.123.105:0/2542327766 shutdown_connections 2026-03-09T14:58:31.420 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.419+0000 7f906664e700 1 --2- 192.168.123.105:0/2542327766 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f90600ffc00 0x7f906010c960 unknown :-1 s=CLOSED pgs=282 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:31.420 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.419+0000 7f906664e700 1 --2- 192.168.123.105:0/2542327766 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f90600ff2f0 0x7f90600ff6c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:31.420 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.419+0000 7f906664e700 1 -- 192.168.123.105:0/2542327766 >> 192.168.123.105:0/2542327766 conn(0x7f90600faf00 msgr2=0x7f90600fd310 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:31.420 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.419+0000 7f906664e700 1 -- 192.168.123.105:0/2542327766 shutdown_connections 2026-03-09T14:58:31.420 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.419+0000 7f906664e700 1 -- 192.168.123.105:0/2542327766 wait complete. 2026-03-09T14:58:31.421 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.420+0000 7f906664e700 1 Processor -- start 2026-03-09T14:58:31.421 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.420+0000 7f906664e700 1 -- start start 2026-03-09T14:58:31.421 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.420+0000 7f906664e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f90600ff2f0 0x7f9060198380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:31.421 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.420+0000 7f906664e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f90600ffc00 0x7f90601988c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:31.421 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.420+0000 7f906664e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9060198fa0 con 0x7f90600ff2f0 2026-03-09T14:58:31.421 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.420+0000 7f906664e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f906019cd30 con 0x7f90600ffc00 2026-03-09T14:58:31.421 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.421+0000 7f905ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f90600ff2f0 0x7f9060198380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:31.422 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.421+0000 7f905ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f90600ff2f0 0x7f9060198380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:41058/0 (socket says 192.168.123.105:41058) 2026-03-09T14:58:31.422 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.421+0000 7f905ffff700 1 -- 192.168.123.105:0/3573678838 learned_addr learned my addr 192.168.123.105:0/3573678838 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:31.422 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.421+0000 7f905f7fe700 1 --2- 192.168.123.105:0/3573678838 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f90600ffc00 0x7f90601988c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:31.422 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.421+0000 7f905ffff700 1 -- 192.168.123.105:0/3573678838 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f90600ffc00 msgr2=0x7f90601988c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:31.422 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.421+0000 7f905ffff700 1 --2- 192.168.123.105:0/3573678838 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f90600ffc00 0x7f90601988c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:31.422 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.421+0000 7f905ffff700 1 -- 192.168.123.105:0/3573678838 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f90480097e0 con 0x7f90600ff2f0 2026-03-09T14:58:31.422 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.421+0000 7f905f7fe700 1 --2- 192.168.123.105:0/3573678838 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f90600ffc00 0x7f90601988c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T14:58:31.423 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.422+0000 7f905ffff700 1 --2- 192.168.123.105:0/3573678838 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f90600ff2f0 0x7f9060198380 secure :-1 s=READY pgs=283 cs=0 l=1 rev1=1 crypto rx=0x7f904800efd0 tx=0x7f904800c5b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:31.424 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.422+0000 7f905d7fa700 1 -- 192.168.123.105:0/3573678838 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9048004020 con 0x7f90600ff2f0 2026-03-09T14:58:31.424 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.422+0000 7f905d7fa700 1 -- 192.168.123.105:0/3573678838 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f9048003680 con 0x7f90600ff2f0 2026-03-09T14:58:31.424 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.422+0000 7f905d7fa700 1 -- 192.168.123.105:0/3573678838 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9048010780 con 0x7f90600ff2f0 2026-03-09T14:58:31.424 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.422+0000 7f906664e700 1 -- 
192.168.123.105:0/3573678838 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9050009710 con 0x7f90600ff2f0 2026-03-09T14:58:31.424 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.422+0000 7f906664e700 1 -- 192.168.123.105:0/3573678838 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f906019d3d0 con 0x7f90600ff2f0 2026-03-09T14:58:31.425 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.423+0000 7f905d7fa700 1 -- 192.168.123.105:0/3573678838 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f9048010960 con 0x7f90600ff2f0 2026-03-09T14:58:31.425 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.424+0000 7f906664e700 1 -- 192.168.123.105:0/3573678838 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f906010a0d0 con 0x7f90600ff2f0 2026-03-09T14:58:31.428 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.424+0000 7f905d7fa700 1 --2- 192.168.123.105:0/3573678838 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f904c06c680 0x7f904c06eb30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:31.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.425+0000 7f905d7fa700 1 -- 192.168.123.105:0/3573678838 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f9048014070 con 0x7f90600ff2f0 2026-03-09T14:58:31.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.427+0000 7f905f7fe700 1 --2- 192.168.123.105:0/3573678838 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f904c06c680 0x7f904c06eb30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:31.430 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.427+0000 7f905f7fe700 1 --2- 192.168.123.105:0/3573678838 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f904c06c680 0x7f904c06eb30 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f905000b5c0 tx=0x7f9050005dc0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:31.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.428+0000 7f905d7fa700 1 -- 192.168.123.105:0/3573678838 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f9048057940 con 0x7f90600ff2f0 2026-03-09T14:58:31.576 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.574+0000 7f906664e700 1 -- 192.168.123.105:0/3573678838 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs ls"} v 0) v1 -- 0x7f906019cef0 con 0x7f90600ff2f0 2026-03-09T14:58:31.576 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.575+0000 7f905d7fa700 1 -- 192.168.123.105:0/3573678838 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs ls"}]=0 v10) v1 ==== 53+0+83 (secure 0 0 0) 0x7f904805af60 con 0x7f90600ff2f0 2026-03-09T14:58:31.576 INFO:teuthology.orchestra.run.vm05.stdout:name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ] 2026-03-09T14:58:31.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.579+0000 7f906664e700 1 -- 192.168.123.105:0/3573678838 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f904c06c680 msgr2=0x7f904c06eb30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:31.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.579+0000 7f906664e700 1 --2- 192.168.123.105:0/3573678838 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f904c06c680 0x7f904c06eb30 secure :-1 s=READY pgs=102 cs=0 l=1 
rev1=1 crypto rx=0x7f905000b5c0 tx=0x7f9050005dc0 comp rx=0 tx=0).stop 2026-03-09T14:58:31.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.579+0000 7f906664e700 1 -- 192.168.123.105:0/3573678838 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f90600ff2f0 msgr2=0x7f9060198380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:31.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.579+0000 7f906664e700 1 --2- 192.168.123.105:0/3573678838 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f90600ff2f0 0x7f9060198380 secure :-1 s=READY pgs=283 cs=0 l=1 rev1=1 crypto rx=0x7f904800efd0 tx=0x7f904800c5b0 comp rx=0 tx=0).stop 2026-03-09T14:58:31.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.579+0000 7f906664e700 1 -- 192.168.123.105:0/3573678838 shutdown_connections 2026-03-09T14:58:31.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.579+0000 7f906664e700 1 --2- 192.168.123.105:0/3573678838 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f904c06c680 0x7f904c06eb30 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:31.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.579+0000 7f906664e700 1 --2- 192.168.123.105:0/3573678838 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f90600ff2f0 0x7f9060198380 secure :-1 s=CLOSED pgs=283 cs=0 l=1 rev1=1 crypto rx=0x7f904800efd0 tx=0x7f904800c5b0 comp rx=0 tx=0).stop 2026-03-09T14:58:31.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.579+0000 7f906664e700 1 --2- 192.168.123.105:0/3573678838 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f90600ffc00 0x7f90601988c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:31.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.579+0000 7f906664e700 1 -- 192.168.123.105:0/3573678838 >> 
192.168.123.105:0/3573678838 conn(0x7f90600faf00 msgr2=0x7f90600fc490 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:31.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.580+0000 7f906664e700 1 -- 192.168.123.105:0/3573678838 shutdown_connections 2026-03-09T14:58:31.583 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31.580+0000 7f906664e700 1 -- 192.168.123.105:0/3573678838 wait complete. 2026-03-09T14:58:31.685 INFO:tasks.cephfs.mount:Mounting default Ceph FS; just confirmed its presence on cluster 2026-03-09T14:58:31.685 INFO:tasks.cephfs.mount:Mounting Ceph FS. Following are details of mount; remember "None" represents Python type None - 2026-03-09T14:58:31.685 INFO:tasks.cephfs.mount:self.client_remote.hostname = vm05.local 2026-03-09T14:58:31.685 INFO:tasks.cephfs.mount:self.client.name = client.0 2026-03-09T14:58:31.685 INFO:tasks.cephfs.mount:self.hostfs_mntpt = /home/ubuntu/cephtest/mnt.0 2026-03-09T14:58:31.685 INFO:tasks.cephfs.mount:self.cephfs_name = None 2026-03-09T14:58:31.685 INFO:tasks.cephfs.mount:self.cephfs_mntpt = None 2026-03-09T14:58:31.685 INFO:tasks.cephfs.mount:self.client_keyring_path = None 2026-03-09T14:58:31.685 INFO:tasks.cephfs.mount:Setting the 'None' netns for '/home/ubuntu/cephtest/mnt.0' 2026-03-09T14:58:31.685 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:31.685 DEBUG:teuthology.orchestra.run.vm05:> ip addr 2026-03-09T14:58:31.723 INFO:teuthology.orchestra.run.vm05.stdout:1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 2026-03-09T14:58:31.723 INFO:teuthology.orchestra.run.vm05.stdout: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2026-03-09T14:58:31.723 INFO:teuthology.orchestra.run.vm05.stdout: inet 127.0.0.1/8 scope host lo 2026-03-09T14:58:31.724 INFO:teuthology.orchestra.run.vm05.stdout: valid_lft forever preferred_lft forever 2026-03-09T14:58:31.724 INFO:teuthology.orchestra.run.vm05.stdout: inet6 ::1/128 scope host 
2026-03-09T14:58:31.724 INFO:teuthology.orchestra.run.vm05.stdout: valid_lft forever preferred_lft forever 2026-03-09T14:58:31.724 INFO:teuthology.orchestra.run.vm05.stdout:2: eth0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 2026-03-09T14:58:31.724 INFO:teuthology.orchestra.run.vm05.stdout: link/ether 52:55:00:00:00:05 brd ff:ff:ff:ff:ff:ff 2026-03-09T14:58:31.724 INFO:teuthology.orchestra.run.vm05.stdout: altname enp0s3 2026-03-09T14:58:31.724 INFO:teuthology.orchestra.run.vm05.stdout: altname ens3 2026-03-09T14:58:31.724 INFO:teuthology.orchestra.run.vm05.stdout: inet 192.168.123.105/24 brd 192.168.123.255 scope global dynamic noprefixroute eth0 2026-03-09T14:58:31.724 INFO:teuthology.orchestra.run.vm05.stdout: valid_lft 3089sec preferred_lft 3089sec 2026-03-09T14:58:31.724 INFO:teuthology.orchestra.run.vm05.stdout: inet6 fe80::5055:ff:fe00:5/64 scope link noprefixroute 2026-03-09T14:58:31.724 INFO:teuthology.orchestra.run.vm05.stdout: valid_lft forever preferred_lft forever 2026-03-09T14:58:31.724 INFO:tasks.cephfs.mount:Setuping the 'ceph-brx' with 192.168.159.254/20 2026-03-09T14:58:31.724 DEBUG:teuthology.orchestra.run.vm05:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T14:58:31.724 DEBUG:teuthology.orchestra.run.vm05:> set -e 2026-03-09T14:58:31.724 DEBUG:teuthology.orchestra.run.vm05:> sudo ip link add name ceph-brx type bridge 2026-03-09T14:58:31.724 DEBUG:teuthology.orchestra.run.vm05:> sudo ip addr flush dev ceph-brx 2026-03-09T14:58:31.724 DEBUG:teuthology.orchestra.run.vm05:> sudo ip link set ceph-brx up 2026-03-09T14:58:31.724 DEBUG:teuthology.orchestra.run.vm05:> sudo ip addr add 192.168.159.254/20 brd 192.168.159.255 dev ceph-brx 2026-03-09T14:58:31.724 DEBUG:teuthology.orchestra.run.vm05:> ') 2026-03-09T14:58:31.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T14:58:31.938 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:31 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:31.938 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:31 vm05 ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:31.938 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:31 vm05 ceph-mon[50611]: pgmap v82: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.5 KiB/s rd, 1.8 KiB/s wr, 7 op/s 2026-03-09T14:58:31.938 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:31 vm05 ceph-mon[50611]: mds.? [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] up:standby 2026-03-09T14:58:31.938 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:31 vm05 ceph-mon[50611]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T14:58:31.938 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:31 vm05 ceph-mon[50611]: from='client.? 192.168.123.105:0/3573678838' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch 2026-03-09T14:58:31.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:31 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T14:58:31.976 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:31.976 DEBUG:teuthology.orchestra.run.vm05:> echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward 2026-03-09T14:58:32.012 INFO:teuthology.orchestra.run.vm05.stdout:1 2026-03-09T14:58:32.014 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:32.014 DEBUG:teuthology.orchestra.run.vm05:> ip r 2026-03-09T14:58:32.087 INFO:teuthology.orchestra.run.vm05.stdout:default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.105 metric 100 2026-03-09T14:58:32.088 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.105 metric 100 2026-03-09T14:58:32.088 INFO:teuthology.orchestra.run.vm05.stdout:192.168.144.0/20 dev ceph-brx proto kernel scope link src 192.168.159.254 2026-03-09T14:58:32.088 DEBUG:teuthology.orchestra.run.vm05:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T14:58:32.088 DEBUG:teuthology.orchestra.run.vm05:> set -e 2026-03-09T14:58:32.088 DEBUG:teuthology.orchestra.run.vm05:> sudo iptables -A FORWARD -o eth0 -i ceph-brx -j ACCEPT 2026-03-09T14:58:32.088 DEBUG:teuthology.orchestra.run.vm05:> sudo iptables -A FORWARD -i eth0 -o ceph-brx -j ACCEPT 2026-03-09T14:58:32.088 DEBUG:teuthology.orchestra.run.vm05:> sudo iptables -t nat -A POSTROUTING -s 192.168.159.254/20 -o eth0 -j MASQUERADE 2026-03-09T14:58:32.088 DEBUG:teuthology.orchestra.run.vm05:> ') 2026-03-09T14:58:32.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:32 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T14:58:32.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:32 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T14:58:32.260 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:32.260 DEBUG:teuthology.orchestra.run.vm05:> ip netns list 2026-03-09T14:58:32.278 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:32.278 DEBUG:teuthology.orchestra.run.vm05:> ip netns list-id 2026-03-09T14:58:32.334 DEBUG:teuthology.orchestra.run.vm05:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T14:58:32.334 DEBUG:teuthology.orchestra.run.vm05:> set -e 2026-03-09T14:58:32.334 DEBUG:teuthology.orchestra.run.vm05:> sudo ip netns add ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-09T14:58:32.334 DEBUG:teuthology.orchestra.run.vm05:> sudo ip netns set ceph-ns--home-ubuntu-cephtest-mnt.0 0 2026-03-09T14:58:32.335 DEBUG:teuthology.orchestra.run.vm05:> ') 2026-03-09T14:58:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:31 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:31 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:31 vm09 ceph-mon[59673]: pgmap v82: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.5 KiB/s rd, 1.8 KiB/s wr, 7 op/s 2026-03-09T14:58:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:31 vm09 ceph-mon[59673]: mds.? [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] up:standby 2026-03-09T14:58:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:31 vm09 ceph-mon[59673]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T14:58:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:31 vm09 ceph-mon[59673]: from='client.? 
192.168.123.105:0/3573678838' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch 2026-03-09T14:58:32.416 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:32 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T14:58:32.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:32 stdin-killer INFO: command exited with status 0: exiting normally with same code! 2026-03-09T14:58:32.452 INFO:tasks.cephfs.mount:Setuping the netns 'ceph-ns--home-ubuntu-cephtest-mnt.0' with 192.168.144.1/20 2026-03-09T14:58:32.452 DEBUG:teuthology.orchestra.run.vm05:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T14:58:32.452 DEBUG:teuthology.orchestra.run.vm05:> set -e 2026-03-09T14:58:32.452 DEBUG:teuthology.orchestra.run.vm05:> sudo ip link add veth0 netns ceph-ns--home-ubuntu-cephtest-mnt.0 type veth peer name brx.0 2026-03-09T14:58:32.452 DEBUG:teuthology.orchestra.run.vm05:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.0 ip addr add 192.168.144.1/20 brd 192.168.159.255 dev veth0 2026-03-09T14:58:32.452 DEBUG:teuthology.orchestra.run.vm05:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.0 ip link set veth0 up 2026-03-09T14:58:32.452 DEBUG:teuthology.orchestra.run.vm05:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.0 ip link set lo up 2026-03-09T14:58:32.452 DEBUG:teuthology.orchestra.run.vm05:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.0 ip route add default via 192.168.159.254 2026-03-09T14:58:32.453 DEBUG:teuthology.orchestra.run.vm05:> ') 2026-03-09T14:58:32.533 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:32 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T14:58:32.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:32 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T14:58:32.611 DEBUG:teuthology.orchestra.run.vm05:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T14:58:32.611 DEBUG:teuthology.orchestra.run.vm05:> set -e 2026-03-09T14:58:32.611 DEBUG:teuthology.orchestra.run.vm05:> sudo ip link set brx.0 up 2026-03-09T14:58:32.611 DEBUG:teuthology.orchestra.run.vm05:> sudo ip link set dev brx.0 master ceph-brx 2026-03-09T14:58:32.611 DEBUG:teuthology.orchestra.run.vm05:> ') 2026-03-09T14:58:32.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:32 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T14:58:32.722 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:32 stdin-killer INFO: command exited with status 0: exiting normally with same code! 2026-03-09T14:58:32.728 INFO:tasks.cephfs.fuse_mount:Client client.0 config is {} 2026-03-09T14:58:32.728 INFO:teuthology.orchestra.run:Running command with timeout 60 2026-03-09T14:58:32.728 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -v /home/ubuntu/cephtest/mnt.0 2026-03-09T14:58:32.788 INFO:teuthology.orchestra.run.vm05.stdout:mkdir: created directory '/home/ubuntu/cephtest/mnt.0' 2026-03-09T14:58:32.788 INFO:teuthology.orchestra.run:Running command with timeout 60 2026-03-09T14:58:32.788 DEBUG:teuthology.orchestra.run.vm05:> chmod 0000 /home/ubuntu/cephtest/mnt.0 2026-03-09T14:58:32.847 DEBUG:teuthology.orchestra.run.vm05:> sudo modprobe fuse 2026-03-09T14:58:32.919 DEBUG:teuthology.orchestra.run.vm05:> cat /proc/self/mounts | awk '{print $2}' 2026-03-09T14:58:32.977 INFO:teuthology.orchestra.run.vm05.stdout:/proc 2026-03-09T14:58:32.977 INFO:teuthology.orchestra.run.vm05.stdout:/sys 2026-03-09T14:58:32.977 INFO:teuthology.orchestra.run.vm05.stdout:/dev 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/security 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/dev/shm 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/dev/pts 
2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/run 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/cgroup 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/pstore 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/bpf 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/config 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/ 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/nfs/rpc_pipefs 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/selinux 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/dev/hugepages 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/dev/mqueue 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/debug 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/tracing 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-sysctl.service 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/fuse/connections 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-sysusers.service 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/run/user/1000 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/7f5ba7cdbc33d5be9325ee0ae6f7cdb3eae2c04a866c886757abc3cd09b4f4f8/merged 2026-03-09T14:58:32.978 
INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/7a08c07221d54e810a4861ba5e4d5426a85fabaa2406dfe8db8deae79bdd4e1d/merged 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/run/user/0 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/351d72eb0254b5235b064b56677b75d89894396536c22b0b903a63e3b873d140/merged 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/f889932990511e8c607b2ba15f01248a18b04d1dddff028d5838a3eaffc5b205/merged 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/ed2d9b9b1b350515040d9310887f112b1443050bb9e2818aa84efd27ed7bb210/merged 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/bdee58a23aa7b7a589e6a673400927c6ed9721feb21381368cffbf108ac7fa5c/merged 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/92e4c5f4b0a813e21896f223012b5b49b503c9a1f03354f81c4929aa551f07b8/merged 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/7c5dca5b4799474257a716d2fb0cf8297a9146941695f39e06bf42eb64dac517/merged 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/75ba8c4e32b8270a8bb26e3d2d4308696ad2b2f55ea96c36334ac5edd143631e/merged 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/c020c104528e4a01161683655337f59e5e01aa884266dab812161305c823ffc6/merged 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/b3e2852dbf328deb9a14093a8b19d86c94ab8764228f7ad75264e230b5847ef3/merged 2026-03-09T14:58:32.978 
INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/69a08a19a9c2f72e1c8e0000095a4144906ff0a185e47082c70b333ffe73f29c/merged 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/45b8a28ddd736d508e8fcb08035ac5f74916734ce978d13c8491b30268e0957f/merged 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/run/netns 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-09T14:58:32.978 INFO:teuthology.orchestra.run.vm05.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-09T14:58:32.979 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:32.979 DEBUG:teuthology.orchestra.run.vm05:> ls /sys/fs/fuse/connections 2026-03-09T14:58:32.987 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:32 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:32.987 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:32 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:32.987 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:32 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:32.987 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:32 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:58:32.987 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:32 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:32.987 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:32 vm05.local ceph-mon[50611]: mds.? 
[v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] up:standby 2026-03-09T14:58:32.987 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:32 vm05.local ceph-mon[50611]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T14:58:33.007 INFO:tasks.cephfs.fuse_mount:Pre-mount connections: [] 2026-03-09T14:58:33.008 DEBUG:teuthology.orchestra.run.vm05:> (cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.0 --id 0) 2026-03-09T14:58:33.050 DEBUG:teuthology.orchestra.run.vm05:> sudo modprobe fuse 2026-03-09T14:58:33.081 DEBUG:teuthology.orchestra.run.vm05:> cat /proc/self/mounts | awk '{print $2}' 2026-03-09T14:58:33.132 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vm05.stderr:2026-03-09T14:58:33.131+0000 7f27d6bd4480 -1 init, newargv = 0x558dc52c17a0 newargc=15 2026-03-09T14:58:33.132 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vm05.stderr:ceph-fuse[92220]: starting ceph client 2026-03-09T14:58:33.141 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vm05.stderr:ceph-fuse[92220]: starting fuse 2026-03-09T14:58:33.150 INFO:teuthology.orchestra.run.vm05.stdout:/proc 2026-03-09T14:58:33.150 INFO:teuthology.orchestra.run.vm05.stdout:/sys 2026-03-09T14:58:33.150 INFO:teuthology.orchestra.run.vm05.stdout:/dev 2026-03-09T14:58:33.150 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/security 2026-03-09T14:58:33.150 INFO:teuthology.orchestra.run.vm05.stdout:/dev/shm 2026-03-09T14:58:33.150 INFO:teuthology.orchestra.run.vm05.stdout:/dev/pts 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/run 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/cgroup 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/pstore 2026-03-09T14:58:33.151 
INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/bpf 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/config 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/ 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/nfs/rpc_pipefs 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/selinux 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/dev/hugepages 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/dev/mqueue 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/debug 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/sys/kernel/tracing 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-sysctl.service 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/sys/fs/fuse/connections 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-sysusers.service 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/run/user/1000 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/7f5ba7cdbc33d5be9325ee0ae6f7cdb3eae2c04a866c886757abc3cd09b4f4f8/merged 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/7a08c07221d54e810a4861ba5e4d5426a85fabaa2406dfe8db8deae79bdd4e1d/merged 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/run/user/0 2026-03-09T14:58:33.151 
INFO:teuthology.orchestra.run.vm05.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/351d72eb0254b5235b064b56677b75d89894396536c22b0b903a63e3b873d140/merged 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/f889932990511e8c607b2ba15f01248a18b04d1dddff028d5838a3eaffc5b205/merged 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/ed2d9b9b1b350515040d9310887f112b1443050bb9e2818aa84efd27ed7bb210/merged 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/bdee58a23aa7b7a589e6a673400927c6ed9721feb21381368cffbf108ac7fa5c/merged 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/92e4c5f4b0a813e21896f223012b5b49b503c9a1f03354f81c4929aa551f07b8/merged 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/7c5dca5b4799474257a716d2fb0cf8297a9146941695f39e06bf42eb64dac517/merged 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/75ba8c4e32b8270a8bb26e3d2d4308696ad2b2f55ea96c36334ac5edd143631e/merged 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/c020c104528e4a01161683655337f59e5e01aa884266dab812161305c823ffc6/merged 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/b3e2852dbf328deb9a14093a8b19d86c94ab8764228f7ad75264e230b5847ef3/merged 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/69a08a19a9c2f72e1c8e0000095a4144906ff0a185e47082c70b333ffe73f29c/merged 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/var/lib/containers/storage/overlay/45b8a28ddd736d508e8fcb08035ac5f74916734ce978d13c8491b30268e0957f/merged 
2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/run/netns 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run.vm05.stdout:/home/ubuntu/cephtest/mnt.0 2026-03-09T14:58:33.151 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:33.151 DEBUG:teuthology.orchestra.run.vm05:> ls /sys/fs/fuse/connections 2026-03-09T14:58:33.210 INFO:teuthology.orchestra.run.vm05.stdout:105 2026-03-09T14:58:33.210 INFO:tasks.cephfs.fuse_mount:Post-mount connections: [105] 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> sudo stdin-killer -- python3 -c ' 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> import glob 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> import re 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> import os 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> import subprocess 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> def _find_admin_socket(client_name): 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> asok_path = "/var/run/ceph/ceph-client.0.*.asok" 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> files = glob.glob(asok_path) 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> mountpoint = "/home/ubuntu/cephtest/mnt.0" 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> # Given a non-glob path, it better be there 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> if "*" not in asok_path: 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> assert(len(files) == 1) 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> return 
files[0] 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> for f in files: 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> pid = re.match(".*\.(\d+)\.asok$", f).group(1) 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> if os.path.exists("/proc/{0}".format(pid)): 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> with open("/proc/{0}/cmdline".format(pid), '"'"'r'"'"') as proc_f: 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> contents = proc_f.read() 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> if mountpoint in contents: 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> return f 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> raise RuntimeError("Client socket {0} not found".format(client_name)) 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> print(_find_admin_socket("client.0")) 2026-03-09T14:58:33.210 DEBUG:teuthology.orchestra.run.vm05:> ' 2026-03-09T14:58:33.317 INFO:teuthology.orchestra.run.vm05.stdout:/var/run/ceph/ceph-client.0.92220.asok 2026-03-09T14:58:33.320 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T14:58:33.329 INFO:tasks.cephfs.fuse_mount:Found client admin socket at /var/run/ceph/ceph-client.0.92220.asok 2026-03-09T14:58:33.329 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:33.329 DEBUG:teuthology.orchestra.run.vm05:> sudo ceph --admin-daemon /var/run/ceph/ceph-client.0.92220.asok status 2026-03-09T14:58:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:32 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:32 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:32 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:58:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:32 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:58:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:32 vm09 ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:32 vm09 ceph-mon[59673]: mds.? 
[v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] up:standby 2026-03-09T14:58:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:32 vm09 ceph-mon[59673]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "metadata": { 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "ceph_sha1": "5dd24139a1eada541a3bc16b6941c5dde975e26d", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "ceph_version": "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "entity_id": "0", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "hostname": "vm05.local", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "mount_point": "/home/ubuntu/cephtest/mnt.0", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "pid": "92220", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "root": "/" 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "dentry_count": 0, 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "dentry_pinned_count": 0, 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "id": 14546, 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "inst": { 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "name": { 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "type": "client", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "num": 14546 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "addr": { 2026-03-09T14:58:33.451 
INFO:teuthology.orchestra.run.vm05.stdout: "type": "v1", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "addr": "192.168.144.1:0", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "nonce": 3878864280 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "addr": { 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "type": "v1", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "addr": "192.168.144.1:0", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "nonce": 3878864280 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "inst_str": "client.14546 192.168.144.1:0/3878864280", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "addr_str": "192.168.144.1:0/3878864280", 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "inode_count": 1, 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "mds_epoch": 9, 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "osd_epoch": 37, 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "osd_epoch_barrier": 0, 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "blocklisted": false, 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout: "fs_name": "cephfs" 2026-03-09T14:58:33.451 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T14:58:33.458 DEBUG:tasks.ceph_fuse:passing mntargs=[] 2026-03-09T14:58:33.458 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs ls 2026-03-09T14:58:33.629 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config 
/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:33.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.925+0000 7f4e0bb25700 1 -- 192.168.123.105:0/379618511 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4e040fff00 msgr2=0x7f4e04100370 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:33.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.925+0000 7f4e0bb25700 1 --2- 192.168.123.105:0/379618511 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4e040fff00 0x7f4e04100370 secure :-1 s=READY pgs=286 cs=0 l=1 rev1=1 crypto rx=0x7f4e00009b00 tx=0x7f4e00009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:33.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.927+0000 7f4e0bb25700 1 -- 192.168.123.105:0/379618511 shutdown_connections 2026-03-09T14:58:33.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.927+0000 7f4e0bb25700 1 --2- 192.168.123.105:0/379618511 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4e040fff00 0x7f4e04100370 unknown :-1 s=CLOSED pgs=286 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:33.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.927+0000 7f4e0bb25700 1 --2- 192.168.123.105:0/379618511 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4e04104520 0x7f4e041048f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:33.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.927+0000 7f4e0bb25700 1 -- 192.168.123.105:0/379618511 >> 192.168.123.105:0/379618511 conn(0x7f4e040754a0 msgr2=0x7f4e040758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:33.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.928+0000 7f4e0bb25700 1 -- 192.168.123.105:0/379618511 shutdown_connections 2026-03-09T14:58:33.929 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.928+0000 7f4e0bb25700 1 -- 192.168.123.105:0/379618511 wait complete. 2026-03-09T14:58:33.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.928+0000 7f4e0bb25700 1 Processor -- start 2026-03-09T14:58:33.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.929+0000 7f4e0bb25700 1 -- start start 2026-03-09T14:58:33.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.929+0000 7f4e0bb25700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4e040fff00 0x7f4e04198e00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:33.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.929+0000 7f4e0bb25700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4e04104520 0x7f4e04193e00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:33.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.929+0000 7f4e0bb25700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4e04194340 con 0x7f4e04104520 2026-03-09T14:58:33.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.929+0000 7f4e0bb25700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4e041944b0 con 0x7f4e040fff00 2026-03-09T14:58:33.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.929+0000 7f4e090c0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4e04104520 0x7f4e04193e00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:33.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.929+0000 7f4e090c0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4e04104520 0x7f4e04193e00 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:41090/0 (socket says 192.168.123.105:41090) 2026-03-09T14:58:33.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.929+0000 7f4e090c0700 1 -- 192.168.123.105:0/3676824848 learned_addr learned my addr 192.168.123.105:0/3676824848 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:33.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.929+0000 7f4e098c1700 1 --2- 192.168.123.105:0/3676824848 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4e040fff00 0x7f4e04198e00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:33.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.930+0000 7f4e090c0700 1 -- 192.168.123.105:0/3676824848 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4e040fff00 msgr2=0x7f4e04198e00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:33.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.930+0000 7f4e090c0700 1 --2- 192.168.123.105:0/3676824848 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4e040fff00 0x7f4e04198e00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:33.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.930+0000 7f4e090c0700 1 -- 192.168.123.105:0/3676824848 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4e000097e0 con 0x7f4e04104520 2026-03-09T14:58:33.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.930+0000 7f4e098c1700 1 --2- 192.168.123.105:0/3676824848 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4e040fff00 0x7f4e04198e00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T14:58:33.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.930+0000 7f4e090c0700 1 --2- 192.168.123.105:0/3676824848 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4e04104520 0x7f4e04193e00 secure :-1 s=READY pgs=287 cs=0 l=1 rev1=1 crypto rx=0x7f4e00009fd0 tx=0x7f4e00004a40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:33.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.930+0000 7f4dfaffd700 1 -- 192.168.123.105:0/3676824848 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4e0001d070 con 0x7f4e04104520 2026-03-09T14:58:33.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.930+0000 7f4e0bb25700 1 -- 192.168.123.105:0/3676824848 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4e04194730 con 0x7f4e04104520 2026-03-09T14:58:33.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.930+0000 7f4e0bb25700 1 -- 192.168.123.105:0/3676824848 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4e04194c20 con 0x7f4e04104520 2026-03-09T14:58:33.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.931+0000 7f4dfaffd700 1 -- 192.168.123.105:0/3676824848 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f4e00004510 con 0x7f4e04104520 2026-03-09T14:58:33.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.931+0000 7f4dfaffd700 1 -- 192.168.123.105:0/3676824848 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4e00003e90 con 0x7f4e04104520 2026-03-09T14:58:33.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.932+0000 7f4dfaffd700 1 -- 192.168.123.105:0/3676824848 <== mon.0 v2:192.168.123.105:3300/0 4 ==== 
mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f4e0000f460 con 0x7f4e04104520 2026-03-09T14:58:33.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.933+0000 7f4e0bb25700 1 -- 192.168.123.105:0/3676824848 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4e0404ea50 con 0x7f4e04104520 2026-03-09T14:58:33.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.933+0000 7f4dfaffd700 1 --2- 192.168.123.105:0/3676824848 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4df006c750 0x7f4df006ec00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:33.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.933+0000 7f4dfaffd700 1 -- 192.168.123.105:0/3676824848 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f4e0008dcd0 con 0x7f4e04104520 2026-03-09T14:58:33.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.936+0000 7f4dfaffd700 1 -- 192.168.123.105:0/3676824848 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f4e000584d0 con 0x7f4e04104520 2026-03-09T14:58:33.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.936+0000 7f4e098c1700 1 --2- 192.168.123.105:0/3676824848 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4df006c750 0x7f4df006ec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:33.938 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:33.937+0000 7f4e098c1700 1 --2- 192.168.123.105:0/3676824848 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4df006c750 0x7f4df006ec00 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f4df4005fd0 tx=0x7f4df4005e20 comp 
rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:34.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.082+0000 7f4e0bb25700 1 -- 192.168.123.105:0/3676824848 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs ls"} v 0) v1 -- 0x7f4e04195330 con 0x7f4e04104520 2026-03-09T14:58:34.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.083+0000 7f4dfaffd700 1 -- 192.168.123.105:0/3676824848 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs ls"}]=0 v11) v1 ==== 53+0+83 (secure 0 0 0) 0x7f4e0005baf0 con 0x7f4e04104520 2026-03-09T14:58:34.084 INFO:teuthology.orchestra.run.vm05.stdout:name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ] 2026-03-09T14:58:34.086 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.086+0000 7f4e0bb25700 1 -- 192.168.123.105:0/3676824848 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4df006c750 msgr2=0x7f4df006ec00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:34.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.086+0000 7f4e0bb25700 1 --2- 192.168.123.105:0/3676824848 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4df006c750 0x7f4df006ec00 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f4df4005fd0 tx=0x7f4df4005e20 comp rx=0 tx=0).stop 2026-03-09T14:58:34.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.086+0000 7f4e0bb25700 1 -- 192.168.123.105:0/3676824848 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4e04104520 msgr2=0x7f4e04193e00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:34.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.086+0000 7f4e0bb25700 1 --2- 192.168.123.105:0/3676824848 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4e04104520 0x7f4e04193e00 secure :-1 
s=READY pgs=287 cs=0 l=1 rev1=1 crypto rx=0x7f4e00009fd0 tx=0x7f4e00004a40 comp rx=0 tx=0).stop 2026-03-09T14:58:34.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.086+0000 7f4e0bb25700 1 -- 192.168.123.105:0/3676824848 shutdown_connections 2026-03-09T14:58:34.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.086+0000 7f4e0bb25700 1 --2- 192.168.123.105:0/3676824848 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f4df006c750 0x7f4df006ec00 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:34.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.086+0000 7f4e0bb25700 1 --2- 192.168.123.105:0/3676824848 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4e040fff00 0x7f4e04198e00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:34.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.086+0000 7f4e0bb25700 1 --2- 192.168.123.105:0/3676824848 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4e04104520 0x7f4e04193e00 unknown :-1 s=CLOSED pgs=287 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:34.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.086+0000 7f4e0bb25700 1 -- 192.168.123.105:0/3676824848 >> 192.168.123.105:0/3676824848 conn(0x7f4e040754a0 msgr2=0x7f4e040feca0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:34.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.086+0000 7f4e0bb25700 1 -- 192.168.123.105:0/3676824848 shutdown_connections 2026-03-09T14:58:34.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:34.087+0000 7f4e0bb25700 1 -- 192.168.123.105:0/3676824848 wait complete. 2026-03-09T14:58:34.135 INFO:tasks.cephfs.mount:Mounting default Ceph FS; just confirmed its presence on cluster 2026-03-09T14:58:34.135 INFO:tasks.cephfs.mount:Mounting Ceph FS. 
Following are details of mount; remember "None" represents Python type None - 2026-03-09T14:58:34.135 INFO:tasks.cephfs.mount:self.client_remote.hostname = vm09.local 2026-03-09T14:58:34.135 INFO:tasks.cephfs.mount:self.client.name = client.1 2026-03-09T14:58:34.135 INFO:tasks.cephfs.mount:self.hostfs_mntpt = /home/ubuntu/cephtest/mnt.1 2026-03-09T14:58:34.135 INFO:tasks.cephfs.mount:self.cephfs_name = None 2026-03-09T14:58:34.136 INFO:tasks.cephfs.mount:self.cephfs_mntpt = None 2026-03-09T14:58:34.136 INFO:tasks.cephfs.mount:self.client_keyring_path = None 2026-03-09T14:58:34.136 INFO:tasks.cephfs.mount:Setting the 'None' netns for '/home/ubuntu/cephtest/mnt.1' 2026-03-09T14:58:34.136 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:34.136 DEBUG:teuthology.orchestra.run.vm09:> ip addr 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout:1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout: link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout: inet 127.0.0.1/8 scope host lo 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout: valid_lft forever preferred_lft forever 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout: inet6 ::1/128 scope host 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout: valid_lft forever preferred_lft forever 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout:2: eth0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout: link/ether 52:55:00:00:00:09 brd ff:ff:ff:ff:ff:ff 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout: altname enp0s3 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout: altname ens3 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout: inet 192.168.123.109/24 brd 
192.168.123.255 scope global dynamic noprefixroute eth0 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout: valid_lft 3122sec preferred_lft 3122sec 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout: inet6 fe80::5055:ff:fe00:9/64 scope link noprefixroute 2026-03-09T14:58:34.153 INFO:teuthology.orchestra.run.vm09.stdout: valid_lft forever preferred_lft forever 2026-03-09T14:58:34.153 INFO:tasks.cephfs.mount:Setuping the 'ceph-brx' with 192.168.159.254/20 2026-03-09T14:58:34.153 DEBUG:teuthology.orchestra.run.vm09:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T14:58:34.153 DEBUG:teuthology.orchestra.run.vm09:> set -e 2026-03-09T14:58:34.153 DEBUG:teuthology.orchestra.run.vm09:> sudo ip link add name ceph-brx type bridge 2026-03-09T14:58:34.153 DEBUG:teuthology.orchestra.run.vm09:> sudo ip addr flush dev ceph-brx 2026-03-09T14:58:34.153 DEBUG:teuthology.orchestra.run.vm09:> sudo ip link set ceph-brx up 2026-03-09T14:58:34.153 DEBUG:teuthology.orchestra.run.vm09:> sudo ip addr add 192.168.159.254/20 brd 192.168.159.255 dev ceph-brx 2026-03-09T14:58:34.153 DEBUG:teuthology.orchestra.run.vm09:> ') 2026-03-09T14:58:34.236 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:34 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T14:58:34.245 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:33 vm09 ceph-mon[59673]: pgmap v83: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 1.4 KiB/s wr, 6 op/s 2026-03-09T14:58:34.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:33 vm05.local ceph-mon[50611]: pgmap v83: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 1.4 KiB/s wr, 6 op/s 2026-03-09T14:58:34.313 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:34 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
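The bridge setup above configures ceph-brx as 192.168.159.254/20, and the route table that follows shows the enclosing network as 192.168.144.0/20 — the same /20 that contains the fuse client address 192.168.144.1 seen earlier. A quick sketch with Python's ipaddress module (illustrative only, not part of teuthology) confirms the subnet math behind the `brd 192.168.159.255` argument:

```python
import ipaddress

# The ceph-brx bridge is configured as 192.168.159.254/20 (see the
# `ip addr add` command above). ip_interface accepts a host address
# with its prefix and exposes the enclosing network.
brx = ipaddress.ip_interface("192.168.159.254/20")

print(brx.network)                    # 192.168.144.0/20 -- matches `ip r`
print(brx.network.broadcast_address)  # 192.168.159.255  -- the brd argument
print(ipaddress.ip_address("192.168.144.1") in brx.network)  # True
```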
2026-03-09T14:58:34.319 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:34.319 DEBUG:teuthology.orchestra.run.vm09:> echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward 2026-03-09T14:58:34.362 INFO:teuthology.orchestra.run.vm09.stdout:1 2026-03-09T14:58:34.364 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:34.364 DEBUG:teuthology.orchestra.run.vm09:> ip r 2026-03-09T14:58:34.386 INFO:teuthology.orchestra.run.vm09.stdout:default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.109 metric 100 2026-03-09T14:58:34.386 INFO:teuthology.orchestra.run.vm09.stdout:192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.109 metric 100 2026-03-09T14:58:34.386 INFO:teuthology.orchestra.run.vm09.stdout:192.168.144.0/20 dev ceph-brx proto kernel scope link src 192.168.159.254 2026-03-09T14:58:34.386 DEBUG:teuthology.orchestra.run.vm09:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T14:58:34.386 DEBUG:teuthology.orchestra.run.vm09:> set -e 2026-03-09T14:58:34.386 DEBUG:teuthology.orchestra.run.vm09:> sudo iptables -A FORWARD -o eth0 -i ceph-brx -j ACCEPT 2026-03-09T14:58:34.386 DEBUG:teuthology.orchestra.run.vm09:> sudo iptables -A FORWARD -i eth0 -o ceph-brx -j ACCEPT 2026-03-09T14:58:34.386 DEBUG:teuthology.orchestra.run.vm09:> sudo iptables -t nat -A POSTROUTING -s 192.168.159.254/20 -o eth0 -j MASQUERADE 2026-03-09T14:58:34.386 DEBUG:teuthology.orchestra.run.vm09:> ') 2026-03-09T14:58:34.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:34 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T14:58:34.541 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:34 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T14:58:34.546 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:34.546 DEBUG:teuthology.orchestra.run.vm09:> ip netns list 2026-03-09T14:58:34.604 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:34.604 DEBUG:teuthology.orchestra.run.vm09:> ip netns list-id 2026-03-09T14:58:34.663 DEBUG:teuthology.orchestra.run.vm09:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T14:58:34.663 DEBUG:teuthology.orchestra.run.vm09:> set -e 2026-03-09T14:58:34.664 DEBUG:teuthology.orchestra.run.vm09:> sudo ip netns add ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-09T14:58:34.664 DEBUG:teuthology.orchestra.run.vm09:> sudo ip netns set ceph-ns--home-ubuntu-cephtest-mnt.1 0 2026-03-09T14:58:34.664 DEBUG:teuthology.orchestra.run.vm09:> ') 2026-03-09T14:58:34.744 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:34 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T14:58:34.780 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:34 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T14:58:34.782 INFO:tasks.cephfs.mount:Setuping the netns 'ceph-ns--home-ubuntu-cephtest-mnt.1' with 192.168.144.1/20 2026-03-09T14:58:34.782 DEBUG:teuthology.orchestra.run.vm09:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T14:58:34.783 DEBUG:teuthology.orchestra.run.vm09:> set -e 2026-03-09T14:58:34.783 DEBUG:teuthology.orchestra.run.vm09:> sudo ip link add veth0 netns ceph-ns--home-ubuntu-cephtest-mnt.1 type veth peer name brx.0 2026-03-09T14:58:34.783 DEBUG:teuthology.orchestra.run.vm09:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.1 ip addr add 192.168.144.1/20 brd 192.168.159.255 dev veth0 2026-03-09T14:58:34.783 DEBUG:teuthology.orchestra.run.vm09:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.1 ip link set veth0 up 2026-03-09T14:58:34.783 DEBUG:teuthology.orchestra.run.vm09:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.1 ip link set lo up 2026-03-09T14:58:34.783 DEBUG:teuthology.orchestra.run.vm09:> sudo ip netns exec ceph-ns--home-ubuntu-cephtest-mnt.1 ip route add default via 192.168.159.254 2026-03-09T14:58:34.783 DEBUG:teuthology.orchestra.run.vm09:> ') 2026-03-09T14:58:34.860 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:34 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T14:58:34.936 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:34 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
2026-03-09T14:58:34.941 DEBUG:teuthology.orchestra.run.vm09:> (cd / && exec stdin-killer --timeout=300 -- bash -c ' 2026-03-09T14:58:34.941 DEBUG:teuthology.orchestra.run.vm09:> set -e 2026-03-09T14:58:34.941 DEBUG:teuthology.orchestra.run.vm09:> sudo ip link set brx.0 up 2026-03-09T14:58:34.941 DEBUG:teuthology.orchestra.run.vm09:> sudo ip link set dev brx.0 master ceph-brx 2026-03-09T14:58:34.941 DEBUG:teuthology.orchestra.run.vm09:> ') 2026-03-09T14:58:35.023 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:35 stdin-killer INFO: expiration expected; waiting 300 seconds for command to complete 2026-03-09T14:58:35.031 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:34 vm09.local ceph-mon[59673]: from='client.? 192.168.123.105:0/3676824848' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch 2026-03-09T14:58:35.031 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:34 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:35.054 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:35 stdin-killer INFO: command exited with status 0: exiting normally with same code! 
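Later in the log, an inline Python script locates the client admin socket by globbing `/var/run/ceph/ceph-client.1.*.asok` and pulling the owning pid out of each filename with the regex `.*\.(\d+)\.asok$`. That pattern works, but written without a raw string it triggers invalid-escape warnings on modern Python; a small standalone sketch of the same pid extraction (illustrative only):

```python
import re

def pid_from_asok(path):
    # Admin sockets are named like /var/run/ceph/ceph-client.1.81455.asok;
    # the trailing numeric component before '.asok' is the owning pid.
    m = re.match(r".*\.(\d+)\.asok$", path)
    return int(m.group(1)) if m else None

print(pid_from_asok("/var/run/ceph/ceph-client.1.81455.asok"))  # 81455
```

The original script then checks `/proc/<pid>/cmdline` for the mountpoint to pick the socket belonging to the live ceph-fuse process, as seen in its output further down.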
2026-03-09T14:58:35.058 INFO:tasks.cephfs.fuse_mount:Client client.1 config is {} 2026-03-09T14:58:35.058 INFO:teuthology.orchestra.run:Running command with timeout 60 2026-03-09T14:58:35.058 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -v /home/ubuntu/cephtest/mnt.1 2026-03-09T14:58:35.116 INFO:teuthology.orchestra.run.vm09.stdout:mkdir: created directory '/home/ubuntu/cephtest/mnt.1' 2026-03-09T14:58:35.116 INFO:teuthology.orchestra.run:Running command with timeout 60 2026-03-09T14:58:35.116 DEBUG:teuthology.orchestra.run.vm09:> chmod 0000 /home/ubuntu/cephtest/mnt.1 2026-03-09T14:58:35.175 DEBUG:teuthology.orchestra.run.vm09:> sudo modprobe fuse 2026-03-09T14:58:35.244 DEBUG:teuthology.orchestra.run.vm09:> cat /proc/self/mounts | awk '{print $2}' 2026-03-09T14:58:35.302 INFO:teuthology.orchestra.run.vm09.stdout:/proc 2026-03-09T14:58:35.302 INFO:teuthology.orchestra.run.vm09.stdout:/sys 2026-03-09T14:58:35.302 INFO:teuthology.orchestra.run.vm09.stdout:/dev 2026-03-09T14:58:35.302 INFO:teuthology.orchestra.run.vm09.stdout:/sys/kernel/security 2026-03-09T14:58:35.302 INFO:teuthology.orchestra.run.vm09.stdout:/dev/shm 2026-03-09T14:58:35.302 INFO:teuthology.orchestra.run.vm09.stdout:/dev/pts 2026-03-09T14:58:35.302 INFO:teuthology.orchestra.run.vm09.stdout:/run 2026-03-09T14:58:35.302 INFO:teuthology.orchestra.run.vm09.stdout:/sys/fs/cgroup 2026-03-09T14:58:35.302 INFO:teuthology.orchestra.run.vm09.stdout:/sys/fs/pstore 2026-03-09T14:58:35.302 INFO:teuthology.orchestra.run.vm09.stdout:/sys/fs/bpf 2026-03-09T14:58:35.302 INFO:teuthology.orchestra.run.vm09.stdout:/sys/kernel/config 2026-03-09T14:58:35.302 INFO:teuthology.orchestra.run.vm09.stdout:/ 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/nfs/rpc_pipefs 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/sys/fs/selinux 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T14:58:35.303 
INFO:teuthology.orchestra.run.vm09.stdout:/dev/mqueue 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/dev/hugepages 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/sys/kernel/tracing 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/sys/kernel/debug 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/run/credentials/systemd-sysctl.service 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/sys/fs/fuse/connections 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/run/credentials/systemd-sysusers.service 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/run/user/1000 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/run/user/0 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/6378f606e686cc21d526f572b390c812b6120d4275369716af33e35fc1bd4007/merged 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/c45813ddd1b7d10d475cc39ca9ce49aebbfed92e2a7d77d7b6cd8e1ff873b6e7/merged 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/4c3d18015fff36b8b17edba82b7ea02f654da48b9b894abe65355dec2945ba3a/merged 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/a3e734ac1a11ab32f1d3c97ed8dd958018cd4a28f034ad4ceb7d321e6d8b7bbc/merged 2026-03-09T14:58:35.303 
INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/8009392fd9f00bc74dfadc86186156ffc6e1e70065f18b67b6137aefbca0d781/merged 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/be2130b6772326310996e6532a68a5d428bbc7c0ba7dc30041485a8d7832926f/merged 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/fa66a16303dc946df9349946c282424c9a5b014d70ad242aa6a52756de0e2562/merged 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/c2dc146889e90d972b1ef69d7e2f23f38fae432fb246b8ad002e9b733940e76e/merged 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/e10438b4990af6536d48deda545b857ab742d5ead6a47a5da505a22d2575e170/merged 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/96efcd1f9b3eb78d972f6c090beb285a3ad7394bbe74b19488dd12237352faef/merged 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/run/netns 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run.vm09.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-09T14:58:35.303 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:35.303 DEBUG:teuthology.orchestra.run.vm09:> ls /sys/fs/fuse/connections 2026-03-09T14:58:35.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:34 vm05.local ceph-mon[50611]: from='client.? 
192.168.123.105:0/3676824848' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch 2026-03-09T14:58:35.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:34 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:35.359 INFO:tasks.cephfs.fuse_mount:Pre-mount connections: [] 2026-03-09T14:58:35.359 DEBUG:teuthology.orchestra.run.vm09:> (cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.1 --id 1) 2026-03-09T14:58:35.403 DEBUG:teuthology.orchestra.run.vm09:> sudo modprobe fuse 2026-03-09T14:58:35.436 DEBUG:teuthology.orchestra.run.vm09:> cat /proc/self/mounts | awk '{print $2}' 2026-03-09T14:58:35.481 INFO:tasks.cephfs.fuse_mount.ceph-fuse.1.vm09.stderr:2026-03-09T14:58:35.482+0000 7f8ba85dd480 -1 init, newargv = 0x55a2e9a00a20 newargc=15 2026-03-09T14:58:35.481 INFO:tasks.cephfs.fuse_mount.ceph-fuse.1.vm09.stderr:ceph-fuse[81455]: starting ceph client 2026-03-09T14:58:35.491 INFO:tasks.cephfs.fuse_mount.ceph-fuse.1.vm09.stderr:ceph-fuse[81455]: starting fuse 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/proc 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/sys 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/dev 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/sys/kernel/security 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/dev/shm 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/dev/pts 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/run 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/sys/fs/cgroup 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/sys/fs/pstore 2026-03-09T14:58:35.507 
INFO:teuthology.orchestra.run.vm09.stdout:/sys/fs/bpf 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/sys/kernel/config 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/ 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/nfs/rpc_pipefs 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/sys/fs/selinux 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/dev/mqueue 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/dev/hugepages 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/sys/kernel/tracing 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/sys/kernel/debug 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/run/credentials/systemd-sysctl.service 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/sys/fs/fuse/connections 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/run/credentials/systemd-sysusers.service 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/run/credentials/systemd-tmpfiles-setup-dev.service 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/run/credentials/systemd-tmpfiles-setup.service 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/run/user/1000 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/run/user/0 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/proc/sys/fs/binfmt_misc 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/6378f606e686cc21d526f572b390c812b6120d4275369716af33e35fc1bd4007/merged 2026-03-09T14:58:35.507 
INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/c45813ddd1b7d10d475cc39ca9ce49aebbfed92e2a7d77d7b6cd8e1ff873b6e7/merged 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/4c3d18015fff36b8b17edba82b7ea02f654da48b9b894abe65355dec2945ba3a/merged 2026-03-09T14:58:35.507 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/a3e734ac1a11ab32f1d3c97ed8dd958018cd4a28f034ad4ceb7d321e6d8b7bbc/merged 2026-03-09T14:58:35.508 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/8009392fd9f00bc74dfadc86186156ffc6e1e70065f18b67b6137aefbca0d781/merged 2026-03-09T14:58:35.508 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/be2130b6772326310996e6532a68a5d428bbc7c0ba7dc30041485a8d7832926f/merged 2026-03-09T14:58:35.508 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/fa66a16303dc946df9349946c282424c9a5b014d70ad242aa6a52756de0e2562/merged 2026-03-09T14:58:35.508 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/c2dc146889e90d972b1ef69d7e2f23f38fae432fb246b8ad002e9b733940e76e/merged 2026-03-09T14:58:35.508 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/e10438b4990af6536d48deda545b857ab742d5ead6a47a5da505a22d2575e170/merged 2026-03-09T14:58:35.508 INFO:teuthology.orchestra.run.vm09.stdout:/var/lib/containers/storage/overlay/96efcd1f9b3eb78d972f6c090beb285a3ad7394bbe74b19488dd12237352faef/merged 2026-03-09T14:58:35.508 INFO:teuthology.orchestra.run.vm09.stdout:/run/netns 2026-03-09T14:58:35.508 INFO:teuthology.orchestra.run.vm09.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-09T14:58:35.508 INFO:teuthology.orchestra.run.vm09.stdout:/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.1 2026-03-09T14:58:35.508 INFO:teuthology.orchestra.run.vm09.stdout:/home/ubuntu/cephtest/mnt.1 2026-03-09T14:58:35.508 
INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:35.508 DEBUG:teuthology.orchestra.run.vm09:> ls /sys/fs/fuse/connections 2026-03-09T14:58:35.568 INFO:teuthology.orchestra.run.vm09.stdout:90 2026-03-09T14:58:35.568 INFO:tasks.cephfs.fuse_mount:Post-mount connections: [90] 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> sudo stdin-killer -- python3 -c ' 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> import glob 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> import re 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> import os 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> import subprocess 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> def _find_admin_socket(client_name): 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> asok_path = "/var/run/ceph/ceph-client.1.*.asok" 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> files = glob.glob(asok_path) 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> mountpoint = "/home/ubuntu/cephtest/mnt.1" 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> # Given a non-glob path, it better be there 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> if "*" not in asok_path: 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> assert(len(files) == 1) 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> return files[0] 2026-03-09T14:58:35.568 DEBUG:teuthology.orchestra.run.vm09:> 2026-03-09T14:58:35.569 DEBUG:teuthology.orchestra.run.vm09:> for f in files: 2026-03-09T14:58:35.569 DEBUG:teuthology.orchestra.run.vm09:> pid = re.match(".*\.(\d+)\.asok$", f).group(1) 2026-03-09T14:58:35.569 DEBUG:teuthology.orchestra.run.vm09:> if os.path.exists("/proc/{0}".format(pid)): 2026-03-09T14:58:35.569 
DEBUG:teuthology.orchestra.run.vm09:> with open("/proc/{0}/cmdline".format(pid), '"'"'r'"'"') as proc_f: 2026-03-09T14:58:35.569 DEBUG:teuthology.orchestra.run.vm09:> contents = proc_f.read() 2026-03-09T14:58:35.569 DEBUG:teuthology.orchestra.run.vm09:> if mountpoint in contents: 2026-03-09T14:58:35.569 DEBUG:teuthology.orchestra.run.vm09:> return f 2026-03-09T14:58:35.569 DEBUG:teuthology.orchestra.run.vm09:> raise RuntimeError("Client socket {0} not found".format(client_name)) 2026-03-09T14:58:35.569 DEBUG:teuthology.orchestra.run.vm09:> 2026-03-09T14:58:35.569 DEBUG:teuthology.orchestra.run.vm09:> print(_find_admin_socket("client.1")) 2026-03-09T14:58:35.569 DEBUG:teuthology.orchestra.run.vm09:> ' 2026-03-09T14:58:35.673 INFO:teuthology.orchestra.run.vm09.stdout:/var/run/ceph/ceph-client.1.81455.asok 2026-03-09T14:58:35.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-09T14:58:35 stdin-killer INFO: command exited with status 0: exiting normally with same code! 2026-03-09T14:58:35.683 INFO:tasks.cephfs.fuse_mount:Found client admin socket at /var/run/ceph/ceph-client.1.81455.asok 2026-03-09T14:58:35.683 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:35.683 DEBUG:teuthology.orchestra.run.vm09:> sudo ceph --admin-daemon /var/run/ceph/ceph-client.1.81455.asok status 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout:{ 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "metadata": { 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "ceph_sha1": "5dd24139a1eada541a3bc16b6941c5dde975e26d", 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "ceph_version": "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)", 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "entity_id": "1", 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "hostname": "vm09.local", 2026-03-09T14:58:35.795 
INFO:teuthology.orchestra.run.vm09.stdout: "mount_point": "/home/ubuntu/cephtest/mnt.1", 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "pid": "81455", 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "root": "/" 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: }, 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "dentry_count": 0, 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "dentry_pinned_count": 0, 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "id": 24345, 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "inst": { 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "name": { 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "type": "client", 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "num": 24345 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: }, 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "addr": { 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "type": "v1", 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "addr": "192.168.144.1:0", 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "nonce": 70265336 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: } 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: }, 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "addr": { 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "type": "v1", 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "addr": "192.168.144.1:0", 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "nonce": 70265336 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: }, 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "inst_str": "client.24345 192.168.144.1:0/70265336", 2026-03-09T14:58:35.795 
INFO:teuthology.orchestra.run.vm09.stdout: "addr_str": "192.168.144.1:0/70265336", 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "inode_count": 1, 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "mds_epoch": 9, 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "osd_epoch": 37, 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "osd_epoch_barrier": 0, 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "blocklisted": false, 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout: "fs_name": "cephfs" 2026-03-09T14:58:35.795 INFO:teuthology.orchestra.run.vm09.stdout:} 2026-03-09T14:58:35.803 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:35.803 DEBUG:teuthology.orchestra.run.vm05:> stat --file-system '--printf=%T 2026-03-09T14:58:35.803 DEBUG:teuthology.orchestra.run.vm05:> ' -- /home/ubuntu/cephtest/mnt.0 2026-03-09T14:58:35.820 INFO:teuthology.orchestra.run.vm05.stdout:fuseblk 2026-03-09T14:58:35.820 INFO:tasks.cephfs.fuse_mount:ceph-fuse is mounted on /home/ubuntu/cephtest/mnt.0 2026-03-09T14:58:35.820 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:35.820 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 1777 /home/ubuntu/cephtest/mnt.0 2026-03-09T14:58:35.895 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:35.895 DEBUG:teuthology.orchestra.run.vm09:> stat --file-system '--printf=%T 2026-03-09T14:58:35.895 DEBUG:teuthology.orchestra.run.vm09:> ' -- /home/ubuntu/cephtest/mnt.1 2026-03-09T14:58:35.912 INFO:teuthology.orchestra.run.vm09.stdout:fuseblk 2026-03-09T14:58:35.912 INFO:tasks.cephfs.fuse_mount:ceph-fuse is mounted on /home/ubuntu/cephtest/mnt.1 2026-03-09T14:58:35.912 INFO:teuthology.orchestra.run:Running command with timeout 300 2026-03-09T14:58:35.912 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 1777 /home/ubuntu/cephtest/mnt.1 2026-03-09T14:58:35.985 
INFO:teuthology.run_tasks:Running task print... 2026-03-09T14:58:35.988 INFO:teuthology.task.print:**** done client 2026-03-09T14:58:35.988 INFO:teuthology.run_tasks:Running task parallel... 2026-03-09T14:58:35.992 INFO:teuthology.task.parallel:starting parallel... 2026-03-09T14:58:35.992 INFO:teuthology.task.parallel:In parallel, running task sequential... 2026-03-09T14:58:35.992 INFO:teuthology.task.sequential:In sequential, running task cephadm.shell... 2026-03-09T14:58:35.992 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-09T14:58:35.992 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force' 2026-03-09T14:58:35.993 INFO:teuthology.task.parallel:In parallel, running task sequential... 2026-03-09T14:58:35.993 INFO:teuthology.task.sequential:In sequential, running task workunit... 2026-03-09T14:58:35.994 INFO:tasks.workunit:Pulling workunits from ref 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T14:58:35.994 INFO:tasks.workunit:Making a separate scratch dir for every client... 
2026-03-09T14:58:35.994 INFO:tasks.workunit:timeout=3h 2026-03-09T14:58:35.994 INFO:tasks.workunit:cleanup=True 2026-03-09T14:58:35.995 DEBUG:teuthology.orchestra.run.vm05:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-09T14:58:36.023 INFO:teuthology.orchestra.run.vm05.stdout: File: /home/ubuntu/cephtest/mnt.0 2026-03-09T14:58:36.024 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 1 IO Block: 4194304 directory 2026-03-09T14:58:36.024 INFO:teuthology.orchestra.run.vm05.stdout:Device: 69h/105d Inode: 1 Links: 2 2026-03-09T14:58:36.024 INFO:teuthology.orchestra.run.vm05.stdout:Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-09T14:58:36.024 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fusefs_t:s0 2026-03-09T14:58:36.024 INFO:teuthology.orchestra.run.vm05.stdout:Access: 1970-01-01 00:00:00.000000000 +0000 2026-03-09T14:58:36.024 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-09 14:58:25.345873762 +0000 2026-03-09T14:58:36.024 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-09 14:58:35.893455324 +0000 2026-03-09T14:58:36.024 INFO:teuthology.orchestra.run.vm05.stdout: Birth: - 2026-03-09T14:58:36.024 INFO:tasks.workunit:Did not need to create dir /home/ubuntu/cephtest/mnt.0 2026-03-09T14:58:36.024 DEBUG:teuthology.orchestra.run.vm05:> cd -- /home/ubuntu/cephtest/mnt.0 && sudo install -d -m 0755 --owner=ubuntu -- client.0 2026-03-09T14:58:36.101 DEBUG:teuthology.orchestra.run.vm09:> stat -- /home/ubuntu/cephtest/mnt.1 2026-03-09T14:58:36.121 INFO:teuthology.orchestra.run.vm09.stdout: File: /home/ubuntu/cephtest/mnt.1 2026-03-09T14:58:36.121 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 1 IO Block: 4194304 directory 2026-03-09T14:58:36.121 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5ah/90d Inode: 1 Links: 3 2026-03-09T14:58:36.121 INFO:teuthology.orchestra.run.vm09.stdout:Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-09T14:58:36.121 
INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fusefs_t:s0 2026-03-09T14:58:36.121 INFO:teuthology.orchestra.run.vm09.stdout:Access: 1970-01-01 00:00:00.000000000 +0000 2026-03-09T14:58:36.121 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 14:58:36.096219114 +0000 2026-03-09T14:58:36.121 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 14:58:36.096219114 +0000 2026-03-09T14:58:36.121 INFO:teuthology.orchestra.run.vm09.stdout: Birth: - 2026-03-09T14:58:36.121 INFO:tasks.workunit:Did not need to create dir /home/ubuntu/cephtest/mnt.1 2026-03-09T14:58:36.121 DEBUG:teuthology.orchestra.run.vm09:> cd -- /home/ubuntu/cephtest/mnt.1 && sudo install -d -m 0755 --owner=ubuntu -- client.1 2026-03-09T14:58:36.174 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:36.196 DEBUG:teuthology.orchestra.run.vm05:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T14:58:36.196 DEBUG:teuthology.orchestra.run.vm09:> rm -rf /home/ubuntu/cephtest/clone.client.1 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.1 && cd /home/ubuntu/cephtest/clone.client.1 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T14:58:36.206 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:36 vm05.local ceph-mon[50611]: pgmap v84: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.7 KiB/s rd, 1.4 KiB/s wr, 6 op/s 2026-03-09T14:58:36.239 INFO:tasks.workunit.client.0.vm05.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-09T14:58:36.254 INFO:tasks.workunit.client.1.vm09.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.1'... 
2026-03-09T14:58:36.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:36 vm09.local ceph-mon[59673]: pgmap v84: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.7 KiB/s rd, 1.4 KiB/s wr, 6 op/s 2026-03-09T14:58:36.467 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.465+0000 7fd87ab86700 1 -- 192.168.123.105:0/3657954946 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd874102760 msgr2=0x7fd874102b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:36.467 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.465+0000 7fd87ab86700 1 --2- 192.168.123.105:0/3657954946 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd874102760 0x7fd874102b70 secure :-1 s=READY pgs=288 cs=0 l=1 rev1=1 crypto rx=0x7fd860009b00 tx=0x7fd860009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:36.468 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.467+0000 7fd87ab86700 1 -- 192.168.123.105:0/3657954946 shutdown_connections 2026-03-09T14:58:36.468 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.467+0000 7fd87ab86700 1 --2- 192.168.123.105:0/3657954946 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd874103a00 0x7fd874103e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:36.468 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.467+0000 7fd87ab86700 1 --2- 192.168.123.105:0/3657954946 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd874102760 0x7fd874102b70 unknown :-1 s=CLOSED pgs=288 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:36.468 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.467+0000 7fd87ab86700 1 -- 192.168.123.105:0/3657954946 >> 192.168.123.105:0/3657954946 conn(0x7fd8740fddb0 msgr2=0x7fd8741001e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:36.468 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.467+0000 7fd87ab86700 1 -- 192.168.123.105:0/3657954946 shutdown_connections 2026-03-09T14:58:36.468 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.467+0000 7fd87ab86700 1 -- 192.168.123.105:0/3657954946 wait complete. 2026-03-09T14:58:36.468 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.468+0000 7fd87ab86700 1 Processor -- start 2026-03-09T14:58:36.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.468+0000 7fd87ab86700 1 -- start start 2026-03-09T14:58:36.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.468+0000 7fd87ab86700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd874102760 0x7fd874197fe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:36.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.468+0000 7fd87ab86700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd874103a00 0x7fd874198520 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:36.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.468+0000 7fd87ab86700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd874198b40 con 0x7fd874102760 2026-03-09T14:58:36.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.468+0000 7fd87ab86700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd874198c80 con 0x7fd874103a00 2026-03-09T14:58:36.469 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.469+0000 7fd878922700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd874102760 0x7fd874197fe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:36.470 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.469+0000 7fd878922700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd874102760 0x7fd874197fe0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:41112/0 (socket says 192.168.123.105:41112) 2026-03-09T14:58:36.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.469+0000 7fd878922700 1 -- 192.168.123.105:0/2605253154 learned_addr learned my addr 192.168.123.105:0/2605253154 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:36.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.469+0000 7fd873fff700 1 --2- 192.168.123.105:0/2605253154 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd874103a00 0x7fd874198520 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:36.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.469+0000 7fd873fff700 1 -- 192.168.123.105:0/2605253154 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd874102760 msgr2=0x7fd874197fe0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:36.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.469+0000 7fd873fff700 1 --2- 192.168.123.105:0/2605253154 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd874102760 0x7fd874197fe0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:36.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.469+0000 7fd873fff700 1 -- 192.168.123.105:0/2605253154 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd8600097e0 con 0x7fd874103a00 2026-03-09T14:58:36.470 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.470+0000 7fd873fff700 1 --2- 192.168.123.105:0/2605253154 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd874103a00 0x7fd874198520 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fd86800eab0 tx=0x7fd86800ee70 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:36.471 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.470+0000 7fd871ffb700 1 -- 192.168.123.105:0/2605253154 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd86800cbe0 con 0x7fd874103a00 2026-03-09T14:58:36.471 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.470+0000 7fd87ab86700 1 -- 192.168.123.105:0/2605253154 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd87419d730 con 0x7fd874103a00 2026-03-09T14:58:36.471 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.470+0000 7fd87ab86700 1 -- 192.168.123.105:0/2605253154 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd87419dc80 con 0x7fd874103a00 2026-03-09T14:58:36.472 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.471+0000 7fd871ffb700 1 -- 192.168.123.105:0/2605253154 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd86800cd40 con 0x7fd874103a00 2026-03-09T14:58:36.472 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.471+0000 7fd871ffb700 1 -- 192.168.123.105:0/2605253154 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd868018930 con 0x7fd874103a00 2026-03-09T14:58:36.472 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.472+0000 7fd871ffb700 1 -- 192.168.123.105:0/2605253154 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fd868018b00 con 0x7fd874103a00 
2026-03-09T14:58:36.473 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.472+0000 7fd871ffb700 1 --2- 192.168.123.105:0/2605253154 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd86406c6f0 0x7fd86406eba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:36.473 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.472+0000 7fd871ffb700 1 -- 192.168.123.105:0/2605253154 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fd868014070 con 0x7fd874103a00 2026-03-09T14:58:36.473 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.473+0000 7fd878922700 1 --2- 192.168.123.105:0/2605253154 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd86406c6f0 0x7fd86406eba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:36.474 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.473+0000 7fd87ab86700 1 -- 192.168.123.105:0/2605253154 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd874066e40 con 0x7fd874103a00 2026-03-09T14:58:36.474 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.473+0000 7fd878922700 1 --2- 192.168.123.105:0/2605253154 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd86406c6f0 0x7fd86406eba0 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7fd86000b5c0 tx=0x7fd86001a040 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:36.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.476+0000 7fd871ffb700 1 -- 192.168.123.105:0/2605253154 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fd86805b400 con 
0x7fd874103a00 2026-03-09T14:58:36.587 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.587+0000 7fd87ab86700 1 -- 192.168.123.105:0/2605253154 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim}] v 0) v1 -- 0x7fd87419e3e0 con 0x7fd874103a00 2026-03-09T14:58:36.588 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.587+0000 7fd871ffb700 1 -- 192.168.123.105:0/2605253154 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim}]=0 v15) v1 ==== 155+0+0 (secure 0 0 0) 0x7fd868018db0 con 0x7fd874103a00 2026-03-09T14:58:36.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.590+0000 7fd87ab86700 1 -- 192.168.123.105:0/2605253154 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd86406c6f0 msgr2=0x7fd86406eba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:36.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.590+0000 7fd87ab86700 1 --2- 192.168.123.105:0/2605253154 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd86406c6f0 0x7fd86406eba0 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7fd86000b5c0 tx=0x7fd86001a040 comp rx=0 tx=0).stop 2026-03-09T14:58:36.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.590+0000 7fd87ab86700 1 -- 192.168.123.105:0/2605253154 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd874103a00 msgr2=0x7fd874198520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:36.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.590+0000 7fd87ab86700 1 --2- 192.168.123.105:0/2605253154 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd874103a00 0x7fd874198520 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fd86800eab0 tx=0x7fd86800ee70 comp rx=0 tx=0).stop 2026-03-09T14:58:36.591 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.590+0000 7fd87ab86700 1 -- 192.168.123.105:0/2605253154 shutdown_connections 2026-03-09T14:58:36.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.590+0000 7fd87ab86700 1 --2- 192.168.123.105:0/2605253154 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd86406c6f0 0x7fd86406eba0 unknown :-1 s=CLOSED pgs=104 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:36.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.590+0000 7fd87ab86700 1 --2- 192.168.123.105:0/2605253154 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd874102760 0x7fd874197fe0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:36.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.590+0000 7fd87ab86700 1 --2- 192.168.123.105:0/2605253154 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd874103a00 0x7fd874198520 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:36.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.590+0000 7fd87ab86700 1 -- 192.168.123.105:0/2605253154 >> 192.168.123.105:0/2605253154 conn(0x7fd8740fddb0 msgr2=0x7fd874106c30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:36.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.590+0000 7fd87ab86700 1 -- 192.168.123.105:0/2605253154 shutdown_connections 2026-03-09T14:58:36.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:36.591+0000 7fd87ab86700 1 -- 192.168.123.105:0/2605253154 wait complete. 
2026-03-09T14:58:36.646 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force' 2026-03-09T14:58:36.856 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:37.217 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.215+0000 7f0845fc2700 1 -- 192.168.123.105:0/2971468980 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0840101740 msgr2=0x7f0840101b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:37.218 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.215+0000 7f0845fc2700 1 --2- 192.168.123.105:0/2971468980 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0840101740 0x7f0840101b90 secure :-1 s=READY pgs=289 cs=0 l=1 rev1=1 crypto rx=0x7f0834009b00 tx=0x7f0834009e10 comp rx=0 tx=0).stop 2026-03-09T14:58:37.218 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.216+0000 7f0845fc2700 1 -- 192.168.123.105:0/2971468980 shutdown_connections 2026-03-09T14:58:37.218 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.216+0000 7f0845fc2700 1 --2- 192.168.123.105:0/2971468980 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0840101740 0x7f0840101b90 unknown :-1 s=CLOSED pgs=289 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:37.218 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.216+0000 7f0845fc2700 1 --2- 192.168.123.105:0/2971468980 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0840100540 0x7f0840100950 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:37.218 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.216+0000 7f0845fc2700 1 -- 192.168.123.105:0/2971468980 >> 192.168.123.105:0/2971468980 conn(0x7f08400fbaf0 msgr2=0x7f08400fdf20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:37.218 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.217+0000 7f0845fc2700 1 -- 192.168.123.105:0/2971468980 shutdown_connections 2026-03-09T14:58:37.218 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.217+0000 7f0845fc2700 1 -- 192.168.123.105:0/2971468980 wait complete. 2026-03-09T14:58:37.218 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.217+0000 7f0845fc2700 1 Processor -- start 2026-03-09T14:58:37.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.218+0000 7f0845fc2700 1 -- start start 2026-03-09T14:58:37.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.218+0000 7f0845fc2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0840100540 0x7f0840198000 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:37.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.218+0000 7f0845fc2700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0840101740 0x7f0840198540 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:37.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.218+0000 7f0845fc2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0840198b60 con 0x7f0840100540 2026-03-09T14:58:37.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.218+0000 7f0845fc2700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0840198ca0 con 0x7f0840101740 2026-03-09T14:58:37.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.218+0000 7f083f7fe700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0840100540 0x7f0840198000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:37.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.218+0000 7f083f7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0840100540 0x7f0840198000 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:41122/0 (socket says 192.168.123.105:41122) 2026-03-09T14:58:37.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.218+0000 7f083f7fe700 1 -- 192.168.123.105:0/1580997846 learned_addr learned my addr 192.168.123.105:0/1580997846 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:37.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.218+0000 7f083effd700 1 --2- 192.168.123.105:0/1580997846 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0840101740 0x7f0840198540 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:37.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.219+0000 7f083f7fe700 1 -- 192.168.123.105:0/1580997846 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0840101740 msgr2=0x7f0840198540 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:37.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.219+0000 7f083f7fe700 1 --2- 192.168.123.105:0/1580997846 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0840101740 0x7f0840198540 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:37.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.219+0000 7f083f7fe700 1 -- 
192.168.123.105:0/1580997846 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f08340097e0 con 0x7f0840100540 2026-03-09T14:58:37.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.219+0000 7f083f7fe700 1 --2- 192.168.123.105:0/1580997846 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0840100540 0x7f0840198000 secure :-1 s=READY pgs=290 cs=0 l=1 rev1=1 crypto rx=0x7f083000d900 tx=0x7f083000dcc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:37.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.219+0000 7f083cff9700 1 -- 192.168.123.105:0/1580997846 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f08300041d0 con 0x7f0840100540 2026-03-09T14:58:37.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.219+0000 7f083cff9700 1 -- 192.168.123.105:0/1580997846 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f0830004330 con 0x7f0840100540 2026-03-09T14:58:37.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.219+0000 7f0845fc2700 1 -- 192.168.123.105:0/1580997846 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f084019d750 con 0x7f0840100540 2026-03-09T14:58:37.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.219+0000 7f083cff9700 1 -- 192.168.123.105:0/1580997846 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0830003de0 con 0x7f0840100540 2026-03-09T14:58:37.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.219+0000 7f0845fc2700 1 -- 192.168.123.105:0/1580997846 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f084019dca0 con 0x7f0840100540 2026-03-09T14:58:37.224 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.222+0000 7f0845fc2700 1 -- 192.168.123.105:0/1580997846 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0840105820 con 0x7f0840100540 2026-03-09T14:58:37.224 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.223+0000 7f083cff9700 1 -- 192.168.123.105:0/1580997846 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f0830009730 con 0x7f0840100540 2026-03-09T14:58:37.224 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.223+0000 7f083cff9700 1 --2- 192.168.123.105:0/1580997846 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f082806c6d0 0x7f082806eb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:37.224 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.223+0000 7f083cff9700 1 -- 192.168.123.105:0/1580997846 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f0830059f60 con 0x7f0840100540 2026-03-09T14:58:37.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.225+0000 7f083effd700 1 --2- 192.168.123.105:0/1580997846 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f082806c6d0 0x7f082806eb80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:37.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.225+0000 7f083effd700 1 --2- 192.168.123.105:0/1580997846 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f082806c6d0 0x7f082806eb80 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7f083400b5c0 tx=0x7f0834005c00 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:37.227 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.226+0000 7f083cff9700 1 -- 192.168.123.105:0/1580997846 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f0830059d30 con 0x7f0840100540 2026-03-09T14:58:37.339 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.338+0000 7f0845fc2700 1 -- 192.168.123.105:0/1580997846 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim_allowed}] v 0) v1 -- 0x7f0840066e40 con 0x7f0840100540 2026-03-09T14:58:37.339 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.339+0000 7f083cff9700 1 -- 192.168.123.105:0/1580997846 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim_allowed}]=0 v15) v1 ==== 163+0+0 (secure 0 0 0) 0x7f08300598c0 con 0x7f0840100540 2026-03-09T14:58:37.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.341+0000 7f0845fc2700 1 -- 192.168.123.105:0/1580997846 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f082806c6d0 msgr2=0x7f082806eb80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:37.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.341+0000 7f0845fc2700 1 --2- 192.168.123.105:0/1580997846 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f082806c6d0 0x7f082806eb80 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7f083400b5c0 tx=0x7f0834005c00 comp rx=0 tx=0).stop 2026-03-09T14:58:37.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.341+0000 7f0845fc2700 1 -- 192.168.123.105:0/1580997846 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0840100540 msgr2=0x7f0840198000 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:37.342
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.341+0000 7f0845fc2700 1 --2- 192.168.123.105:0/1580997846 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0840100540 0x7f0840198000 secure :-1 s=READY pgs=290 cs=0 l=1 rev1=1 crypto rx=0x7f083000d900 tx=0x7f083000dcc0 comp rx=0 tx=0).stop 2026-03-09T14:58:37.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.341+0000 7f0845fc2700 1 -- 192.168.123.105:0/1580997846 shutdown_connections 2026-03-09T14:58:37.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.341+0000 7f0845fc2700 1 --2- 192.168.123.105:0/1580997846 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f082806c6d0 0x7f082806eb80 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:37.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.341+0000 7f0845fc2700 1 --2- 192.168.123.105:0/1580997846 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0840100540 0x7f0840198000 unknown :-1 s=CLOSED pgs=290 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:37.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.341+0000 7f0845fc2700 1 --2- 192.168.123.105:0/1580997846 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0840101740 0x7f0840198540 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:37.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.341+0000 7f0845fc2700 1 -- 192.168.123.105:0/1580997846 >> 192.168.123.105:0/1580997846 conn(0x7f08400fbaf0 msgr2=0x7f0840102960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:37.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.342+0000 7f0845fc2700 1 -- 192.168.123.105:0/1580997846 shutdown_connections 2026-03-09T14:58:37.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.342+0000 7f0845fc2700 1 -- 192.168.123.105:0/1580997846 
wait complete. 2026-03-09T14:58:37.394 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set global log_to_journald false --force' 2026-03-09T14:58:37.587 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:37.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.873+0000 7fd76cee2700 1 -- 192.168.123.105:0/3293429220 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd768103960 msgr2=0x7fd768103db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:37.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.873+0000 7fd76cee2700 1 --2- 192.168.123.105:0/3293429220 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd768103960 0x7fd768103db0 secure :-1 s=READY pgs=291 cs=0 l=1 rev1=1 crypto rx=0x7fd758009b50 tx=0x7fd758009e60 comp rx=0 tx=0).stop 2026-03-09T14:58:37.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.874+0000 7fd76cee2700 1 -- 192.168.123.105:0/3293429220 shutdown_connections 2026-03-09T14:58:37.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.874+0000 7fd76cee2700 1 --2- 192.168.123.105:0/3293429220 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd768103960 0x7fd768103db0 unknown :-1 s=CLOSED pgs=291 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:37.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.874+0000 7fd76cee2700 1 --2- 192.168.123.105:0/3293429220 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd768102760 0x7fd768102b70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:37.876 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.874+0000 7fd76cee2700 1 -- 192.168.123.105:0/3293429220 >> 192.168.123.105:0/3293429220 conn(0x7fd7680fdcf0 msgr2=0x7fd768100140 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:37.876 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.875+0000 7fd76cee2700 1 -- 192.168.123.105:0/3293429220 shutdown_connections 2026-03-09T14:58:37.876 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.875+0000 7fd76cee2700 1 -- 192.168.123.105:0/3293429220 wait complete. 2026-03-09T14:58:37.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.876+0000 7fd76cee2700 1 Processor -- start 2026-03-09T14:58:37.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.876+0000 7fd76cee2700 1 -- start start 2026-03-09T14:58:37.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.876+0000 7fd76cee2700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd768102760 0x7fd768198050 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:37.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.876+0000 7fd76cee2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd768103960 0x7fd768198590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:37.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.876+0000 7fd76cee2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd768198bb0 con 0x7fd768103960 2026-03-09T14:58:37.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.876+0000 7fd76cee2700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd768198cf0 con 0x7fd768102760 2026-03-09T14:58:37.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.876+0000 7fd76659c700 1 --2- >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd768102760 0x7fd768198050 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:37.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.876+0000 7fd76659c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd768102760 0x7fd768198050 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45852/0 (socket says 192.168.123.105:45852) 2026-03-09T14:58:37.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.876+0000 7fd76659c700 1 -- 192.168.123.105:0/44767190 learned_addr learned my addr 192.168.123.105:0/44767190 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:37.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.877+0000 7fd76659c700 1 -- 192.168.123.105:0/44767190 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd768103960 msgr2=0x7fd768198590 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:37.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.877+0000 7fd765d9b700 1 --2- 192.168.123.105:0/44767190 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd768103960 0x7fd768198590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:37.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.877+0000 7fd76659c700 1 --2- 192.168.123.105:0/44767190 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd768103960 0x7fd768198590 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:37.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.877+0000 7fd76659c700 1 -- 
192.168.123.105:0/44767190 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd7580097e0 con 0x7fd768102760 2026-03-09T14:58:37.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.877+0000 7fd765d9b700 1 --2- 192.168.123.105:0/44767190 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd768103960 0x7fd768198590 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T14:58:37.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.878+0000 7fd76659c700 1 --2- 192.168.123.105:0/44767190 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd768102760 0x7fd768198050 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7fd750009fd0 tx=0x7fd75000eea0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:37.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.878+0000 7fd75f7fe700 1 -- 192.168.123.105:0/44767190 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd75000cca0 con 0x7fd768102760 2026-03-09T14:58:37.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.878+0000 7fd76cee2700 1 -- 192.168.123.105:0/44767190 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd76819d7a0 con 0x7fd768102760 2026-03-09T14:58:37.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.878+0000 7fd76cee2700 1 -- 192.168.123.105:0/44767190 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd76819dcf0 con 0x7fd768102760 2026-03-09T14:58:37.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.878+0000 7fd75f7fe700 1 -- 192.168.123.105:0/44767190 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fd750004500 con 0x7fd768102760 
2026-03-09T14:58:37.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.878+0000 7fd75f7fe700 1 -- 192.168.123.105:0/44767190 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd750010640 con 0x7fd768102760 2026-03-09T14:58:37.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.880+0000 7fd75f7fe700 1 -- 192.168.123.105:0/44767190 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fd750004020 con 0x7fd768102760 2026-03-09T14:58:37.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.880+0000 7fd75f7fe700 1 --2- 192.168.123.105:0/44767190 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd75406c680 0x7fd75406eb30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:37.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.880+0000 7fd75f7fe700 1 -- 192.168.123.105:0/44767190 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fd750014070 con 0x7fd768102760 2026-03-09T14:58:37.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.881+0000 7fd76cee2700 1 -- 192.168.123.105:0/44767190 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd748005320 con 0x7fd768102760 2026-03-09T14:58:37.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.881+0000 7fd765d9b700 1 --2- 192.168.123.105:0/44767190 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd75406c680 0x7fd75406eb30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:37.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.882+0000 7fd765d9b700 1 --2- 192.168.123.105:0/44767190 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] 
conn(0x7fd75406c680 0x7fd75406eb30 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7fd75800b5c0 tx=0x7fd7580058e0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:37.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.884+0000 7fd75f7fe700 1 -- 192.168.123.105:0/44767190 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fd75005a050 con 0x7fd768102760 2026-03-09T14:58:38.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:37.997+0000 7fd76cee2700 1 -- 192.168.123.105:0/44767190 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command([{prefix=config set, name=log_to_journald}] v 0) v1 -- 0x7fd748005f70 con 0x7fd768102760 2026-03-09T14:58:38.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.000+0000 7fd75f7fe700 1 -- 192.168.123.105:0/44767190 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{prefix=config set, name=log_to_journald}]=0 v15) v1 ==== 135+0+0 (secure 0 0 0) 0x7fd750059be0 con 0x7fd768102760 2026-03-09T14:58:38.003 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.002+0000 7fd76cee2700 1 -- 192.168.123.105:0/44767190 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd75406c680 msgr2=0x7fd75406eb30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:38.003 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.002+0000 7fd76cee2700 1 --2- 192.168.123.105:0/44767190 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd75406c680 0x7fd75406eb30 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7fd75800b5c0 tx=0x7fd7580058e0 comp rx=0 tx=0).stop 2026-03-09T14:58:38.003 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.002+0000 7fd76cee2700 1 -- 192.168.123.105:0/44767190 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0]
conn(0x7fd768102760 msgr2=0x7fd768198050 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:38.003 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.002+0000 7fd76cee2700 1 --2- 192.168.123.105:0/44767190 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd768102760 0x7fd768198050 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7fd750009fd0 tx=0x7fd75000eea0 comp rx=0 tx=0).stop 2026-03-09T14:58:38.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.003+0000 7fd76cee2700 1 -- 192.168.123.105:0/44767190 shutdown_connections 2026-03-09T14:58:38.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.003+0000 7fd76cee2700 1 --2- 192.168.123.105:0/44767190 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fd75406c680 0x7fd75406eb30 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:38.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.003+0000 7fd76cee2700 1 --2- 192.168.123.105:0/44767190 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd768102760 0x7fd768198050 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:38.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.003+0000 7fd76cee2700 1 --2- 192.168.123.105:0/44767190 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd768103960 0x7fd768198590 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:38.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.003+0000 7fd76cee2700 1 -- 192.168.123.105:0/44767190 >> 192.168.123.105:0/44767190 conn(0x7fd7680fdcf0 msgr2=0x7fd768106b90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:38.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.003+0000 7fd76cee2700 1 -- 192.168.123.105:0/44767190 shutdown_connections 2026-03-09T14:58:38.004 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.003+0000 7fd76cee2700 1 -- 192.168.123.105:0/44767190 wait complete. 2026-03-09T14:58:38.092 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr' 2026-03-09T14:58:38.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:38 vm05.local ceph-mon[50611]: pgmap v85: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 6 op/s 2026-03-09T14:58:38.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:38 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:38.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:38 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:58:38.332 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T14:58:38.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:38 vm09.local ceph-mon[59673]: pgmap v85: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 6 op/s 2026-03-09T14:58:38.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:38 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:58:38.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:38 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:58:38.666 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.663+0000 7f1d4c8b1700 1 -- 192.168.123.105:0/1393745904 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d48072440 msgr2=0x7f1d4810be90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.663+0000 7f1d4c8b1700 1 --2- 192.168.123.105:0/1393745904 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d48072440 0x7f1d4810be90 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f1d4000b210 tx=0x7f1d4000b520 comp rx=0 tx=0).stop 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.663+0000 7f1d4c8b1700 1 -- 192.168.123.105:0/1393745904 shutdown_connections 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.663+0000 7f1d4c8b1700 1 --2- 192.168.123.105:0/1393745904 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d48072440 0x7f1d4810be90 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.663+0000 7f1d4c8b1700 1 --2- 192.168.123.105:0/1393745904 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d48071a60 0x7f1d48071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.663+0000 7f1d4c8b1700 1 -- 192.168.123.105:0/1393745904 >> 192.168.123.105:0/1393745904 conn(0x7f1d4806d1a0 msgr2=0x7f1d4806f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.663+0000 7f1d4c8b1700 1 -- 192.168.123.105:0/1393745904 shutdown_connections 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.663+0000 7f1d4c8b1700 1 -- 192.168.123.105:0/1393745904 wait complete. 
2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.664+0000 7f1d4c8b1700 1 Processor -- start 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.664+0000 7f1d4c8b1700 1 -- start start 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.664+0000 7f1d4c8b1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d48071a60 0x7f1d481a4ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.664+0000 7f1d4c8b1700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d481a5000 0x7f1d481aa070 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.664+0000 7f1d4c8b1700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1d481a5500 con 0x7f1d48071a60 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.664+0000 7f1d4c8b1700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1d481a5670 con 0x7f1d481a5000 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.665+0000 7f1d46d9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d48071a60 0x7f1d481a4ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.665+0000 7f1d46d9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d48071a60 0x7f1d481a4ac0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:43700/0 (socket says 192.168.123.105:43700) 2026-03-09T14:58:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.665+0000 7f1d46d9d700 1 -- 192.168.123.105:0/3351557532 learned_addr learned my addr 192.168.123.105:0/3351557532 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T14:58:38.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.665+0000 7f1d4659c700 1 --2- 192.168.123.105:0/3351557532 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d481a5000 0x7f1d481aa070 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:38.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.666+0000 7f1d46d9d700 1 -- 192.168.123.105:0/3351557532 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d481a5000 msgr2=0x7f1d481aa070 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T14:58:38.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.666+0000 7f1d46d9d700 1 --2- 192.168.123.105:0/3351557532 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d481a5000 0x7f1d481aa070 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T14:58:38.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.666+0000 7f1d46d9d700 1 -- 192.168.123.105:0/3351557532 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1d3c0097e0 con 0x7f1d48071a60 2026-03-09T14:58:38.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.666+0000 7f1d46d9d700 1 --2- 192.168.123.105:0/3351557532 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d48071a60 0x7f1d481a4ac0 secure :-1 s=READY pgs=292 cs=0 l=1 rev1=1 crypto rx=0x7f1d3c009fc0 tx=0x7f1d3c00c660 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T14:58:38.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.666+0000 7f1d2ffff700 1 -- 192.168.123.105:0/3351557532 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1d3c010430 con 0x7f1d48071a60 2026-03-09T14:58:38.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.667+0000 7f1d4c8b1700 1 -- 192.168.123.105:0/3351557532 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1d40009e30 con 0x7f1d48071a60 2026-03-09T14:58:38.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.668+0000 7f1d4c8b1700 1 -- 192.168.123.105:0/3351557532 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1d481aa9a0 con 0x7f1d48071a60 2026-03-09T14:58:38.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.669+0000 7f1d2ffff700 1 -- 192.168.123.105:0/3351557532 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1d3c010a70 con 0x7f1d48071a60 2026-03-09T14:58:38.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.669+0000 7f1d2ffff700 1 -- 192.168.123.105:0/3351557532 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1d3c018bf0 con 0x7f1d48071a60 2026-03-09T14:58:38.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.669+0000 7f1d2ffff700 1 -- 192.168.123.105:0/3351557532 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f1d3c00f3c0 con 0x7f1d48071a60 2026-03-09T14:58:38.671 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.670+0000 7f1d2ffff700 1 --2- 192.168.123.105:0/3351557532 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1d3006c870 0x7f1d3006ed20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T14:58:38.671 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.670+0000 7f1d4659c700 1 --2- 192.168.123.105:0/3351557532 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1d3006c870 0x7f1d3006ed20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T14:58:38.671 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.670+0000 7f1d2ffff700 1 -- 192.168.123.105:0/3351557532 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f1d3c014070 con 0x7f1d48071a60 2026-03-09T14:58:38.671 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.670+0000 7f1d4659c700 1 --2- 192.168.123.105:0/3351557532 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1d3006c870 0x7f1d3006ed20 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f1d400060b0 tx=0x7f1d40006040 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T14:58:38.672 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.671+0000 7f1d4c8b1700 1 -- 192.168.123.105:0/3351557532 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1d34005320 con 0x7f1d48071a60 2026-03-09T14:58:38.676 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.674+0000 7f1d2ffff700 1 -- 192.168.123.105:0/3351557532 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f1d3c092050 con 0x7f1d48071a60 2026-03-09T14:58:38.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T14:58:38.799+0000 7f1d4c8b1700 1 -- 192.168.123.105:0/3351557532 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 
"daemon_types": "mgr", "target": ["mon-mgr", ""]}) v1 -- 0x7f1d34000ca0 con 0x7f1d3006c870 2026-03-09T14:58:40.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:40 vm05.local ceph-mon[50611]: from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:40.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:40 vm05.local ceph-mon[50611]: pgmap v86: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-09T14:58:40.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:40 vm09.local ceph-mon[59673]: from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:58:40.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:40 vm09.local ceph-mon[59673]: pgmap v86: 65 pgs: 65 active+clean; 451 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 5 op/s 2026-03-09T14:58:42.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:42 vm05.local ceph-mon[50611]: pgmap v87: 65 pgs: 65 active+clean; 457 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 6 op/s 2026-03-09T14:58:42.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:42 vm09.local ceph-mon[59673]: pgmap v87: 65 pgs: 65 active+clean; 457 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 6 op/s 2026-03-09T14:58:44.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:44 vm05.local ceph-mon[50611]: pgmap v88: 65 pgs: 65 active+clean; 457 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s 2026-03-09T14:58:44.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 
14:58:44 vm09.local ceph-mon[59673]: pgmap v88: 65 pgs: 65 active+clean; 457 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.4 KiB/s rd, 682 B/s wr, 2 op/s 2026-03-09T14:58:45.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:45 vm05.local ceph-mon[50611]: pgmap v89: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T14:58:45.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:45 vm09.local ceph-mon[59673]: pgmap v89: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T14:58:48.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:47 vm09.local ceph-mon[59673]: pgmap v90: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.3 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T14:58:48.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:47 vm05.local ceph-mon[50611]: pgmap v90: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.3 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T14:58:50.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:50 vm05.local ceph-mon[50611]: pgmap v91: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.3 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T14:58:50.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:50 vm09.local ceph-mon[59673]: pgmap v91: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.3 KiB/s rd, 1.1 KiB/s wr, 2 op/s 2026-03-09T14:58:51.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:51 vm05.local ceph-mon[50611]: pgmap v92: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 2.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T14:58:51.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:51 vm09.local ceph-mon[59673]: pgmap v92: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 2.1 
KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-09T14:58:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:52 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:58:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:52 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:58:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:53 vm05.local ceph-mon[50611]: pgmap v93: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:58:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:53 vm09.local ceph-mon[59673]: pgmap v93: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:58:56.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:56 vm05.local ceph-mon[50611]: pgmap v94: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:58:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:56 vm09.local ceph-mon[59673]: pgmap v94: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:58:58.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:58:57 vm09.local ceph-mon[59673]: pgmap v95: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:58:58.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:58:57 vm05.local ceph-mon[50611]: pgmap v95: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:59:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:00 vm05.local ceph-mon[50611]: pgmap v96: 65 
pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:59:00.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:00 vm09.local ceph-mon[59673]: pgmap v96: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:59:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:01 vm05.local ceph-mon[50611]: pgmap v97: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.6 KiB/s rd, 2 op/s 2026-03-09T14:59:01.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:01 vm09.local ceph-mon[59673]: pgmap v97: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.6 KiB/s rd, 2 op/s 2026-03-09T14:59:04.431 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:04 vm09.local ceph-mon[59673]: pgmap v98: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:04.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:04 vm05.local ceph-mon[50611]: pgmap v98: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:06.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:06 vm05.local ceph-mon[50611]: pgmap v99: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:06.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:06 vm09.local ceph-mon[59673]: pgmap v99: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:08.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:07 vm05.local ceph-mon[50611]: pgmap v100: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:08.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:07 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' 
entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:59:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:07 vm09.local ceph-mon[59673]: pgmap v100: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:07 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:59:10.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:10 vm05.local ceph-mon[50611]: pgmap v101: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:10.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:10 vm09.local ceph-mon[59673]: pgmap v101: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:12 vm05.local ceph-mon[50611]: pgmap v102: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:12 vm09.local ceph-mon[59673]: pgmap v102: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:14.562 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:14 vm05.local ceph-mon[50611]: pgmap v103: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:14.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:14 vm09.local ceph-mon[59673]: pgmap v103: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:16 vm05.local ceph-mon[50611]: pgmap v104: 65 pgs: 65 
active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:16.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:16 vm09.local ceph-mon[59673]: pgmap v104: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:18.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:17 vm05.local ceph-mon[50611]: pgmap v105: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:18.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:17 vm09.local ceph-mon[59673]: pgmap v105: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:20.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:20 vm05.local ceph-mon[50611]: pgmap v106: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:20.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:20 vm09.local ceph-mon[59673]: pgmap v106: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:21.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:21 vm05.local ceph-mon[50611]: pgmap v107: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:21 vm09.local ceph-mon[59673]: pgmap v107: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:23.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:22 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:59:23.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:22 vm09.local ceph-mon[59673]: from='mgr.14249 
192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:59:24.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:23 vm05.local ceph-mon[50611]: pgmap v108: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:23 vm09.local ceph-mon[59673]: pgmap v108: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:26.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:26 vm05.local ceph-mon[50611]: pgmap v109: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:26.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:26 vm09.local ceph-mon[59673]: pgmap v109: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:28.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:27 vm05.local ceph-mon[50611]: pgmap v110: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:28.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:27 vm09.local ceph-mon[59673]: pgmap v110: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:30.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:30 vm05.local ceph-mon[50611]: pgmap v111: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:30.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:30 vm09.local ceph-mon[59673]: pgmap v111: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:32.504 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:32 vm05.local ceph-mon[50611]: pgmap v112: 65 pgs: 65 
active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:32.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:32 vm09.local ceph-mon[59673]: pgmap v112: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:33 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:59:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:33 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:59:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:33 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:59:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:33 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:59:33.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:33 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:59:33.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:33 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:59:33.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:33 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:59:33.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:33 vm05.local ceph-mon[50611]: 
from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T14:59:34.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:34 vm05.local ceph-mon[50611]: pgmap v113: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:34.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:34 vm09.local ceph-mon[59673]: pgmap v113: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:36.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:36 vm05.local ceph-mon[50611]: pgmap v114: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:36.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:36 vm09.local ceph-mon[59673]: pgmap v114: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:38.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:38 vm09.local ceph-mon[59673]: pgmap v115: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:38.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:38 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:59:38.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:38 vm05.local ceph-mon[50611]: pgmap v115: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:38.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:38 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:59:40.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:40 vm05.local ceph-mon[50611]: pgmap v116: 65 pgs: 65 
active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:40.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:40 vm09.local ceph-mon[59673]: pgmap v116: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'. 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr: 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr:state without impacting any branches by switching back to a branch. 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr: 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr: 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr: git switch -c 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr: 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr:Or undo this operation with: 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr: 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr: git switch - 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr: 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr: 2026-03-09T14:59:41.864 INFO:tasks.workunit.client.1.vm09.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose 2026-03-09T14:59:41.869 DEBUG:teuthology.orchestra.run.vm09:> cd -- /home/ubuntu/cephtest/clone.client.1/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.1 2026-03-09T14:59:41.928 INFO:tasks.workunit.client.1.vm09.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-09T14:59:41.930 INFO:tasks.workunit.client.1.vm09.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.1/qa/workunits/direct_io' 2026-03-09T14:59:41.930 INFO:tasks.workunit.client.1.vm09.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-09T14:59:41.972 INFO:tasks.workunit.client.1.vm09.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-09T14:59:42.009 INFO:tasks.workunit.client.1.vm09.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-09T14:59:42.038 INFO:tasks.workunit.client.1.vm09.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.1/qa/workunits/direct_io' 2026-03-09T14:59:42.040 
INFO:tasks.workunit.client.1.vm09.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.1/qa/workunits/fs' 2026-03-09T14:59:42.040 INFO:tasks.workunit.client.1.vm09.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-09T14:59:42.067 INFO:tasks.workunit.client.1.vm09.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.1/qa/workunits/fs' 2026-03-09T14:59:42.070 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T14:59:42.070 DEBUG:teuthology.orchestra.run.vm09:> dd if=/home/ubuntu/cephtest/workunits.list.client.1 of=/dev/stdout 2026-03-09T14:59:42.126 INFO:tasks.workunit:Running workunits matching suites/fsstress.sh on client.1... 2026-03-09T14:59:42.126 INFO:tasks.workunit:Running workunit suites/fsstress.sh... 2026-03-09T14:59:42.127 DEBUG:teuthology.orchestra.run.vm09:workunit test suites/fsstress.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh 2026-03-09T14:59:42.195 INFO:tasks.workunit.client.1.vm09.stderr:+ mkdir -p fsstress 2026-03-09T14:59:42.197 INFO:tasks.workunit.client.1.vm09.stderr:+ pushd fsstress 2026-03-09T14:59:42.198 INFO:tasks.workunit.client.1.vm09.stdout:~/cephtest/mnt.1/client.1/tmp/fsstress ~/cephtest/mnt.1/client.1/tmp 2026-03-09T14:59:42.198 INFO:tasks.workunit.client.1.vm09.stderr:+ wget -q -O ltp-full.tgz http://download.ceph.com/qa/ltp-full-20091231.tgz 2026-03-09T14:59:42.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:42 vm05.local ceph-mon[50611]: pgmap v117: 
65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:42.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:42 vm09.local ceph-mon[59673]: pgmap v117: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T14:59:43.817 INFO:tasks.workunit.client.1.vm09.stderr:+ tar xzf ltp-full.tgz 2026-03-09T14:59:44.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:44 vm05.local ceph-mon[50611]: pgmap v118: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:44.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:44 vm09.local ceph-mon[59673]: pgmap v118: 65 pgs: 65 active+clean; 460 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T14:59:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:46 vm09.local ceph-mon[59673]: pgmap v119: 65 pgs: 65 active+clean; 8.1 MiB data, 179 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s rd, 661 KiB/s wr, 13 op/s 2026-03-09T14:59:46.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:46 vm05.local ceph-mon[50611]: pgmap v119: 65 pgs: 65 active+clean; 8.1 MiB data, 179 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s rd, 661 KiB/s wr, 13 op/s 2026-03-09T14:59:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:47 vm09.local ceph-mon[59673]: pgmap v120: 65 pgs: 65 active+clean; 13 MiB data, 192 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s rd, 1.1 MiB/s wr, 25 op/s 2026-03-09T14:59:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:47 vm05.local ceph-mon[50611]: pgmap v120: 65 pgs: 65 active+clean; 13 MiB data, 192 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s rd, 1.1 MiB/s wr, 25 op/s 2026-03-09T14:59:50.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:50 vm05.local ceph-mon[50611]: pgmap v121: 65 pgs: 65 active+clean; 14 MiB data, 198 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s rd, 1.1 
MiB/s wr, 35 op/s 2026-03-09T14:59:50.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:50 vm09.local ceph-mon[59673]: pgmap v121: 65 pgs: 65 active+clean; 14 MiB data, 198 MiB used, 120 GiB / 120 GiB avail; 67 KiB/s rd, 1.1 MiB/s wr, 35 op/s 2026-03-09T14:59:51.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:51 vm09.local ceph-mon[59673]: pgmap v122: 65 pgs: 65 active+clean; 21 MiB data, 218 MiB used, 120 GiB / 120 GiB avail; 390 KiB/s rd, 1.8 MiB/s wr, 76 op/s 2026-03-09T14:59:51.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:51 vm05.local ceph-mon[50611]: pgmap v122: 65 pgs: 65 active+clean; 21 MiB data, 218 MiB used, 120 GiB / 120 GiB avail; 390 KiB/s rd, 1.8 MiB/s wr, 76 op/s 2026-03-09T14:59:52.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:52 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:59:52.648 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:52 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:59:53.432 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:53 vm09.local ceph-mon[59673]: pgmap v123: 65 pgs: 65 active+clean; 21 MiB data, 218 MiB used, 120 GiB / 120 GiB avail; 390 KiB/s rd, 1.8 MiB/s wr, 76 op/s 2026-03-09T14:59:53.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:53 vm05.local ceph-mon[50611]: pgmap v123: 65 pgs: 65 active+clean; 21 MiB data, 218 MiB used, 120 GiB / 120 GiB avail; 390 KiB/s rd, 1.8 MiB/s wr, 76 op/s 2026-03-09T14:59:55.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:55 vm05.local ceph-mon[50611]: pgmap v124: 65 pgs: 65 active+clean; 31 MiB data, 243 MiB used, 120 GiB / 120 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 99 op/s 2026-03-09T14:59:55.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:55 vm09.local 
ceph-mon[59673]: pgmap v124: 65 pgs: 65 active+clean; 31 MiB data, 243 MiB used, 120 GiB / 120 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 99 op/s 2026-03-09T14:59:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:57 vm09.local ceph-mon[59673]: pgmap v125: 65 pgs: 65 active+clean; 34 MiB data, 270 MiB used, 120 GiB / 120 GiB avail; 959 KiB/s rd, 2.2 MiB/s wr, 97 op/s 2026-03-09T14:59:57.709 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:57 vm05.local ceph-mon[50611]: pgmap v125: 65 pgs: 65 active+clean; 34 MiB data, 270 MiB used, 120 GiB / 120 GiB avail; 959 KiB/s rd, 2.2 MiB/s wr, 97 op/s 2026-03-09T15:00:00.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 14:59:59 vm09.local ceph-mon[59673]: pgmap v126: 65 pgs: 65 active+clean; 39 MiB data, 283 MiB used, 120 GiB / 120 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 97 op/s 2026-03-09T15:00:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 14:59:59 vm05.local ceph-mon[50611]: pgmap v126: 65 pgs: 65 active+clean; 39 MiB data, 283 MiB used, 120 GiB / 120 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 97 op/s 2026-03-09T15:00:01.309 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:01 vm05.local ceph-mon[50611]: Health detail: HEALTH_WARN 1 filesystem with deprecated feature inline_data 2026-03-09T15:00:01.310 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:01 vm05.local ceph-mon[50611]: [WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data 2026-03-09T15:00:01.310 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:01 vm05.local ceph-mon[50611]: fs cephfs has deprecated feature inline_data enabled. 
2026-03-09T15:00:01.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:01 vm09.local ceph-mon[59673]: Health detail: HEALTH_WARN 1 filesystem with deprecated feature inline_data 2026-03-09T15:00:01.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:01 vm09.local ceph-mon[59673]: [WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data 2026-03-09T15:00:01.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:01 vm09.local ceph-mon[59673]: fs cephfs has deprecated feature inline_data enabled. 2026-03-09T15:00:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:02 vm09.local ceph-mon[59673]: pgmap v127: 65 pgs: 65 active+clean; 49 MiB data, 327 MiB used, 120 GiB / 120 GiB avail; 1.6 MiB/s rd, 3.0 MiB/s wr, 129 op/s 2026-03-09T15:00:02.778 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:02 vm05.local ceph-mon[50611]: pgmap v127: 65 pgs: 65 active+clean; 49 MiB data, 327 MiB used, 120 GiB / 120 GiB avail; 1.6 MiB/s rd, 3.0 MiB/s wr, 129 op/s 2026-03-09T15:00:03.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:03 vm09.local ceph-mon[59673]: pgmap v128: 65 pgs: 65 active+clean; 49 MiB data, 327 MiB used, 120 GiB / 120 GiB avail; 1.3 MiB/s rd, 2.3 MiB/s wr, 88 op/s 2026-03-09T15:00:03.685 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:03 vm05.local ceph-mon[50611]: pgmap v128: 65 pgs: 65 active+clean; 49 MiB data, 327 MiB used, 120 GiB / 120 GiB avail; 1.3 MiB/s rd, 2.3 MiB/s wr, 88 op/s 2026-03-09T15:00:03.875 INFO:teuthology.orchestra.run.vm05.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:00:03.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:03.872+0000 7f1d2ffff700 1 -- 192.168.123.105:0/3351557532 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+89 (secure 0 0 0) 0x7f1d34000ca0 con 0x7f1d3006c870 2026-03-09T15:00:03.879 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:03.877+0000 7f1d2dffb700 1 -- 192.168.123.105:0/3351557532 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1d3006c870 msgr2=0x7f1d3006ed20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:03.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:03.877+0000 7f1d2dffb700 1 --2- 192.168.123.105:0/3351557532 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1d3006c870 0x7f1d3006ed20 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f1d400060b0 tx=0x7f1d40006040 comp rx=0 tx=0).stop 2026-03-09T15:00:03.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:03.877+0000 7f1d2dffb700 1 -- 192.168.123.105:0/3351557532 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d48071a60 msgr2=0x7f1d481a4ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:03.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:03.877+0000 7f1d2dffb700 1 --2- 192.168.123.105:0/3351557532 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d48071a60 0x7f1d481a4ac0 secure :-1 s=READY pgs=292 cs=0 l=1 rev1=1 crypto rx=0x7f1d3c009fc0 tx=0x7f1d3c00c660 comp rx=0 tx=0).stop 2026-03-09T15:00:03.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:03.878+0000 7f1d2dffb700 1 -- 192.168.123.105:0/3351557532 shutdown_connections 2026-03-09T15:00:03.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:03.878+0000 7f1d2dffb700 1 --2- 192.168.123.105:0/3351557532 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1d3006c870 0x7f1d3006ed20 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:03.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:03.878+0000 7f1d2dffb700 1 --2- 192.168.123.105:0/3351557532 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1d48071a60 0x7f1d481a4ac0 unknown :-1 s=CLOSED 
pgs=292 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:03.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:03.878+0000 7f1d2dffb700 1 --2- 192.168.123.105:0/3351557532 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1d481a5000 0x7f1d481aa070 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:03.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:03.878+0000 7f1d2dffb700 1 -- 192.168.123.105:0/3351557532 >> 192.168.123.105:0/3351557532 conn(0x7f1d4806d1a0 msgr2=0x7f1d48070660 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:03.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:03.878+0000 7f1d2dffb700 1 -- 192.168.123.105:0/3351557532 shutdown_connections 2026-03-09T15:00:03.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:03.878+0000 7f1d2dffb700 1 -- 192.168.123.105:0/3351557532 wait complete. 2026-03-09T15:00:03.962 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! 
ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done' 2026-03-09T15:00:04.564 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:00:05.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:04 vm05.local ceph-mon[50611]: Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:00:05.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:04 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:00:05.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:04 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:00:05.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:04 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:00:05.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:04 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:00:05.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:04 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.359+0000 7f2872888700 1 -- 192.168.123.105:0/2851096681 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f286c072440 msgr2=0x7f286c10be90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.359+0000 7f2872888700 1 --2- 192.168.123.105:0/2851096681 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f286c072440 0x7f286c10be90 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f285c009a60 tx=0x7f285c009d70 comp rx=0 tx=0).stop 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.360+0000 7f2872888700 1 -- 192.168.123.105:0/2851096681 shutdown_connections 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.360+0000 7f2872888700 1 --2- 192.168.123.105:0/2851096681 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f286c072440 0x7f286c10be90 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.360+0000 7f2872888700 1 --2- 192.168.123.105:0/2851096681 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f286c071a60 0x7f286c071e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.360+0000 7f2872888700 1 -- 192.168.123.105:0/2851096681 >> 192.168.123.105:0/2851096681 conn(0x7f286c06d1a0 msgr2=0x7f286c06f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.361+0000 7f2872888700 1 -- 192.168.123.105:0/2851096681 shutdown_connections 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.361+0000 7f2872888700 1 -- 192.168.123.105:0/2851096681 wait complete. 
2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2872888700 1 Processor -- start 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2872888700 1 -- start start 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2872888700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f286c071a60 0x7f286c1af9c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2872888700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f286c072440 0x7f286c1b1f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2872888700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f286c1b24e0 con 0x7f286c072440 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2872888700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f286c1b2620 con 0x7f286c071a60 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2871886700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f286c071a60 0x7f286c1af9c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2871886700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f286c071a60 0x7f286c1af9c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:33422/0 (socket says 192.168.123.105:33422) 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2871886700 1 -- 192.168.123.105:0/1599841448 learned_addr learned my addr 192.168.123.105:0/1599841448 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:00:05.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2871085700 1 --2- 192.168.123.105:0/1599841448 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f286c072440 0x7f286c1b1f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:05.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:04 vm09.local ceph-mon[59673]: Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:00:05.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:04 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:00:05.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:04 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:00:05.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:04 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:00:05.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:04 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:00:05.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:04 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:00:05.369 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2871085700 1 -- 192.168.123.105:0/1599841448 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f286c071a60 msgr2=0x7f286c1af9c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2871085700 1 --2- 192.168.123.105:0/1599841448 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f286c071a60 0x7f286c1af9c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.362+0000 7f2871085700 1 -- 192.168.123.105:0/1599841448 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f285c009710 con 0x7f286c072440 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.363+0000 7f2871085700 1 --2- 192.168.123.105:0/1599841448 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f286c072440 0x7f286c1b1f10 secure :-1 s=READY pgs=293 cs=0 l=1 rev1=1 crypto rx=0x7f286c107dc0 tx=0x7f285c00f690 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.363+0000 7f2862ffd700 1 -- 192.168.123.105:0/1599841448 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f285c01d070 con 0x7f286c072440 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.363+0000 7f2862ffd700 1 -- 192.168.123.105:0/1599841448 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f285c00fbf0 con 0x7f286c072440 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.363+0000 7f2862ffd700 1 -- 192.168.123.105:0/1599841448 <== mon.0 v2:192.168.123.105:3300/0 
3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f285c0176d0 con 0x7f286c072440 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.363+0000 7f2872888700 1 -- 192.168.123.105:0/1599841448 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f286c1b2850 con 0x7f286c072440 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.363+0000 7f2872888700 1 -- 192.168.123.105:0/1599841448 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f286c1b2d40 con 0x7f286c072440 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.364+0000 7f2872888700 1 -- 192.168.123.105:0/1599841448 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2850005320 con 0x7f286c072440 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.365+0000 7f2862ffd700 1 -- 192.168.123.105:0/1599841448 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f285c017830 con 0x7f286c072440 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.366+0000 7f2862ffd700 1 --2- 192.168.123.105:0/1599841448 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f285806c750 0x7f285806ec00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.366+0000 7f2862ffd700 1 -- 192.168.123.105:0/1599841448 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f285c08cc40 con 0x7f286c072440 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.367+0000 7f2871886700 1 --2- 192.168.123.105:0/1599841448 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] 
conn(0x7f285806c750 0x7f285806ec00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:05.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.367+0000 7f2871886700 1 --2- 192.168.123.105:0/1599841448 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f285806c750 0x7f285806ec00 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f2868009ee0 tx=0x7f2868009500 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:05.370 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.368+0000 7f2862ffd700 1 -- 192.168.123.105:0/1599841448 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f285c05af80 con 0x7f286c072440 2026-03-09T15:00:05.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.596+0000 7f2872888700 1 -- 192.168.123.105:0/1599841448 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f2850000bf0 con 0x7f285806c750 2026-03-09T15:00:05.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.598+0000 7f2862ffd700 1 -- 192.168.123.105:0/1599841448 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+351 (secure 0 0 0) 0x7f2850000bf0 con 0x7f285806c750 2026-03-09T15:00:05.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.612+0000 7f2860fb9700 1 -- 192.168.123.105:0/1599841448 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f285806c750 msgr2=0x7f285806ec00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:05.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.614+0000 7f2860fb9700 1 --2- 192.168.123.105:0/1599841448 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f285806c750 0x7f285806ec00 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f2868009ee0 tx=0x7f2868009500 comp rx=0 tx=0).stop 2026-03-09T15:00:05.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.615+0000 7f2860fb9700 1 -- 192.168.123.105:0/1599841448 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f286c072440 msgr2=0x7f286c1b1f10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:05.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.615+0000 7f2860fb9700 1 --2- 192.168.123.105:0/1599841448 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f286c072440 0x7f286c1b1f10 secure :-1 s=READY pgs=293 cs=0 l=1 rev1=1 crypto rx=0x7f286c107dc0 tx=0x7f285c00f690 comp rx=0 tx=0).stop 2026-03-09T15:00:05.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.615+0000 7f2860fb9700 1 -- 192.168.123.105:0/1599841448 shutdown_connections 2026-03-09T15:00:05.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.615+0000 7f2860fb9700 1 --2- 192.168.123.105:0/1599841448 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f285806c750 0x7f285806ec00 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:05.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.615+0000 7f2860fb9700 1 --2- 192.168.123.105:0/1599841448 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f286c071a60 0x7f286c1af9c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:05.616 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.615+0000 7f2860fb9700 1 --2- 192.168.123.105:0/1599841448 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f286c072440 0x7f286c1b1f10 unknown :-1 s=CLOSED pgs=293 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:05.616 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.615+0000 7f2860fb9700 1 -- 192.168.123.105:0/1599841448 >> 192.168.123.105:0/1599841448 conn(0x7f286c06d1a0 msgr2=0x7f286c10a6d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:05.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.615+0000 7f2860fb9700 1 -- 192.168.123.105:0/1599841448 shutdown_connections 2026-03-09T15:00:05.617 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.615+0000 7f2860fb9700 1 -- 192.168.123.105:0/1599841448 wait complete. 2026-03-09T15:00:05.671 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:00:05.885 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:05 vm05.local ceph-mon[50611]: Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:00:05.885 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:05 vm05.local ceph-mon[50611]: pgmap v129: 65 pgs: 65 active+clean; 56 MiB data, 339 MiB used, 120 GiB / 120 GiB avail; 1.6 MiB/s rd, 3.0 MiB/s wr, 115 op/s 2026-03-09T15:00:05.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.906+0000 7fb6fe64d700 1 -- 192.168.123.105:0/1714727262 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb6f8107d50 msgr2=0x7fb6f81081c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:05.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.906+0000 7fb6fe64d700 1 --2- 192.168.123.105:0/1714727262 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb6f8107d50 0x7fb6f81081c0 secure :-1 s=READY pgs=294 cs=0 l=1 rev1=1 crypto rx=0x7fb6e0009b00 tx=0x7fb6e0009e10 comp rx=0 tx=0).stop 2026-03-09T15:00:05.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.906+0000 7fb6fe64d700 1 -- 192.168.123.105:0/1714727262 shutdown_connections 2026-03-09T15:00:05.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.906+0000 7fb6fe64d700 1 --2- 
192.168.123.105:0/1714727262 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb6f8107d50 0x7fb6f81081c0 unknown :-1 s=CLOSED pgs=294 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:05.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.906+0000 7fb6fe64d700 1 --2- 192.168.123.105:0/1714727262 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6f8071db0 0x7fb6f80721c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:05.909 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.906+0000 7fb6fe64d700 1 -- 192.168.123.105:0/1714727262 >> 192.168.123.105:0/1714727262 conn(0x7fb6f806d3e0 msgr2=0x7fb6f806f830 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:05.909 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.909+0000 7fb6fe64d700 1 -- 192.168.123.105:0/1714727262 shutdown_connections 2026-03-09T15:00:05.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.909+0000 7fb6fe64d700 1 -- 192.168.123.105:0/1714727262 wait complete. 
2026-03-09T15:00:05.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.909+0000 7fb6fe64d700 1 Processor -- start 2026-03-09T15:00:05.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.909+0000 7fb6fe64d700 1 -- start start 2026-03-09T15:00:05.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.909+0000 7fb6fe64d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb6f8071db0 0x7fb6f8116d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:05.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.909+0000 7fb6fe64d700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6f8107d50 0x7fb6f81172b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:05.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.909+0000 7fb6fe64d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb6f81178d0 con 0x7fb6f8071db0 2026-03-09T15:00:05.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.909+0000 7fb6fe64d700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb6f81b2d50 con 0x7fb6f8107d50 2026-03-09T15:00:05.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.910+0000 7fb6f77fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6f8107d50 0x7fb6f81172b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:05.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.910+0000 7fb6f77fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6f8107d50 0x7fb6f81172b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:33448/0 (socket says 192.168.123.105:33448) 2026-03-09T15:00:05.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.910+0000 7fb6f77fe700 1 -- 192.168.123.105:0/1554915463 learned_addr learned my addr 192.168.123.105:0/1554915463 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:00:05.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.910+0000 7fb6f77fe700 1 -- 192.168.123.105:0/1554915463 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb6f8071db0 msgr2=0x7fb6f8116d70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:05.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.910+0000 7fb6f77fe700 1 --2- 192.168.123.105:0/1554915463 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb6f8071db0 0x7fb6f8116d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:05.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.910+0000 7fb6f77fe700 1 -- 192.168.123.105:0/1554915463 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb6e00097e0 con 0x7fb6f8107d50 2026-03-09T15:00:05.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.910+0000 7fb6f77fe700 1 --2- 192.168.123.105:0/1554915463 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6f8107d50 0x7fb6f81172b0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fb6e00052f0 tx=0x7fb6e0003730 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:05.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.911+0000 7fb6f57fa700 1 -- 192.168.123.105:0/1554915463 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb6e001d070 con 0x7fb6f8107d50 2026-03-09T15:00:05.913 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.911+0000 7fb6fe64d700 1 -- 
192.168.123.105:0/1554915463 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb6f81b2e90 con 0x7fb6f8107d50 2026-03-09T15:00:05.913 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.911+0000 7fb6fe64d700 1 -- 192.168.123.105:0/1554915463 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb6f81b3300 con 0x7fb6f8107d50 2026-03-09T15:00:05.913 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.911+0000 7fb6f57fa700 1 -- 192.168.123.105:0/1554915463 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fb6e0003cc0 con 0x7fb6f8107d50 2026-03-09T15:00:05.914 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.911+0000 7fb6f57fa700 1 -- 192.168.123.105:0/1554915463 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb6e00218c0 con 0x7fb6f8107d50 2026-03-09T15:00:05.914 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.913+0000 7fb6f57fa700 1 -- 192.168.123.105:0/1554915463 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fb6e000f4a0 con 0x7fb6f8107d50 2026-03-09T15:00:05.914 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.913+0000 7fb6f57fa700 1 --2- 192.168.123.105:0/1554915463 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb6e406c6d0 0x7fb6e406eb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:05.914 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.913+0000 7fb6f57fa700 1 -- 192.168.123.105:0/1554915463 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fb6e008c8e0 con 0x7fb6f8107d50 2026-03-09T15:00:05.914 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.913+0000 7fb6f7fff700 1 --2- 192.168.123.105:0/1554915463 >> 
[v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb6e406c6d0 0x7fb6e406eb80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:05.914 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.913+0000 7fb6fe64d700 1 -- 192.168.123.105:0/1554915463 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb6d8005320 con 0x7fb6f8107d50 2026-03-09T15:00:05.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.918+0000 7fb6f57fa700 1 -- 192.168.123.105:0/1554915463 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fb6e00570e0 con 0x7fb6f8107d50 2026-03-09T15:00:05.922 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:05.918+0000 7fb6f7fff700 1 --2- 192.168.123.105:0/1554915463 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb6e406c6d0 0x7fb6e406eb80 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fb6e8005950 tx=0x7fb6e80058e0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:06.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.150+0000 7fb6fe64d700 1 -- 192.168.123.105:0/1554915463 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fb6d8000bf0 con 0x7fb6e406c6d0 2026-03-09T15:00:06.159 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.158+0000 7fb6f57fa700 1 -- 192.168.123.105:0/1554915463 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+341 (secure 0 0 0) 0x7fb6d8000bf0 con 0x7fb6e406c6d0 2026-03-09T15:00:06.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.163+0000 7fb6eeffd700 1 -- 
192.168.123.105:0/1554915463 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb6e406c6d0 msgr2=0x7fb6e406eb80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:06.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.163+0000 7fb6eeffd700 1 --2- 192.168.123.105:0/1554915463 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb6e406c6d0 0x7fb6e406eb80 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fb6e8005950 tx=0x7fb6e80058e0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.163+0000 7fb6eeffd700 1 -- 192.168.123.105:0/1554915463 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6f8107d50 msgr2=0x7fb6f81172b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:06.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.163+0000 7fb6eeffd700 1 --2- 192.168.123.105:0/1554915463 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6f8107d50 0x7fb6f81172b0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fb6e00052f0 tx=0x7fb6e0003730 comp rx=0 tx=0).stop 2026-03-09T15:00:06.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.163+0000 7fb6eeffd700 1 -- 192.168.123.105:0/1554915463 shutdown_connections 2026-03-09T15:00:06.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.163+0000 7fb6eeffd700 1 --2- 192.168.123.105:0/1554915463 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fb6e406c6d0 0x7fb6e406eb80 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.163+0000 7fb6eeffd700 1 --2- 192.168.123.105:0/1554915463 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb6f8071db0 0x7fb6f8116d70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.164 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.163+0000 7fb6eeffd700 1 --2- 192.168.123.105:0/1554915463 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6f8107d50 0x7fb6f81172b0 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.163+0000 7fb6eeffd700 1 -- 192.168.123.105:0/1554915463 >> 192.168.123.105:0/1554915463 conn(0x7fb6f806d3e0 msgr2=0x7fb6f810af80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:06.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.163+0000 7fb6eeffd700 1 -- 192.168.123.105:0/1554915463 shutdown_connections 2026-03-09T15:00:06.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.164+0000 7fb6eeffd700 1 -- 192.168.123.105:0/1554915463 wait complete. 2026-03-09T15:00:06.332 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.331+0000 7fa404b29700 1 -- 192.168.123.105:0/3203123579 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa400071db0 msgr2=0x7fa4000721c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:06.332 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.331+0000 7fa404b29700 1 --2- 192.168.123.105:0/3203123579 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa400071db0 0x7fa4000721c0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7fa3f0009a60 tx=0x7fa3f0009d70 comp rx=0 tx=0).stop 2026-03-09T15:00:06.332 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.331+0000 7fa404b29700 1 -- 192.168.123.105:0/3203123579 shutdown_connections 2026-03-09T15:00:06.332 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.331+0000 7fa404b29700 1 --2- 192.168.123.105:0/3203123579 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa400107d50 0x7fa4001081c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:00:06.332 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.331+0000 7fa404b29700 1 --2- 192.168.123.105:0/3203123579 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa400071db0 0x7fa4000721c0 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.332 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.331+0000 7fa404b29700 1 -- 192.168.123.105:0/3203123579 >> 192.168.123.105:0/3203123579 conn(0x7fa40006d3e0 msgr2=0x7fa40006f830 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:06.332 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.332+0000 7fa404b29700 1 -- 192.168.123.105:0/3203123579 shutdown_connections 2026-03-09T15:00:06.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.332+0000 7fa404b29700 1 -- 192.168.123.105:0/3203123579 wait complete. 2026-03-09T15:00:06.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.337+0000 7fa404b29700 1 Processor -- start 2026-03-09T15:00:06.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.337+0000 7fa404b29700 1 -- start start 2026-03-09T15:00:06.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.337+0000 7fa404b29700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa400071db0 0x7fa4001a4be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:06.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.337+0000 7fa404b29700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa400107d50 0x7fa4001a5120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:06.338 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.337+0000 7fa404b29700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa4001a5740 con 0x7fa400071db0 2026-03-09T15:00:06.338 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.337+0000 7fa404b29700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa4001a5880 con 0x7fa400107d50 2026-03-09T15:00:06.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.339+0000 7fa3fdd9b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa400107d50 0x7fa4001a5120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:06.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.339+0000 7fa3fdd9b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa400107d50 0x7fa4001a5120 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:33464/0 (socket says 192.168.123.105:33464) 2026-03-09T15:00:06.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.339+0000 7fa3fdd9b700 1 -- 192.168.123.105:0/1875910935 learned_addr learned my addr 192.168.123.105:0/1875910935 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:00:06.341 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.340+0000 7fa3fe59c700 1 --2- 192.168.123.105:0/1875910935 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa400071db0 0x7fa4001a4be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:06.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.348+0000 7fa3fe59c700 1 -- 192.168.123.105:0/1875910935 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa400107d50 msgr2=0x7fa4001a5120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:06.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.348+0000 7fa3fe59c700 1 --2- 
192.168.123.105:0/1875910935 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa400107d50 0x7fa4001a5120 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.348+0000 7fa3fe59c700 1 -- 192.168.123.105:0/1875910935 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa3f0009710 con 0x7fa400071db0 2026-03-09T15:00:06.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.348+0000 7fa3fdd9b700 1 --2- 192.168.123.105:0/1875910935 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa400107d50 0x7fa4001a5120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T15:00:06.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.349+0000 7fa3fe59c700 1 --2- 192.168.123.105:0/1875910935 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa400071db0 0x7fa4001a4be0 secure :-1 s=READY pgs=295 cs=0 l=1 rev1=1 crypto rx=0x7fa3f0005950 tx=0x7fa3f00037e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:06.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.350+0000 7fa3ef7fe700 1 -- 192.168.123.105:0/1875910935 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa3f001d070 con 0x7fa400071db0 2026-03-09T15:00:06.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.350+0000 7fa3ef7fe700 1 -- 192.168.123.105:0/1875910935 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa3f000fb80 con 0x7fa400071db0 2026-03-09T15:00:06.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.350+0000 7fa3ef7fe700 1 -- 192.168.123.105:0/1875910935 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 
(secure 0 0 0) 0x7fa3f0017690 con 0x7fa400071db0 2026-03-09T15:00:06.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.350+0000 7fa404b29700 1 -- 192.168.123.105:0/1875910935 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa4001aa2d0 con 0x7fa400071db0 2026-03-09T15:00:06.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.350+0000 7fa404b29700 1 -- 192.168.123.105:0/1875910935 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa4001aa740 con 0x7fa400071db0 2026-03-09T15:00:06.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.355+0000 7fa3ef7fe700 1 -- 192.168.123.105:0/1875910935 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fa3f000fcf0 con 0x7fa400071db0 2026-03-09T15:00:06.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.355+0000 7fa3ef7fe700 1 --2- 192.168.123.105:0/1875910935 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa3e806c480 0x7fa3e806e930 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:06.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.355+0000 7fa3ef7fe700 1 -- 192.168.123.105:0/1875910935 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fa3f008c950 con 0x7fa400071db0 2026-03-09T15:00:06.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.357+0000 7fa3fdd9b700 1 --2- 192.168.123.105:0/1875910935 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa3e806c480 0x7fa3e806e930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:06.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.357+0000 7fa404b29700 1 -- 192.168.123.105:0/1875910935 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa400066e40 con 0x7fa400071db0 2026-03-09T15:00:06.366 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.363+0000 7fa3fdd9b700 1 --2- 192.168.123.105:0/1875910935 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa3e806c480 0x7fa3e806e930 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fa3f4005950 tx=0x7fa3f4009500 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:06.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:05 vm09.local ceph-mon[59673]: Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:00:06.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:05 vm09.local ceph-mon[59673]: pgmap v129: 65 pgs: 65 active+clean; 56 MiB data, 339 MiB used, 120 GiB / 120 GiB avail; 1.6 MiB/s rd, 3.0 MiB/s wr, 115 op/s 2026-03-09T15:00:06.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.365+0000 7fa3ef7fe700 1 -- 192.168.123.105:0/1875910935 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fa3f005ab60 con 0x7fa400071db0 2026-03-09T15:00:06.466 INFO:tasks.workunit.client.0.vm05.stderr:Updating files: 67% (9475/13941) Updating files: 68% (9480/13941) Updating files: 69% (9620/13941) Updating files: 70% (9759/13941) Updating files: 71% (9899/13941) Updating files: 72% (10038/13941) Updating files: 73% (10177/13941) Updating files: 74% (10317/13941) Updating files: 75% (10456/13941) Updating files: 76% (10596/13941) Updating files: 77% (10735/13941) Updating files: 78% (10874/13941) Updating files: 79% (11014/13941) Updating files: 80% (11153/13941) Updating files: 81% (11293/13941) Updating files: 82% (11432/13941) Updating files: 83% (11572/13941) Updating files: 84% (11711/13941) Updating 
files: 85% (11850/13941) Updating files: 86% (11990/13941) Updating files: 87% (12129/13941) Updating files: 88% (12269/13941) Updating files: 89% (12408/13941) Updating files: 90% (12547/13941) Updating files: 91% (12687/13941) Updating files: 92% (12826/13941) Updating files: 93% (12966/13941) Updating files: 94% (13105/13941) Updating files: 95% (13244/13941) Updating files: 96% (13384/13941) Updating files: 97% (13523/13941) Updating files: 98% (13663/13941) Updating files: 99% (13802/13941) Updating files: 100% (13941/13941) Updating files: 100% (13941/13941), done. 2026-03-09T15:00:06.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.537+0000 7fa404b29700 1 -- 192.168.123.105:0/1875910935 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fa4001aaae0 con 0x7fa3e806c480 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (3m) 94s ago 4m 22.6M - 0.25.0 c8568f914cd2 35e160b8d1de 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (4m) 94s ago 4m 8061k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (3m) 95s ago 3m 8242k - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (4m) 94s ago 4m 7411k - 18.2.0 dc2bc1663786 1c577d7a0de0 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (3m) 95s ago 3m 7402k - 18.2.0 dc2bc1663786 9e4961442551 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (3m) 94s ago 3m 82.7M - 9.4.7 954c08fa6188 46e00e5e5b38 2026-03-09T15:00:06.549 
INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (102s) 94s ago 102s 16.0M - 18.2.0 dc2bc1663786 ea3dca51957f 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (100s) 94s ago 99s 12.8M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (99s) 95s ago 99s 12.9M - 18.2.0 dc2bc1663786 6c77fb591d5a 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (101s) 95s ago 101s 15.7M - 18.2.0 dc2bc1663786 b5ad1c71089a 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:9283,8765,8443 running (4m) 94s ago 4m 499M - 18.2.0 dc2bc1663786 528c75e7c581 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (3m) 95s ago 3m 444M - 18.2.0 dc2bc1663786 b7db289ecc14 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (4m) 94s ago 4m 49.2M 2048M 18.2.0 dc2bc1663786 c83e96b62251 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (3m) 95s ago 3m 45.0M 2048M 18.2.0 dc2bc1663786 7963792b5376 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (4m) 94s ago 4m 13.9M - 1.5.0 0da6a335fe13 925d94d1da6f 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (3m) 95s ago 3m 14.6M - 1.5.0 0da6a335fe13 e0b25e3a046e 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (2m) 94s ago 2m 45.7M 4096M 18.2.0 dc2bc1663786 50f3ca995318 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (2m) 94s ago 2m 46.9M 4096M 18.2.0 dc2bc1663786 23e35bdafe50 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (2m) 94s ago 2m 45.8M 4096M 18.2.0 
dc2bc1663786 75097dc12979 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (2m) 95s ago 2m 44.4M 4096M 18.2.0 dc2bc1663786 e79644a0564f 2026-03-09T15:00:06.549 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (2m) 95s ago 2m 43.6M 4096M 18.2.0 dc2bc1663786 4239752204df 2026-03-09T15:00:06.550 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (2m) 95s ago 2m 42.5M 4096M 18.2.0 dc2bc1663786 85fde149396e 2026-03-09T15:00:06.550 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (3m) 94s ago 3m 38.9M - 2.43.0 a07b618ecd1d c36363ff6641 2026-03-09T15:00:06.550 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.545+0000 7fa3ef7fe700 1 -- 192.168.123.105:0/1875910935 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3288 (secure 0 0 0) 0x7fa4001aaae0 con 0x7fa3e806c480 2026-03-09T15:00:06.551 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.550+0000 7fa404b29700 1 -- 192.168.123.105:0/1875910935 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa3e806c480 msgr2=0x7fa3e806e930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:06.551 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.550+0000 7fa404b29700 1 --2- 192.168.123.105:0/1875910935 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa3e806c480 0x7fa3e806e930 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7fa3f4005950 tx=0x7fa3f4009500 comp rx=0 tx=0).stop 2026-03-09T15:00:06.551 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.550+0000 7fa404b29700 1 -- 192.168.123.105:0/1875910935 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa400071db0 msgr2=0x7fa4001a4be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:06.551 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.551+0000 7fa404b29700 1 --2- 192.168.123.105:0/1875910935 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa400071db0 0x7fa4001a4be0 secure :-1 s=READY pgs=295 cs=0 l=1 rev1=1 crypto rx=0x7fa3f0005950 tx=0x7fa3f00037e0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.556+0000 7fa404b29700 1 -- 192.168.123.105:0/1875910935 shutdown_connections 2026-03-09T15:00:06.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.556+0000 7fa404b29700 1 --2- 192.168.123.105:0/1875910935 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fa3e806c480 0x7fa3e806e930 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.556+0000 7fa404b29700 1 --2- 192.168.123.105:0/1875910935 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa400071db0 0x7fa4001a4be0 unknown :-1 s=CLOSED pgs=295 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.556+0000 7fa404b29700 1 --2- 192.168.123.105:0/1875910935 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa400107d50 0x7fa4001a5120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.556+0000 7fa404b29700 1 -- 192.168.123.105:0/1875910935 >> 192.168.123.105:0/1875910935 conn(0x7fa40006d3e0 msgr2=0x7fa40010af80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:06.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.558+0000 7fa404b29700 1 -- 192.168.123.105:0/1875910935 shutdown_connections 2026-03-09T15:00:06.562 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.562+0000 7fa404b29700 1 -- 192.168.123.105:0/1875910935 wait complete. 
2026-03-09T15:00:06.726 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.725+0000 7ff2d5455700 1 -- 192.168.123.105:0/392075540 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff2d0072360 msgr2=0x7ff2d00770e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:06.726 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.725+0000 7ff2d5455700 1 --2- 192.168.123.105:0/392075540 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff2d0072360 0x7ff2d00770e0 secure :-1 s=READY pgs=296 cs=0 l=1 rev1=1 crypto rx=0x7ff2c8009230 tx=0x7ff2c8009260 comp rx=0 tx=0).stop 2026-03-09T15:00:06.726 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.725+0000 7ff2d5455700 1 -- 192.168.123.105:0/392075540 shutdown_connections 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.725+0000 7ff2d5455700 1 --2- 192.168.123.105:0/392075540 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff2d0072360 0x7ff2d00770e0 unknown :-1 s=CLOSED pgs=296 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.725+0000 7ff2d5455700 1 --2- 192.168.123.105:0/392075540 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2d0071980 0x7ff2d0071d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.725+0000 7ff2d5455700 1 -- 192.168.123.105:0/392075540 >> 192.168.123.105:0/392075540 conn(0x7ff2d006d1a0 msgr2=0x7ff2d006f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.725+0000 7ff2d5455700 1 -- 192.168.123.105:0/392075540 shutdown_connections 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.725+0000 7ff2d5455700 1 -- 192.168.123.105:0/392075540 wait 
complete. 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.726+0000 7ff2d5455700 1 Processor -- start 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.726+0000 7ff2d5455700 1 -- start start 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.726+0000 7ff2d5455700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff2d0071980 0x7ff2d0082550 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.726+0000 7ff2d5455700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2d0082a90 0x7ff2d0082f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.726+0000 7ff2d5455700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff2d012dd80 con 0x7ff2d0071980 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.726+0000 7ff2d5455700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff2d012def0 con 0x7ff2d0082a90 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.726+0000 7ff2ce7fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2d0082a90 0x7ff2d0082f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:06.727 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.726+0000 7ff2ce7fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2d0082a90 0x7ff2d0082f00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I 
am v2:192.168.123.105:33488/0 (socket says 192.168.123.105:33488) 2026-03-09T15:00:06.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.726+0000 7ff2ce7fc700 1 -- 192.168.123.105:0/317864370 learned_addr learned my addr 192.168.123.105:0/317864370 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:00:06.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.727+0000 7ff2ce7fc700 1 -- 192.168.123.105:0/317864370 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff2d0071980 msgr2=0x7ff2d0082550 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:06.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.727+0000 7ff2ce7fc700 1 --2- 192.168.123.105:0/317864370 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff2d0071980 0x7ff2d0082550 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.727+0000 7ff2ce7fc700 1 -- 192.168.123.105:0/317864370 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff2c8008ee0 con 0x7ff2d0082a90 2026-03-09T15:00:06.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.727+0000 7ff2ce7fc700 1 --2- 192.168.123.105:0/317864370 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2d0082a90 0x7ff2d0082f00 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7ff2c80042a0 tx=0x7ff2c8004380 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:06.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.728+0000 7ff2b7fff700 1 -- 192.168.123.105:0/317864370 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff2c801c070 con 0x7ff2d0082a90 2026-03-09T15:00:06.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.728+0000 7ff2d5455700 1 -- 
192.168.123.105:0/317864370 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff2d012e110 con 0x7ff2d0082a90 2026-03-09T15:00:06.730 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.728+0000 7ff2d5455700 1 -- 192.168.123.105:0/317864370 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff2d012e600 con 0x7ff2d0082a90 2026-03-09T15:00:06.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.729+0000 7ff2b7fff700 1 -- 192.168.123.105:0/317864370 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7ff2c80047e0 con 0x7ff2d0082a90 2026-03-09T15:00:06.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.729+0000 7ff2b7fff700 1 -- 192.168.123.105:0/317864370 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff2c8016610 con 0x7ff2d0082a90 2026-03-09T15:00:06.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.730+0000 7ff2d5455700 1 -- 192.168.123.105:0/317864370 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff2bc005320 con 0x7ff2d0082a90 2026-03-09T15:00:06.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.731+0000 7ff2b7fff700 1 -- 192.168.123.105:0/317864370 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7ff2c800e440 con 0x7ff2d0082a90 2026-03-09T15:00:06.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.731+0000 7ff2b7fff700 1 --2- 192.168.123.105:0/317864370 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff2b806c7a0 0x7ff2b806ec50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:06.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.732+0000 7ff2b7fff700 1 -- 192.168.123.105:0/317864370 <== mon.1 
v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7ff2c8012070 con 0x7ff2d0082a90 2026-03-09T15:00:06.733 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.732+0000 7ff2ceffd700 1 --2- 192.168.123.105:0/317864370 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff2b806c7a0 0x7ff2b806ec50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:06.734 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.733+0000 7ff2ceffd700 1 --2- 192.168.123.105:0/317864370 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff2b806c7a0 0x7ff2b806ec50 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7ff2c000b3c0 tx=0x7ff2c000d040 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:06.734 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.733+0000 7ff2b7fff700 1 -- 192.168.123.105:0/317864370 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7ff2c805b6f0 con 0x7ff2d0082a90 2026-03-09T15:00:06.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.980+0000 7ff2d5455700 1 -- 192.168.123.105:0/317864370 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7ff2bc005cc0 con 0x7ff2d0082a90 2026-03-09T15:00:06.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.981+0000 7ff2b7fff700 1 -- 192.168.123.105:0/317864370 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+558 (secure 0 0 0) 0x7ff2c805b280 con 0x7ff2d0082a90 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:00:06.983 
INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 6 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 14 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:00:06.983 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:00:06.986 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.984+0000 7ff2d5455700 1 -- 192.168.123.105:0/317864370 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff2b806c7a0 msgr2=0x7ff2b806ec50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:06.986 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.984+0000 7ff2d5455700 1 --2- 192.168.123.105:0/317864370 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff2b806c7a0 0x7ff2b806ec50 secure :-1 s=READY 
pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7ff2c000b3c0 tx=0x7ff2c000d040 comp rx=0 tx=0).stop 2026-03-09T15:00:06.986 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.984+0000 7ff2d5455700 1 -- 192.168.123.105:0/317864370 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2d0082a90 msgr2=0x7ff2d0082f00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:06.986 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.984+0000 7ff2d5455700 1 --2- 192.168.123.105:0/317864370 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2d0082a90 0x7ff2d0082f00 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7ff2c80042a0 tx=0x7ff2c8004380 comp rx=0 tx=0).stop 2026-03-09T15:00:06.986 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.985+0000 7ff2d5455700 1 -- 192.168.123.105:0/317864370 shutdown_connections 2026-03-09T15:00:06.986 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.985+0000 7ff2d5455700 1 --2- 192.168.123.105:0/317864370 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7ff2b806c7a0 0x7ff2b806ec50 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.986 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.985+0000 7ff2d5455700 1 --2- 192.168.123.105:0/317864370 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff2d0071980 0x7ff2d0082550 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.986 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.985+0000 7ff2d5455700 1 --2- 192.168.123.105:0/317864370 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2d0082a90 0x7ff2d0082f00 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:06.986 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.985+0000 7ff2d5455700 1 -- 192.168.123.105:0/317864370 >> 
192.168.123.105:0/317864370 conn(0x7ff2d006d1a0 msgr2=0x7ff2d0076520 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:06.988 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.985+0000 7ff2d5455700 1 -- 192.168.123.105:0/317864370 shutdown_connections 2026-03-09T15:00:06.988 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:06.986+0000 7ff2d5455700 1 -- 192.168.123.105:0/317864370 wait complete. 2026-03-09T15:00:07.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 -- 192.168.123.105:0/3255430359 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc908072360 msgr2=0x7fc9080770e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:07.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 --2- 192.168.123.105:0/3255430359 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc908072360 0x7fc9080770e0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7fc904009230 tx=0x7fc904009260 comp rx=0 tx=0).stop 2026-03-09T15:00:07.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 -- 192.168.123.105:0/3255430359 shutdown_connections 2026-03-09T15:00:07.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 --2- 192.168.123.105:0/3255430359 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc908072360 0x7fc9080770e0 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:07.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 --2- 192.168.123.105:0/3255430359 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc908071980 0x7fc908071d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:07.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 -- 
192.168.123.105:0/3255430359 >> 192.168.123.105:0/3255430359 conn(0x7fc90806d1a0 msgr2=0x7fc90806f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:07.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 -- 192.168.123.105:0/3255430359 shutdown_connections 2026-03-09T15:00:07.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 -- 192.168.123.105:0/3255430359 wait complete. 2026-03-09T15:00:07.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 Processor -- start 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 -- start start 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc908071980 0x7fc9080824a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc9080829e0 0x7fc908082e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc908083e50 con 0x7fc9080829e0 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.063+0000 7fc9109b8700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc90812dd80 con 0x7fc908071980 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.064+0000 7fc90e754700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc908071980 0x7fc9080824a0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.064+0000 7fc90e754700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc908071980 0x7fc9080824a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:33502/0 (socket says 192.168.123.105:33502) 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.064+0000 7fc90e754700 1 -- 192.168.123.105:0/1059557603 learned_addr learned my addr 192.168.123.105:0/1059557603 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.064+0000 7fc90df53700 1 --2- 192.168.123.105:0/1059557603 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc9080829e0 0x7fc908082e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.064+0000 7fc90df53700 1 -- 192.168.123.105:0/1059557603 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc908071980 msgr2=0x7fc9080824a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.064+0000 7fc90df53700 1 --2- 192.168.123.105:0/1059557603 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc908071980 0x7fc9080824a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.064+0000 7fc90df53700 1 -- 192.168.123.105:0/1059557603 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc904008ee0 con 0x7fc9080829e0 2026-03-09T15:00:07.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.064+0000 7fc90df53700 1 --2- 192.168.123.105:0/1059557603 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc9080829e0 0x7fc908082e50 secure :-1 s=READY pgs=297 cs=0 l=1 rev1=1 crypto rx=0x7fc904000f80 tx=0x7fc904004740 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:07.117 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.113+0000 7fc8fb7fe700 1 -- 192.168.123.105:0/1059557603 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc90401d070 con 0x7fc9080829e0 2026-03-09T15:00:07.117 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.113+0000 7fc9109b8700 1 -- 192.168.123.105:0/1059557603 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc90812dfa0 con 0x7fc9080829e0 2026-03-09T15:00:07.117 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.113+0000 7fc9109b8700 1 -- 192.168.123.105:0/1059557603 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc90812e490 con 0x7fc9080829e0 2026-03-09T15:00:07.117 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.113+0000 7fc9109b8700 1 -- 192.168.123.105:0/1059557603 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc90804ea50 con 0x7fc9080829e0 2026-03-09T15:00:07.117 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.114+0000 7fc8fb7fe700 1 -- 192.168.123.105:0/1059557603 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fc9040085f0 con 0x7fc9080829e0 2026-03-09T15:00:07.117 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.115+0000 7fc8fb7fe700 1 -- 192.168.123.105:0/1059557603 
<== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc90400e7a0 con 0x7fc9080829e0 2026-03-09T15:00:07.121 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.117+0000 7fc8fb7fe700 1 -- 192.168.123.105:0/1059557603 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fc90400e900 con 0x7fc9080829e0 2026-03-09T15:00:07.121 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.117+0000 7fc8fb7fe700 1 --2- 192.168.123.105:0/1059557603 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc8f406c680 0x7fc8f406eb30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:07.121 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.118+0000 7fc8fb7fe700 1 -- 192.168.123.105:0/1059557603 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fc904012070 con 0x7fc9080829e0 2026-03-09T15:00:07.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.129+0000 7fc8fb7fe700 1 -- 192.168.123.105:0/1059557603 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fc90405b820 con 0x7fc9080829e0 2026-03-09T15:00:07.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.135+0000 7fc90e754700 1 --2- 192.168.123.105:0/1059557603 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc8f406c680 0x7fc8f406eb30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:07.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.141+0000 7fc90e754700 1 --2- 192.168.123.105:0/1059557603 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc8f406c680 0x7fc8f406eb30 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7fc8fc00b3c0 
tx=0x7fc8fc00d040 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:07.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.318+0000 7fc9109b8700 1 -- 192.168.123.105:0/1059557603 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fc90807c760 con 0x7fc8f406c680 2026-03-09T15:00:07.320 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.319+0000 7fc8fb7fe700 1 -- 192.168.123.105:0/1059557603 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+434 (secure 0 0 0) 0x7fc90807c760 con 0x7fc8f406c680 2026-03-09T15:00:07.321 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:00:07.321 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T15:00:07.321 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-09T15:00:07.321 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) mgr", 2026-03-09T15:00:07.321 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [], 2026-03-09T15:00:07.321 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "0/2 daemons upgraded", 2026-03-09T15:00:07.321 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc image on host vm09", 2026-03-09T15:00:07.321 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:00:07.321 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:00:07.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.324+0000 7fc9109b8700 1 -- 192.168.123.105:0/1059557603 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc8f406c680 msgr2=0x7fc8f406eb30 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:07.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.324+0000 7fc9109b8700 1 --2- 192.168.123.105:0/1059557603 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc8f406c680 0x7fc8f406eb30 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7fc8fc00b3c0 tx=0x7fc8fc00d040 comp rx=0 tx=0).stop 2026-03-09T15:00:07.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.324+0000 7fc9109b8700 1 -- 192.168.123.105:0/1059557603 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc9080829e0 msgr2=0x7fc908082e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:07.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.324+0000 7fc9109b8700 1 --2- 192.168.123.105:0/1059557603 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc9080829e0 0x7fc908082e50 secure :-1 s=READY pgs=297 cs=0 l=1 rev1=1 crypto rx=0x7fc904000f80 tx=0x7fc904004740 comp rx=0 tx=0).stop 2026-03-09T15:00:07.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.324+0000 7fc9109b8700 1 -- 192.168.123.105:0/1059557603 shutdown_connections 2026-03-09T15:00:07.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.324+0000 7fc9109b8700 1 --2- 192.168.123.105:0/1059557603 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fc8f406c680 0x7fc8f406eb30 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:07.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.324+0000 7fc9109b8700 1 --2- 192.168.123.105:0/1059557603 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc908071980 0x7fc9080824a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:07.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.324+0000 7fc9109b8700 1 --2- 192.168.123.105:0/1059557603 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc9080829e0 0x7fc908082e50 unknown :-1 s=CLOSED pgs=297 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:07.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.324+0000 7fc9109b8700 1 -- 192.168.123.105:0/1059557603 >> 192.168.123.105:0/1059557603 conn(0x7fc90806d1a0 msgr2=0x7fc908076470 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:07.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.325+0000 7fc9109b8700 1 -- 192.168.123.105:0/1059557603 shutdown_connections 2026-03-09T15:00:07.326 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:07.325+0000 7fc9109b8700 1 -- 192.168.123.105:0/1059557603 wait complete. 2026-03-09T15:00:07.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:07 vm05.local ceph-mon[50611]: from='client.14572 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:07.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:07 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:00:07.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:07 vm05.local ceph-mon[50611]: Upgrade: Target is version 19.2.3-678-ge911bdeb (unknown) 2026-03-09T15:00:07.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:07 vm05.local ceph-mon[50611]: Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T15:00:07.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:07 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:00:07.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:07 vm05.local 
ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:00:07.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:07 vm05.local ceph-mon[50611]: Upgrade: Need to upgrade myself (mgr.vm05.lhsexd) 2026-03-09T15:00:07.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:07 vm05.local ceph-mon[50611]: from='client.24369 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:07.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:07 vm05.local ceph-mon[50611]: Upgrade: Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc on vm09 2026-03-09T15:00:07.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:07 vm09.local ceph-mon[59673]: from='client.14572 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:07.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:07 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:00:07.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:07 vm09.local ceph-mon[59673]: Upgrade: Target is version 19.2.3-678-ge911bdeb (unknown) 2026-03-09T15:00:07.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:07 vm09.local ceph-mon[59673]: Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T15:00:07.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:07 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:00:07.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:07 vm09.local ceph-mon[59673]: from='mgr.14249 
192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:00:07.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:07 vm09.local ceph-mon[59673]: Upgrade: Need to upgrade myself (mgr.vm05.lhsexd) 2026-03-09T15:00:07.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:07 vm09.local ceph-mon[59673]: from='client.24369 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:07.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:07 vm09.local ceph-mon[59673]: Upgrade: Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc on vm09 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'. 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr:state without impacting any branches by switching back to a branch. 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr: git switch -c 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr:Or undo this operation with: 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr: git switch - 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-09T15:00:07.731 INFO:tasks.workunit.client.0.vm05.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose 2026-03-09T15:00:07.737 DEBUG:teuthology.orchestra.run.vm05:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-09T15:00:07.755 INFO:tasks.workunit.client.0.vm05.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-09T15:00:07.757 INFO:tasks.workunit.client.0.vm05.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T15:00:07.757 INFO:tasks.workunit.client.0.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-09T15:00:07.820 INFO:tasks.workunit.client.0.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-09T15:00:07.863 INFO:tasks.workunit.client.0.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-09T15:00:07.901 INFO:tasks.workunit.client.0.vm05.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T15:00:07.904 
INFO:tasks.workunit.client.0.vm05.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T15:00:07.904 INFO:tasks.workunit.client.0.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-09T15:00:07.934 INFO:tasks.workunit.client.0.vm05.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T15:00:07.936 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T15:00:07.937 DEBUG:teuthology.orchestra.run.vm05:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-09T15:00:07.999 INFO:tasks.workunit:Running workunits matching suites/fsstress.sh on client.0... 2026-03-09T15:00:07.999 INFO:tasks.workunit:Running workunit suites/fsstress.sh... 2026-03-09T15:00:07.999 DEBUG:teuthology.orchestra.run.vm05:workunit test suites/fsstress.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh 2026-03-09T15:00:08.078 INFO:tasks.workunit.client.0.vm05.stderr:+ mkdir -p fsstress 2026-03-09T15:00:08.081 INFO:tasks.workunit.client.0.vm05.stderr:+ pushd fsstress 2026-03-09T15:00:08.082 INFO:tasks.workunit.client.0.vm05.stdout:~/cephtest/mnt.0/client.0/tmp/fsstress ~/cephtest/mnt.0/client.0/tmp 2026-03-09T15:00:08.082 INFO:tasks.workunit.client.0.vm05.stderr:+ wget -q -O ltp-full.tgz http://download.ceph.com/qa/ltp-full-20091231.tgz 2026-03-09T15:00:08.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:08 vm05.local ceph-mon[50611]: 
from='client.14578 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:08.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:08 vm05.local ceph-mon[50611]: from='client.? 192.168.123.105:0/317864370' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:00:08.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:08 vm05.local ceph-mon[50611]: pgmap v130: 65 pgs: 65 active+clean; 57 MiB data, 351 MiB used, 120 GiB / 120 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 106 op/s 2026-03-09T15:00:08.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:08 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:00:08.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:08 vm05.local ceph-mon[50611]: from='client.14584 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:08.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:08 vm09.local ceph-mon[59673]: from='client.14578 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:08.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:08 vm09.local ceph-mon[59673]: from='client.? 
192.168.123.105:0/317864370' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:00:08.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:08 vm09.local ceph-mon[59673]: pgmap v130: 65 pgs: 65 active+clean; 57 MiB data, 351 MiB used, 120 GiB / 120 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 106 op/s 2026-03-09T15:00:08.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:08 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:00:08.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:08 vm09.local ceph-mon[59673]: from='client.14584 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:09.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:09 vm05.local ceph-mon[50611]: pgmap v131: 65 pgs: 65 active+clean; 57 MiB data, 359 MiB used, 120 GiB / 120 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 100 op/s 2026-03-09T15:00:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:09 vm09.local ceph-mon[59673]: pgmap v131: 65 pgs: 65 active+clean; 57 MiB data, 359 MiB used, 120 GiB / 120 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 100 op/s 2026-03-09T15:00:10.450 INFO:tasks.workunit.client.0.vm05.stderr:+ tar xzf ltp-full.tgz 2026-03-09T15:00:11.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:11 vm09.local ceph-mon[59673]: pgmap v132: 65 pgs: 65 active+clean; 74 MiB data, 392 MiB used, 120 GiB / 120 GiB avail; 1.6 MiB/s rd, 3.0 MiB/s wr, 147 op/s 2026-03-09T15:00:12.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:11 vm05.local ceph-mon[50611]: pgmap v132: 65 pgs: 65 active+clean; 74 MiB data, 392 MiB used, 120 GiB / 120 GiB avail; 1.6 MiB/s rd, 3.0 MiB/s wr, 147 op/s 2026-03-09T15:00:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:13 vm09.local ceph-mon[59673]: pgmap v133: 65 pgs: 65 active+clean; 74 MiB data, 392 MiB used, 
120 GiB / 120 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 105 op/s 2026-03-09T15:00:13.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:13 vm05.local ceph-mon[50611]: pgmap v133: 65 pgs: 65 active+clean; 74 MiB data, 392 MiB used, 120 GiB / 120 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 105 op/s 2026-03-09T15:00:15.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:15 vm09.local ceph-mon[59673]: pgmap v134: 65 pgs: 65 active+clean; 90 MiB data, 497 MiB used, 119 GiB / 120 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 178 op/s 2026-03-09T15:00:15.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:15 vm05.local ceph-mon[50611]: pgmap v134: 65 pgs: 65 active+clean; 90 MiB data, 497 MiB used, 119 GiB / 120 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 178 op/s 2026-03-09T15:00:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:17 vm05.local ceph-mon[50611]: pgmap v135: 65 pgs: 65 active+clean; 96 MiB data, 541 MiB used, 119 GiB / 120 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 182 op/s 2026-03-09T15:00:17.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:17 vm09.local ceph-mon[59673]: pgmap v135: 65 pgs: 65 active+clean; 96 MiB data, 541 MiB used, 119 GiB / 120 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 182 op/s 2026-03-09T15:00:20.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:19 vm05.local ceph-mon[50611]: pgmap v136: 65 pgs: 65 active+clean; 101 MiB data, 555 MiB used, 119 GiB / 120 GiB avail; 1.7 MiB/s rd, 3.7 MiB/s wr, 183 op/s 2026-03-09T15:00:20.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:19 vm09.local ceph-mon[59673]: pgmap v136: 65 pgs: 65 active+clean; 101 MiB data, 555 MiB used, 119 GiB / 120 GiB avail; 1.7 MiB/s rd, 3.7 MiB/s wr, 183 op/s 2026-03-09T15:00:22.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:22 vm05.local ceph-mon[50611]: pgmap v137: 65 pgs: 65 active+clean; 116 MiB data, 667 MiB used, 119 GiB / 120 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 275 op/s 2026-03-09T15:00:22.804 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:22 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:00:22.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:22 vm09.local ceph-mon[59673]: pgmap v137: 65 pgs: 65 active+clean; 116 MiB data, 667 MiB used, 119 GiB / 120 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 275 op/s 2026-03-09T15:00:22.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:22 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:00:23.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:23 vm05.local ceph-mon[50611]: pgmap v138: 65 pgs: 65 active+clean; 116 MiB data, 667 MiB used, 119 GiB / 120 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 216 op/s 2026-03-09T15:00:23.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:23 vm09.local ceph-mon[59673]: pgmap v138: 65 pgs: 65 active+clean; 116 MiB data, 667 MiB used, 119 GiB / 120 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 216 op/s 2026-03-09T15:00:26.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:26 vm09.local ceph-mon[59673]: pgmap v139: 65 pgs: 65 active+clean; 131 MiB data, 803 MiB used, 119 GiB / 120 GiB avail; 2.4 MiB/s rd, 4.8 MiB/s wr, 309 op/s 2026-03-09T15:00:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:26 vm05.local ceph-mon[50611]: pgmap v139: 65 pgs: 65 active+clean; 131 MiB data, 803 MiB used, 119 GiB / 120 GiB avail; 2.4 MiB/s rd, 4.8 MiB/s wr, 309 op/s 2026-03-09T15:00:28.034 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:27 vm05.local ceph-mon[50611]: pgmap v140: 65 pgs: 65 active+clean; 142 MiB data, 865 MiB used, 119 GiB / 120 GiB avail; 2.5 MiB/s rd, 4.4 MiB/s wr, 276 op/s 2026-03-09T15:00:28.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:27 vm09.local ceph-mon[59673]: pgmap v140: 65 
pgs: 65 active+clean; 142 MiB data, 865 MiB used, 119 GiB / 120 GiB avail; 2.5 MiB/s rd, 4.4 MiB/s wr, 276 op/s 2026-03-09T15:00:29.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:29 vm09.local ceph-mon[59673]: pgmap v141: 65 pgs: 65 active+clean; 143 MiB data, 884 MiB used, 119 GiB / 120 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 262 op/s 2026-03-09T15:00:29.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:29 vm05.local ceph-mon[50611]: pgmap v141: 65 pgs: 65 active+clean; 143 MiB data, 884 MiB used, 119 GiB / 120 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 262 op/s 2026-03-09T15:00:32.423 INFO:tasks.workunit.client.1.vm09.stderr:+ pushd ltp-full-20091231/testcases/kernel/fs/fsstress 2026-03-09T15:00:32.428 INFO:tasks.workunit.client.1.vm09.stdout:~/cephtest/mnt.1/client.1/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress ~/cephtest/mnt.1/client.1/tmp/fsstress ~/cephtest/mnt.1/client.1/tmp 2026-03-09T15:00:32.428 INFO:tasks.workunit.client.1.vm09.stderr:+ make 2026-03-09T15:00:32.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:32 vm05.local ceph-mon[50611]: pgmap v142: 65 pgs: 65 active+clean; 159 MiB data, 937 MiB used, 119 GiB / 120 GiB avail; 3.1 MiB/s rd, 5.0 MiB/s wr, 320 op/s 2026-03-09T15:00:32.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:32 vm09.local ceph-mon[59673]: pgmap v142: 65 pgs: 65 active+clean; 159 MiB data, 937 MiB used, 119 GiB / 120 GiB avail; 3.1 MiB/s rd, 5.0 MiB/s wr, 320 op/s 2026-03-09T15:00:33.015 INFO:tasks.workunit.client.1.vm09.stdout:cc -DNO_XFS -I/home/ubuntu/cephtest/mnt.1/client.1/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress -D_LARGEFILE64_SOURCE -D_GNU_SOURCE -I../../../../include -I../../../../include -L../../../../lib fsstress.c -o fsstress 2026-03-09T15:00:33.372 INFO:tasks.workunit.client.1.vm09.stderr:++ readlink -f fsstress 2026-03-09T15:00:33.373 INFO:tasks.workunit.client.1.vm09.stderr:+ 
BIN=/home/ubuntu/cephtest/mnt.1/client.1/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress 2026-03-09T15:00:33.374 INFO:tasks.workunit.client.1.vm09.stderr:+ popd 2026-03-09T15:00:33.375 INFO:tasks.workunit.client.1.vm09.stdout:~/cephtest/mnt.1/client.1/tmp/fsstress ~/cephtest/mnt.1/client.1/tmp 2026-03-09T15:00:33.375 INFO:tasks.workunit.client.1.vm09.stderr:+ popd 2026-03-09T15:00:33.375 INFO:tasks.workunit.client.1.vm09.stdout:~/cephtest/mnt.1/client.1/tmp 2026-03-09T15:00:33.376 INFO:tasks.workunit.client.1.vm09.stderr:++ mktemp -d -p . 2026-03-09T15:00:33.379 INFO:tasks.workunit.client.1.vm09.stderr:+ T=./tmp.eO2gMYMgiH 2026-03-09T15:00:33.379 INFO:tasks.workunit.client.1.vm09.stderr:+ /home/ubuntu/cephtest/mnt.1/client.1/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress -d ./tmp.eO2gMYMgiH -l 1 -n 1000 -p 10 -v 2026-03-09T15:00:33.385 INFO:tasks.workunit.client.1.vm09.stdout:seed = 1772815671 2026-03-09T15:00:33.391 INFO:tasks.workunit.client.1.vm09.stdout:0/0: fsync - no filename 2026-03-09T15:00:33.391 INFO:tasks.workunit.client.1.vm09.stdout:0/1: dread - no filename 2026-03-09T15:00:33.391 INFO:tasks.workunit.client.1.vm09.stdout:0/2: dwrite - no filename 2026-03-09T15:00:33.392 INFO:tasks.workunit.client.1.vm09.stdout:2/0: rename - no filename 2026-03-09T15:00:33.392 INFO:tasks.workunit.client.1.vm09.stdout:2/1: rename - no filename 2026-03-09T15:00:33.392 INFO:tasks.workunit.client.1.vm09.stdout:2/2: dread - no filename 2026-03-09T15:00:33.394 INFO:tasks.workunit.client.1.vm09.stdout:0/3: creat f0 x:0 0 0 2026-03-09T15:00:33.396 INFO:tasks.workunit.client.1.vm09.stdout:5/0: dread - no filename 2026-03-09T15:00:33.396 INFO:tasks.workunit.client.1.vm09.stdout:5/1: dwrite - no filename 2026-03-09T15:00:33.396 INFO:tasks.workunit.client.1.vm09.stdout:5/2: write - no filename 2026-03-09T15:00:33.396 INFO:tasks.workunit.client.1.vm09.stdout:2/3: creat f0 x:0 0 0 2026-03-09T15:00:33.397 
INFO:tasks.workunit.client.1.vm09.stdout:2/4: fdatasync f0 0
2026-03-09T15:00:33.400 INFO:tasks.workunit.client.1.vm09.stdout:6/0: write - no filename
2026-03-09T15:00:33.401 INFO:tasks.workunit.client.1.vm09.stdout:4/0: creat f0 x:0 0 0
2026-03-09T15:00:33.413 INFO:tasks.workunit.client.1.vm09.stdout:7/0: dwrite - no filename
2026-03-09T15:00:33.413 INFO:tasks.workunit.client.1.vm09.stdout:7/1: write - no filename
2026-03-09T15:00:33.413 INFO:tasks.workunit.client.1.vm09.stdout:7/2: dwrite - no filename
2026-03-09T15:00:33.413 INFO:tasks.workunit.client.1.vm09.stdout:7/3: chown . 2 1
2026-03-09T15:00:33.413 INFO:tasks.workunit.client.1.vm09.stdout:7/4: stat - no entries
2026-03-09T15:00:33.413 INFO:tasks.workunit.client.1.vm09.stdout:7/5: write - no filename
2026-03-09T15:00:33.413 INFO:tasks.workunit.client.1.vm09.stdout:7/6: truncate - no filename
2026-03-09T15:00:33.413 INFO:tasks.workunit.client.1.vm09.stdout:7/7: rename - no filename
2026-03-09T15:00:33.413 INFO:tasks.workunit.client.1.vm09.stdout:7/8: readlink - no filename
2026-03-09T15:00:33.413 INFO:tasks.workunit.client.1.vm09.stdout:7/9: unlink - no file
2026-03-09T15:00:33.414 INFO:tasks.workunit.client.1.vm09.stdout:5/3: getdents . 0
2026-03-09T15:00:33.416 INFO:tasks.workunit.client.1.vm09.stdout:2/5: dwrite f0 [0,4194304] 0
2026-03-09T15:00:33.432 INFO:tasks.workunit.client.1.vm09.stdout:6/1: creat f0 x:0 0 0
2026-03-09T15:00:33.433 INFO:tasks.workunit.client.1.vm09.stdout:8/0: dwrite - no filename
2026-03-09T15:00:33.433 INFO:tasks.workunit.client.1.vm09.stdout:8/1: truncate - no filename
2026-03-09T15:00:33.433 INFO:tasks.workunit.client.1.vm09.stdout:8/2: dwrite - no filename
2026-03-09T15:00:33.433 INFO:tasks.workunit.client.1.vm09.stdout:8/3: rename - no filename
2026-03-09T15:00:33.433 INFO:tasks.workunit.client.1.vm09.stdout:8/4: write - no filename
2026-03-09T15:00:33.434 INFO:tasks.workunit.client.1.vm09.stdout:8/5: unlink - no file
2026-03-09T15:00:33.434 INFO:tasks.workunit.client.1.vm09.stdout:8/6: rename - no filename
2026-03-09T15:00:33.434 INFO:tasks.workunit.client.1.vm09.stdout:8/7: link - no file
2026-03-09T15:00:33.434 INFO:tasks.workunit.client.1.vm09.stdout:9/0: chown . 9414022 1
2026-03-09T15:00:33.434 INFO:tasks.workunit.client.1.vm09.stdout:0/4: dwrite f0 [0,4194304] 0
2026-03-09T15:00:33.434 INFO:tasks.workunit.client.1.vm09.stdout:9/1: chown . 390079 1
2026-03-09T15:00:33.435 INFO:tasks.workunit.client.1.vm09.stdout:5/4: creat f0 x:0 0 0
2026-03-09T15:00:33.438 INFO:tasks.workunit.client.1.vm09.stdout:6/2: chown f0 57183 1
2026-03-09T15:00:33.450 INFO:tasks.workunit.client.1.vm09.stdout:9/2: mknod c0 0
2026-03-09T15:00:33.450 INFO:tasks.workunit.client.1.vm09.stdout:5/5: mknod c1 0
2026-03-09T15:00:33.451 INFO:tasks.workunit.client.1.vm09.stdout:9/3: stat c0 0
2026-03-09T15:00:33.452 INFO:tasks.workunit.client.1.vm09.stdout:1/0: stat - no entries
2026-03-09T15:00:33.452 INFO:tasks.workunit.client.1.vm09.stdout:9/4: stat c0 0
2026-03-09T15:00:33.453 INFO:tasks.workunit.client.1.vm09.stdout:1/1: fdatasync - no filename
2026-03-09T15:00:33.459 INFO:tasks.workunit.client.1.vm09.stdout:8/8: mkdir d0 0
2026-03-09T15:00:33.474 INFO:tasks.workunit.client.1.vm09.stdout:8/9: dread - no filename
2026-03-09T15:00:33.474 INFO:tasks.workunit.client.1.vm09.stdout:8/10: write - no filename
2026-03-09T15:00:33.474 INFO:tasks.workunit.client.1.vm09.stdout:8/11: dread - no filename
2026-03-09T15:00:33.474 INFO:tasks.workunit.client.1.vm09.stdout:8/12: truncate - no filename
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:6/3: link f0 f1 0
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:9/5: mkdir d1 0
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:9/6: dwrite - no filename
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:9/7: chown d1 62 1
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:9/8: dread - no filename
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:3/0: mknod c0 0
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:3/1: dwrite - no filename
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:3/2: chown c0 183 1
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:3/3: dread - no filename
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:6/4: symlink l2 0
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:9/9: rename c0 to d1/c2 0
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:9/10: fsync - no filename
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:9/11: truncate - no filename
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:1/2: creat f0 x:0 0 0
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:1/3: rmdir - no directory
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:1/4: truncate f0 484109 0
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:3/4: creat f1 x:0 0 0
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:6/5: rename f1 to f3 0
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:6/6: chown f3 21382 1
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:6/7: chown f3 0 1
2026-03-09T15:00:33.475 INFO:tasks.workunit.client.1.vm09.stdout:1/5: dread f0 [0,4194304] 0
2026-03-09T15:00:33.477 INFO:tasks.workunit.client.1.vm09.stdout:6/8: mknod c4 0
2026-03-09T15:00:33.479 INFO:tasks.workunit.client.1.vm09.stdout:6/9: chown f0 2 1
2026-03-09T15:00:33.482 INFO:tasks.workunit.client.1.vm09.stdout:6/10: symlink l5 0
2026-03-09T15:00:33.483 INFO:tasks.workunit.client.1.vm09.stdout:3/5: dwrite f1 [0,4194304] 0
2026-03-09T15:00:33.484 INFO:tasks.workunit.client.1.vm09.stdout:6/11: mkdir d6 0
2026-03-09T15:00:33.484 INFO:tasks.workunit.client.1.vm09.stdout:3/6: stat c0 0
2026-03-09T15:00:33.484 INFO:tasks.workunit.client.1.vm09.stdout:3/7: rmdir - no directory
2026-03-09T15:00:33.496 INFO:tasks.workunit.client.1.vm09.stdout:6/12: write f3 [540181,15321] 0
2026-03-09T15:00:33.505 INFO:tasks.workunit.client.1.vm09.stdout:3/8: dwrite f1 [0,4194304] 0
2026-03-09T15:00:33.505 INFO:tasks.workunit.client.1.vm09.stdout:6/13: dwrite f0 [0,4194304] 0
2026-03-09T15:00:33.537 INFO:tasks.workunit.client.1.vm09.stdout:6/14: link l2 d6/l7 0
2026-03-09T15:00:33.538 INFO:tasks.workunit.client.1.vm09.stdout:6/15: write f3 [1409774,51659] 0
2026-03-09T15:00:33.871 INFO:tasks.workunit.client.1.vm09.stdout:0/5: fdatasync f0 0
2026-03-09T15:00:33.872 INFO:tasks.workunit.client.1.vm09.stdout:0/6: chown f0 3645634 1
2026-03-09T15:00:33.872 INFO:tasks.workunit.client.1.vm09.stdout:0/7: stat f0 0
2026-03-09T15:00:33.873 INFO:tasks.workunit.client.1.vm09.stdout:0/8: creat f1 x:0 0 0
2026-03-09T15:00:33.874 INFO:tasks.workunit.client.1.vm09.stdout:0/9: write f0 [744698,128587] 0
2026-03-09T15:00:33.890 INFO:tasks.workunit.client.1.vm09.stdout:4/1: sync
2026-03-09T15:00:33.891 INFO:tasks.workunit.client.1.vm09.stdout:4/2: truncate f0 356786 0
2026-03-09T15:00:33.892 INFO:tasks.workunit.client.1.vm09.stdout:4/3: dread f0 [0,4194304] 0
2026-03-09T15:00:33.894 INFO:tasks.workunit.client.1.vm09.stdout:9/12: chown d1/c2 25 1
2026-03-09T15:00:33.894 INFO:tasks.workunit.client.1.vm09.stdout:9/13: dread - no filename
2026-03-09T15:00:33.894 INFO:tasks.workunit.client.1.vm09.stdout:9/14: dread - no filename
2026-03-09T15:00:33.895 INFO:tasks.workunit.client.1.vm09.stdout:2/6: sync
2026-03-09T15:00:33.896 INFO:tasks.workunit.client.1.vm09.stdout:0/10: sync
2026-03-09T15:00:33.897 INFO:tasks.workunit.client.1.vm09.stdout:8/13: sync
2026-03-09T15:00:33.897 INFO:tasks.workunit.client.1.vm09.stdout:7/10: sync
2026-03-09T15:00:33.897 INFO:tasks.workunit.client.1.vm09.stdout:5/6: sync
2026-03-09T15:00:33.897 INFO:tasks.workunit.client.1.vm09.stdout:3/9: sync
2026-03-09T15:00:33.904 INFO:tasks.workunit.client.1.vm09.stdout:5/7: read - f0 zero size
2026-03-09T15:00:33.905 INFO:tasks.workunit.client.1.vm09.stdout:3/10: dread f1 [0,4194304] 0
2026-03-09T15:00:33.919 INFO:tasks.workunit.client.1.vm09.stdout:4/4: dwrite f0 [0,4194304] 0
2026-03-09T15:00:33.920 INFO:tasks.workunit.client.1.vm09.stdout:4/5: rmdir - no directory
2026-03-09T15:00:33.929 INFO:tasks.workunit.client.1.vm09.stdout:2/7: dwrite f0 [4194304,4194304] 0
2026-03-09T15:00:33.930 INFO:tasks.workunit.client.1.vm09.stdout:1/6: truncate f0 1114790 0
2026-03-09T15:00:33.931 INFO:tasks.workunit.client.1.vm09.stdout:3/11: symlink l2 0
2026-03-09T15:00:33.932 INFO:tasks.workunit.client.1.vm09.stdout:3/12: stat f1 0
2026-03-09T15:00:33.933 INFO:tasks.workunit.client.1.vm09.stdout:0/11: dwrite f0 [4194304,4194304] 0
2026-03-09T15:00:33.934 INFO:tasks.workunit.client.1.vm09.stdout:9/15: rename d1/c2 to d1/c3 0
2026-03-09T15:00:33.934 INFO:tasks.workunit.client.1.vm09.stdout:9/16: dread - no filename
2026-03-09T15:00:33.934 INFO:tasks.workunit.client.1.vm09.stdout:9/17: dread - no filename
2026-03-09T15:00:33.937 INFO:tasks.workunit.client.1.vm09.stdout:7/11: symlink l0 0
2026-03-09T15:00:33.937 INFO:tasks.workunit.client.1.vm09.stdout:7/12: write - no filename
2026-03-09T15:00:33.937 INFO:tasks.workunit.client.1.vm09.stdout:7/13: fsync - no filename
2026-03-09T15:00:33.938 INFO:tasks.workunit.client.1.vm09.stdout:0/12: write f0 [9280553,113752] 0
2026-03-09T15:00:33.940 INFO:tasks.workunit.client.1.vm09.stdout:8/14: rmdir d0 0
2026-03-09T15:00:33.941 INFO:tasks.workunit.client.1.vm09.stdout:2/8: creat f1 x:0 0 0
2026-03-09T15:00:33.941 INFO:tasks.workunit.client.1.vm09.stdout:2/9: chown f1 1468 1
2026-03-09T15:00:33.947 INFO:tasks.workunit.client.1.vm09.stdout:8/15: sync
2026-03-09T15:00:33.949 INFO:tasks.workunit.client.1.vm09.stdout:4/6: symlink l1 0
2026-03-09T15:00:33.951 INFO:tasks.workunit.client.1.vm09.stdout:2/10: dwrite f0 [0,4194304] 0
2026-03-09T15:00:33.951 INFO:tasks.workunit.client.1.vm09.stdout:2/11: write f0 [1791212,37238] 0
2026-03-09T15:00:33.954 INFO:tasks.workunit.client.1.vm09.stdout:5/8: dwrite f0 [0,4194304] 0
2026-03-09T15:00:33.955 INFO:tasks.workunit.client.1.vm09.stdout:5/9: rmdir - no directory
2026-03-09T15:00:33.955 INFO:tasks.workunit.client.1.vm09.stdout:3/13: write f1 [2496099,104826] 0
2026-03-09T15:00:33.958 INFO:tasks.workunit.client.1.vm09.stdout:9/18: rmdir d1 39
2026-03-09T15:00:33.960 INFO:tasks.workunit.client.1.vm09.stdout:9/19: write - no filename
2026-03-09T15:00:33.960 INFO:tasks.workunit.client.1.vm09.stdout:9/20: write - no filename
2026-03-09T15:00:33.961 INFO:tasks.workunit.client.1.vm09.stdout:6/16: truncate f3 3961770 0
2026-03-09T15:00:33.961 INFO:tasks.workunit.client.1.vm09.stdout:7/14: creat f1 x:0 0 0
2026-03-09T15:00:33.961 INFO:tasks.workunit.client.1.vm09.stdout:8/16: creat f1 x:0 0 0
2026-03-09T15:00:33.961 INFO:tasks.workunit.client.1.vm09.stdout:1/7: stat f0 0
2026-03-09T15:00:33.962 INFO:tasks.workunit.client.1.vm09.stdout:4/7: rename l1 to l2 0
2026-03-09T15:00:33.964 INFO:tasks.workunit.client.1.vm09.stdout:9/21: sync
2026-03-09T15:00:33.964 INFO:tasks.workunit.client.1.vm09.stdout:4/8: write f0 [1843475,64263] 0
2026-03-09T15:00:33.965 INFO:tasks.workunit.client.1.vm09.stdout:3/14: mkdir d3 0
2026-03-09T15:00:33.966 INFO:tasks.workunit.client.1.vm09.stdout:3/15: stat d3 0
2026-03-09T15:00:33.966 INFO:tasks.workunit.client.1.vm09.stdout:4/9: chown f0 465375 1
2026-03-09T15:00:33.968 INFO:tasks.workunit.client.1.vm09.stdout:2/12: mknod c2 0
2026-03-09T15:00:33.968 INFO:tasks.workunit.client.1.vm09.stdout:5/10: mkdir d2 0
2026-03-09T15:00:33.969 INFO:tasks.workunit.client.1.vm09.stdout:7/15: rename l0 to l2 0
2026-03-09T15:00:33.972 INFO:tasks.workunit.client.1.vm09.stdout:7/16: dread - f1 zero size
2026-03-09T15:00:33.983 INFO:tasks.workunit.client.1.vm09.stdout:7/17: write f1 [261604,121744] 0
2026-03-09T15:00:33.984 INFO:tasks.workunit.client.1.vm09.stdout:8/17: mknod c2 0
2026-03-09T15:00:33.984 INFO:tasks.workunit.client.1.vm09.stdout:3/16: unlink c0 0
2026-03-09T15:00:33.984 INFO:tasks.workunit.client.1.vm09.stdout:0/13: creat f2 x:0 0 0
2026-03-09T15:00:33.984 INFO:tasks.workunit.client.1.vm09.stdout:8/18: dread - f1 zero size
2026-03-09T15:00:33.984 INFO:tasks.workunit.client.1.vm09.stdout:8/19: readlink - no filename
2026-03-09T15:00:33.984 INFO:tasks.workunit.client.1.vm09.stdout:9/22: creat d1/f4 x:0 0 0
2026-03-09T15:00:33.984 INFO:tasks.workunit.client.1.vm09.stdout:8/20: chown f1 10506137 1
2026-03-09T15:00:33.984 INFO:tasks.workunit.client.1.vm09.stdout:8/21: chown c2 226 1
2026-03-09T15:00:33.984 INFO:tasks.workunit.client.1.vm09.stdout:8/22: read - f1 zero size
2026-03-09T15:00:33.984 INFO:tasks.workunit.client.1.vm09.stdout:3/17: chown f1 5745049 1
2026-03-09T15:00:33.984 INFO:tasks.workunit.client.1.vm09.stdout:0/14: rename f0 to f3 0
2026-03-09T15:00:33.985 INFO:tasks.workunit.client.1.vm09.stdout:3/18: truncate f1 4618960 0
2026-03-09T15:00:33.985 INFO:tasks.workunit.client.1.vm09.stdout:2/13: dwrite f0 [4194304,4194304] 0
2026-03-09T15:00:33.985 INFO:tasks.workunit.client.1.vm09.stdout:3/19: chown l2 0 1
2026-03-09T15:00:33.986 INFO:tasks.workunit.client.1.vm09.stdout:8/23: dread - f1 zero size
2026-03-09T15:00:33.992 INFO:tasks.workunit.client.1.vm09.stdout:7/18: dwrite f1 [0,4194304] 0
2026-03-09T15:00:33.994 INFO:tasks.workunit.client.1.vm09.stdout:2/14: creat f3 x:0 0 0
2026-03-09T15:00:33.997 INFO:tasks.workunit.client.1.vm09.stdout:3/20: mkdir d3/d4 0
2026-03-09T15:00:34.001 INFO:tasks.workunit.client.1.vm09.stdout:2/15: stat f1 0
2026-03-09T15:00:34.004 INFO:tasks.workunit.client.1.vm09.stdout:3/21: write f1 [4474435,19400] 0
2026-03-09T15:00:34.022 INFO:tasks.workunit.client.1.vm09.stdout:7/19: mkdir d3 0
2026-03-09T15:00:34.027 INFO:tasks.workunit.client.1.vm09.stdout:8/24: chown f1 3 1
2026-03-09T15:00:34.038 INFO:tasks.workunit.client.1.vm09.stdout:0/15: dwrite f3 [0,4194304] 0
2026-03-09T15:00:34.075 INFO:tasks.workunit.client.1.vm09.stdout:3/22: fsync f1 0
2026-03-09T15:00:34.075 INFO:tasks.workunit.client.1.vm09.stdout:3/23: chown d3/d4 1 1
2026-03-09T15:00:34.079 INFO:tasks.workunit.client.1.vm09.stdout:4/10: fsync f0 0
2026-03-09T15:00:34.081 INFO:tasks.workunit.client.1.vm09.stdout:4/11: dread f0 [0,4194304] 0
2026-03-09T15:00:34.082 INFO:tasks.workunit.client.1.vm09.stdout:4/12: write f0 [3461617,68078] 0
2026-03-09T15:00:34.134 INFO:tasks.workunit.client.1.vm09.stdout:1/8: rename f0 to f1 0
2026-03-09T15:00:34.140 INFO:tasks.workunit.client.1.vm09.stdout:6/17: write f0 [4687925,31619] 0
2026-03-09T15:00:34.153 INFO:tasks.workunit.client.1.vm09.stdout:9/23: fsync d1/f4 0
2026-03-09T15:00:34.153 INFO:tasks.workunit.client.1.vm09.stdout:5/11: truncate f0 3612595 0
2026-03-09T15:00:34.157 INFO:tasks.workunit.client.1.vm09.stdout:9/24: dwrite d1/f4 [0,4194304] 0
2026-03-09T15:00:34.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:33 vm05.local ceph-mon[50611]: pgmap v143: 65 pgs: 65 active+clean; 159 MiB data, 937 MiB used, 119 GiB / 120 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 224 op/s
2026-03-09T15:00:34.304 INFO:tasks.workunit.client.1.vm09.stdout:2/16: creat f4 x:0 0 0
2026-03-09T15:00:34.304 INFO:tasks.workunit.client.1.vm09.stdout:2/17: dread - f1 zero size
2026-03-09T15:00:34.304 INFO:tasks.workunit.client.1.vm09.stdout:2/18: truncate f1 436365 0
2026-03-09T15:00:34.305 INFO:tasks.workunit.client.1.vm09.stdout:2/19: fsync f0 0
2026-03-09T15:00:34.305 INFO:tasks.workunit.client.1.vm09.stdout:8/25: symlink l3 0
2026-03-09T15:00:34.312 INFO:tasks.workunit.client.1.vm09.stdout:9/25: symlink d1/l5 0
2026-03-09T15:00:34.313 INFO:tasks.workunit.client.1.vm09.stdout:7/20: creat d3/f4 x:0 0 0
2026-03-09T15:00:34.315 INFO:tasks.workunit.client.1.vm09.stdout:0/16: unlink f1 0
2026-03-09T15:00:34.315 INFO:tasks.workunit.client.1.vm09.stdout:3/24: symlink d3/d4/l5 0
2026-03-09T15:00:34.316 INFO:tasks.workunit.client.1.vm09.stdout:0/17: write f2 [178332,107211] 0
2026-03-09T15:00:34.316 INFO:tasks.workunit.client.1.vm09.stdout:6/18: mknod d6/c8 0
2026-03-09T15:00:34.318 INFO:tasks.workunit.client.1.vm09.stdout:5/12: symlink d2/l3 0
2026-03-09T15:00:34.318 INFO:tasks.workunit.client.1.vm09.stdout:9/26: symlink d1/l6 0
2026-03-09T15:00:34.318 INFO:tasks.workunit.client.1.vm09.stdout:2/20: link f0 f5 0
2026-03-09T15:00:34.318 INFO:tasks.workunit.client.1.vm09.stdout:7/21: creat d3/f5 x:0 0 0
2026-03-09T15:00:34.320 INFO:tasks.workunit.client.1.vm09.stdout:9/27: truncate d1/f4 4712339 0
2026-03-09T15:00:34.320 INFO:tasks.workunit.client.1.vm09.stdout:5/13: mkdir d2/d4 0
2026-03-09T15:00:34.321 INFO:tasks.workunit.client.1.vm09.stdout:0/18: write f2 [1060321,102571] 0
2026-03-09T15:00:34.321 INFO:tasks.workunit.client.1.vm09.stdout:4/13: getdents . 0
2026-03-09T15:00:34.324 INFO:tasks.workunit.client.1.vm09.stdout:7/22: symlink d3/l6 0
2026-03-09T15:00:34.325 INFO:tasks.workunit.client.1.vm09.stdout:7/23: write d3/f4 [795169,91746] 0
2026-03-09T15:00:34.325 INFO:tasks.workunit.client.1.vm09.stdout:4/14: creat f3 x:0 0 0
2026-03-09T15:00:34.328 INFO:tasks.workunit.client.1.vm09.stdout:5/14: symlink d2/d4/l5 0
2026-03-09T15:00:34.328 INFO:tasks.workunit.client.1.vm09.stdout:2/21: dread f5 [0,4194304] 0
2026-03-09T15:00:34.335 INFO:tasks.workunit.client.1.vm09.stdout:6/19: dwrite f0 [0,4194304] 0
2026-03-09T15:00:34.335 INFO:tasks.workunit.client.1.vm09.stdout:3/25: dread f1 [0,4194304] 0
2026-03-09T15:00:34.336 INFO:tasks.workunit.client.1.vm09.stdout:6/20: read f3 [3689213,130672] 0
2026-03-09T15:00:34.337 INFO:tasks.workunit.client.1.vm09.stdout:6/21: read f0 [2663190,114453] 0
2026-03-09T15:00:34.340 INFO:tasks.workunit.client.1.vm09.stdout:7/24: link d3/l6 d3/l7 0
2026-03-09T15:00:34.342 INFO:tasks.workunit.client.1.vm09.stdout:0/19: dwrite f3 [4194304,4194304] 0
2026-03-09T15:00:34.342 INFO:tasks.workunit.client.1.vm09.stdout:5/15: dread f0 [0,4194304] 0
2026-03-09T15:00:34.345 INFO:tasks.workunit.client.1.vm09.stdout:6/22: dread f0 [0,4194304] 0
2026-03-09T15:00:34.350 INFO:tasks.workunit.client.1.vm09.stdout:6/23: dread f0 [0,4194304] 0
2026-03-09T15:00:34.355 INFO:tasks.workunit.client.1.vm09.stdout:2/22: mknod c6 0
2026-03-09T15:00:34.356 INFO:tasks.workunit.client.1.vm09.stdout:4/15: creat f4 x:0 0 0
2026-03-09T15:00:34.356 INFO:tasks.workunit.client.1.vm09.stdout:3/26: creat d3/f6 x:0 0 0
2026-03-09T15:00:34.356 INFO:tasks.workunit.client.1.vm09.stdout:7/25: creat d3/f8 x:0 0 0
2026-03-09T15:00:34.357 INFO:tasks.workunit.client.1.vm09.stdout:2/23: dread f1 [0,4194304] 0
2026-03-09T15:00:34.359 INFO:tasks.workunit.client.1.vm09.stdout:3/27: stat d3/d4/l5 0
2026-03-09T15:00:34.360 INFO:tasks.workunit.client.1.vm09.stdout:7/26: truncate d3/f8 905784 0
2026-03-09T15:00:34.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:33 vm09.local ceph-mon[59673]: pgmap v143: 65 pgs: 65 active+clean; 159 MiB data, 937 MiB used, 119 GiB / 120 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 224 op/s
2026-03-09T15:00:34.368 INFO:tasks.workunit.client.1.vm09.stdout:3/28: dwrite f1 [0,4194304] 0
2026-03-09T15:00:34.370 INFO:tasks.workunit.client.1.vm09.stdout:2/24: dwrite f4 [0,4194304] 0
2026-03-09T15:00:34.370 INFO:tasks.workunit.client.1.vm09.stdout:0/20: creat f4 x:0 0 0
2026-03-09T15:00:34.373 INFO:tasks.workunit.client.1.vm09.stdout:2/25: stat f4 0
2026-03-09T15:00:34.380 INFO:tasks.workunit.client.1.vm09.stdout:3/29: truncate d3/f6 655009 0
2026-03-09T15:00:34.385 INFO:tasks.workunit.client.1.vm09.stdout:0/21: dwrite f2 [0,4194304] 0
2026-03-09T15:00:34.388 INFO:tasks.workunit.client.1.vm09.stdout:5/16: fsync f0 0
2026-03-09T15:00:34.402 INFO:tasks.workunit.client.1.vm09.stdout:6/24: symlink d6/l9 0
2026-03-09T15:00:34.409 INFO:tasks.workunit.client.1.vm09.stdout:6/25: dread f3 [0,4194304] 0
2026-03-09T15:00:34.415 INFO:tasks.workunit.client.1.vm09.stdout:6/26: dread f3 [0,4194304] 0
2026-03-09T15:00:34.416 INFO:tasks.workunit.client.1.vm09.stdout:7/27: creat d3/f9 x:0 0 0
2026-03-09T15:00:34.419 INFO:tasks.workunit.client.1.vm09.stdout:2/26: symlink l7 0
2026-03-09T15:00:34.419 INFO:tasks.workunit.client.1.vm09.stdout:3/30: unlink f1 0
2026-03-09T15:00:34.421 INFO:tasks.workunit.client.1.vm09.stdout:7/28: dread d3/f8 [0,4194304] 0
2026-03-09T15:00:34.422 INFO:tasks.workunit.client.1.vm09.stdout:0/22: rename f4 to f5 0
2026-03-09T15:00:34.422 INFO:tasks.workunit.client.1.vm09.stdout:7/29: dread d3/f8 [0,4194304] 0
2026-03-09T15:00:34.422 INFO:tasks.workunit.client.1.vm09.stdout:0/23: chown f2 0 1
2026-03-09T15:00:34.424 INFO:tasks.workunit.client.1.vm09.stdout:5/17: mknod d2/d4/c6 0
2026-03-09T15:00:34.426 INFO:tasks.workunit.client.1.vm09.stdout:6/27: symlink d6/la 0
2026-03-09T15:00:34.429 INFO:tasks.workunit.client.1.vm09.stdout:4/16: symlink l5 0
2026-03-09T15:00:34.435 INFO:tasks.workunit.client.1.vm09.stdout:2/27: mknod c8 0
2026-03-09T15:00:34.436 INFO:tasks.workunit.client.1.vm09.stdout:2/28: read f4 [4079753,112658] 0
2026-03-09T15:00:34.486 INFO:tasks.workunit.client.1.vm09.stdout:7/30: chown l2 101182861 1
2026-03-09T15:00:34.486 INFO:tasks.workunit.client.1.vm09.stdout:0/24: creat f6 x:0 0 0
2026-03-09T15:00:34.487 INFO:tasks.workunit.client.1.vm09.stdout:7/31: chown d3/f9 161613 1
2026-03-09T15:00:34.487 INFO:tasks.workunit.client.1.vm09.stdout:0/25: write f6 [988797,27198] 0
2026-03-09T15:00:34.490 INFO:tasks.workunit.client.1.vm09.stdout:7/32: write d3/f5 [463272,53934] 0
2026-03-09T15:00:34.492 INFO:tasks.workunit.client.1.vm09.stdout:5/18: mknod d2/d4/c7 0
2026-03-09T15:00:34.493 INFO:tasks.workunit.client.1.vm09.stdout:5/19: write f0 [3773576,39063] 0
2026-03-09T15:00:34.494 INFO:tasks.workunit.client.1.vm09.stdout:8/26: getdents . 0
2026-03-09T15:00:34.499 INFO:tasks.workunit.client.1.vm09.stdout:0/26: dwrite f2 [0,4194304] 0
2026-03-09T15:00:34.499 INFO:tasks.workunit.client.1.vm09.stdout:2/29: mknod c9 0
2026-03-09T15:00:34.499 INFO:tasks.workunit.client.1.vm09.stdout:1/9: write f1 [671141,85616] 0
2026-03-09T15:00:34.503 INFO:tasks.workunit.client.1.vm09.stdout:9/28: getdents d1 0
2026-03-09T15:00:34.510 INFO:tasks.workunit.client.1.vm09.stdout:6/28: dwrite f0 [0,4194304] 0
2026-03-09T15:00:34.513 INFO:tasks.workunit.client.1.vm09.stdout:8/27: creat f4 x:0 0 0
2026-03-09T15:00:34.513 INFO:tasks.workunit.client.1.vm09.stdout:8/28: rmdir - no directory
2026-03-09T15:00:34.513 INFO:tasks.workunit.client.1.vm09.stdout:0/27: rename f2 to f7 0
2026-03-09T15:00:34.514 INFO:tasks.workunit.client.1.vm09.stdout:8/29: stat c2 0
2026-03-09T15:00:34.514 INFO:tasks.workunit.client.1.vm09.stdout:0/28: dread - f5 zero size
2026-03-09T15:00:34.515 INFO:tasks.workunit.client.1.vm09.stdout:7/33: dwrite d3/f4 [0,4194304] 0
2026-03-09T15:00:34.519 INFO:tasks.workunit.client.1.vm09.stdout:5/20: rename d2/d4/c6 to d2/d4/c8 0
2026-03-09T15:00:34.522 INFO:tasks.workunit.client.1.vm09.stdout:7/34: write d3/f9 [208711,11308] 0
2026-03-09T15:00:34.530 INFO:tasks.workunit.client.1.vm09.stdout:7/35: dwrite f1 [0,4194304] 0
2026-03-09T15:00:34.534 INFO:tasks.workunit.client.1.vm09.stdout:5/21: dread f0 [0,4194304] 0
2026-03-09T15:00:34.541 INFO:tasks.workunit.client.1.vm09.stdout:2/30: symlink la 0
2026-03-09T15:00:34.545 INFO:tasks.workunit.client.1.vm09.stdout:7/36: dwrite d3/f5 [0,4194304] 0
2026-03-09T15:00:34.553 INFO:tasks.workunit.client.1.vm09.stdout:1/10: dread f1 [0,4194304] 0
2026-03-09T15:00:34.556 INFO:tasks.workunit.client.1.vm09.stdout:7/37: dwrite f1 [0,4194304] 0
2026-03-09T15:00:34.560 INFO:tasks.workunit.client.1.vm09.stdout:9/29: mkdir d1/d7 0
2026-03-09T15:00:34.566 INFO:tasks.workunit.client.1.vm09.stdout:3/31: link d3/d4/l5 d3/d4/l7 0
2026-03-09T15:00:34.569 INFO:tasks.workunit.client.1.vm09.stdout:6/29: mkdir d6/db 0
2026-03-09T15:00:34.575 INFO:tasks.workunit.client.1.vm09.stdout:0/29: creat f8 x:0 0 0
2026-03-09T15:00:34.583 INFO:tasks.workunit.client.1.vm09.stdout:5/22: mknod d2/d4/c9 0
2026-03-09T15:00:34.584 INFO:tasks.workunit.client.1.vm09.stdout:2/31: rename f1 to fb 0
2026-03-09T15:00:34.591 INFO:tasks.workunit.client.1.vm09.stdout:1/11: rename f1 to f2 0
2026-03-09T15:00:34.596 INFO:tasks.workunit.client.1.vm09.stdout:1/12: dread f2 [0,4194304] 0
2026-03-09T15:00:34.596 INFO:tasks.workunit.client.1.vm09.stdout:7/38: creat d3/fa x:0 0 0
2026-03-09T15:00:34.596 INFO:tasks.workunit.client.1.vm09.stdout:3/32: creat d3/d4/f8 x:0 0 0
2026-03-09T15:00:34.597 INFO:tasks.workunit.client.1.vm09.stdout:6/30: rename l5 to d6/lc 0
2026-03-09T15:00:34.600 INFO:tasks.workunit.client.1.vm09.stdout:0/30: rename f6 to f9 0
2026-03-09T15:00:34.603 INFO:tasks.workunit.client.1.vm09.stdout:5/23: symlink d2/la 0
2026-03-09T15:00:34.603 INFO:tasks.workunit.client.1.vm09.stdout:5/24: chown d2/d4/l5 25408 1
2026-03-09T15:00:34.604 INFO:tasks.workunit.client.1.vm09.stdout:2/32: creat fc x:0 0 0
2026-03-09T15:00:34.604 INFO:tasks.workunit.client.1.vm09.stdout:9/30: creat d1/d7/f8 x:0 0 0
2026-03-09T15:00:34.610 INFO:tasks.workunit.client.1.vm09.stdout:7/39: mkdir d3/db 0
2026-03-09T15:00:34.611 INFO:tasks.workunit.client.1.vm09.stdout:7/40: write d3/fa [719265,104947] 0
2026-03-09T15:00:34.614 INFO:tasks.workunit.client.1.vm09.stdout:6/31: readlink l2 0
2026-03-09T15:00:34.616 INFO:tasks.workunit.client.1.vm09.stdout:6/32: chown d6/la 74 1
2026-03-09T15:00:34.623 INFO:tasks.workunit.client.1.vm09.stdout:5/25: fsync f0 0
2026-03-09T15:00:34.623 INFO:tasks.workunit.client.1.vm09.stdout:0/31: dwrite f9 [0,4194304] 0
2026-03-09T15:00:34.636 INFO:tasks.workunit.client.1.vm09.stdout:9/31: mkdir d1/d7/d9 0
2026-03-09T15:00:34.637 INFO:tasks.workunit.client.1.vm09.stdout:6/33: unlink f3 0
2026-03-09T15:00:34.640 INFO:tasks.workunit.client.1.vm09.stdout:0/32: mkdir da 0
2026-03-09T15:00:34.641 INFO:tasks.workunit.client.1.vm09.stdout:0/33: write f9 [3311876,72919] 0
2026-03-09T15:00:34.642 INFO:tasks.workunit.client.1.vm09.stdout:0/34: write f8 [777641,34519] 0
2026-03-09T15:00:34.646 INFO:tasks.workunit.client.1.vm09.stdout:7/41: mkdir d3/db/dc 0
2026-03-09T15:00:34.646 INFO:tasks.workunit.client.1.vm09.stdout:1/13: sync
2026-03-09T15:00:34.649 INFO:tasks.workunit.client.1.vm09.stdout:6/34: symlink d6/db/ld 0
2026-03-09T15:00:34.649 INFO:tasks.workunit.client.1.vm09.stdout:7/42: creat d3/fd x:0 0 0
2026-03-09T15:00:34.652 INFO:tasks.workunit.client.1.vm09.stdout:5/26: link d2/la d2/lb 0
2026-03-09T15:00:34.655 INFO:tasks.workunit.client.1.vm09.stdout:0/35: creat da/fb x:0 0 0
2026-03-09T15:00:34.655 INFO:tasks.workunit.client.1.vm09.stdout:7/43: read d3/f8 [559373,70810] 0
2026-03-09T15:00:34.656 INFO:tasks.workunit.client.1.vm09.stdout:7/44: read d3/f8 [127051,16711] 0
2026-03-09T15:00:34.656 INFO:tasks.workunit.client.1.vm09.stdout:9/32: link d1/c3 d1/d7/d9/ca 0
2026-03-09T15:00:34.658 INFO:tasks.workunit.client.1.vm09.stdout:5/27: stat d2/lb 0
2026-03-09T15:00:34.663 INFO:tasks.workunit.client.1.vm09.stdout:7/45: creat d3/db/fe x:0 0 0
2026-03-09T15:00:34.666 INFO:tasks.workunit.client.1.vm09.stdout:0/36: dwrite f8 [0,4194304] 0
2026-03-09T15:00:34.671 INFO:tasks.workunit.client.1.vm09.stdout:9/33: dwrite d1/d7/f8 [0,4194304] 0
2026-03-09T15:00:34.685 INFO:tasks.workunit.client.1.vm09.stdout:0/37: dwrite f3 [4194304,4194304] 0
2026-03-09T15:00:34.685 INFO:tasks.workunit.client.1.vm09.stdout:8/30: getdents . 0
2026-03-09T15:00:34.685 INFO:tasks.workunit.client.1.vm09.stdout:8/31: dread - f4 zero size
2026-03-09T15:00:34.685 INFO:tasks.workunit.client.1.vm09.stdout:8/32: rmdir - no directory
2026-03-09T15:00:34.690 INFO:tasks.workunit.client.1.vm09.stdout:0/38: dread - f5 zero size
2026-03-09T15:00:34.694 INFO:tasks.workunit.client.1.vm09.stdout:5/28: dwrite f0 [0,4194304] 0
2026-03-09T15:00:34.694 INFO:tasks.workunit.client.1.vm09.stdout:8/33: chown c2 993 1
2026-03-09T15:00:34.694 INFO:tasks.workunit.client.1.vm09.stdout:4/17: truncate f0 2678877 0
2026-03-09T15:00:34.697 INFO:tasks.workunit.client.1.vm09.stdout:9/34: dwrite d1/d7/f8 [0,4194304] 0
2026-03-09T15:00:34.709 INFO:tasks.workunit.client.1.vm09.stdout:5/29: dwrite f0 [0,4194304] 0
2026-03-09T15:00:34.725 INFO:tasks.workunit.client.1.vm09.stdout:1/14: unlink f2 0
2026-03-09T15:00:34.725 INFO:tasks.workunit.client.1.vm09.stdout:1/15: rmdir - no directory
2026-03-09T15:00:34.725 INFO:tasks.workunit.client.1.vm09.stdout:1/16: chown . 619 1
2026-03-09T15:00:34.728 INFO:tasks.workunit.client.1.vm09.stdout:7/46: creat d3/db/dc/ff x:0 0 0
2026-03-09T15:00:34.728 INFO:tasks.workunit.client.1.vm09.stdout:7/47: write d3/f4 [3010395,38832] 0
2026-03-09T15:00:34.729 INFO:tasks.workunit.client.1.vm09.stdout:0/39: mkdir da/dc 0
2026-03-09T15:00:34.730 INFO:tasks.workunit.client.1.vm09.stdout:0/40: write f3 [9318372,22311] 0
2026-03-09T15:00:34.731 INFO:tasks.workunit.client.1.vm09.stdout:8/34: symlink l5 0
2026-03-09T15:00:34.731 INFO:tasks.workunit.client.1.vm09.stdout:3/33: rmdir d3 39
2026-03-09T15:00:34.737 INFO:tasks.workunit.client.1.vm09.stdout:2/33: truncate fb 1474778 0
2026-03-09T15:00:34.738 INFO:tasks.workunit.client.1.vm09.stdout:5/30: readlink d2/lb 0
2026-03-09T15:00:34.740 INFO:tasks.workunit.client.1.vm09.stdout:1/17: creat f3 x:0 0 0
2026-03-09T15:00:34.742 INFO:tasks.workunit.client.1.vm09.stdout:1/18: truncate f3 122118 0
2026-03-09T15:00:34.746 INFO:tasks.workunit.client.1.vm09.stdout:3/34: chown d3/d4/f8 133 1
2026-03-09T15:00:34.747 INFO:tasks.workunit.client.1.vm09.stdout:3/35: fdatasync d3/d4/f8 0
2026-03-09T15:00:34.747 INFO:tasks.workunit.client.1.vm09.stdout:8/35: rename f4 to f6 0
2026-03-09T15:00:34.748 INFO:tasks.workunit.client.1.vm09.stdout:3/36: chown d3 256 1
2026-03-09T15:00:34.751 INFO:tasks.workunit.client.1.vm09.stdout:1/19: dwrite f3 [0,4194304] 0
2026-03-09T15:00:34.753 INFO:tasks.workunit.client.1.vm09.stdout:4/18: symlink l6 0
2026-03-09T15:00:34.759 INFO:tasks.workunit.client.1.vm09.stdout:2/34: rename l7 to ld 0
2026-03-09T15:00:34.764 INFO:tasks.workunit.client.1.vm09.stdout:7/48: link d3/f4 d3/db/dc/f10 0
2026-03-09T15:00:34.767 INFO:tasks.workunit.client.1.vm09.stdout:0/41: creat da/dc/fd x:0 0 0
2026-03-09T15:00:34.770 INFO:tasks.workunit.client.1.vm09.stdout:0/42: stat da/dc 0
2026-03-09T15:00:34.770 INFO:tasks.workunit.client.1.vm09.stdout:8/36: fdatasync f6 0
2026-03-09T15:00:34.772 INFO:tasks.workunit.client.1.vm09.stdout:7/49: dwrite f1 [0,4194304] 0
2026-03-09T15:00:34.773 INFO:tasks.workunit.client.1.vm09.stdout:3/37: creat d3/f9 x:0 0 0
2026-03-09T15:00:34.779 INFO:tasks.workunit.client.1.vm09.stdout:3/38: write d3/d4/f8 [579072,94185] 0
2026-03-09T15:00:34.785 INFO:tasks.workunit.client.1.vm09.stdout:1/20: rename f3 to f4 0
2026-03-09T15:00:34.791 INFO:tasks.workunit.client.1.vm09.stdout:9/35: creat d1/fb x:0 0 0
2026-03-09T15:00:34.792 INFO:tasks.workunit.client.1.vm09.stdout:9/36: write d1/fb [232825,50682] 0
2026-03-09T15:00:34.796 INFO:tasks.workunit.client.1.vm09.stdout:0/43: creat da/dc/fe x:0 0 0
2026-03-09T15:00:34.797 INFO:tasks.workunit.client.1.vm09.stdout:8/37: rename c2 to c7 0
2026-03-09T15:00:34.800 INFO:tasks.workunit.client.1.vm09.stdout:7/50: creat d3/db/dc/f11 x:0 0 0
2026-03-09T15:00:34.802 INFO:tasks.workunit.client.1.vm09.stdout:3/39: symlink d3/d4/la 0
2026-03-09T15:00:34.802 INFO:tasks.workunit.client.1.vm09.stdout:3/40: fdatasync d3/f6 0
2026-03-09T15:00:34.802 INFO:tasks.workunit.client.1.vm09.stdout:1/21: write f4 [3892067,45882] 0
2026-03-09T15:00:34.804 INFO:tasks.workunit.client.1.vm09.stdout:3/41: truncate d3/d4/f8 1547502 0
2026-03-09T15:00:34.808 INFO:tasks.workunit.client.1.vm09.stdout:5/31: fsync f0 0
2026-03-09T15:00:34.812 INFO:tasks.workunit.client.1.vm09.stdout:1/22: dread f4 [0,4194304] 0
2026-03-09T15:00:34.817 INFO:tasks.workunit.client.1.vm09.stdout:0/44: mknod da/dc/cf 0
2026-03-09T15:00:34.821 INFO:tasks.workunit.client.1.vm09.stdout:7/51: rename d3/db/dc/f10 to d3/f12 0
2026-03-09T15:00:34.825 INFO:tasks.workunit.client.1.vm09.stdout:6/35: dwrite f0 [0,4194304] 0
2026-03-09T15:00:34.825 INFO:tasks.workunit.client.1.vm09.stdout:3/42: mknod d3/cb 0
2026-03-09T15:00:34.826 INFO:tasks.workunit.client.1.vm09.stdout:3/43: dread - d3/f9 zero size
2026-03-09T15:00:34.828 INFO:tasks.workunit.client.1.vm09.stdout:3/44: chown d3/d4/f8 5 1
2026-03-09T15:00:34.829 INFO:tasks.workunit.client.1.vm09.stdout:2/35: dwrite f4 [4194304,4194304] 0
2026-03-09T15:00:34.839 INFO:tasks.workunit.client.1.vm09.stdout:5/32: creat d2/fc x:0 0 0
2026-03-09T15:00:34.839 INFO:tasks.workunit.client.1.vm09.stdout:5/33: write d2/fc [397111,15789] 0
2026-03-09T15:00:34.845 INFO:tasks.workunit.client.1.vm09.stdout:0/45: mkdir da/dc/d10 0
2026-03-09T15:00:34.846 INFO:tasks.workunit.client.1.vm09.stdout:0/46: chown da/dc/fe 244 1
2026-03-09T15:00:34.847 INFO:tasks.workunit.client.1.vm09.stdout:7/52: readlink d3/l6 0
2026-03-09T15:00:34.865 INFO:tasks.workunit.client.1.vm09.stdout:6/36: rmdir d6 39
2026-03-09T15:00:34.865 INFO:tasks.workunit.client.1.vm09.stdout:3/45: creat d3/fc x:0 0 0
2026-03-09T15:00:34.865 INFO:tasks.workunit.client.1.vm09.stdout:2/36: symlink le 0
2026-03-09T15:00:34.865 INFO:tasks.workunit.client.1.vm09.stdout:5/34: rename d2/fc to d2/d4/fd 0
2026-03-09T15:00:34.865 INFO:tasks.workunit.client.1.vm09.stdout:1/23: link f4 f5 0
2026-03-09T15:00:34.865 INFO:tasks.workunit.client.1.vm09.stdout:7/53: mknod d3/db/c13 0
2026-03-09T15:00:34.865 INFO:tasks.workunit.client.1.vm09.stdout:7/54: dread d3/f9 [0,4194304] 0
2026-03-09T15:00:34.866 INFO:tasks.workunit.client.1.vm09.stdout:2/37: mkdir df 0
2026-03-09T15:00:34.866 INFO:tasks.workunit.client.1.vm09.stdout:5/35: symlink d2/le 0
2026-03-09T15:00:34.872 INFO:tasks.workunit.client.1.vm09.stdout:0/47: creat da/dc/d10/f11 x:0 0 0
2026-03-09T15:00:34.873 INFO:tasks.workunit.client.1.vm09.stdout:6/37: mknod d6/db/ce 0
2026-03-09T15:00:34.873 INFO:tasks.workunit.client.1.vm09.stdout:8/38: getdents . 0
2026-03-09T15:00:34.873 INFO:tasks.workunit.client.1.vm09.stdout:8/39: fdatasync f6 0
2026-03-09T15:00:34.879 INFO:tasks.workunit.client.1.vm09.stdout:1/24: link f4 f6 0
2026-03-09T15:00:34.880 INFO:tasks.workunit.client.1.vm09.stdout:3/46: link l2 d3/d4/ld 0
2026-03-09T15:00:34.882 INFO:tasks.workunit.client.1.vm09.stdout:6/38: dread f0 [0,4194304] 0
2026-03-09T15:00:34.882 INFO:tasks.workunit.client.1.vm09.stdout:2/38: symlink df/l10 0
2026-03-09T15:00:34.883 INFO:tasks.workunit.client.1.vm09.stdout:6/39: readlink d6/l7 0
2026-03-09T15:00:34.883 INFO:tasks.workunit.client.1.vm09.stdout:6/40: chown d6/l9 13152949 1
2026-03-09T15:00:34.884 INFO:tasks.workunit.client.1.vm09.stdout:5/36: dwrite f0 [0,4194304] 0
2026-03-09T15:00:34.884 INFO:tasks.workunit.client.1.vm09.stdout:6/41: readlink d6/l7 0
2026-03-09T15:00:34.894 INFO:tasks.workunit.client.1.vm09.stdout:5/37: dread d2/d4/fd [0,4194304] 0
2026-03-09T15:00:34.895 INFO:tasks.workunit.client.1.vm09.stdout:8/40: link f6 f8 0
2026-03-09T15:00:34.896 INFO:tasks.workunit.client.1.vm09.stdout:3/47: symlink d3/le 0
2026-03-09T15:00:34.898 INFO:tasks.workunit.client.1.vm09.stdout:3/48: dread d3/f6 [0,4194304] 0
2026-03-09T15:00:34.898 INFO:tasks.workunit.client.1.vm09.stdout:6/42: fdatasync f0 0
2026-03-09T15:00:34.901 INFO:tasks.workunit.client.1.vm09.stdout:3/49: write d3/f6 [1127332,101988] 0
2026-03-09T15:00:34.901 INFO:tasks.workunit.client.1.vm09.stdout:6/43: write f0 [2335948,125144] 0
2026-03-09T15:00:34.903 INFO:tasks.workunit.client.1.vm09.stdout:1/25: link f4 f7 0
2026-03-09T15:00:34.909 INFO:tasks.workunit.client.1.vm09.stdout:5/38: symlink d2/lf 0
2026-03-09T15:00:34.909 INFO:tasks.workunit.client.1.vm09.stdout:1/26: chown f7 61432 1
2026-03-09T15:00:34.909 INFO:tasks.workunit.client.1.vm09.stdout:9/37: truncate d1/f4 3017108 0
2026-03-09T15:00:34.909 INFO:tasks.workunit.client.1.vm09.stdout:6/44: write f0 [1894467,47960] 0
2026-03-09T15:00:34.909 INFO:tasks.workunit.client.1.vm09.stdout:5/39: truncate d2/d4/fd 912559 0
2026-03-09T15:00:34.909 INFO:tasks.workunit.client.1.vm09.stdout:8/41: rename l3 to l9 0
2026-03-09T15:00:34.909 INFO:tasks.workunit.client.1.vm09.stdout:4/19: dwrite f0 [0,4194304] 0
2026-03-09T15:00:34.919 INFO:tasks.workunit.client.1.vm09.stdout:0/48: getdents da/dc 0
2026-03-09T15:00:34.920 INFO:tasks.workunit.client.1.vm09.stdout:2/39: fdatasync fb 0
2026-03-09T15:00:34.923 INFO:tasks.workunit.client.1.vm09.stdout:2/40: chown fc 128143489 1
2026-03-09T15:00:34.923 INFO:tasks.workunit.client.1.vm09.stdout:5/40: mknod d2/c10 0
2026-03-09T15:00:34.924 INFO:tasks.workunit.client.1.vm09.stdout:8/42: sync
2026-03-09T15:00:34.924 INFO:tasks.workunit.client.1.vm09.stdout:3/50: chown d3/d4/ld 10 1
2026-03-09T15:00:34.927 INFO:tasks.workunit.client.1.vm09.stdout:0/49: creat da/f12 x:0 0 0
2026-03-09T15:00:34.927 INFO:tasks.workunit.client.1.vm09.stdout:5/41: write f0 [4927402,84164] 0
2026-03-09T15:00:34.927 INFO:tasks.workunit.client.1.vm09.stdout:8/43: dread - f6 zero size
2026-03-09T15:00:34.927 INFO:tasks.workunit.client.1.vm09.stdout:0/50: truncate da/dc/d10/f11 980778 0
2026-03-09T15:00:34.931 INFO:tasks.workunit.client.1.vm09.stdout:0/51: write da/f12 [735486,4939] 0
2026-03-09T15:00:34.934 INFO:tasks.workunit.client.1.vm09.stdout:8/44: chown f8 1955209 1
2026-03-09T15:00:34.934 INFO:tasks.workunit.client.1.vm09.stdout:3/51: creat d3/ff x:0 0 0
2026-03-09T15:00:34.934 INFO:tasks.workunit.client.1.vm09.stdout:6/45: dread f0 [0,4194304] 0
2026-03-09T15:00:34.941 INFO:tasks.workunit.client.1.vm09.stdout:4/20: rename f0 to f7 0
2026-03-09T15:00:34.943 INFO:tasks.workunit.client.1.vm09.stdout:4/21: dread - f3 zero size
2026-03-09T15:00:34.944 INFO:tasks.workunit.client.1.vm09.stdout:3/52: unlink d3/d4/la 0
2026-03-09T15:00:34.945 INFO:tasks.workunit.client.1.vm09.stdout:4/22: readlink l5 0
2026-03-09T15:00:34.946 INFO:tasks.workunit.client.1.vm09.stdout:4/23: rmdir - no directory
2026-03-09T15:00:34.947 INFO:tasks.workunit.client.1.vm09.stdout:2/41:
truncate f5 1250300 0 2026-03-09T15:00:34.947 INFO:tasks.workunit.client.1.vm09.stdout:2/42: chown f4 24 1 2026-03-09T15:00:34.952 INFO:tasks.workunit.client.1.vm09.stdout:1/27: dwrite f4 [0,4194304] 0 2026-03-09T15:00:34.956 INFO:tasks.workunit.client.1.vm09.stdout:6/46: chown d6/lc 257535106 1 2026-03-09T15:00:34.956 INFO:tasks.workunit.client.1.vm09.stdout:0/52: dread f9 [0,4194304] 0 2026-03-09T15:00:34.957 INFO:tasks.workunit.client.1.vm09.stdout:6/47: read f0 [4554324,100888] 0 2026-03-09T15:00:34.957 INFO:tasks.workunit.client.1.vm09.stdout:6/48: stat d6/db 0 2026-03-09T15:00:34.957 INFO:tasks.workunit.client.1.vm09.stdout:7/55: dwrite d3/f4 [0,4194304] 0 2026-03-09T15:00:34.958 INFO:tasks.workunit.client.1.vm09.stdout:7/56: fdatasync d3/f5 0 2026-03-09T15:00:34.960 INFO:tasks.workunit.client.1.vm09.stdout:3/53: rename d3/cb to d3/d4/c10 0 2026-03-09T15:00:34.961 INFO:tasks.workunit.client.1.vm09.stdout:3/54: truncate d3/f9 475619 0 2026-03-09T15:00:34.962 INFO:tasks.workunit.client.1.vm09.stdout:3/55: write d3/fc [729733,13000] 0 2026-03-09T15:00:34.965 INFO:tasks.workunit.client.1.vm09.stdout:6/49: mkdir d6/df 0 2026-03-09T15:00:34.966 INFO:tasks.workunit.client.1.vm09.stdout:7/57: dread d3/f8 [0,4194304] 0 2026-03-09T15:00:34.967 INFO:tasks.workunit.client.1.vm09.stdout:7/58: write d3/db/fe [536254,121397] 0 2026-03-09T15:00:34.968 INFO:tasks.workunit.client.1.vm09.stdout:7/59: rename d3 to d3/d14 22 2026-03-09T15:00:34.971 INFO:tasks.workunit.client.1.vm09.stdout:1/28: mkdir d8 0 2026-03-09T15:00:34.975 INFO:tasks.workunit.client.1.vm09.stdout:0/53: write da/dc/fd [199069,71605] 0 2026-03-09T15:00:34.975 INFO:tasks.workunit.client.1.vm09.stdout:8/45: fsync f8 0 2026-03-09T15:00:34.979 INFO:tasks.workunit.client.1.vm09.stdout:3/56: mknod d3/d4/c11 0 2026-03-09T15:00:34.981 INFO:tasks.workunit.client.1.vm09.stdout:6/50: mkdir d6/db/d10 0 2026-03-09T15:00:34.981 INFO:tasks.workunit.client.1.vm09.stdout:5/42: dwrite d2/d4/fd [0,4194304] 0 
2026-03-09T15:00:34.986 INFO:tasks.workunit.client.1.vm09.stdout:7/60: unlink d3/f4 0 2026-03-09T15:00:34.989 INFO:tasks.workunit.client.1.vm09.stdout:7/61: truncate d3/fd 555646 0 2026-03-09T15:00:34.989 INFO:tasks.workunit.client.1.vm09.stdout:9/38: write d1/f4 [1919950,56979] 0 2026-03-09T15:00:34.993 INFO:tasks.workunit.client.1.vm09.stdout:7/62: dread - d3/db/dc/f11 zero size 2026-03-09T15:00:34.993 INFO:tasks.workunit.client.1.vm09.stdout:7/63: chown d3/db/dc 177627 1 2026-03-09T15:00:34.993 INFO:tasks.workunit.client.1.vm09.stdout:7/64: stat f1 0 2026-03-09T15:00:34.998 INFO:tasks.workunit.client.1.vm09.stdout:7/65: readlink d3/l7 0 2026-03-09T15:00:34.999 INFO:tasks.workunit.client.1.vm09.stdout:7/66: fsync d3/db/dc/ff 0 2026-03-09T15:00:35.000 INFO:tasks.workunit.client.1.vm09.stdout:4/24: rename l2 to l8 0 2026-03-09T15:00:35.002 INFO:tasks.workunit.client.1.vm09.stdout:9/39: dwrite d1/fb [0,4194304] 0 2026-03-09T15:00:35.014 INFO:tasks.workunit.client.1.vm09.stdout:0/54: unlink da/dc/cf 0 2026-03-09T15:00:35.014 INFO:tasks.workunit.client.1.vm09.stdout:0/55: stat f7 0 2026-03-09T15:00:35.014 INFO:tasks.workunit.client.1.vm09.stdout:9/40: stat d1/d7/d9 0 2026-03-09T15:00:35.014 INFO:tasks.workunit.client.1.vm09.stdout:9/41: chown d1/f4 16 1 2026-03-09T15:00:35.014 INFO:tasks.workunit.client.1.vm09.stdout:3/57: readlink d3/d4/l5 0 2026-03-09T15:00:35.014 INFO:tasks.workunit.client.1.vm09.stdout:3/58: rename d3/d4 to d3/d4/d12 22 2026-03-09T15:00:35.014 INFO:tasks.workunit.client.1.vm09.stdout:3/59: write d3/fc [7845,86918] 0 2026-03-09T15:00:35.014 INFO:tasks.workunit.client.1.vm09.stdout:6/51: write f0 [3211667,87486] 0 2026-03-09T15:00:35.016 INFO:tasks.workunit.client.1.vm09.stdout:5/43: readlink d2/lb 0 2026-03-09T15:00:35.019 INFO:tasks.workunit.client.1.vm09.stdout:6/52: dread f0 [0,4194304] 0 2026-03-09T15:00:35.025 INFO:tasks.workunit.client.1.vm09.stdout:1/29: symlink d8/l9 0 2026-03-09T15:00:35.025 INFO:tasks.workunit.client.1.vm09.stdout:8/46: 
write f8 [1009180,2708] 0 2026-03-09T15:00:35.025 INFO:tasks.workunit.client.1.vm09.stdout:7/67: mkdir d3/db/d15 0 2026-03-09T15:00:35.027 INFO:tasks.workunit.client.1.vm09.stdout:7/68: read f1 [3687362,117238] 0 2026-03-09T15:00:35.027 INFO:tasks.workunit.client.1.vm09.stdout:8/47: write f6 [1987957,38389] 0 2026-03-09T15:00:35.027 INFO:tasks.workunit.client.1.vm09.stdout:8/48: stat f6 0 2026-03-09T15:00:35.028 INFO:tasks.workunit.client.1.vm09.stdout:0/56: write f7 [3911502,20889] 0 2026-03-09T15:00:35.029 INFO:tasks.workunit.client.1.vm09.stdout:5/44: fsync d2/d4/fd 0 2026-03-09T15:00:35.030 INFO:tasks.workunit.client.1.vm09.stdout:9/42: symlink d1/d7/lc 0 2026-03-09T15:00:35.034 INFO:tasks.workunit.client.1.vm09.stdout:3/60: mknod d3/c13 0 2026-03-09T15:00:35.042 INFO:tasks.workunit.client.1.vm09.stdout:2/43: read f5 [299693,71715] 0 2026-03-09T15:00:35.044 INFO:tasks.workunit.client.1.vm09.stdout:2/44: chown c8 2377 1 2026-03-09T15:00:35.050 INFO:tasks.workunit.client.1.vm09.stdout:2/45: dwrite f3 [0,4194304] 0 2026-03-09T15:00:35.123 INFO:tasks.workunit.client.1.vm09.stdout:1/30: creat d8/fa x:0 0 0 2026-03-09T15:00:35.125 INFO:tasks.workunit.client.1.vm09.stdout:7/69: creat d3/f16 x:0 0 0 2026-03-09T15:00:35.126 INFO:tasks.workunit.client.1.vm09.stdout:0/57: symlink da/dc/d10/l13 0 2026-03-09T15:00:35.126 INFO:tasks.workunit.client.1.vm09.stdout:0/58: read - da/fb zero size 2026-03-09T15:00:35.127 INFO:tasks.workunit.client.1.vm09.stdout:5/45: mknod d2/d4/c11 0 2026-03-09T15:00:35.127 INFO:tasks.workunit.client.1.vm09.stdout:5/46: write f0 [1331503,73024] 0 2026-03-09T15:00:35.128 INFO:tasks.workunit.client.1.vm09.stdout:3/61: mknod d3/d4/c14 0 2026-03-09T15:00:35.130 INFO:tasks.workunit.client.1.vm09.stdout:2/46: write f5 [921537,107402] 0 2026-03-09T15:00:35.131 INFO:tasks.workunit.client.1.vm09.stdout:6/53: symlink d6/db/d10/l11 0 2026-03-09T15:00:35.134 INFO:tasks.workunit.client.1.vm09.stdout:1/31: mknod d8/cb 0 2026-03-09T15:00:35.136 
INFO:tasks.workunit.client.1.vm09.stdout:0/59: creat da/f14 x:0 0 0 2026-03-09T15:00:35.137 INFO:tasks.workunit.client.1.vm09.stdout:9/43: rename d1/l6 to d1/d7/d9/ld 0 2026-03-09T15:00:35.141 INFO:tasks.workunit.client.1.vm09.stdout:6/54: symlink d6/db/d10/l12 0 2026-03-09T15:00:35.141 INFO:tasks.workunit.client.1.vm09.stdout:1/32: dread f7 [0,4194304] 0 2026-03-09T15:00:35.141 INFO:tasks.workunit.client.1.vm09.stdout:4/25: link l5 l9 0 2026-03-09T15:00:35.141 INFO:tasks.workunit.client.1.vm09.stdout:4/26: write f4 [711140,18950] 0 2026-03-09T15:00:35.144 INFO:tasks.workunit.client.1.vm09.stdout:9/44: mknod d1/d7/ce 0 2026-03-09T15:00:35.146 INFO:tasks.workunit.client.1.vm09.stdout:2/47: rename la to df/l11 0 2026-03-09T15:00:35.150 INFO:tasks.workunit.client.1.vm09.stdout:2/48: dread f3 [0,4194304] 0 2026-03-09T15:00:35.150 INFO:tasks.workunit.client.1.vm09.stdout:2/49: fdatasync f4 0 2026-03-09T15:00:35.158 INFO:tasks.workunit.client.1.vm09.stdout:2/50: dwrite f4 [8388608,4194304] 0 2026-03-09T15:00:35.159 INFO:tasks.workunit.client.1.vm09.stdout:4/27: sync 2026-03-09T15:00:35.175 INFO:tasks.workunit.client.1.vm09.stdout:8/49: getdents . 
0 2026-03-09T15:00:35.188 INFO:tasks.workunit.client.1.vm09.stdout:9/45: mknod d1/d7/cf 0 2026-03-09T15:00:35.189 INFO:tasks.workunit.client.1.vm09.stdout:9/46: write d1/f4 [203848,84761] 0 2026-03-09T15:00:35.194 INFO:tasks.workunit.client.1.vm09.stdout:1/33: rename f4 to d8/fc 0 2026-03-09T15:00:35.195 INFO:tasks.workunit.client.1.vm09.stdout:9/47: dwrite d1/d7/f8 [0,4194304] 0 2026-03-09T15:00:35.203 INFO:tasks.workunit.client.1.vm09.stdout:9/48: dread d1/fb [0,4194304] 0 2026-03-09T15:00:35.217 INFO:tasks.workunit.client.1.vm09.stdout:6/55: mknod d6/df/c13 0 2026-03-09T15:00:35.218 INFO:tasks.workunit.client.1.vm09.stdout:2/51: symlink df/l12 0 2026-03-09T15:00:35.228 INFO:tasks.workunit.client.1.vm09.stdout:0/60: link da/dc/d10/l13 da/l15 0 2026-03-09T15:00:35.228 INFO:tasks.workunit.client.1.vm09.stdout:4/28: chown l8 28686 1 2026-03-09T15:00:35.228 INFO:tasks.workunit.client.1.vm09.stdout:0/61: write da/f14 [570496,43211] 0 2026-03-09T15:00:35.228 INFO:tasks.workunit.client.1.vm09.stdout:2/52: dread f4 [4194304,4194304] 0 2026-03-09T15:00:35.228 INFO:tasks.workunit.client.1.vm09.stdout:2/53: write fc [583012,106111] 0 2026-03-09T15:00:35.228 INFO:tasks.workunit.client.1.vm09.stdout:2/54: write f4 [8042586,34291] 0 2026-03-09T15:00:35.232 INFO:tasks.workunit.client.1.vm09.stdout:1/34: write f5 [4991240,35641] 0 2026-03-09T15:00:35.234 INFO:tasks.workunit.client.1.vm09.stdout:3/62: link l2 d3/d4/l15 0 2026-03-09T15:00:35.239 INFO:tasks.workunit.client.1.vm09.stdout:4/29: write f7 [2326597,97155] 0 2026-03-09T15:00:35.246 INFO:tasks.workunit.client.1.vm09.stdout:1/35: dread d8/fc [0,4194304] 0 2026-03-09T15:00:35.249 INFO:tasks.workunit.client.1.vm09.stdout:1/36: write f6 [4797640,27582] 0 2026-03-09T15:00:35.250 INFO:tasks.workunit.client.1.vm09.stdout:9/49: link d1/d7/d9/ca d1/d7/d9/c10 0 2026-03-09T15:00:35.251 INFO:tasks.workunit.client.1.vm09.stdout:4/30: sync 2026-03-09T15:00:35.255 INFO:tasks.workunit.client.1.vm09.stdout:9/50: rename d1/fb to 
d1/d7/d9/f11 0 2026-03-09T15:00:35.257 INFO:tasks.workunit.client.1.vm09.stdout:4/31: mknod ca 0 2026-03-09T15:00:35.261 INFO:tasks.workunit.client.1.vm09.stdout:9/51: creat d1/d7/d9/f12 x:0 0 0 2026-03-09T15:00:35.265 INFO:tasks.workunit.client.1.vm09.stdout:4/32: dwrite f3 [0,4194304] 0 2026-03-09T15:00:35.269 INFO:tasks.workunit.client.1.vm09.stdout:4/33: chown f4 0 1 2026-03-09T15:00:35.269 INFO:tasks.workunit.client.1.vm09.stdout:9/52: sync 2026-03-09T15:00:35.272 INFO:tasks.workunit.client.1.vm09.stdout:5/47: truncate f0 216561 0 2026-03-09T15:00:35.277 INFO:tasks.workunit.client.1.vm09.stdout:9/53: creat d1/d7/f13 x:0 0 0 2026-03-09T15:00:35.277 INFO:tasks.workunit.client.1.vm09.stdout:5/48: creat d2/d4/f12 x:0 0 0 2026-03-09T15:00:35.277 INFO:tasks.workunit.client.1.vm09.stdout:0/62: rmdir da/dc/d10 39 2026-03-09T15:00:35.278 INFO:tasks.workunit.client.1.vm09.stdout:5/49: stat d2/d4/l5 0 2026-03-09T15:00:35.279 INFO:tasks.workunit.client.1.vm09.stdout:0/63: truncate da/dc/fe 198341 0 2026-03-09T15:00:35.279 INFO:tasks.workunit.client.1.vm09.stdout:5/50: write d2/d4/f12 [319203,111096] 0 2026-03-09T15:00:35.280 INFO:tasks.workunit.client.1.vm09.stdout:1/37: dwrite d8/fa [0,4194304] 0 2026-03-09T15:00:35.291 INFO:tasks.workunit.client.1.vm09.stdout:5/51: symlink d2/d4/l13 0 2026-03-09T15:00:35.292 INFO:tasks.workunit.client.1.vm09.stdout:1/38: rename f6 to d8/fd 0 2026-03-09T15:00:35.292 INFO:tasks.workunit.client.1.vm09.stdout:4/34: dwrite f7 [0,4194304] 0 2026-03-09T15:00:35.296 INFO:tasks.workunit.client.1.vm09.stdout:6/56: getdents d6/df 0 2026-03-09T15:00:35.300 INFO:tasks.workunit.client.1.vm09.stdout:0/64: creat da/dc/d10/f16 x:0 0 0 2026-03-09T15:00:35.303 INFO:tasks.workunit.client.1.vm09.stdout:9/54: creat d1/f14 x:0 0 0 2026-03-09T15:00:35.310 INFO:tasks.workunit.client.1.vm09.stdout:0/65: dwrite da/dc/fe [0,4194304] 0 2026-03-09T15:00:35.317 INFO:tasks.workunit.client.1.vm09.stdout:6/57: unlink d6/df/c13 0 2026-03-09T15:00:35.321 
INFO:tasks.workunit.client.1.vm09.stdout:9/55: mknod d1/d7/c15 0 2026-03-09T15:00:35.322 INFO:tasks.workunit.client.1.vm09.stdout:8/50: truncate f8 881829 0 2026-03-09T15:00:35.331 INFO:tasks.workunit.client.1.vm09.stdout:4/35: mkdir db 0 2026-03-09T15:00:35.333 INFO:tasks.workunit.client.1.vm09.stdout:3/63: truncate d3/f9 66548 0 2026-03-09T15:00:35.333 INFO:tasks.workunit.client.1.vm09.stdout:3/64: dread - d3/ff zero size 2026-03-09T15:00:35.341 INFO:tasks.workunit.client.1.vm09.stdout:0/66: creat da/dc/f17 x:0 0 0 2026-03-09T15:00:35.342 INFO:tasks.workunit.client.1.vm09.stdout:4/36: symlink db/lc 0 2026-03-09T15:00:35.342 INFO:tasks.workunit.client.1.vm09.stdout:0/67: chown da 32 1 2026-03-09T15:00:35.342 INFO:tasks.workunit.client.1.vm09.stdout:2/55: dwrite fb [0,4194304] 0 2026-03-09T15:00:35.344 INFO:tasks.workunit.client.1.vm09.stdout:4/37: chown f7 121 1 2026-03-09T15:00:35.345 INFO:tasks.workunit.client.1.vm09.stdout:3/65: creat d3/d4/f16 x:0 0 0 2026-03-09T15:00:35.347 INFO:tasks.workunit.client.1.vm09.stdout:3/66: rename d3/fc to d3/f17 0 2026-03-09T15:00:35.351 INFO:tasks.workunit.client.1.vm09.stdout:0/68: creat da/dc/f18 x:0 0 0 2026-03-09T15:00:35.355 INFO:tasks.workunit.client.1.vm09.stdout:3/67: mkdir d3/d4/d18 0 2026-03-09T15:00:35.359 INFO:tasks.workunit.client.1.vm09.stdout:2/56: dwrite fb [0,4194304] 0 2026-03-09T15:00:35.363 INFO:tasks.workunit.client.1.vm09.stdout:4/38: dread f3 [0,4194304] 0 2026-03-09T15:00:35.364 INFO:tasks.workunit.client.1.vm09.stdout:2/57: creat df/f13 x:0 0 0 2026-03-09T15:00:35.365 INFO:tasks.workunit.client.1.vm09.stdout:0/69: sync 2026-03-09T15:00:35.365 INFO:tasks.workunit.client.1.vm09.stdout:3/68: sync 2026-03-09T15:00:35.366 INFO:tasks.workunit.client.1.vm09.stdout:4/39: unlink ca 0 2026-03-09T15:00:35.368 INFO:tasks.workunit.client.1.vm09.stdout:0/70: stat da/l15 0 2026-03-09T15:00:35.368 INFO:tasks.workunit.client.1.vm09.stdout:0/71: fsync f7 0 2026-03-09T15:00:35.369 
INFO:tasks.workunit.client.1.vm09.stdout:2/58: creat df/f14 x:0 0 0 2026-03-09T15:00:35.375 INFO:tasks.workunit.client.1.vm09.stdout:3/69: dwrite d3/d4/f8 [0,4194304] 0 2026-03-09T15:00:35.382 INFO:tasks.workunit.client.1.vm09.stdout:7/70: write d3/f16 [599494,7354] 0 2026-03-09T15:00:35.387 INFO:tasks.workunit.client.1.vm09.stdout:0/72: mkdir da/dc/d10/d19 0 2026-03-09T15:00:35.388 INFO:tasks.workunit.client.1.vm09.stdout:0/73: truncate da/dc/fd 826149 0 2026-03-09T15:00:35.392 INFO:tasks.workunit.client.1.vm09.stdout:4/40: symlink db/ld 0 2026-03-09T15:00:35.397 INFO:tasks.workunit.client.1.vm09.stdout:2/59: symlink df/l15 0 2026-03-09T15:00:35.397 INFO:tasks.workunit.client.1.vm09.stdout:3/70: creat d3/d4/f19 x:0 0 0 2026-03-09T15:00:35.397 INFO:tasks.workunit.client.1.vm09.stdout:0/74: unlink da/f14 0 2026-03-09T15:00:35.408 INFO:tasks.workunit.client.1.vm09.stdout:3/71: creat d3/d4/f1a x:0 0 0 2026-03-09T15:00:35.409 INFO:tasks.workunit.client.1.vm09.stdout:0/75: unlink da/dc/fd 0 2026-03-09T15:00:35.410 INFO:tasks.workunit.client.1.vm09.stdout:0/76: dread - da/dc/f17 zero size 2026-03-09T15:00:35.411 INFO:tasks.workunit.client.1.vm09.stdout:4/41: link f3 db/fe 0 2026-03-09T15:00:35.414 INFO:tasks.workunit.client.1.vm09.stdout:3/72: sync 2026-03-09T15:00:35.417 INFO:tasks.workunit.client.1.vm09.stdout:0/77: dwrite da/dc/f17 [0,4194304] 0 2026-03-09T15:00:35.418 INFO:tasks.workunit.client.1.vm09.stdout:0/78: chown da 148059 1 2026-03-09T15:00:35.418 INFO:tasks.workunit.client.1.vm09.stdout:4/42: dread f3 [0,4194304] 0 2026-03-09T15:00:35.428 INFO:tasks.workunit.client.1.vm09.stdout:0/79: dwrite da/fb [0,4194304] 0 2026-03-09T15:00:35.433 INFO:tasks.workunit.client.1.vm09.stdout:0/80: chown da/dc/d10/f16 5272536 1 2026-03-09T15:00:35.433 INFO:tasks.workunit.client.1.vm09.stdout:0/81: fsync da/dc/d10/f11 0 2026-03-09T15:00:35.438 INFO:tasks.workunit.client.1.vm09.stdout:4/43: dread f7 [0,4194304] 0 2026-03-09T15:00:35.444 
INFO:tasks.workunit.client.1.vm09.stdout:4/44: dread db/fe [0,4194304] 0 2026-03-09T15:00:35.460 INFO:tasks.workunit.client.1.vm09.stdout:5/52: rename f0 to d2/f14 0 2026-03-09T15:00:35.461 INFO:tasks.workunit.client.1.vm09.stdout:5/53: chown d2/l3 2191577 1 2026-03-09T15:00:35.461 INFO:tasks.workunit.client.1.vm09.stdout:5/54: chown d2/d4 58 1 2026-03-09T15:00:35.461 INFO:tasks.workunit.client.1.vm09.stdout:1/39: truncate d8/fd 319505 0 2026-03-09T15:00:35.461 INFO:tasks.workunit.client.1.vm09.stdout:1/40: truncate d8/fa 5069013 0 2026-03-09T15:00:35.461 INFO:tasks.workunit.client.1.vm09.stdout:9/56: getdents d1/d7 0 2026-03-09T15:00:35.461 INFO:tasks.workunit.client.1.vm09.stdout:6/58: rename d6/l7 to d6/db/d10/l14 0 2026-03-09T15:00:35.461 INFO:tasks.workunit.client.1.vm09.stdout:9/57: readlink d1/l5 0 2026-03-09T15:00:35.462 INFO:tasks.workunit.client.1.vm09.stdout:9/58: stat d1 0 2026-03-09T15:00:35.465 INFO:tasks.workunit.client.1.vm09.stdout:5/55: rename d2/d4/f12 to d2/f15 0 2026-03-09T15:00:35.467 INFO:tasks.workunit.client.1.vm09.stdout:2/60: truncate f5 571253 0 2026-03-09T15:00:35.468 INFO:tasks.workunit.client.1.vm09.stdout:6/59: creat d6/db/f15 x:0 0 0 2026-03-09T15:00:35.471 INFO:tasks.workunit.client.1.vm09.stdout:1/41: chown f5 3512 1 2026-03-09T15:00:35.472 INFO:tasks.workunit.client.1.vm09.stdout:7/71: dwrite d3/f8 [0,4194304] 0 2026-03-09T15:00:35.472 INFO:tasks.workunit.client.1.vm09.stdout:9/59: rename d1/d7/f8 to d1/d7/d9/f16 0 2026-03-09T15:00:35.473 INFO:tasks.workunit.client.1.vm09.stdout:2/61: creat df/f16 x:0 0 0 2026-03-09T15:00:35.473 INFO:tasks.workunit.client.1.vm09.stdout:6/60: fdatasync f0 0 2026-03-09T15:00:35.478 INFO:tasks.workunit.client.1.vm09.stdout:6/61: dread f0 [0,4194304] 0 2026-03-09T15:00:35.480 INFO:tasks.workunit.client.1.vm09.stdout:4/45: getdents db 0 2026-03-09T15:00:35.480 INFO:tasks.workunit.client.1.vm09.stdout:0/82: getdents da/dc/d10 0 2026-03-09T15:00:35.480 INFO:tasks.workunit.client.1.vm09.stdout:0/83: stat 
f5 0 2026-03-09T15:00:35.481 INFO:tasks.workunit.client.1.vm09.stdout:1/42: creat d8/fe x:0 0 0 2026-03-09T15:00:35.488 INFO:tasks.workunit.client.1.vm09.stdout:7/72: dread - d3/db/dc/ff zero size 2026-03-09T15:00:35.493 INFO:tasks.workunit.client.1.vm09.stdout:1/43: dwrite d8/fe [0,4194304] 0 2026-03-09T15:00:35.493 INFO:tasks.workunit.client.1.vm09.stdout:6/62: creat d6/df/f16 x:0 0 0 2026-03-09T15:00:35.494 INFO:tasks.workunit.client.1.vm09.stdout:7/73: unlink d3/db/dc/ff 0 2026-03-09T15:00:35.502 INFO:tasks.workunit.client.1.vm09.stdout:4/46: getdents db 0 2026-03-09T15:00:35.503 INFO:tasks.workunit.client.1.vm09.stdout:6/63: rmdir d6 39 2026-03-09T15:00:35.503 INFO:tasks.workunit.client.1.vm09.stdout:0/84: mknod da/dc/c1a 0 2026-03-09T15:00:35.504 INFO:tasks.workunit.client.1.vm09.stdout:0/85: chown da/dc/d10/d19 125 1 2026-03-09T15:00:35.506 INFO:tasks.workunit.client.1.vm09.stdout:6/64: chown d6/l9 480 1 2026-03-09T15:00:35.506 INFO:tasks.workunit.client.1.vm09.stdout:6/65: read f0 [3928942,20249] 0 2026-03-09T15:00:35.507 INFO:tasks.workunit.client.1.vm09.stdout:7/74: getdents d3/db/d15 0 2026-03-09T15:00:35.511 INFO:tasks.workunit.client.1.vm09.stdout:6/66: creat d6/f17 x:0 0 0 2026-03-09T15:00:35.511 INFO:tasks.workunit.client.1.vm09.stdout:0/86: write f9 [584900,38034] 0 2026-03-09T15:00:35.512 INFO:tasks.workunit.client.1.vm09.stdout:0/87: readlink da/dc/d10/l13 0 2026-03-09T15:00:35.512 INFO:tasks.workunit.client.1.vm09.stdout:6/67: write d6/f17 [591192,74381] 0 2026-03-09T15:00:35.512 INFO:tasks.workunit.client.1.vm09.stdout:0/88: stat da/dc 0 2026-03-09T15:00:35.514 INFO:tasks.workunit.client.1.vm09.stdout:7/75: symlink d3/l17 0 2026-03-09T15:00:35.514 INFO:tasks.workunit.client.1.vm09.stdout:6/68: creat d6/db/d10/f18 x:0 0 0 2026-03-09T15:00:35.515 INFO:tasks.workunit.client.1.vm09.stdout:6/69: dread - d6/df/f16 zero size 2026-03-09T15:00:35.517 INFO:tasks.workunit.client.1.vm09.stdout:6/70: dread f0 [0,4194304] 0 2026-03-09T15:00:35.518 
INFO:tasks.workunit.client.1.vm09.stdout:0/89: mknod da/dc/d10/c1b 0 2026-03-09T15:00:35.519 INFO:tasks.workunit.client.1.vm09.stdout:6/71: creat d6/db/d10/f19 x:0 0 0 2026-03-09T15:00:35.523 INFO:tasks.workunit.client.1.vm09.stdout:6/72: link d6/l9 d6/l1a 0 2026-03-09T15:00:35.524 INFO:tasks.workunit.client.1.vm09.stdout:0/90: dwrite da/dc/f17 [0,4194304] 0 2026-03-09T15:00:35.525 INFO:tasks.workunit.client.1.vm09.stdout:6/73: write d6/db/f15 [732669,108881] 0 2026-03-09T15:00:35.542 INFO:tasks.workunit.client.1.vm09.stdout:8/51: dread f8 [0,4194304] 0 2026-03-09T15:00:35.544 INFO:tasks.workunit.client.1.vm09.stdout:0/91: dwrite f5 [0,4194304] 0 2026-03-09T15:00:35.544 INFO:tasks.workunit.client.1.vm09.stdout:6/74: sync 2026-03-09T15:00:35.544 INFO:tasks.workunit.client.1.vm09.stdout:7/76: sync 2026-03-09T15:00:35.545 INFO:tasks.workunit.client.1.vm09.stdout:7/77: readlink d3/l17 0 2026-03-09T15:00:35.557 INFO:tasks.workunit.client.1.vm09.stdout:8/52: creat fa x:0 0 0 2026-03-09T15:00:35.558 INFO:tasks.workunit.client.1.vm09.stdout:6/75: creat d6/db/d10/f1b x:0 0 0 2026-03-09T15:00:35.560 INFO:tasks.workunit.client.1.vm09.stdout:2/62: fsync f0 0 2026-03-09T15:00:35.564 INFO:tasks.workunit.client.1.vm09.stdout:2/63: write df/f13 [395561,94203] 0 2026-03-09T15:00:35.565 INFO:tasks.workunit.client.1.vm09.stdout:2/64: truncate fc 1355143 0 2026-03-09T15:00:35.565 INFO:tasks.workunit.client.1.vm09.stdout:2/65: chown df/l10 170 1 2026-03-09T15:00:35.567 INFO:tasks.workunit.client.1.vm09.stdout:8/53: sync 2026-03-09T15:00:35.568 INFO:tasks.workunit.client.1.vm09.stdout:6/76: creat d6/db/d10/f1c x:0 0 0 2026-03-09T15:00:35.571 INFO:tasks.workunit.client.1.vm09.stdout:3/73: truncate d3/f9 753922 0 2026-03-09T15:00:35.578 INFO:tasks.workunit.client.1.vm09.stdout:2/66: dwrite fc [0,4194304] 0 2026-03-09T15:00:35.578 INFO:tasks.workunit.client.1.vm09.stdout:7/78: truncate d3/f9 355900 0 2026-03-09T15:00:35.578 INFO:tasks.workunit.client.1.vm09.stdout:6/77: dread f0 
[0,4194304] 0 2026-03-09T15:00:35.578 INFO:tasks.workunit.client.1.vm09.stdout:1/44: rmdir d8 39 2026-03-09T15:00:35.579 INFO:tasks.workunit.client.1.vm09.stdout:6/78: write d6/f17 [583989,124100] 0 2026-03-09T15:00:35.579 INFO:tasks.workunit.client.1.vm09.stdout:9/60: write d1/d7/d9/f11 [3479335,67464] 0 2026-03-09T15:00:35.579 INFO:tasks.workunit.client.1.vm09.stdout:8/54: write f6 [475501,80092] 0 2026-03-09T15:00:35.582 INFO:tasks.workunit.client.1.vm09.stdout:6/79: sync 2026-03-09T15:00:35.585 INFO:tasks.workunit.client.1.vm09.stdout:6/80: write d6/db/d10/f1b [891532,106722] 0 2026-03-09T15:00:35.587 INFO:tasks.workunit.client.1.vm09.stdout:0/92: rename da/dc/d10/d19 to da/dc/d1c 0 2026-03-09T15:00:35.588 INFO:tasks.workunit.client.1.vm09.stdout:6/81: truncate d6/db/f15 1018752 0 2026-03-09T15:00:35.590 INFO:tasks.workunit.client.1.vm09.stdout:2/67: creat df/f17 x:0 0 0 2026-03-09T15:00:35.594 INFO:tasks.workunit.client.1.vm09.stdout:9/61: creat d1/d7/d9/f17 x:0 0 0 2026-03-09T15:00:35.594 INFO:tasks.workunit.client.1.vm09.stdout:9/62: write d1/d7/f13 [252218,25994] 0 2026-03-09T15:00:35.599 INFO:tasks.workunit.client.1.vm09.stdout:9/63: fsync d1/f4 0 2026-03-09T15:00:35.599 INFO:tasks.workunit.client.1.vm09.stdout:4/47: write f3 [885836,68152] 0 2026-03-09T15:00:35.599 INFO:tasks.workunit.client.1.vm09.stdout:8/55: dwrite f1 [0,4194304] 0 2026-03-09T15:00:35.599 INFO:tasks.workunit.client.1.vm09.stdout:0/93: mknod da/dc/d1c/c1d 0 2026-03-09T15:00:35.603 INFO:tasks.workunit.client.1.vm09.stdout:0/94: dread f8 [0,4194304] 0 2026-03-09T15:00:35.603 INFO:tasks.workunit.client.1.vm09.stdout:3/74: rename d3/f17 to d3/d4/f1b 0 2026-03-09T15:00:35.606 INFO:tasks.workunit.client.1.vm09.stdout:3/75: fdatasync d3/d4/f19 0 2026-03-09T15:00:35.606 INFO:tasks.workunit.client.1.vm09.stdout:0/95: chown da/dc/d10/l13 8 1 2026-03-09T15:00:35.606 INFO:tasks.workunit.client.1.vm09.stdout:7/79: symlink d3/l18 0 2026-03-09T15:00:35.618 
INFO:tasks.workunit.client.1.vm09.stdout:1/45: creat d8/ff x:0 0 0 2026-03-09T15:00:35.620 INFO:tasks.workunit.client.1.vm09.stdout:8/56: creat fb x:0 0 0 2026-03-09T15:00:35.623 INFO:tasks.workunit.client.1.vm09.stdout:4/48: mknod db/cf 0 2026-03-09T15:00:35.624 INFO:tasks.workunit.client.1.vm09.stdout:9/64: rename d1/d7/d9/f17 to d1/d7/f18 0 2026-03-09T15:00:35.626 INFO:tasks.workunit.client.1.vm09.stdout:7/80: mkdir d3/db/d19 0 2026-03-09T15:00:35.636 INFO:tasks.workunit.client.1.vm09.stdout:0/96: rename da/dc/f18 to da/dc/d10/f1e 0 2026-03-09T15:00:35.640 INFO:tasks.workunit.client.1.vm09.stdout:3/76: link d3/d4/f1b d3/d4/d18/f1c 0 2026-03-09T15:00:35.640 INFO:tasks.workunit.client.1.vm09.stdout:6/82: link d6/c8 d6/db/c1d 0 2026-03-09T15:00:35.641 INFO:tasks.workunit.client.1.vm09.stdout:0/97: fdatasync da/dc/d10/f16 0 2026-03-09T15:00:35.641 INFO:tasks.workunit.client.1.vm09.stdout:3/77: chown d3/le 389904 1 2026-03-09T15:00:35.645 INFO:tasks.workunit.client.1.vm09.stdout:3/78: readlink d3/d4/l5 0 2026-03-09T15:00:35.646 INFO:tasks.workunit.client.1.vm09.stdout:5/56: write d2/f14 [1249886,46957] 0 2026-03-09T15:00:35.646 INFO:tasks.workunit.client.1.vm09.stdout:7/81: dwrite d3/db/dc/f11 [0,4194304] 0 2026-03-09T15:00:35.647 INFO:tasks.workunit.client.1.vm09.stdout:4/49: dread db/fe [0,4194304] 0 2026-03-09T15:00:35.649 INFO:tasks.workunit.client.1.vm09.stdout:5/57: write d2/d4/fd [4694271,92914] 0 2026-03-09T15:00:35.652 INFO:tasks.workunit.client.1.vm09.stdout:8/57: dwrite fa [0,4194304] 0 2026-03-09T15:00:35.652 INFO:tasks.workunit.client.1.vm09.stdout:0/98: symlink da/l1f 0 2026-03-09T15:00:35.656 INFO:tasks.workunit.client.1.vm09.stdout:5/58: chown d2/lf 469 1 2026-03-09T15:00:35.656 INFO:tasks.workunit.client.1.vm09.stdout:5/59: chown d2/lf 8767 1 2026-03-09T15:00:35.662 INFO:tasks.workunit.client.1.vm09.stdout:8/58: fdatasync fb 0 2026-03-09T15:00:35.663 INFO:tasks.workunit.client.1.vm09.stdout:6/83: symlink d6/db/d10/l1e 0 2026-03-09T15:00:35.664 
INFO:tasks.workunit.client.1.vm09.stdout:6/84: read - d6/df/f16 zero size 2026-03-09T15:00:35.672 INFO:tasks.workunit.client.1.vm09.stdout:2/68: dwrite f3 [0,4194304] 0 2026-03-09T15:00:35.681 INFO:tasks.workunit.client.1.vm09.stdout:9/65: rmdir d1 39 2026-03-09T15:00:35.681 INFO:tasks.workunit.client.1.vm09.stdout:0/99: creat da/dc/d10/f20 x:0 0 0 2026-03-09T15:00:35.681 INFO:tasks.workunit.client.1.vm09.stdout:7/82: stat d3/fa 0 2026-03-09T15:00:35.681 INFO:tasks.workunit.client.1.vm09.stdout:6/85: truncate f0 2367025 0 2026-03-09T15:00:35.682 INFO:tasks.workunit.client.1.vm09.stdout:5/60: creat d2/d4/f16 x:0 0 0 2026-03-09T15:00:35.687 INFO:tasks.workunit.client.1.vm09.stdout:8/59: dwrite fb [0,4194304] 0 2026-03-09T15:00:35.694 INFO:tasks.workunit.client.1.vm09.stdout:0/100: dread da/fb [0,4194304] 0 2026-03-09T15:00:35.698 INFO:tasks.workunit.client.1.vm09.stdout:3/79: creat d3/d4/d18/f1d x:0 0 0 2026-03-09T15:00:35.698 INFO:tasks.workunit.client.1.vm09.stdout:3/80: readlink d3/d4/l7 0 2026-03-09T15:00:35.698 INFO:tasks.workunit.client.1.vm09.stdout:0/101: read da/dc/fe [4161625,51375] 0 2026-03-09T15:00:35.703 INFO:tasks.workunit.client.1.vm09.stdout:7/83: dwrite d3/f8 [4194304,4194304] 0 2026-03-09T15:00:35.703 INFO:tasks.workunit.client.1.vm09.stdout:6/86: creat d6/db/f1f x:0 0 0 2026-03-09T15:00:35.703 INFO:tasks.workunit.client.1.vm09.stdout:3/81: symlink d3/d4/d18/l1e 0 2026-03-09T15:00:35.704 INFO:tasks.workunit.client.1.vm09.stdout:0/102: mknod da/c21 0 2026-03-09T15:00:35.704 INFO:tasks.workunit.client.1.vm09.stdout:4/50: link db/ld db/l10 0 2026-03-09T15:00:35.706 INFO:tasks.workunit.client.1.vm09.stdout:4/51: write f4 [397735,25432] 0 2026-03-09T15:00:35.715 INFO:tasks.workunit.client.1.vm09.stdout:3/82: readlink l2 0 2026-03-09T15:00:35.717 INFO:tasks.workunit.client.1.vm09.stdout:7/84: creat d3/db/d15/f1a x:0 0 0 2026-03-09T15:00:35.718 INFO:tasks.workunit.client.1.vm09.stdout:2/69: dwrite fb [0,4194304] 0 2026-03-09T15:00:35.721 
INFO:tasks.workunit.client.1.vm09.stdout:6/87: chown d6/db/d10/f1b 380077396 1
2026-03-09T15:00:35.721 INFO:tasks.workunit.client.1.vm09.stdout:8/60: getdents . 0
2026-03-09T15:00:35.725 INFO:tasks.workunit.client.1.vm09.stdout:6/88: stat d6/db/f15 0
2026-03-09T15:00:35.728 INFO:tasks.workunit.client.1.vm09.stdout:3/83: rename l2 to d3/l1f 0
2026-03-09T15:00:35.728 INFO:tasks.workunit.client.1.vm09.stdout:4/52: mknod db/c11 0
2026-03-09T15:00:35.728 INFO:tasks.workunit.client.1.vm09.stdout:7/85: truncate d3/db/dc/f11 5033482 0
2026-03-09T15:00:35.729 INFO:tasks.workunit.client.1.vm09.stdout:0/103: getdents da/dc/d1c 0
2026-03-09T15:00:35.745 INFO:tasks.workunit.client.1.vm09.stdout:9/66: dwrite d1/d7/f13 [0,4194304] 0
2026-03-09T15:00:35.745 INFO:tasks.workunit.client.1.vm09.stdout:9/67: readlink d1/d7/lc 0
2026-03-09T15:00:35.750 INFO:tasks.workunit.client.1.vm09.stdout:2/70: rename df/l10 to df/l18 0
2026-03-09T15:00:35.757 INFO:tasks.workunit.client.1.vm09.stdout:6/89: sync
2026-03-09T15:00:35.757 INFO:tasks.workunit.client.1.vm09.stdout:9/68: sync
2026-03-09T15:00:35.759 INFO:tasks.workunit.client.1.vm09.stdout:7/86: fsync d3/fd 0
2026-03-09T15:00:35.775 INFO:tasks.workunit.client.1.vm09.stdout:8/61: dwrite fa [4194304,4194304] 0
2026-03-09T15:00:35.775 INFO:tasks.workunit.client.1.vm09.stdout:4/53: fsync f7 0
2026-03-09T15:00:35.775 INFO:tasks.workunit.client.1.vm09.stdout:8/62: rmdir - no directory
2026-03-09T15:00:35.775 INFO:tasks.workunit.client.1.vm09.stdout:6/90: mkdir d6/d20 0
2026-03-09T15:00:35.784 INFO:tasks.workunit.client.1.vm09.stdout:3/84: dwrite d3/d4/d18/f1d [0,4194304] 0
2026-03-09T15:00:35.785 INFO:tasks.workunit.client.1.vm09.stdout:4/54: read f7 [3438365,53384] 0
2026-03-09T15:00:35.785 INFO:tasks.workunit.client.1.vm09.stdout:2/71: rename df/l15 to df/l19 0
2026-03-09T15:00:35.785 INFO:tasks.workunit.client.1.vm09.stdout:9/69: rename d1/d7/d9/f11 to d1/d7/d9/f19 0
2026-03-09T15:00:35.786 INFO:tasks.workunit.client.1.vm09.stdout:7/87: symlink d3/db/dc/l1b 0
2026-03-09T15:00:35.787 INFO:tasks.workunit.client.1.vm09.stdout:0/104: mkdir da/dc/d22 0
2026-03-09T15:00:35.787 INFO:tasks.workunit.client.1.vm09.stdout:3/85: write d3/d4/f19 [126098,18638] 0
2026-03-09T15:00:35.790 INFO:tasks.workunit.client.1.vm09.stdout:5/61: fsync d2/d4/f16 0
2026-03-09T15:00:35.794 INFO:tasks.workunit.client.1.vm09.stdout:2/72: dread fb [0,4194304] 0
2026-03-09T15:00:35.798 INFO:tasks.workunit.client.1.vm09.stdout:6/91: unlink d6/db/d10/f18 0
2026-03-09T15:00:35.806 INFO:tasks.workunit.client.1.vm09.stdout:7/88: rmdir d3 39
2026-03-09T15:00:35.807 INFO:tasks.workunit.client.1.vm09.stdout:8/63: dwrite fb [4194304,4194304] 0
2026-03-09T15:00:35.809 INFO:tasks.workunit.client.1.vm09.stdout:1/46: dwrite d8/fd [0,4194304] 0
2026-03-09T15:00:35.809 INFO:tasks.workunit.client.1.vm09.stdout:3/86: mknod d3/d4/c20 0
2026-03-09T15:00:35.816 INFO:tasks.workunit.client.1.vm09.stdout:2/73: symlink df/l1a 0
2026-03-09T15:00:35.819 INFO:tasks.workunit.client.1.vm09.stdout:0/105: dwrite da/f12 [0,4194304] 0
2026-03-09T15:00:35.830 INFO:tasks.workunit.client.1.vm09.stdout:6/92: creat d6/db/f21 x:0 0 0
2026-03-09T15:00:35.830 INFO:tasks.workunit.client.1.vm09.stdout:6/93: write d6/db/f15 [1888052,91378] 0
2026-03-09T15:00:35.830 INFO:tasks.workunit.client.1.vm09.stdout:9/70: dwrite d1/f14 [0,4194304] 0
2026-03-09T15:00:35.830 INFO:tasks.workunit.client.1.vm09.stdout:0/106: dread da/f12 [0,4194304] 0
2026-03-09T15:00:35.835 INFO:tasks.workunit.client.1.vm09.stdout:9/71: dwrite d1/d7/d9/f12 [0,4194304] 0
2026-03-09T15:00:35.838 INFO:tasks.workunit.client.1.vm09.stdout:7/89: sync
2026-03-09T15:00:35.843 INFO:tasks.workunit.client.1.vm09.stdout:8/64: rename l9 to lc 0
2026-03-09T15:00:35.843 INFO:tasks.workunit.client.1.vm09.stdout:3/87: symlink d3/d4/d18/l21 0
2026-03-09T15:00:35.850 INFO:tasks.workunit.client.1.vm09.stdout:2/74: creat df/f1b x:0 0 0
2026-03-09T15:00:35.852 INFO:tasks.workunit.client.1.vm09.stdout:2/75: stat c9 0
2026-03-09T15:00:35.853 INFO:tasks.workunit.client.1.vm09.stdout:2/76: write df/f14 [650474,22819] 0
2026-03-09T15:00:35.853 INFO:tasks.workunit.client.1.vm09.stdout:6/94: symlink d6/l22 0
2026-03-09T15:00:35.857 INFO:tasks.workunit.client.1.vm09.stdout:7/90: write d3/f16 [657006,51878] 0
2026-03-09T15:00:35.864 INFO:tasks.workunit.client.1.vm09.stdout:3/88: unlink d3/d4/f19 0
2026-03-09T15:00:35.864 INFO:tasks.workunit.client.1.vm09.stdout:6/95: mkdir d6/df/d23 0
2026-03-09T15:00:35.865 INFO:tasks.workunit.client.1.vm09.stdout:8/65: sync
2026-03-09T15:00:35.867 INFO:tasks.workunit.client.1.vm09.stdout:8/66: stat f6 0
2026-03-09T15:00:35.867 INFO:tasks.workunit.client.1.vm09.stdout:9/72: rename d1/d7/d9/c10 to d1/d7/c1a 0
2026-03-09T15:00:35.868 INFO:tasks.workunit.client.1.vm09.stdout:9/73: readlink d1/d7/lc 0
2026-03-09T15:00:35.868 INFO:tasks.workunit.client.1.vm09.stdout:9/74: chown d1/d7/d9/f16 508 1
2026-03-09T15:00:35.869 INFO:tasks.workunit.client.1.vm09.stdout:7/91: read d3/f9 [3398,22894] 0
2026-03-09T15:00:35.870 INFO:tasks.workunit.client.1.vm09.stdout:5/62: link d2/d4/c9 d2/d4/c17 0
2026-03-09T15:00:35.871 INFO:tasks.workunit.client.1.vm09.stdout:4/55: getdents db 0
2026-03-09T15:00:35.872 INFO:tasks.workunit.client.1.vm09.stdout:4/56: truncate f4 847668 0
2026-03-09T15:00:35.872 INFO:tasks.workunit.client.1.vm09.stdout:3/89: mknod d3/d4/d18/c22 0
2026-03-09T15:00:35.875 INFO:tasks.workunit.client.1.vm09.stdout:8/67: mknod cd 0
2026-03-09T15:00:35.876 INFO:tasks.workunit.client.1.vm09.stdout:2/77: rename fc to df/f1c 0
2026-03-09T15:00:35.876 INFO:tasks.workunit.client.1.vm09.stdout:2/78: chown df 158 1
2026-03-09T15:00:35.878 INFO:tasks.workunit.client.1.vm09.stdout:5/63: dwrite d2/d4/fd [0,4194304] 0
2026-03-09T15:00:35.880 INFO:tasks.workunit.client.1.vm09.stdout:1/47: getdents d8 0
2026-03-09T15:00:35.880 INFO:tasks.workunit.client.1.vm09.stdout:4/57: stat db/l10 0
2026-03-09T15:00:35.880 INFO:tasks.workunit.client.1.vm09.stdout:4/58: read f3 [3699157,955] 0
2026-03-09T15:00:35.880 INFO:tasks.workunit.client.1.vm09.stdout:6/96: mkdir d6/d20/d24 0
2026-03-09T15:00:35.881 INFO:tasks.workunit.client.1.vm09.stdout:6/97: truncate d6/db/f21 763100 0
2026-03-09T15:00:35.882 INFO:tasks.workunit.client.1.vm09.stdout:6/98: truncate d6/df/f16 16521 0
2026-03-09T15:00:35.882 INFO:tasks.workunit.client.1.vm09.stdout:6/99: stat d6/db/f21 0
2026-03-09T15:00:35.883 INFO:tasks.workunit.client.1.vm09.stdout:6/100: truncate d6/df/f16 836506 0
2026-03-09T15:00:35.883 INFO:tasks.workunit.client.1.vm09.stdout:6/101: truncate d6/db/f1f 492776 0
2026-03-09T15:00:35.886 INFO:tasks.workunit.client.1.vm09.stdout:6/102: write d6/df/f16 [452185,103641] 0
2026-03-09T15:00:35.893 INFO:tasks.workunit.client.1.vm09.stdout:2/79: creat df/f1d x:0 0 0
2026-03-09T15:00:35.898 INFO:tasks.workunit.client.1.vm09.stdout:7/92: unlink d3/l7 0
2026-03-09T15:00:35.899 INFO:tasks.workunit.client.1.vm09.stdout:2/80: dread f5 [0,4194304] 0
2026-03-09T15:00:35.899 INFO:tasks.workunit.client.1.vm09.stdout:6/103: dread d6/db/d10/f1b [0,4194304] 0
2026-03-09T15:00:35.899 INFO:tasks.workunit.client.1.vm09.stdout:1/48: rmdir d8 39
2026-03-09T15:00:35.909 INFO:tasks.workunit.client.1.vm09.stdout:2/81: dwrite f3 [0,4194304] 0
2026-03-09T15:00:35.909 INFO:tasks.workunit.client.1.vm09.stdout:4/59: mkdir db/d12 0
2026-03-09T15:00:35.909 INFO:tasks.workunit.client.1.vm09.stdout:5/64: symlink d2/l18 0
2026-03-09T15:00:35.909 INFO:tasks.workunit.client.1.vm09.stdout:1/49: read f7 [2014428,35307] 0
2026-03-09T15:00:35.911 INFO:tasks.workunit.client.1.vm09.stdout:8/68: link f8 fe 0
2026-03-09T15:00:35.913 INFO:tasks.workunit.client.1.vm09.stdout:6/104: dwrite d6/f17 [0,4194304] 0
2026-03-09T15:00:35.913 INFO:tasks.workunit.client.1.vm09.stdout:9/75: link d1/d7/d9/ca d1/d7/d9/c1b 0
2026-03-09T15:00:35.913 INFO:tasks.workunit.client.1.vm09.stdout:7/93: symlink d3/db/d15/l1c 0
2026-03-09T15:00:35.918 INFO:tasks.workunit.client.1.vm09.stdout:4/60: readlink l5 0
2026-03-09T15:00:35.919 INFO:tasks.workunit.client.1.vm09.stdout:5/65: mknod d2/d4/c19 0
2026-03-09T15:00:35.924 INFO:tasks.workunit.client.1.vm09.stdout:8/69: mkdir df 0
2026-03-09T15:00:35.925 INFO:tasks.workunit.client.1.vm09.stdout:6/105: unlink d6/db/d10/l1e 0
2026-03-09T15:00:35.931 INFO:tasks.workunit.client.1.vm09.stdout:9/76: write d1/d7/d9/f19 [3332894,27460] 0
2026-03-09T15:00:35.938 INFO:tasks.workunit.client.1.vm09.stdout:1/50: dwrite d8/fe [4194304,4194304] 0
2026-03-09T15:00:35.939 INFO:tasks.workunit.client.1.vm09.stdout:5/66: symlink d2/d4/l1a 0
2026-03-09T15:00:35.942 INFO:tasks.workunit.client.1.vm09.stdout:6/106: write d6/db/d10/f1b [1142834,43120] 0
2026-03-09T15:00:35.949 INFO:tasks.workunit.client.1.vm09.stdout:6/107: dread d6/db/f1f [0,4194304] 0
2026-03-09T15:00:35.957 INFO:tasks.workunit.client.1.vm09.stdout:0/107: truncate f9 2450723 0
2026-03-09T15:00:35.957 INFO:tasks.workunit.client.1.vm09.stdout:5/67: dread d2/f15 [0,4194304] 0
2026-03-09T15:00:35.957 INFO:tasks.workunit.client.1.vm09.stdout:2/82: truncate f5 231120 0
2026-03-09T15:00:35.958 INFO:tasks.workunit.client.1.vm09.stdout:1/51: sync
2026-03-09T15:00:35.958 INFO:tasks.workunit.client.1.vm09.stdout:6/108: sync
2026-03-09T15:00:35.958 INFO:tasks.workunit.client.1.vm09.stdout:7/94: rename d3/db/dc to d3/d1d 0
2026-03-09T15:00:35.961 INFO:tasks.workunit.client.1.vm09.stdout:4/61: rename db to db/d12/d13 22
2026-03-09T15:00:35.962 INFO:tasks.workunit.client.1.vm09.stdout:8/70: rename df to df/d10 22
2026-03-09T15:00:35.962 INFO:tasks.workunit.client.1.vm09.stdout:3/90: rmdir d3 39
2026-03-09T15:00:35.964 INFO:tasks.workunit.client.1.vm09.stdout:6/109: creat d6/f25 x:0 0 0
2026-03-09T15:00:35.964 INFO:tasks.workunit.client.1.vm09.stdout:0/108: mknod da/dc/d22/c23 0
2026-03-09T15:00:35.965 INFO:tasks.workunit.client.1.vm09.stdout:8/71: sync
2026-03-09T15:00:35.966 INFO:tasks.workunit.client.1.vm09.stdout:8/72: stat c7 0
2026-03-09T15:00:35.969 INFO:tasks.workunit.client.1.vm09.stdout:7/95: mkdir d3/db/d15/d1e 0
2026-03-09T15:00:35.969 INFO:tasks.workunit.client.1.vm09.stdout:2/83: dread df/f14 [0,4194304] 0
2026-03-09T15:00:35.970 INFO:tasks.workunit.client.1.vm09.stdout:4/62: creat db/f14 x:0 0 0
2026-03-09T15:00:35.971 INFO:tasks.workunit.client.1.vm09.stdout:4/63: chown db/d12 2 1
2026-03-09T15:00:35.971 INFO:tasks.workunit.client.1.vm09.stdout:0/109: symlink da/dc/d1c/l24 0
2026-03-09T15:00:35.975 INFO:tasks.workunit.client.1.vm09.stdout:6/110: unlink d6/db/f21 0
2026-03-09T15:00:35.976 INFO:tasks.workunit.client.1.vm09.stdout:9/77: rename d1/d7/d9/ld to d1/d7/l1c 0
2026-03-09T15:00:35.976 INFO:tasks.workunit.client.1.vm09.stdout:5/68: rename d2/d4 to d2/d4/d1b 22
2026-03-09T15:00:35.976 INFO:tasks.workunit.client.1.vm09.stdout:3/91: symlink d3/l23 0
2026-03-09T15:00:35.977 INFO:tasks.workunit.client.1.vm09.stdout:9/78: fsync d1/f14 0
2026-03-09T15:00:35.977 INFO:tasks.workunit.client.1.vm09.stdout:2/84: readlink df/l18 0
2026-03-09T15:00:35.977 INFO:tasks.workunit.client.1.vm09.stdout:5/69: readlink d2/lb 0
2026-03-09T15:00:35.978 INFO:tasks.workunit.client.1.vm09.stdout:4/64: fdatasync f3 0
2026-03-09T15:00:35.981 INFO:tasks.workunit.client.1.vm09.stdout:0/110: creat da/dc/d22/f25 x:0 0 0
2026-03-09T15:00:35.981 INFO:tasks.workunit.client.1.vm09.stdout:6/111: mknod d6/d20/c26 0
2026-03-09T15:00:35.982 INFO:tasks.workunit.client.1.vm09.stdout:6/112: chown d6/db/ld 285312 1
2026-03-09T15:00:35.982 INFO:tasks.workunit.client.1.vm09.stdout:6/113: dread - d6/f25 zero size
2026-03-09T15:00:35.982 INFO:tasks.workunit.client.1.vm09.stdout:4/65: dread f4 [0,4194304] 0
2026-03-09T15:00:35.982 INFO:tasks.workunit.client.1.vm09.stdout:4/66: fdatasync db/fe 0
2026-03-09T15:00:35.986 INFO:tasks.workunit.client.1.vm09.stdout:8/73: rename c7 to df/c11 0
2026-03-09T15:00:35.988 INFO:tasks.workunit.client.1.vm09.stdout:8/74: fdatasync fa 0
2026-03-09T15:00:35.988 INFO:tasks.workunit.client.1.vm09.stdout:6/114: unlink d6/db/d10/l12 0
2026-03-09T15:00:35.988 INFO:tasks.workunit.client.1.vm09.stdout:4/67: unlink db/l10 0
2026-03-09T15:00:35.989 INFO:tasks.workunit.client.1.vm09.stdout:5/70: write d2/f15 [191722,10722] 0
2026-03-09T15:00:35.991 INFO:tasks.workunit.client.1.vm09.stdout:8/75: write fb [2419785,15461] 0
2026-03-09T15:00:35.998 INFO:tasks.workunit.client.1.vm09.stdout:8/76: chown df 31676 1
2026-03-09T15:00:36.005 INFO:tasks.workunit.client.1.vm09.stdout:4/68: dwrite f4 [0,4194304] 0
2026-03-09T15:00:36.009 INFO:tasks.workunit.client.1.vm09.stdout:9/79: dwrite d1/d7/f18 [0,4194304] 0
2026-03-09T15:00:36.009 INFO:tasks.workunit.client.1.vm09.stdout:6/115: rename d6/db/f15 to d6/d20/f27 0
2026-03-09T15:00:36.015 INFO:tasks.workunit.client.1.vm09.stdout:1/52: write d8/fe [8897841,10534] 0
2026-03-09T15:00:36.015 INFO:tasks.workunit.client.1.vm09.stdout:5/71: mknod d2/d4/c1c 0
2026-03-09T15:00:36.015 INFO:tasks.workunit.client.1.vm09.stdout:0/111: write f9 [787505,28927] 0
2026-03-09T15:00:36.020 INFO:tasks.workunit.client.1.vm09.stdout:1/53: mkdir d8/d10 0
2026-03-09T15:00:36.022 INFO:tasks.workunit.client.1.vm09.stdout:0/112: write da/dc/d22/f25 [101909,38975] 0
2026-03-09T15:00:36.023 INFO:tasks.workunit.client.1.vm09.stdout:7/96: dread d3/f16 [0,4194304] 0
2026-03-09T15:00:36.032 INFO:tasks.workunit.client.1.vm09.stdout:6/116: mknod d6/df/d23/c28 0
2026-03-09T15:00:36.033 INFO:tasks.workunit.client.1.vm09.stdout:5/72: write d2/d4/fd [5404187,94037] 0
2026-03-09T15:00:36.037 INFO:tasks.workunit.client.1.vm09.stdout:5/73: chown d2/la 18 1
2026-03-09T15:00:36.040 INFO:tasks.workunit.client.1.vm09.stdout:4/69: dwrite f7 [0,4194304] 0
2026-03-09T15:00:36.043 INFO:tasks.workunit.client.1.vm09.stdout:8/77: dwrite fa [0,4194304] 0
2026-03-09T15:00:36.068 INFO:tasks.workunit.client.1.vm09.stdout:2/85: dwrite fb [0,4194304] 0
2026-03-09T15:00:36.069 INFO:tasks.workunit.client.1.vm09.stdout:2/86: stat df/f1b 0
2026-03-09T15:00:36.069 INFO:tasks.workunit.client.1.vm09.stdout:2/87: fdatasync df/f16 0
2026-03-09T15:00:36.070 INFO:tasks.workunit.client.1.vm09.stdout:2/88: truncate df/f1b 722368 0
2026-03-09T15:00:36.071 INFO:tasks.workunit.client.1.vm09.stdout:2/89: write df/f16 [1022924,125209] 0
2026-03-09T15:00:36.077 INFO:tasks.workunit.client.1.vm09.stdout:3/92: dwrite d3/f9 [0,4194304] 0
2026-03-09T15:00:36.087 INFO:tasks.workunit.client.1.vm09.stdout:8/78: sync
2026-03-09T15:00:36.087 INFO:tasks.workunit.client.1.vm09.stdout:3/93: chown d3/l23 67004979 1
2026-03-09T15:00:36.091 INFO:tasks.workunit.client.1.vm09.stdout:2/90: dwrite df/f1d [0,4194304] 0
2026-03-09T15:00:36.091 INFO:tasks.workunit.client.1.vm09.stdout:3/94: chown d3/le 119 1
2026-03-09T15:00:36.104 INFO:tasks.workunit.client.1.vm09.stdout:2/91: dwrite df/f1b [0,4194304] 0
2026-03-09T15:00:36.105 INFO:tasks.workunit.client.1.vm09.stdout:2/92: read df/f1c [3485410,37516] 0
2026-03-09T15:00:36.117 INFO:tasks.workunit.client.1.vm09.stdout:8/79: fdatasync fa 0
2026-03-09T15:00:36.149 INFO:tasks.workunit.client.1.vm09.stdout:6/117: creat d6/df/d23/f29 x:0 0 0
2026-03-09T15:00:36.149 INFO:tasks.workunit.client.1.vm09.stdout:0/113: mknod da/dc/d10/c26 0
2026-03-09T15:00:36.149 INFO:tasks.workunit.client.1.vm09.stdout:5/74: creat d2/d4/f1d x:0 0 0
2026-03-09T15:00:36.150 INFO:tasks.workunit.client.1.vm09.stdout:9/80: creat d1/d7/d9/f1d x:0 0 0
2026-03-09T15:00:36.151 INFO:tasks.workunit.client.1.vm09.stdout:7/97: rename d3/db/d15/f1a to d3/d1d/f1f 0
2026-03-09T15:00:36.151 INFO:tasks.workunit.client.1.vm09.stdout:7/98: readlink d3/l17 0
2026-03-09T15:00:36.155 INFO:tasks.workunit.client.1.vm09.stdout:2/93: rmdir df 39
2026-03-09T15:00:36.156 INFO:tasks.workunit.client.1.vm09.stdout:8/80: creat df/f12 x:0 0 0
2026-03-09T15:00:36.160 INFO:tasks.workunit.client.1.vm09.stdout:4/70: symlink db/d12/l15 0
2026-03-09T15:00:36.160 INFO:tasks.workunit.client.1.vm09.stdout:9/81: mkdir d1/d7/d1e 0
2026-03-09T15:00:36.160 INFO:tasks.workunit.client.1.vm09.stdout:5/75: mknod d2/c1e 0
2026-03-09T15:00:36.161 INFO:tasks.workunit.client.1.vm09.stdout:9/82: chown d1/d7/d1e 8204 1
2026-03-09T15:00:36.161 INFO:tasks.workunit.client.1.vm09.stdout:8/81: sync
2026-03-09T15:00:36.163 INFO:tasks.workunit.client.1.vm09.stdout:4/71: mkdir db/d12/d16 0
2026-03-09T15:00:36.167 INFO:tasks.workunit.client.1.vm09.stdout:4/72: fdatasync db/f14 0
2026-03-09T15:00:36.167 INFO:tasks.workunit.client.1.vm09.stdout:4/73: fdatasync f7 0
2026-03-09T15:00:36.167 INFO:tasks.workunit.client.1.vm09.stdout:2/94: dread f3 [0,4194304] 0
2026-03-09T15:00:36.168 INFO:tasks.workunit.client.1.vm09.stdout:5/76: creat d2/d4/f1f x:0 0 0
2026-03-09T15:00:36.168 INFO:tasks.workunit.client.1.vm09.stdout:3/95: getdents d3 0
2026-03-09T15:00:36.168 INFO:tasks.workunit.client.1.vm09.stdout:7/99: write d3/f16 [818716,105990] 0
2026-03-09T15:00:36.183 INFO:tasks.workunit.client.1.vm09.stdout:5/77: dwrite d2/d4/f16 [0,4194304] 0
2026-03-09T15:00:36.185 INFO:tasks.workunit.client.1.vm09.stdout:9/83: dwrite d1/d7/f18 [0,4194304] 0
2026-03-09T15:00:36.187 INFO:tasks.workunit.client.1.vm09.stdout:5/78: dread d2/f15 [0,4194304] 0
2026-03-09T15:00:36.198 INFO:tasks.workunit.client.1.vm09.stdout:9/84: write d1/d7/d9/f19 [2172104,58087] 0
2026-03-09T15:00:36.200 INFO:tasks.workunit.client.1.vm09.stdout:3/96: symlink d3/d4/d18/l24 0
2026-03-09T15:00:36.208 INFO:tasks.workunit.client.1.vm09.stdout:3/97: dwrite d3/ff [0,4194304] 0
2026-03-09T15:00:36.226 INFO:tasks.workunit.client.1.vm09.stdout:4/74: symlink db/d12/d16/l17 0
2026-03-09T15:00:36.226 INFO:tasks.workunit.client.1.vm09.stdout:5/79: mknod d2/c20 0
2026-03-09T15:00:36.226 INFO:tasks.workunit.client.1.vm09.stdout:9/85: readlink d1/d7/l1c 0
2026-03-09T15:00:36.226 INFO:tasks.workunit.client.1.vm09.stdout:9/86: chown d1/d7/ce 53347 1
2026-03-09T15:00:36.226 INFO:tasks.workunit.client.1.vm09.stdout:4/75: symlink db/d12/d16/l18 0
2026-03-09T15:00:36.231 INFO:tasks.workunit.client.1.vm09.stdout:9/87: fsync d1/d7/d9/f16 0
2026-03-09T15:00:36.232 INFO:tasks.workunit.client.1.vm09.stdout:4/76: unlink db/d12/d16/l17 0
2026-03-09T15:00:36.232 INFO:tasks.workunit.client.1.vm09.stdout:9/88: write d1/d7/f18 [2747896,29742] 0
2026-03-09T15:00:36.233 INFO:tasks.workunit.client.1.vm09.stdout:2/95: link df/l12 df/l1e 0
2026-03-09T15:00:36.233 INFO:tasks.workunit.client.1.vm09.stdout:4/77: chown db/f14 3 1
2026-03-09T15:00:36.236 INFO:tasks.workunit.client.1.vm09.stdout:2/96: chown f0 271335 1
2026-03-09T15:00:36.240 INFO:tasks.workunit.client.1.vm09.stdout:9/89: creat d1/f1f x:0 0 0
2026-03-09T15:00:36.241 INFO:tasks.workunit.client.1.vm09.stdout:4/78: truncate f3 2104195 0
2026-03-09T15:00:36.242 INFO:tasks.workunit.client.1.vm09.stdout:9/90: unlink d1/d7/f18 0
2026-03-09T15:00:36.242 INFO:tasks.workunit.client.1.vm09.stdout:9/91: fsync d1/f4 0
2026-03-09T15:00:36.242 INFO:tasks.workunit.client.1.vm09.stdout:4/79: mkdir db/d19 0
2026-03-09T15:00:36.243 INFO:tasks.workunit.client.1.vm09.stdout:4/80: chown db/d19 556 1
2026-03-09T15:00:36.248 INFO:tasks.workunit.client.1.vm09.stdout:9/92: dwrite d1/f1f [0,4194304] 0
2026-03-09T15:00:36.259 INFO:tasks.workunit.client.1.vm09.stdout:9/93: creat d1/d7/d1e/f20 x:0 0 0
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:9/94: chown d1/d7/d9/f12 232138065 1
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:9/95: unlink d1/f14 0
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:9/96: symlink d1/d7/l21 0
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:9/97: write d1/d7/f13 [1543493,70235] 0
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:0/114: getdents da/dc/d10 0
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:0/115: write da/dc/d10/f20 [248968,56543] 0
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:5/80: fsync d2/d4/f1d 0
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:7/100: rmdir d3/db/d15 39
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:5/81: mkdir d2/d4/d21 0
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:0/116: truncate da/fb 2366323 0
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:5/82: creat d2/f22 x:0 0 0
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:7/101: dwrite d3/fa [0,4194304] 0
2026-03-09T15:00:36.279 INFO:tasks.workunit.client.1.vm09.stdout:0/117: unlink da/dc/d10/f20 0
2026-03-09T15:00:36.363 INFO:tasks.workunit.client.1.vm09.stdout:8/82: truncate fa 2850079 0
2026-03-09T15:00:36.365 INFO:tasks.workunit.client.1.vm09.stdout:1/54: truncate f5 3985027 0
2026-03-09T15:00:36.365 INFO:tasks.workunit.client.1.vm09.stdout:8/83: creat df/f13 x:0 0 0
2026-03-09T15:00:36.372 INFO:tasks.workunit.client.1.vm09.stdout:8/84: dwrite df/f13 [0,4194304] 0
2026-03-09T15:00:36.382 INFO:tasks.workunit.client.1.vm09.stdout:8/85: dwrite fb [8388608,4194304] 0
2026-03-09T15:00:36.396 INFO:tasks.workunit.client.1.vm09.stdout:8/86: rename df/f13 to df/f14 0
2026-03-09T15:00:36.397 INFO:tasks.workunit.client.1.vm09.stdout:8/87: symlink df/l15 0
2026-03-09T15:00:36.398 INFO:tasks.workunit.client.1.vm09.stdout:8/88: fdatasync fa 0
2026-03-09T15:00:36.400 INFO:tasks.workunit.client.1.vm09.stdout:8/89: symlink df/l16 0
2026-03-09T15:00:36.424 INFO:tasks.workunit.client.1.vm09.stdout:8/90: unlink fb 0
2026-03-09T15:00:36.424 INFO:tasks.workunit.client.1.vm09.stdout:8/91: dread f8 [0,4194304] 0
2026-03-09T15:00:36.518 INFO:tasks.workunit.client.1.vm09.stdout:6/118: dwrite f0 [0,4194304] 0
2026-03-09T15:00:36.521 INFO:tasks.workunit.client.1.vm09.stdout:6/119: chown d6/df/d23/c28 70678676 1
2026-03-09T15:00:36.521 INFO:tasks.workunit.client.1.vm09.stdout:6/120: write d6/f25 [960390,118898] 0
2026-03-09T15:00:36.527 INFO:tasks.workunit.client.1.vm09.stdout:6/121: dread d6/db/f1f [0,4194304] 0
2026-03-09T15:00:36.529 INFO:tasks.workunit.client.1.vm09.stdout:6/122: mkdir d6/d20/d2a 0
2026-03-09T15:00:36.530 INFO:tasks.workunit.client.1.vm09.stdout:6/123: truncate d6/d20/f27 2867571 0
2026-03-09T15:00:36.535 INFO:tasks.workunit.client.1.vm09.stdout:6/124: unlink d6/c8 0
2026-03-09T15:00:36.537 INFO:tasks.workunit.client.1.vm09.stdout:6/125: chown d6/l9 1 1
2026-03-09T15:00:36.571 INFO:tasks.workunit.client.1.vm09.stdout:6/126: dread d6/f25 [0,4194304] 0
2026-03-09T15:00:36.575 INFO:tasks.workunit.client.1.vm09.stdout:6/127: rename d6/db/d10/f1b to d6/db/d10/f2b 0
2026-03-09T15:00:36.576 INFO:tasks.workunit.client.1.vm09.stdout:6/128: dread - d6/db/d10/f19 zero size
2026-03-09T15:00:36.587 INFO:tasks.workunit.client.1.vm09.stdout:6/129: getdents d6/db 0
2026-03-09T15:00:36.587 INFO:tasks.workunit.client.1.vm09.stdout:6/130: stat d6/d20/d24 0
2026-03-09T15:00:36.591 INFO:tasks.workunit.client.1.vm09.stdout:6/131: creat d6/db/d10/f2c x:0 0 0
2026-03-09T15:00:36.597 INFO:tasks.workunit.client.1.vm09.stdout:6/132: symlink d6/d20/d2a/l2d 0
2026-03-09T15:00:36.599 INFO:tasks.workunit.client.1.vm09.stdout:6/133: dread d6/f25 [0,4194304] 0
2026-03-09T15:00:36.616 INFO:tasks.workunit.client.1.vm09.stdout:6/134: sync
2026-03-09T15:00:36.617 INFO:tasks.workunit.client.1.vm09.stdout:6/135: write d6/df/f16 [1271072,115617] 0
2026-03-09T15:00:36.628 INFO:tasks.workunit.client.1.vm09.stdout:3/98: truncate d3/f9 3662173 0
2026-03-09T15:00:36.634 INFO:tasks.workunit.client.1.vm09.stdout:4/81: rmdir db/d12/d16 39
2026-03-09T15:00:36.637 INFO:tasks.workunit.client.1.vm09.stdout:3/99: link d3/d4/ld d3/d4/l25 0
2026-03-09T15:00:36.638 INFO:tasks.workunit.client.1.vm09.stdout:3/100: write d3/f6 [844911,113117] 0
2026-03-09T15:00:36.638 INFO:tasks.workunit.client.1.vm09.stdout:2/97: dwrite f0 [0,4194304] 0
2026-03-09T15:00:36.640 INFO:tasks.workunit.client.1.vm09.stdout:3/101: write d3/d4/f1a [638512,34826] 0
2026-03-09T15:00:36.643 INFO:tasks.workunit.client.1.vm09.stdout:2/98: mkdir df/d1f 0
2026-03-09T15:00:36.644 INFO:tasks.workunit.client.1.vm09.stdout:2/99: chown df/d1f 0 1
2026-03-09T15:00:36.646 INFO:tasks.workunit.client.1.vm09.stdout:3/102: link d3/d4/f1a d3/d4/f26 0
2026-03-09T15:00:36.647 INFO:tasks.workunit.client.1.vm09.stdout:2/100: mkdir df/d20 0
2026-03-09T15:00:36.648 INFO:tasks.workunit.client.1.vm09.stdout:2/101: chown c9 62 1
2026-03-09T15:00:36.650 INFO:tasks.workunit.client.1.vm09.stdout:2/102: dread df/f1b [0,4194304] 0
2026-03-09T15:00:36.656 INFO:tasks.workunit.client.1.vm09.stdout:2/103: dread f4 [8388608,4194304] 0
2026-03-09T15:00:36.661 INFO:tasks.workunit.client.1.vm09.stdout:3/103: dread d3/d4/f1a [0,4194304] 0
2026-03-09T15:00:36.663 INFO:tasks.workunit.client.1.vm09.stdout:3/104: chown d3/d4/f8 0 1
2026-03-09T15:00:36.663 INFO:tasks.workunit.client.1.vm09.stdout:2/104: dwrite df/f17 [0,4194304] 0
2026-03-09T15:00:36.665 INFO:tasks.workunit.client.1.vm09.stdout:3/105: truncate d3/ff 5051155 0
2026-03-09T15:00:36.665 INFO:tasks.workunit.client.1.vm09.stdout:2/105: chown c8 0 1
2026-03-09T15:00:36.667 INFO:tasks.workunit.client.1.vm09.stdout:3/106: write d3/d4/f26 [1318281,121080] 0
2026-03-09T15:00:36.671 INFO:tasks.workunit.client.1.vm09.stdout:2/106: mknod df/d1f/c21 0
2026-03-09T15:00:36.672 INFO:tasks.workunit.client.1.vm09.stdout:3/107: link d3/c13 d3/c27 0
2026-03-09T15:00:36.673 INFO:tasks.workunit.client.1.vm09.stdout:3/108: mknod d3/d4/d18/c28 0
2026-03-09T15:00:36.675 INFO:tasks.workunit.client.1.vm09.stdout:2/107: dread df/f16 [0,4194304] 0
2026-03-09T15:00:36.685 INFO:tasks.workunit.client.1.vm09.stdout:3/109: sync
2026-03-09T15:00:36.686 INFO:tasks.workunit.client.1.vm09.stdout:3/110: chown d3/d4/c11 9407186 1
2026-03-09T15:00:36.689 INFO:tasks.workunit.client.1.vm09.stdout:2/108: fsync f0 0
2026-03-09T15:00:36.695 INFO:tasks.workunit.client.1.vm09.stdout:2/109: dwrite fb [0,4194304] 0
2026-03-09T15:00:36.698 INFO:tasks.workunit.client.1.vm09.stdout:2/110: chown df/f1c 12508831 1
2026-03-09T15:00:36.707 INFO:tasks.workunit.client.1.vm09.stdout:2/111: rmdir df 39
2026-03-09T15:00:36.710 INFO:tasks.workunit.client.1.vm09.stdout:2/112: mknod df/d1f/c22 0
2026-03-09T15:00:36.712 INFO:tasks.workunit.client.1.vm09.stdout:2/113: dread f4 [8388608,4194304] 0
2026-03-09T15:00:36.713 INFO:tasks.workunit.client.1.vm09.stdout:2/114: creat df/f23 x:0 0 0
2026-03-09T15:00:36.715 INFO:tasks.workunit.client.1.vm09.stdout:2/115: creat df/d20/f24 x:0 0 0
2026-03-09T15:00:36.716 INFO:tasks.workunit.client.1.vm09.stdout:2/116: mknod df/d1f/c25 0
2026-03-09T15:00:36.717 INFO:tasks.workunit.client.1.vm09.stdout:2/117: write df/f1d [1071885,19842] 0
2026-03-09T15:00:36.735 INFO:tasks.workunit.client.1.vm09.stdout:9/98: truncate d1/f1f 1916435 0
2026-03-09T15:00:36.736 INFO:tasks.workunit.client.1.vm09.stdout:9/99: creat d1/d7/d1e/f22 x:0 0 0
2026-03-09T15:00:36.741 INFO:tasks.workunit.client.1.vm09.stdout:9/100: sync
2026-03-09T15:00:36.742 INFO:tasks.workunit.client.1.vm09.stdout:9/101: creat d1/d7/d9/f23 x:0 0 0
2026-03-09T15:00:36.746 INFO:tasks.workunit.client.1.vm09.stdout:9/102: creat d1/f24 x:0 0 0
2026-03-09T15:00:36.748 INFO:tasks.workunit.client.1.vm09.stdout:9/103: mkdir d1/d7/d25 0
2026-03-09T15:00:36.750 INFO:tasks.workunit.client.1.vm09.stdout:9/104: write d1/d7/d9/f1d [896699,51861] 0
2026-03-09T15:00:36.755 INFO:tasks.workunit.client.1.vm09.stdout:9/105: symlink d1/l26 0
2026-03-09T15:00:36.760 INFO:tasks.workunit.client.1.vm09.stdout:7/102: dwrite f1 [0,4194304] 0
2026-03-09T15:00:36.767 INFO:tasks.workunit.client.1.vm09.stdout:0/118: write f8 [3763593,44405] 0
2026-03-09T15:00:36.767 INFO:tasks.workunit.client.1.vm09.stdout:7/103: write d3/d1d/f1f [668134,64407] 0
2026-03-09T15:00:36.770 INFO:tasks.workunit.client.1.vm09.stdout:0/119: rename da/dc/d1c/c1d to da/dc/d10/c27 0
2026-03-09T15:00:36.771 INFO:tasks.workunit.client.1.vm09.stdout:7/104: dread d3/d1d/f11 [0,4194304] 0
2026-03-09T15:00:36.772 INFO:tasks.workunit.client.1.vm09.stdout:7/105: write f1 [1226823,50205] 0
2026-03-09T15:00:36.772 INFO:tasks.workunit.client.1.vm09.stdout:0/120: unlink f9 0
2026-03-09T15:00:36.773 INFO:tasks.workunit.client.1.vm09.stdout:0/121: readlink da/dc/d1c/l24 0
2026-03-09T15:00:36.776 INFO:tasks.workunit.client.1.vm09.stdout:7/106: rmdir d3/db/d19 0
2026-03-09T15:00:36.780 INFO:tasks.workunit.client.1.vm09.stdout:5/83: truncate d2/d4/fd 3265060 0
2026-03-09T15:00:36.784 INFO:tasks.workunit.client.1.vm09.stdout:0/122: dwrite da/dc/d10/f11 [0,4194304] 0
2026-03-09T15:00:36.791 INFO:tasks.workunit.client.1.vm09.stdout:5/84: dwrite d2/f14 [0,4194304] 0
2026-03-09T15:00:36.796 INFO:tasks.workunit.client.1.vm09.stdout:0/123: creat da/dc/f28 x:0 0 0
2026-03-09T15:00:36.803 INFO:tasks.workunit.client.1.vm09.stdout:0/124: creat da/dc/d10/f29 x:0 0 0
2026-03-09T15:00:36.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:36 vm05.local ceph-mon[50611]: pgmap v144: 65 pgs: 65 active+clean; 210 MiB data, 1.0 GiB used, 119 GiB / 120 GiB avail; 2.7 MiB/s rd, 9.6 MiB/s wr, 284 op/s
2026-03-09T15:00:36.807 INFO:tasks.workunit.client.1.vm09.stdout:8/92: truncate fa 3352228 0
2026-03-09T15:00:36.807 INFO:tasks.workunit.client.1.vm09.stdout:0/125: symlink da/dc/l2a 0
2026-03-09T15:00:36.809 INFO:tasks.workunit.client.1.vm09.stdout:0/126: fsync da/dc/d10/f1e 0
2026-03-09T15:00:36.816 INFO:tasks.workunit.client.1.vm09.stdout:0/127: dwrite da/dc/f28 [0,4194304] 0
2026-03-09T15:00:36.820 INFO:tasks.workunit.client.1.vm09.stdout:0/128: dread da/dc/fe [0,4194304] 0
2026-03-09T15:00:36.821 INFO:tasks.workunit.client.1.vm09.stdout:0/129: truncate da/dc/d10/f29 54483 0
2026-03-09T15:00:36.841 INFO:tasks.workunit.client.1.vm09.stdout:0/130: sync
2026-03-09T15:00:36.841 INFO:tasks.workunit.client.1.vm09.stdout:0/131: chown da 2221244 1
2026-03-09T15:00:36.841 INFO:tasks.workunit.client.1.vm09.stdout:0/132: chown da/dc/d10 400 1
2026-03-09T15:00:36.847 INFO:tasks.workunit.client.1.vm09.stdout:0/133: link da/dc/d1c/l24 da/dc/l2b 0
2026-03-09T15:00:36.847 INFO:tasks.workunit.client.1.vm09.stdout:0/134: read f7 [753022,19522] 0
2026-03-09T15:00:36.849 INFO:tasks.workunit.client.1.vm09.stdout:0/135: write da/dc/d10/f29 [220273,18400] 0
2026-03-09T15:00:36.854 INFO:tasks.workunit.client.1.vm09.stdout:0/136: symlink da/l2c 0
2026-03-09T15:00:36.855 INFO:tasks.workunit.client.1.vm09.stdout:0/137: creat da/dc/d10/f2d x:0 0 0
2026-03-09T15:00:36.857 INFO:tasks.workunit.client.1.vm09.stdout:0/138: symlink da/l2e 0
2026-03-09T15:00:36.859 INFO:tasks.workunit.client.1.vm09.stdout:0/139: mknod da/dc/d1c/c2f 0
2026-03-09T15:00:36.860 INFO:tasks.workunit.client.1.vm09.stdout:0/140: mkdir da/d30 0
2026-03-09T15:00:36.864 INFO:tasks.workunit.client.1.vm09.stdout:0/141: dwrite da/dc/d10/f29 [0,4194304] 0
2026-03-09T15:00:36.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:36 vm09.local ceph-mon[59673]: pgmap v144: 65 pgs: 65 active+clean; 210 MiB data, 1.0 GiB used, 119 GiB / 120 GiB avail; 2.7 MiB/s rd, 9.6 MiB/s wr, 284 op/s
2026-03-09T15:00:36.883 INFO:tasks.workunit.client.1.vm09.stdout:8/93: dwrite f8 [0,4194304] 0
2026-03-09T15:00:37.134 INFO:tasks.workunit.client.1.vm09.stdout:6/136: dwrite d6/db/f1f [0,4194304] 0
2026-03-09T15:00:37.164 INFO:tasks.workunit.client.1.vm09.stdout:3/111: truncate d3/d4/d18/f1d 1797821 0
2026-03-09T15:00:37.192 INFO:tasks.workunit.client.1.vm09.stdout:2/118: getdents df 0
2026-03-09T15:00:37.192 INFO:tasks.workunit.client.1.vm09.stdout:4/82: dwrite db/fe [0,4194304] 0
2026-03-09T15:00:37.192 INFO:tasks.workunit.client.1.vm09.stdout:4/83: chown f7 65370 1
2026-03-09T15:00:37.192 INFO:tasks.workunit.client.1.vm09.stdout:4/84: write f7 [4727347,54442] 0
2026-03-09T15:00:37.192 INFO:tasks.workunit.client.1.vm09.stdout:9/106: getdents d1 0
2026-03-09T15:00:37.192 INFO:tasks.workunit.client.1.vm09.stdout:9/107: fdatasync d1/d7/d9/f1d 0
2026-03-09T15:00:37.192 INFO:tasks.workunit.client.1.vm09.stdout:9/108: chown d1/d7/d9 2170 1
2026-03-09T15:00:37.192 INFO:tasks.workunit.client.1.vm09.stdout:4/85: dwrite f3 [0,4194304] 0
2026-03-09T15:00:37.192 INFO:tasks.workunit.client.1.vm09.stdout:4/86: dwrite db/f14 [0,4194304] 0
2026-03-09T15:00:37.195 INFO:tasks.workunit.client.1.vm09.stdout:7/107: getdents d3/db 0
2026-03-09T15:00:37.198 INFO:tasks.workunit.client.1.vm09.stdout:7/108: chown d3/fd 3960800 1
2026-03-09T15:00:37.202 INFO:tasks.workunit.client.1.vm09.stdout:5/85: write d2/d4/fd [3378880,63596] 0
2026-03-09T15:00:37.223 INFO:tasks.workunit.client.1.vm09.stdout:8/94: dread fa [0,4194304] 0
2026-03-09T15:00:37.223 INFO:tasks.workunit.client.1.vm09.stdout:1/55: dwrite d8/fc [0,4194304] 0
2026-03-09T15:00:37.223 INFO:tasks.workunit.client.1.vm09.stdout:1/56: write f5 [1997576,89214] 0
2026-03-09T15:00:37.223 INFO:tasks.workunit.client.1.vm09.stdout:0/142: write da/dc/fe [3064335,54468] 0
2026-03-09T15:00:37.232 INFO:tasks.workunit.client.1.vm09.stdout:3/112: rmdir d3 39
2026-03-09T15:00:37.232 INFO:tasks.workunit.client.1.vm09.stdout:2/119: symlink df/l26 0
2026-03-09T15:00:37.235 INFO:tasks.workunit.client.1.vm09.stdout:4/87: rmdir db/d12 39
2026-03-09T15:00:37.235 INFO:tasks.workunit.client.1.vm09.stdout:4/88: stat db/d19 0
2026-03-09T15:00:37.236 INFO:tasks.workunit.client.1.vm09.stdout:7/109: mknod d3/db/c20 0
2026-03-09T15:00:37.236 INFO:tasks.workunit.client.1.vm09.stdout:7/110: stat f1 0
2026-03-09T15:00:37.238 INFO:tasks.workunit.client.1.vm09.stdout:5/86: creat d2/d4/f23 x:0 0 0
2026-03-09T15:00:37.238 INFO:tasks.workunit.client.1.vm09.stdout:8/95: rmdir df 39
2026-03-09T15:00:37.238 INFO:tasks.workunit.client.1.vm09.stdout:8/96: read fa [1430024,59479] 0
2026-03-09T15:00:37.240 INFO:tasks.workunit.client.1.vm09.stdout:2/120: dwrite df/f17 [0,4194304] 0
2026-03-09T15:00:37.242 INFO:tasks.workunit.client.1.vm09.stdout:4/89: dread db/fe [0,4194304] 0
2026-03-09T15:00:37.244 INFO:tasks.workunit.client.1.vm09.stdout:0/143: rmdir da 39
2026-03-09T15:00:37.244 INFO:tasks.workunit.client.1.vm09.stdout:7/111: rename d3/d1d/l1b to d3/d1d/l21 0
2026-03-09T15:00:37.248 INFO:tasks.workunit.client.1.vm09.stdout:7/112: chown l2 31 1
2026-03-09T15:00:37.249 INFO:tasks.workunit.client.1.vm09.stdout:2/121: mknod df/d20/c27 0
2026-03-09T15:00:37.249 INFO:tasks.workunit.client.1.vm09.stdout:1/57: symlink d8/d10/l11 0
2026-03-09T15:00:37.250 INFO:tasks.workunit.client.1.vm09.stdout:2/122: stat df/l26 0
2026-03-09T15:00:37.251 INFO:tasks.workunit.client.1.vm09.stdout:4/90: dwrite f7 [0,4194304] 0
2026-03-09T15:00:37.256 INFO:tasks.workunit.client.1.vm09.stdout:5/87: dwrite d2/d4/fd [0,4194304] 0
2026-03-09T15:00:37.256 INFO:tasks.workunit.client.1.vm09.stdout:7/113: truncate d3/fa 5169939 0
2026-03-09T15:00:37.256 INFO:tasks.workunit.client.1.vm09.stdout:9/109: rmdir d1/d7/d25 0
2026-03-09T15:00:37.256 INFO:tasks.workunit.client.1.vm09.stdout:3/113: creat d3/f29 x:0 0 0
2026-03-09T15:00:37.256 INFO:tasks.workunit.client.1.vm09.stdout:8/97: symlink df/l17 0
2026-03-09T15:00:37.256 INFO:tasks.workunit.client.1.vm09.stdout:1/58: creat d8/d10/f12 x:0 0 0
2026-03-09T15:00:37.256 INFO:tasks.workunit.client.1.vm09.stdout:0/144: stat da/dc/fe 0
2026-03-09T15:00:37.257 INFO:tasks.workunit.client.1.vm09.stdout:2/123: write f0 [3240547,106266] 0
2026-03-09T15:00:37.273 INFO:tasks.workunit.client.1.vm09.stdout:9/110: creat d1/d7/d9/f27 x:0 0 0
2026-03-09T15:00:37.274 INFO:tasks.workunit.client.1.vm09.stdout:3/114: symlink d3/d4/l2a 0
2026-03-09T15:00:37.275 INFO:tasks.workunit.client.1.vm09.stdout:2/124: creat df/f28 x:0 0 0
2026-03-09T15:00:37.278 INFO:tasks.workunit.client.1.vm09.stdout:5/88: unlink d2/d4/c17 0
2026-03-09T15:00:37.278 INFO:tasks.workunit.client.1.vm09.stdout:8/98: rename df/l16 to df/l18 0
2026-03-09T15:00:37.286 INFO:tasks.workunit.client.1.vm09.stdout:7/114: dwrite d3/f12 [0,4194304] 0
2026-03-09T15:00:37.287 INFO:tasks.workunit.client.1.vm09.stdout:0/145: mknod da/d30/c31 0
2026-03-09T15:00:37.287 INFO:tasks.workunit.client.1.vm09.stdout:3/115: read d3/d4/d18/f1c [700927,126226] 0
2026-03-09T15:00:37.288 INFO:tasks.workunit.client.1.vm09.stdout:3/116: truncate d3/f29 896588 0
2026-03-09T15:00:37.289 INFO:tasks.workunit.client.1.vm09.stdout:1/59: link d8/fa d8/d10/f13 0
2026-03-09T15:00:37.289 INFO:tasks.workunit.client.1.vm09.stdout:5/89: creat d2/d4/f24 x:0 0 0
2026-03-09T15:00:37.293 INFO:tasks.workunit.client.1.vm09.stdout:5/90: write d2/d4/f1f [878283,128230] 0
2026-03-09T15:00:37.307 INFO:tasks.workunit.client.1.vm09.stdout:5/91: write d2/d4/fd [153864,22833] 0
2026-03-09T15:00:37.307 INFO:tasks.workunit.client.1.vm09.stdout:1/60: dread d8/fd [0,4194304] 0
2026-03-09T15:00:37.307 INFO:tasks.workunit.client.1.vm09.stdout:8/99: symlink df/l19 0
2026-03-09T15:00:37.307 INFO:tasks.workunit.client.1.vm09.stdout:7/115: readlink d3/l6 0
2026-03-09T15:00:37.307 INFO:tasks.workunit.client.1.vm09.stdout:1/61: chown d8/fe 1740 1
2026-03-09T15:00:37.307 INFO:tasks.workunit.client.1.vm09.stdout:4/91: creat db/d12/f1a x:0 0 0
2026-03-09T15:00:37.307 INFO:tasks.workunit.client.1.vm09.stdout:4/92: readlink l5 0
2026-03-09T15:00:37.315 INFO:tasks.workunit.client.1.vm09.stdout:5/92: dwrite d2/f14 [0,4194304] 0
2026-03-09T15:00:37.318 INFO:tasks.workunit.client.1.vm09.stdout:4/93: dread f4 [0,4194304] 0
2026-03-09T15:00:37.318 INFO:tasks.workunit.client.1.vm09.stdout:4/94: dread - db/d12/f1a zero size
2026-03-09T15:00:37.322 INFO:tasks.workunit.client.1.vm09.stdout:4/95: truncate db/f14 5079374 0
2026-03-09T15:00:37.322 INFO:tasks.workunit.client.1.vm09.stdout:4/96: fdatasync db/d12/f1a 0
2026-03-09T15:00:37.323 INFO:tasks.workunit.client.1.vm09.stdout:4/97: fdatasync f4 0
2026-03-09T15:00:37.323 INFO:tasks.workunit.client.1.vm09.stdout:8/100: creat df/f1a x:0 0 0
2026-03-09T15:00:37.324 INFO:tasks.workunit.client.1.vm09.stdout:1/62: mknod d8/d10/c14 0
2026-03-09T15:00:37.325 INFO:tasks.workunit.client.1.vm09.stdout:7/116: dwrite d3/d1d/f1f [0,4194304] 0
2026-03-09T15:00:37.328 INFO:tasks.workunit.client.1.vm09.stdout:5/93: dread d2/f15 [0,4194304] 0
2026-03-09T15:00:37.332 INFO:tasks.workunit.client.1.vm09.stdout:8/101: dread f1 [0,4194304] 0
2026-03-09T15:00:37.332 INFO:tasks.workunit.client.1.vm09.stdout:1/63: dread f5 [0,4194304] 0
2026-03-09T15:00:37.334 INFO:tasks.workunit.client.1.vm09.stdout:8/102: readlink df/l19 0
2026-03-09T15:00:37.334 INFO:tasks.workunit.client.1.vm09.stdout:0/146: link da/dc/d10/c1b da/c32 0
2026-03-09T15:00:37.336 INFO:tasks.workunit.client.1.vm09.stdout:5/94: symlink d2/d4/l25 0
2026-03-09T15:00:37.341 INFO:tasks.workunit.client.1.vm09.stdout:8/103: rename df/l15 to df/l1b 0
2026-03-09T15:00:37.347 INFO:tasks.workunit.client.1.vm09.stdout:5/95: dread d2/f14 [0,4194304] 0
2026-03-09T15:00:37.352 INFO:tasks.workunit.client.1.vm09.stdout:5/96: dwrite d2/f22 [0,4194304] 0
2026-03-09T15:00:37.354 INFO:tasks.workunit.client.1.vm09.stdout:4/98: fdatasync f3 0
2026-03-09T15:00:37.357 INFO:tasks.workunit.client.1.vm09.stdout:0/147: dread da/dc/d10/f29 [0,4194304] 0
2026-03-09T15:00:37.360 INFO:tasks.workunit.client.1.vm09.stdout:5/97: dwrite d2/d4/f23 [0,4194304] 0
2026-03-09T15:00:37.361 INFO:tasks.workunit.client.1.vm09.stdout:0/148: truncate f3 10046135 0
2026-03-09T15:00:37.369 INFO:tasks.workunit.client.1.vm09.stdout:5/98: symlink d2/d4/l26 0
2026-03-09T15:00:37.371 INFO:tasks.workunit.client.1.vm09.stdout:5/99: truncate d2/d4/f23 4820481 0
2026-03-09T15:00:37.371 INFO:tasks.workunit.client.1.vm09.stdout:5/100: stat d2 0
2026-03-09T15:00:37.374 INFO:tasks.workunit.client.1.vm09.stdout:7/117: dread d3/f5 [0,4194304] 0
2026-03-09T15:00:37.374 INFO:tasks.workunit.client.1.vm09.stdout:4/99: creat db/d12/f1b x:0 0 0
2026-03-09T15:00:37.379 INFO:tasks.workunit.client.1.vm09.stdout:7/118: rename d3/d1d/f1f to d3/db/d15/d1e/f22 0
2026-03-09T15:00:37.380 INFO:tasks.workunit.client.1.vm09.stdout:7/119: chown d3/f9 382079927 1
2026-03-09T15:00:37.391 INFO:tasks.workunit.client.1.vm09.stdout:4/100: creat db/f1c x:0 0 0
2026-03-09T15:00:37.392 INFO:tasks.workunit.client.1.vm09.stdout:4/101: read db/f14 [1767850,73458] 0
2026-03-09T15:00:37.393 INFO:tasks.workunit.client.1.vm09.stdout:4/102: stat db/cf 0
2026-03-09T15:00:37.393 INFO:tasks.workunit.client.1.vm09.stdout:2/125: fdatasync f0 0
2026-03-09T15:00:37.396 INFO:tasks.workunit.client.1.vm09.stdout:1/64: sync
2026-03-09T15:00:37.396 INFO:tasks.workunit.client.1.vm09.stdout:8/104: sync
2026-03-09T15:00:37.398 INFO:tasks.workunit.client.1.vm09.stdout:8/105: write f8 [3559665,34022] 0
2026-03-09T15:00:37.400 INFO:tasks.workunit.client.1.vm09.stdout:2/126: dwrite df/d20/f24 [0,4194304] 0
2026-03-09T15:00:37.414 INFO:tasks.workunit.client.1.vm09.stdout:1/65: sync
2026-03-09T15:00:37.414 INFO:tasks.workunit.client.1.vm09.stdout:5/101: sync
2026-03-09T15:00:37.415 INFO:tasks.workunit.client.1.vm09.stdout:5/102: chown d2/d4/c11 509629 1
2026-03-09T15:00:37.419 INFO:tasks.workunit.client.1.vm09.stdout:8/106: mkdir df/d1c 0
2026-03-09T15:00:37.433 INFO:tasks.workunit.client.1.vm09.stdout:2/127: write df/f14 [1290016,21009] 0
2026-03-09T15:00:37.437 INFO:tasks.workunit.client.1.vm09.stdout:2/128: dwrite df/f1d [0,4194304] 0
2026-03-09T15:00:37.439 INFO:tasks.workunit.client.1.vm09.stdout:4/103: mknod db/d19/c1d 0
2026-03-09T15:00:37.444 INFO:tasks.workunit.client.1.vm09.stdout:1/66: write d8/fd [1175010,16937] 0
2026-03-09T15:00:37.445 INFO:tasks.workunit.client.1.vm09.stdout:1/67: fdatasync d8/fe 0
2026-03-09T15:00:37.446 INFO:tasks.workunit.client.1.vm09.stdout:5/103: symlink d2/d4/l27 0
2026-03-09T15:00:37.447 INFO:tasks.workunit.client.1.vm09.stdout:1/68: stat d8 0
2026-03-09T15:00:37.454 INFO:tasks.workunit.client.1.vm09.stdout:5/104: dwrite d2/d4/f1d [0,4194304] 0
2026-03-09T15:00:37.466 INFO:tasks.workunit.client.1.vm09.stdout:2/129: mkdir df/d20/d29 0
2026-03-09T15:00:37.469 INFO:tasks.workunit.client.1.vm09.stdout:2/130: dread df/f17 [0,4194304] 0
2026-03-09T15:00:37.472 INFO:tasks.workunit.client.1.vm09.stdout:2/131: truncate df/f28 911448 0
2026-03-09T15:00:37.475 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.473+0000 7f00d5dd4700 1 -- 192.168.123.105:0/512714296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00d0071980 msgr2=0x7f00d0071d90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:37.480 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.473+0000 7f00d5dd4700 1 --2- 192.168.123.105:0/512714296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00d0071980 0x7f00d0071d90 secure :-1 s=READY pgs=298 cs=0 l=1 rev1=1 crypto rx=0x7f00c00099c0 tx=0x7f00c0009cd0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.480 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.474+0000 7f00d5dd4700 1 -- 192.168.123.105:0/512714296 shutdown_connections 2026-03-09T15:00:37.480 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.474+0000 7f00d5dd4700 1 --2- 192.168.123.105:0/512714296 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f00d0072360 0x7f00d00770e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.480 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.474+0000 7f00d5dd4700 1 --2- 192.168.123.105:0/512714296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00d0071980 0x7f00d0071d90 unknown :-1 s=CLOSED pgs=298 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.480 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.474+0000 7f00d5dd4700 1 -- 192.168.123.105:0/512714296 >> 192.168.123.105:0/512714296 conn(0x7f00d006d1a0 msgr2=0x7f00d006f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:37.480 INFO:tasks.workunit.client.1.vm09.stdout:1/69: rename f5 to d8/d10/f15 0 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.474+0000 7f00d5dd4700 1 -- 192.168.123.105:0/512714296 shutdown_connections 2026-03-09T15:00:37.481 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.474+0000 7f00d5dd4700 1 -- 192.168.123.105:0/512714296 wait complete. 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.474+0000 7f00d5dd4700 1 Processor -- start 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.474+0000 7f00d5dd4700 1 -- start start 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.474+0000 7f00d5dd4700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f00d0072360 0x7f00d0082500 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.474+0000 7f00d5dd4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00d0082a40 0x7f00d0082eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.474+0000 7f00d5dd4700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f00d01b2a90 con 0x7f00d0082a40 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.474+0000 7f00d5dd4700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f00d01b2bd0 con 0x7f00d0072360 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.475+0000 7f00ceffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00d0082a40 0x7f00d0082eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.475+0000 7f00cf7fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f00d0072360 0x7f00d0082500 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.475+0000 7f00cf7fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f00d0072360 0x7f00d0082500 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:47766/0 (socket says 192.168.123.105:47766) 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.475+0000 7f00cf7fe700 1 -- 192.168.123.105:0/2817217221 learned_addr learned my addr 192.168.123.105:0/2817217221 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.475+0000 7f00cf7fe700 1 -- 192.168.123.105:0/2817217221 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00d0082a40 msgr2=0x7f00d0082eb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.475+0000 7f00cf7fe700 1 --2- 192.168.123.105:0/2817217221 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00d0082a40 0x7f00d0082eb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.475+0000 7f00cf7fe700 1 -- 192.168.123.105:0/2817217221 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f00c00096b0 con 0x7f00d0072360 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.475+0000 7f00cf7fe700 1 --2- 192.168.123.105:0/2817217221 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f00d0072360 0x7f00d0082500 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f00c0009910 
tx=0x7f00c000c7e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:37.481 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.476+0000 7f00ccff9700 1 -- 192.168.123.105:0/2817217221 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f00c000cdf0 con 0x7f00d0072360 2026-03-09T15:00:37.482 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.476+0000 7f00d5dd4700 1 -- 192.168.123.105:0/2817217221 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f00d01b2d10 con 0x7f00d0072360 2026-03-09T15:00:37.482 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.476+0000 7f00d5dd4700 1 -- 192.168.123.105:0/2817217221 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f00d01b31b0 con 0x7f00d0072360 2026-03-09T15:00:37.482 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.476+0000 7f00ccff9700 1 -- 192.168.123.105:0/2817217221 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f00c0003b60 con 0x7f00d0072360 2026-03-09T15:00:37.482 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.476+0000 7f00ccff9700 1 -- 192.168.123.105:0/2817217221 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f00c000ed20 con 0x7f00d0072360 2026-03-09T15:00:37.482 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.477+0000 7f00d5dd4700 1 -- 192.168.123.105:0/2817217221 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f00bc005320 con 0x7f00d0072360 2026-03-09T15:00:37.482 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.477+0000 7f00ccff9700 1 -- 192.168.123.105:0/2817217221 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f00c0003cd0 con 
0x7f00d0072360 2026-03-09T15:00:37.482 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.478+0000 7f00ccff9700 1 --2- 192.168.123.105:0/2817217221 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f00b806c6d0 0x7f00b806eb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:37.482 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.478+0000 7f00ccff9700 1 -- 192.168.123.105:0/2817217221 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f00c00909f0 con 0x7f00d0072360 2026-03-09T15:00:37.482 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.479+0000 7f00ceffd700 1 --2- 192.168.123.105:0/2817217221 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f00b806c6d0 0x7f00b806eb80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:37.482 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.479+0000 7f00ceffd700 1 --2- 192.168.123.105:0/2817217221 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f00b806c6d0 0x7f00b806eb80 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f00c8009fd0 tx=0x7f00c8009b10 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:37.483 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.483+0000 7f00ccff9700 1 -- 192.168.123.105:0/2817217221 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f00c005ed40 con 0x7f00d0072360 2026-03-09T15:00:37.487 INFO:tasks.workunit.client.1.vm09.stdout:8/107: mkdir df/d1c/d1d 0 2026-03-09T15:00:37.496 INFO:tasks.workunit.client.1.vm09.stdout:2/132: symlink df/d20/l2a 0 2026-03-09T15:00:37.499 INFO:tasks.workunit.client.1.vm09.stdout:1/70: symlink d8/l16 0 
2026-03-09T15:00:37.499 INFO:tasks.workunit.client.1.vm09.stdout:1/71: dread - d8/ff zero size 2026-03-09T15:00:37.503 INFO:tasks.workunit.client.1.vm09.stdout:8/108: symlink df/d1c/l1e 0 2026-03-09T15:00:37.510 INFO:tasks.workunit.client.1.vm09.stdout:2/133: creat df/d20/f2b x:0 0 0 2026-03-09T15:00:37.520 INFO:tasks.workunit.client.1.vm09.stdout:1/72: rename d8/fd to d8/f17 0 2026-03-09T15:00:37.523 INFO:tasks.workunit.client.1.vm09.stdout:8/109: mkdir df/d1f 0 2026-03-09T15:00:37.524 INFO:tasks.workunit.client.1.vm09.stdout:1/73: dread d8/fe [0,4194304] 0 2026-03-09T15:00:37.527 INFO:tasks.workunit.client.1.vm09.stdout:2/134: mknod df/d1f/c2c 0 2026-03-09T15:00:37.529 INFO:tasks.workunit.client.1.vm09.stdout:5/105: getdents d2 0 2026-03-09T15:00:37.531 INFO:tasks.workunit.client.1.vm09.stdout:1/74: sync 2026-03-09T15:00:37.532 INFO:tasks.workunit.client.1.vm09.stdout:8/110: creat df/d1c/f20 x:0 0 0 2026-03-09T15:00:37.533 INFO:tasks.workunit.client.1.vm09.stdout:8/111: write df/f1a [764389,72059] 0 2026-03-09T15:00:37.541 INFO:tasks.workunit.client.1.vm09.stdout:1/75: mknod d8/c18 0 2026-03-09T15:00:37.542 INFO:tasks.workunit.client.1.vm09.stdout:2/135: dwrite df/f17 [0,4194304] 0 2026-03-09T15:00:37.580 INFO:tasks.workunit.client.1.vm09.stdout:5/106: mknod d2/d4/d21/c28 0 2026-03-09T15:00:37.602 INFO:tasks.workunit.client.1.vm09.stdout:8/112: creat df/d1f/f21 x:0 0 0 2026-03-09T15:00:37.603 INFO:tasks.workunit.client.1.vm09.stdout:5/107: creat d2/f29 x:0 0 0 2026-03-09T15:00:37.604 INFO:tasks.workunit.client.1.vm09.stdout:5/108: write d2/d4/f1d [3997800,89839] 0 2026-03-09T15:00:37.605 INFO:tasks.workunit.client.1.vm09.stdout:2/136: dwrite df/f16 [0,4194304] 0 2026-03-09T15:00:37.606 INFO:tasks.workunit.client.1.vm09.stdout:8/113: dread f8 [0,4194304] 0 2026-03-09T15:00:37.608 INFO:tasks.workunit.client.1.vm09.stdout:8/114: truncate df/f14 4551396 0 2026-03-09T15:00:37.615 INFO:tasks.workunit.client.1.vm09.stdout:8/115: write df/d1c/f20 [896842,55801] 0 
2026-03-09T15:00:37.619 INFO:tasks.workunit.client.1.vm09.stdout:5/109: mknod d2/d4/d21/c2a 0 2026-03-09T15:00:37.622 INFO:tasks.workunit.client.1.vm09.stdout:2/137: dwrite df/f28 [0,4194304] 0 2026-03-09T15:00:37.623 INFO:tasks.workunit.client.1.vm09.stdout:2/138: dread - df/d20/f2b zero size 2026-03-09T15:00:37.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.626+0000 7f00d5dd4700 1 -- 192.168.123.105:0/2817217221 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f00bc000bf0 con 0x7f00b806c6d0 2026-03-09T15:00:37.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.629+0000 7f00ccff9700 1 -- 192.168.123.105:0/2817217221 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+434 (secure 0 0 0) 0x7f00bc000bf0 con 0x7f00b806c6d0 2026-03-09T15:00:37.636 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.634+0000 7f00b67fc700 1 -- 192.168.123.105:0/2817217221 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f00b806c6d0 msgr2=0x7f00b806eb80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:37.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.634+0000 7f00b67fc700 1 --2- 192.168.123.105:0/2817217221 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f00b806c6d0 0x7f00b806eb80 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f00c8009fd0 tx=0x7f00c8009b10 comp rx=0 tx=0).stop 2026-03-09T15:00:37.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.634+0000 7f00b67fc700 1 -- 192.168.123.105:0/2817217221 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f00d0072360 msgr2=0x7f00d0082500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:37.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.634+0000 7f00b67fc700 1 --2- 192.168.123.105:0/2817217221 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f00d0072360 0x7f00d0082500 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f00c0009910 tx=0x7f00c000c7e0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.634+0000 7f00b67fc700 1 -- 192.168.123.105:0/2817217221 shutdown_connections 2026-03-09T15:00:37.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.634+0000 7f00b67fc700 1 --2- 192.168.123.105:0/2817217221 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f00b806c6d0 0x7f00b806eb80 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.634+0000 7f00b67fc700 1 --2- 192.168.123.105:0/2817217221 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f00d0072360 0x7f00d0082500 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.634+0000 7f00b67fc700 1 --2- 192.168.123.105:0/2817217221 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f00d0082a40 0x7f00d0082eb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.635+0000 7f00b67fc700 1 -- 192.168.123.105:0/2817217221 >> 192.168.123.105:0/2817217221 conn(0x7f00d006d1a0 msgr2=0x7f00d00705c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:37.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.635+0000 7f00b67fc700 1 -- 192.168.123.105:0/2817217221 shutdown_connections 2026-03-09T15:00:37.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.635+0000 7f00b67fc700 1 -- 192.168.123.105:0/2817217221 wait complete. 
2026-03-09T15:00:37.650 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:00:37.654 INFO:tasks.workunit.client.1.vm09.stdout:9/111: rmdir d1/d7 39 2026-03-09T15:00:37.660 INFO:tasks.workunit.client.1.vm09.stdout:0/149: getdents da/d30 0 2026-03-09T15:00:37.662 INFO:tasks.workunit.client.1.vm09.stdout:8/116: dwrite df/f14 [0,4194304] 0 2026-03-09T15:00:37.662 INFO:tasks.workunit.client.1.vm09.stdout:1/76: truncate d8/fa 2810779 0 2026-03-09T15:00:37.663 INFO:tasks.workunit.client.1.vm09.stdout:8/117: stat df/d1f/f21 0 2026-03-09T15:00:37.665 INFO:tasks.workunit.client.1.vm09.stdout:5/110: creat d2/d4/d21/f2b x:0 0 0 2026-03-09T15:00:37.665 INFO:tasks.workunit.client.1.vm09.stdout:3/117: truncate d3/d4/f1b 117821 0 2026-03-09T15:00:37.666 INFO:tasks.workunit.client.1.vm09.stdout:5/111: write d2/f29 [415645,28616] 0 2026-03-09T15:00:37.666 INFO:tasks.workunit.client.1.vm09.stdout:2/139: mkdir df/d2d 0 2026-03-09T15:00:37.669 INFO:tasks.workunit.client.1.vm09.stdout:2/140: write df/f1d [4453027,18948] 0 2026-03-09T15:00:37.676 INFO:tasks.workunit.client.1.vm09.stdout:5/112: dread d2/d4/f1f [0,4194304] 0 2026-03-09T15:00:37.682 INFO:tasks.workunit.client.1.vm09.stdout:6/137: mknod d6/db/c2e 0 2026-03-09T15:00:37.682 INFO:tasks.workunit.client.1.vm09.stdout:6/138: chown d6/d20 85192253 1 2026-03-09T15:00:37.690 INFO:tasks.workunit.client.1.vm09.stdout:1/77: rename f7 to d8/f19 0 2026-03-09T15:00:37.690 INFO:tasks.workunit.client.1.vm09.stdout:1/78: chown d8/d10/f15 1835865965 1 2026-03-09T15:00:37.690 INFO:tasks.workunit.client.1.vm09.stdout:1/79: stat d8/f17 0 2026-03-09T15:00:37.693 INFO:tasks.workunit.client.1.vm09.stdout:3/118: mkdir d3/d4/d18/d2b 0 2026-03-09T15:00:37.698 INFO:tasks.workunit.client.1.vm09.stdout:7/120: truncate d3/f16 699362 0 2026-03-09T15:00:37.699 INFO:tasks.workunit.client.1.vm09.stdout:3/119: dread d3/d4/f26 [0,4194304] 0 2026-03-09T15:00:37.705 INFO:tasks.workunit.client.1.vm09.stdout:4/104: getdents db 0 2026-03-09T15:00:37.705 
INFO:tasks.workunit.client.1.vm09.stdout:2/141: write df/f1c [511934,46077] 0 2026-03-09T15:00:37.707 INFO:tasks.workunit.client.1.vm09.stdout:2/142: dread - df/d20/f2b zero size 2026-03-09T15:00:37.715 INFO:tasks.workunit.client.1.vm09.stdout:1/80: creat d8/d10/f1a x:0 0 0 2026-03-09T15:00:37.720 INFO:tasks.workunit.client.1.vm09.stdout:7/121: rename d3/fa to d3/db/d15/f23 0 2026-03-09T15:00:37.721 INFO:tasks.workunit.client.1.vm09.stdout:3/120: mknod d3/d4/c2c 0 2026-03-09T15:00:37.722 INFO:tasks.workunit.client.1.vm09.stdout:3/121: write d3/ff [4691288,47685] 0 2026-03-09T15:00:37.723 INFO:tasks.workunit.client.1.vm09.stdout:4/105: mknod db/d19/c1e 0 2026-03-09T15:00:37.727 INFO:tasks.workunit.client.1.vm09.stdout:0/150: link da/dc/c1a da/c33 0 2026-03-09T15:00:37.734 INFO:tasks.workunit.client.1.vm09.stdout:1/81: unlink d8/fc 0 2026-03-09T15:00:37.735 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.734+0000 7f1f8ddd9700 1 -- 192.168.123.105:0/2800212500 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f88072330 msgr2=0x7f1f880770b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:37.736 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.734+0000 7f1f8ddd9700 1 --2- 192.168.123.105:0/2800212500 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f88072330 0x7f1f880770b0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f1f8000b780 tx=0x7f1f8000ba90 comp rx=0 tx=0).stop 2026-03-09T15:00:37.736 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:37 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:00:37.736 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:37 vm05.local ceph-mon[50611]: pgmap v145: 65 pgs: 65 active+clean; 263 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 2.3 MiB/s rd, 17 MiB/s wr, 216 op/s 2026-03-09T15:00:37.737 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.734+0000 7f1f8ddd9700 1 -- 192.168.123.105:0/2800212500 shutdown_connections 2026-03-09T15:00:37.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.734+0000 7f1f8ddd9700 1 --2- 192.168.123.105:0/2800212500 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f88072330 0x7f1f880770b0 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.734+0000 7f1f8ddd9700 1 --2- 192.168.123.105:0/2800212500 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f88071950 0x7f1f88071d60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.734+0000 7f1f8ddd9700 1 -- 192.168.123.105:0/2800212500 >> 192.168.123.105:0/2800212500 conn(0x7f1f8806d1a0 msgr2=0x7f1f8806f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:37.739 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.737+0000 7f1f8ddd9700 1 -- 192.168.123.105:0/2800212500 shutdown_connections 2026-03-09T15:00:37.739 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.737+0000 7f1f8ddd9700 1 -- 192.168.123.105:0/2800212500 wait complete. 
2026-03-09T15:00:37.739 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.737+0000 7f1f8ddd9700 1 Processor -- start 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.737+0000 7f1f8ddd9700 1 -- start start 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.737+0000 7f1f8ddd9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f88071950 0x7f1f88131390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.737+0000 7f1f8ddd9700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f881318d0 0x7f1f8807f500 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.737+0000 7f1f8ddd9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1f88131dd0 con 0x7f1f88071950 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.737+0000 7f1f8ddd9700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1f88131f40 con 0x7f1f881318d0 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.738+0000 7f1f87fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f881318d0 0x7f1f8807f500 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.738+0000 7f1f87fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f881318d0 0x7f1f8807f500 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:47778/0 (socket says 192.168.123.105:47778) 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.738+0000 7f1f87fff700 1 -- 192.168.123.105:0/2121332086 learned_addr learned my addr 192.168.123.105:0/2121332086 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.738+0000 7f1f8cdd7700 1 --2- 192.168.123.105:0/2121332086 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f88071950 0x7f1f88131390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.738+0000 7f1f87fff700 1 -- 192.168.123.105:0/2121332086 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f88071950 msgr2=0x7f1f88131390 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.738+0000 7f1f87fff700 1 --2- 192.168.123.105:0/2121332086 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f88071950 0x7f1f88131390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.738+0000 7f1f87fff700 1 -- 192.168.123.105:0/2121332086 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1f8000b050 con 0x7f1f881318d0 2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.738+0000 7f1f87fff700 1 --2- 192.168.123.105:0/2121332086 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f881318d0 0x7f1f8807f500 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f1f8000be90 tx=0x7f1f800119a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:00:37.740 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.739+0000 7f1f85ffb700 1 -- 192.168.123.105:0/2121332086 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1f80003bb0 con 0x7f1f881318d0 2026-03-09T15:00:37.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.739+0000 7f1f8ddd9700 1 -- 192.168.123.105:0/2121332086 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1f8807fa40 con 0x7f1f881318d0 2026-03-09T15:00:37.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.739+0000 7f1f8ddd9700 1 -- 192.168.123.105:0/2121332086 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1f8807ff00 con 0x7f1f881318d0 2026-03-09T15:00:37.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.740+0000 7f1f85ffb700 1 -- 192.168.123.105:0/2121332086 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1f80011e80 con 0x7f1f881318d0 2026-03-09T15:00:37.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.740+0000 7f1f85ffb700 1 -- 192.168.123.105:0/2121332086 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1f80004440 con 0x7f1f881318d0 2026-03-09T15:00:37.741 INFO:tasks.workunit.client.1.vm09.stdout:7/122: rmdir d3/d1d 39 2026-03-09T15:00:37.742 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.741+0000 7f1f85ffb700 1 -- 192.168.123.105:0/2121332086 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f1f800045a0 con 0x7f1f881318d0 2026-03-09T15:00:37.742 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.741+0000 7f1f85ffb700 1 --2- 192.168.123.105:0/2121332086 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1f7006c7a0 0x7f1f7006ec50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0).connect 2026-03-09T15:00:37.743 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.742+0000 7f1f85ffb700 1 -- 192.168.123.105:0/2121332086 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f1f8008d890 con 0x7f1f881318d0 2026-03-09T15:00:37.743 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.742+0000 7f1f8cdd7700 1 --2- 192.168.123.105:0/2121332086 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1f7006c7a0 0x7f1f7006ec50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:37.744 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.743+0000 7f1f8cdd7700 1 --2- 192.168.123.105:0/2121332086 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1f7006c7a0 0x7f1f7006ec50 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f1f78005950 tx=0x7f1f780058e0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:37.744 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.743+0000 7f1f8ddd9700 1 -- 192.168.123.105:0/2121332086 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1f74005320 con 0x7f1f881318d0 2026-03-09T15:00:37.749 INFO:tasks.workunit.client.1.vm09.stdout:9/112: creat d1/f28 x:0 0 0 2026-03-09T15:00:37.750 INFO:tasks.workunit.client.1.vm09.stdout:4/106: symlink db/d19/l1f 0 2026-03-09T15:00:37.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.750+0000 7f1f85ffb700 1 -- 192.168.123.105:0/2121332086 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f1f80058090 con 0x7f1f881318d0 2026-03-09T15:00:37.755 INFO:tasks.workunit.client.1.vm09.stdout:0/151: mknod da/d30/c34 0 2026-03-09T15:00:37.758 
INFO:tasks.workunit.client.1.vm09.stdout:1/82: mkdir d8/d1b 0 2026-03-09T15:00:37.762 INFO:tasks.workunit.client.1.vm09.stdout:8/118: dwrite fa [0,4194304] 0 2026-03-09T15:00:37.765 INFO:tasks.workunit.client.1.vm09.stdout:5/113: write d2/f15 [783072,20871] 0 2026-03-09T15:00:37.765 INFO:tasks.workunit.client.1.vm09.stdout:8/119: rename df/d1f to df/d1f/d22 22 2026-03-09T15:00:37.770 INFO:tasks.workunit.client.1.vm09.stdout:3/122: mknod d3/d4/d18/d2b/c2d 0 2026-03-09T15:00:37.770 INFO:tasks.workunit.client.1.vm09.stdout:3/123: fsync d3/d4/f16 0 2026-03-09T15:00:37.779 INFO:tasks.workunit.client.1.vm09.stdout:7/123: dread d3/db/fe [0,4194304] 0 2026-03-09T15:00:37.785 INFO:tasks.workunit.client.1.vm09.stdout:7/124: dread d3/db/d15/d1e/f22 [0,4194304] 0 2026-03-09T15:00:37.785 INFO:tasks.workunit.client.1.vm09.stdout:6/139: dwrite d6/db/f1f [0,4194304] 0 2026-03-09T15:00:37.788 INFO:tasks.workunit.client.1.vm09.stdout:7/125: write d3/db/fe [603508,71306] 0 2026-03-09T15:00:37.798 INFO:tasks.workunit.client.1.vm09.stdout:1/83: mkdir d8/d10/d1c 0 2026-03-09T15:00:37.817 INFO:tasks.workunit.client.1.vm09.stdout:5/114: symlink d2/l2c 0 2026-03-09T15:00:37.818 INFO:tasks.workunit.client.1.vm09.stdout:2/143: truncate df/f17 615484 0 2026-03-09T15:00:37.821 INFO:tasks.workunit.client.1.vm09.stdout:3/124: symlink d3/d4/l2e 0 2026-03-09T15:00:37.821 INFO:tasks.workunit.client.1.vm09.stdout:5/115: dwrite d2/d4/f16 [0,4194304] 0 2026-03-09T15:00:37.823 INFO:tasks.workunit.client.1.vm09.stdout:3/125: fdatasync d3/d4/f8 0 2026-03-09T15:00:37.828 INFO:tasks.workunit.client.1.vm09.stdout:4/107: mknod db/d12/d16/c20 0 2026-03-09T15:00:37.831 INFO:tasks.workunit.client.1.vm09.stdout:3/126: dwrite d3/f6 [0,4194304] 0 2026-03-09T15:00:37.839 INFO:tasks.workunit.client.1.vm09.stdout:3/127: dwrite d3/f6 [0,4194304] 0 2026-03-09T15:00:37.864 INFO:tasks.workunit.client.1.vm09.stdout:2/144: mkdir df/d20/d2e 0 2026-03-09T15:00:37.866 INFO:tasks.workunit.client.1.vm09.stdout:6/140: dwrite 
d6/db/d10/f2c [0,4194304] 0 2026-03-09T15:00:37.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:37 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:00:37.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:37 vm09.local ceph-mon[59673]: pgmap v145: 65 pgs: 65 active+clean; 263 MiB data, 1.2 GiB used, 119 GiB / 120 GiB avail; 2.3 MiB/s rd, 17 MiB/s wr, 216 op/s 2026-03-09T15:00:37.869 INFO:tasks.workunit.client.1.vm09.stdout:5/116: mknod d2/d4/c2d 0 2026-03-09T15:00:37.871 INFO:tasks.workunit.client.1.vm09.stdout:2/145: dwrite f5 [0,4194304] 0 2026-03-09T15:00:37.877 INFO:tasks.workunit.client.1.vm09.stdout:2/146: dread df/d20/f24 [0,4194304] 0 2026-03-09T15:00:37.877 INFO:tasks.workunit.client.1.vm09.stdout:2/147: fsync df/f16 0 2026-03-09T15:00:37.881 INFO:tasks.workunit.client.1.vm09.stdout:2/148: dread df/f16 [0,4194304] 0 2026-03-09T15:00:37.883 INFO:tasks.workunit.client.1.vm09.stdout:4/108: creat db/f21 x:0 0 0 2026-03-09T15:00:37.891 INFO:tasks.workunit.client.1.vm09.stdout:5/117: sync 2026-03-09T15:00:37.898 INFO:tasks.workunit.client.1.vm09.stdout:6/141: creat d6/df/d23/f2f x:0 0 0 2026-03-09T15:00:37.899 INFO:tasks.workunit.client.1.vm09.stdout:5/118: dread d2/f14 [0,4194304] 0 2026-03-09T15:00:37.900 INFO:tasks.workunit.client.1.vm09.stdout:5/119: write d2/f29 [638798,96460] 0 2026-03-09T15:00:37.908 INFO:tasks.workunit.client.1.vm09.stdout:8/120: getdents df 0 2026-03-09T15:00:37.909 INFO:tasks.workunit.client.1.vm09.stdout:8/121: read df/d1c/f20 [563036,50053] 0 2026-03-09T15:00:37.910 INFO:tasks.workunit.client.1.vm09.stdout:9/113: creat d1/f29 x:0 0 0 2026-03-09T15:00:37.915 INFO:tasks.workunit.client.1.vm09.stdout:6/142: readlink d6/db/d10/l14 0 2026-03-09T15:00:37.916 INFO:tasks.workunit.client.1.vm09.stdout:6/143: write d6/df/d23/f2f [763358,106075] 0 2026-03-09T15:00:37.919 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.917+0000 7f1f8ddd9700 1 -- 192.168.123.105:0/2121332086 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f1f74000bf0 con 0x7f1f7006c7a0 2026-03-09T15:00:37.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.919+0000 7f1f85ffb700 1 -- 192.168.123.105:0/2121332086 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+434 (secure 0 0 0) 0x7f1f74000bf0 con 0x7f1f7006c7a0 2026-03-09T15:00:37.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.923+0000 7f1f6f7fe700 1 -- 192.168.123.105:0/2121332086 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1f7006c7a0 msgr2=0x7f1f7006ec50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:37.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.923+0000 7f1f6f7fe700 1 --2- 192.168.123.105:0/2121332086 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1f7006c7a0 0x7f1f7006ec50 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7f1f78005950 tx=0x7f1f780058e0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.923+0000 7f1f6f7fe700 1 -- 192.168.123.105:0/2121332086 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f881318d0 msgr2=0x7f1f8807f500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:37.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.923+0000 7f1f6f7fe700 1 --2- 192.168.123.105:0/2121332086 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f881318d0 0x7f1f8807f500 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f1f8000be90 tx=0x7f1f800119a0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.924+0000 7f1f6f7fe700 1 -- 
192.168.123.105:0/2121332086 shutdown_connections 2026-03-09T15:00:37.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.924+0000 7f1f6f7fe700 1 --2- 192.168.123.105:0/2121332086 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f1f7006c7a0 0x7f1f7006ec50 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.924+0000 7f1f6f7fe700 1 --2- 192.168.123.105:0/2121332086 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1f88071950 0x7f1f88131390 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.924+0000 7f1f6f7fe700 1 --2- 192.168.123.105:0/2121332086 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1f881318d0 0x7f1f8807f500 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:37.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.924+0000 7f1f6f7fe700 1 -- 192.168.123.105:0/2121332086 >> 192.168.123.105:0/2121332086 conn(0x7f1f8806d1a0 msgr2=0x7f1f880764d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:37.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.924+0000 7f1f6f7fe700 1 -- 192.168.123.105:0/2121332086 shutdown_connections 2026-03-09T15:00:37.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:37.924+0000 7f1f6f7fe700 1 -- 192.168.123.105:0/2121332086 wait complete. 
2026-03-09T15:00:37.927 INFO:tasks.workunit.client.1.vm09.stdout:5/120: rename d2/d4/d21/f2b to d2/f2e 0 2026-03-09T15:00:37.945 INFO:tasks.workunit.client.1.vm09.stdout:8/122: creat df/f23 x:0 0 0 2026-03-09T15:00:37.945 INFO:tasks.workunit.client.1.vm09.stdout:0/152: dwrite da/fb [0,4194304] 0 2026-03-09T15:00:37.946 INFO:tasks.workunit.client.1.vm09.stdout:6/144: symlink d6/df/l30 0 2026-03-09T15:00:37.947 INFO:tasks.workunit.client.1.vm09.stdout:6/145: readlink d6/db/d10/l14 0 2026-03-09T15:00:37.947 INFO:tasks.workunit.client.1.vm09.stdout:9/114: truncate d1/d7/d1e/f22 701183 0 2026-03-09T15:00:37.950 INFO:tasks.workunit.client.1.vm09.stdout:9/115: chown d1/d7/ce 1 1 2026-03-09T15:00:37.950 INFO:tasks.workunit.client.1.vm09.stdout:3/128: dread d3/d4/f8 [0,4194304] 0 2026-03-09T15:00:37.952 INFO:tasks.workunit.client.1.vm09.stdout:3/129: write d3/ff [2730047,27683] 0 2026-03-09T15:00:37.959 INFO:tasks.workunit.client.1.vm09.stdout:6/146: dread d6/db/d10/f2b [0,4194304] 0 2026-03-09T15:00:37.959 INFO:tasks.workunit.client.1.vm09.stdout:7/126: write d3/f16 [749788,19773] 0 2026-03-09T15:00:37.963 INFO:tasks.workunit.client.1.vm09.stdout:2/149: truncate df/f1d 1083445 0 2026-03-09T15:00:37.965 INFO:tasks.workunit.client.1.vm09.stdout:7/127: dwrite f1 [0,4194304] 0 2026-03-09T15:00:37.969 INFO:tasks.workunit.client.1.vm09.stdout:3/130: mknod d3/c2f 0 2026-03-09T15:00:37.970 INFO:tasks.workunit.client.1.vm09.stdout:7/128: dread d3/db/d15/d1e/f22 [0,4194304] 0 2026-03-09T15:00:37.971 INFO:tasks.workunit.client.1.vm09.stdout:4/109: getdents db/d12/d16 0 2026-03-09T15:00:37.979 INFO:tasks.workunit.client.1.vm09.stdout:5/121: write d2/d4/f1f [404528,92232] 0 2026-03-09T15:00:37.984 INFO:tasks.workunit.client.1.vm09.stdout:6/147: rmdir d6/df 39 2026-03-09T15:00:37.988 INFO:tasks.workunit.client.1.vm09.stdout:9/116: link d1/d7/d9/f27 d1/d7/d1e/f2a 0 2026-03-09T15:00:37.992 INFO:tasks.workunit.client.1.vm09.stdout:3/131: symlink d3/l30 0 2026-03-09T15:00:37.996 
INFO:tasks.workunit.client.1.vm09.stdout:0/153: rename da/dc/d1c/c2f to da/dc/c35 0 2026-03-09T15:00:38.001 INFO:tasks.workunit.client.1.vm09.stdout:4/110: truncate db/fe 4844639 0 2026-03-09T15:00:38.001 INFO:tasks.workunit.client.1.vm09.stdout:0/154: dwrite f5 [0,4194304] 0 2026-03-09T15:00:38.002 INFO:tasks.workunit.client.1.vm09.stdout:9/117: sync 2026-03-09T15:00:38.015 INFO:tasks.workunit.client.1.vm09.stdout:9/118: dwrite d1/d7/d9/f12 [0,4194304] 0 2026-03-09T15:00:38.016 INFO:tasks.workunit.client.1.vm09.stdout:9/119: write d1/f29 [350511,61749] 0 2026-03-09T15:00:38.022 INFO:tasks.workunit.client.1.vm09.stdout:5/122: creat d2/d4/d21/f2f x:0 0 0 2026-03-09T15:00:38.027 INFO:tasks.workunit.client.1.vm09.stdout:6/148: write d6/df/f16 [1397910,79173] 0 2026-03-09T15:00:38.028 INFO:tasks.workunit.client.1.vm09.stdout:6/149: chown d6/df/d23/c28 89619 1 2026-03-09T15:00:38.033 INFO:tasks.workunit.client.1.vm09.stdout:1/84: truncate d8/d10/f15 2635473 0 2026-03-09T15:00:38.035 INFO:tasks.workunit.client.1.vm09.stdout:3/132: mkdir d3/d4/d18/d2b/d31 0 2026-03-09T15:00:38.036 INFO:tasks.workunit.client.1.vm09.stdout:8/123: getdents df/d1f 0 2026-03-09T15:00:38.042 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.041+0000 7f44b9d14700 1 -- 192.168.123.105:0/60612193 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f44b41036f0 msgr2=0x7f44b4103b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:38.042 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.041+0000 7f44b9d14700 1 --2- 192.168.123.105:0/60612193 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f44b41036f0 0x7f44b4103b40 secure :-1 s=READY pgs=299 cs=0 l=1 rev1=1 crypto rx=0x7f449c009a60 tx=0x7f449c009d70 comp rx=0 tx=0).stop 2026-03-09T15:00:38.044 INFO:tasks.workunit.client.1.vm09.stdout:7/129: symlink d3/d1d/l24 0 2026-03-09T15:00:38.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.047+0000 7f44b9d14700 1 -- 
192.168.123.105:0/60612193 shutdown_connections 2026-03-09T15:00:38.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.047+0000 7f44b9d14700 1 --2- 192.168.123.105:0/60612193 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f44b41036f0 0x7f44b4103b40 unknown :-1 s=CLOSED pgs=299 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.047+0000 7f44b9d14700 1 --2- 192.168.123.105:0/60612193 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f44b41024f0 0x7f44b4102900 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.047+0000 7f44b9d14700 1 -- 192.168.123.105:0/60612193 >> 192.168.123.105:0/60612193 conn(0x7f44b40fdac0 msgr2=0x7f44b40ffed0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:38.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.048+0000 7f44b9d14700 1 -- 192.168.123.105:0/60612193 shutdown_connections 2026-03-09T15:00:38.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.048+0000 7f44b9d14700 1 -- 192.168.123.105:0/60612193 wait complete. 
2026-03-09T15:00:38.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.049+0000 7f44b9d14700 1 Processor -- start 2026-03-09T15:00:38.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.049+0000 7f44b9d14700 1 -- start start 2026-03-09T15:00:38.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.049+0000 7f44b9d14700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f44b41024f0 0x7f44b4197db0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:38.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.049+0000 7f44b9d14700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f44b41036f0 0x7f44b41982f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:38.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.049+0000 7f44b9d14700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f44b4198910 con 0x7f44b41036f0 2026-03-09T15:00:38.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.049+0000 7f44b9d14700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f44b4198a50 con 0x7f44b41024f0 2026-03-09T15:00:38.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.049+0000 7f44b37fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f44b41024f0 0x7f44b4197db0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:38.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.049+0000 7f44b37fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f44b41024f0 0x7f44b4197db0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:47806/0 (socket says 192.168.123.105:47806) 2026-03-09T15:00:38.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.049+0000 7f44b37fe700 1 -- 192.168.123.105:0/4196756509 learned_addr learned my addr 192.168.123.105:0/4196756509 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:00:38.051 INFO:tasks.workunit.client.1.vm09.stdout:0/155: mkdir da/d30/d36 0 2026-03-09T15:00:38.051 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.050+0000 7f44aadff700 1 --2- 192.168.123.105:0/4196756509 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f44b41036f0 0x7f44b41982f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:38.051 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.050+0000 7f44aadff700 1 -- 192.168.123.105:0/4196756509 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f44b41024f0 msgr2=0x7f44b4197db0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:38.051 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.050+0000 7f44aadff700 1 --2- 192.168.123.105:0/4196756509 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f44b41024f0 0x7f44b4197db0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.051 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.050+0000 7f44aadff700 1 -- 192.168.123.105:0/4196756509 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f449c009710 con 0x7f44b41036f0 2026-03-09T15:00:38.051 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.050+0000 7f44aadff700 1 --2- 192.168.123.105:0/4196756509 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f44b41036f0 0x7f44b41982f0 secure :-1 s=READY pgs=300 cs=0 l=1 rev1=1 crypto rx=0x7f449c005950 tx=0x7f449c0037e0 
comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:38.051 INFO:tasks.workunit.client.1.vm09.stdout:5/123: mknod d2/d4/d21/c30 0 2026-03-09T15:00:38.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.051+0000 7f44b17fa700 1 -- 192.168.123.105:0/4196756509 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f449c01d070 con 0x7f44b41036f0 2026-03-09T15:00:38.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.051+0000 7f44b17fa700 1 -- 192.168.123.105:0/4196756509 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f449c00fc30 con 0x7f44b41036f0 2026-03-09T15:00:38.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.051+0000 7f44b9d14700 1 -- 192.168.123.105:0/4196756509 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f44b419d4a0 con 0x7f44b41036f0 2026-03-09T15:00:38.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.051+0000 7f44b9d14700 1 -- 192.168.123.105:0/4196756509 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f44b419d990 con 0x7f44b41036f0 2026-03-09T15:00:38.052 INFO:tasks.workunit.client.1.vm09.stdout:0/156: readlink da/dc/d10/l13 0 2026-03-09T15:00:38.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.052+0000 7f44b17fa700 1 -- 192.168.123.105:0/4196756509 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f449c017720 con 0x7f44b41036f0 2026-03-09T15:00:38.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.052+0000 7f44b9d14700 1 -- 192.168.123.105:0/4196756509 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f44b404ea50 con 0x7f44b41036f0 2026-03-09T15:00:38.054 INFO:tasks.workunit.client.1.vm09.stdout:0/157: write 
da/dc/fe [244412,120895] 0 2026-03-09T15:00:38.054 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.053+0000 7f44b17fa700 1 -- 192.168.123.105:0/4196756509 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f449c021b10 con 0x7f44b41036f0 2026-03-09T15:00:38.054 INFO:tasks.workunit.client.1.vm09.stdout:6/150: rmdir d6/df/d23 39 2026-03-09T15:00:38.055 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.054+0000 7f44b17fa700 1 --2- 192.168.123.105:0/4196756509 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f44a006c7a0 0x7f44a006ec50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:38.055 INFO:tasks.workunit.client.1.vm09.stdout:0/158: chown f8 91 1 2026-03-09T15:00:38.055 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.054+0000 7f44b17fa700 1 -- 192.168.123.105:0/4196756509 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f449c08c9e0 con 0x7f44b41036f0 2026-03-09T15:00:38.057 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.055+0000 7f44b37fe700 1 --2- 192.168.123.105:0/4196756509 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f44a006c7a0 0x7f44a006ec50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:38.057 INFO:tasks.workunit.client.1.vm09.stdout:2/150: link df/d1f/c2c df/d1f/c2f 0 2026-03-09T15:00:38.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.058+0000 7f44b37fe700 1 --2- 192.168.123.105:0/4196756509 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f44a006c7a0 0x7f44a006ec50 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7f44a4009e70 tx=0x7f44a4009500 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:38.059 
INFO:tasks.workunit.client.1.vm09.stdout:8/124: mkdir df/d24 0 2026-03-09T15:00:38.060 INFO:tasks.workunit.client.1.vm09.stdout:8/125: readlink df/l17 0 2026-03-09T15:00:38.062 INFO:tasks.workunit.client.1.vm09.stdout:5/124: dread d2/f29 [0,4194304] 0 2026-03-09T15:00:38.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.061+0000 7f44b17fa700 1 -- 192.168.123.105:0/4196756509 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f449c05ad20 con 0x7f44b41036f0 2026-03-09T15:00:38.063 INFO:tasks.workunit.client.1.vm09.stdout:5/125: dread - d2/d4/f24 zero size 2026-03-09T15:00:38.065 INFO:tasks.workunit.client.1.vm09.stdout:5/126: dwrite d2/d4/f16 [0,4194304] 0 2026-03-09T15:00:38.066 INFO:tasks.workunit.client.1.vm09.stdout:5/127: truncate d2/d4/f1f 1987743 0 2026-03-09T15:00:38.070 INFO:tasks.workunit.client.1.vm09.stdout:5/128: dwrite d2/f15 [0,4194304] 0 2026-03-09T15:00:38.191 INFO:tasks.workunit.client.1.vm09.stdout:4/111: mknod db/d19/c22 0 2026-03-09T15:00:38.192 INFO:tasks.workunit.client.1.vm09.stdout:9/120: write d1/f1f [2719354,23476] 0 2026-03-09T15:00:38.198 INFO:tasks.workunit.client.1.vm09.stdout:0/159: symlink da/dc/d1c/l37 0 2026-03-09T15:00:38.198 INFO:tasks.workunit.client.1.vm09.stdout:8/126: mknod df/d1c/c25 0 2026-03-09T15:00:38.199 INFO:tasks.workunit.client.1.vm09.stdout:0/160: chown da/dc/d1c 9 1 2026-03-09T15:00:38.199 INFO:tasks.workunit.client.1.vm09.stdout:8/127: truncate df/f1a 1766562 0 2026-03-09T15:00:38.204 INFO:tasks.workunit.client.1.vm09.stdout:5/129: creat d2/d4/f31 x:0 0 0 2026-03-09T15:00:38.204 INFO:tasks.workunit.client.1.vm09.stdout:8/128: dwrite df/f1a [0,4194304] 0 2026-03-09T15:00:38.212 INFO:tasks.workunit.client.1.vm09.stdout:4/112: mkdir db/d19/d23 0 2026-03-09T15:00:38.214 INFO:tasks.workunit.client.1.vm09.stdout:9/121: mkdir d1/d7/d1e/d2b 0 2026-03-09T15:00:38.217 INFO:tasks.workunit.client.1.vm09.stdout:0/161: 
sync 2026-03-09T15:00:38.220 INFO:tasks.workunit.client.1.vm09.stdout:2/151: mknod df/d2d/c30 0 2026-03-09T15:00:38.224 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.223+0000 7f44b9d14700 1 -- 192.168.123.105:0/4196756509 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f44b4108040 con 0x7f44a006c7a0 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (3m) 2m ago 4m 22.6M - 0.25.0 c8568f914cd2 35e160b8d1de 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (4m) 2m ago 4m 8061k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (3m) 2m ago 3m 8242k - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (4m) 2m ago 4m 7411k - 18.2.0 dc2bc1663786 1c577d7a0de0 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (3m) 2m ago 3m 7402k - 18.2.0 dc2bc1663786 9e4961442551 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (3m) 2m ago 4m 82.7M - 9.4.7 954c08fa6188 46e00e5e5b38 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (2m) 2m ago 2m 16.0M - 18.2.0 dc2bc1663786 ea3dca51957f 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (2m) 2m ago 2m 12.8M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (2m) 2m ago 2m 12.9M - 18.2.0 dc2bc1663786 6c77fb591d5a 2026-03-09T15:00:38.233 
INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (2m) 2m ago 2m 15.7M - 18.2.0 dc2bc1663786 b5ad1c71089a 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:9283,8765,8443 running (5m) 2m ago 5m 499M - 18.2.0 dc2bc1663786 528c75e7c581 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (3m) 2m ago 3m 444M - 18.2.0 dc2bc1663786 b7db289ecc14 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (5m) 2m ago 5m 49.2M 2048M 18.2.0 dc2bc1663786 c83e96b62251 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (3m) 2m ago 3m 45.0M 2048M 18.2.0 dc2bc1663786 7963792b5376 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (4m) 2m ago 4m 13.9M - 1.5.0 0da6a335fe13 925d94d1da6f 2026-03-09T15:00:38.233 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (3m) 2m ago 3m 14.6M - 1.5.0 0da6a335fe13 e0b25e3a046e 2026-03-09T15:00:38.234 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (3m) 2m ago 3m 45.7M 4096M 18.2.0 dc2bc1663786 50f3ca995318 2026-03-09T15:00:38.234 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (3m) 2m ago 3m 46.9M 4096M 18.2.0 dc2bc1663786 23e35bdafe50 2026-03-09T15:00:38.234 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (3m) 2m ago 3m 45.8M 4096M 18.2.0 dc2bc1663786 75097dc12979 2026-03-09T15:00:38.234 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (2m) 2m ago 2m 44.4M 4096M 18.2.0 dc2bc1663786 e79644a0564f 2026-03-09T15:00:38.234 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (2m) 2m ago 2m 43.6M 4096M 18.2.0 dc2bc1663786 4239752204df 2026-03-09T15:00:38.234 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (2m) 2m ago 2m 42.5M 4096M 18.2.0 dc2bc1663786 85fde149396e 2026-03-09T15:00:38.234 
INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (3m) 2m ago 4m 38.9M - 2.43.0 a07b618ecd1d c36363ff6641 2026-03-09T15:00:38.234 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.230+0000 7f44b17fa700 1 -- 192.168.123.105:0/4196756509 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3216 (secure 0 0 0) 0x7f44b4108040 con 0x7f44a006c7a0 2026-03-09T15:00:38.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.234+0000 7f44b9d14700 1 -- 192.168.123.105:0/4196756509 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f44a006c7a0 msgr2=0x7f44a006ec50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:38.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.234+0000 7f44b9d14700 1 --2- 192.168.123.105:0/4196756509 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f44a006c7a0 0x7f44a006ec50 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7f44a4009e70 tx=0x7f44a4009500 comp rx=0 tx=0).stop 2026-03-09T15:00:38.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.234+0000 7f44b9d14700 1 -- 192.168.123.105:0/4196756509 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f44b41036f0 msgr2=0x7f44b41982f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:38.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.234+0000 7f44b9d14700 1 --2- 192.168.123.105:0/4196756509 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f44b41036f0 0x7f44b41982f0 secure :-1 s=READY pgs=300 cs=0 l=1 rev1=1 crypto rx=0x7f449c005950 tx=0x7f449c0037e0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.234+0000 7f44b9d14700 1 -- 192.168.123.105:0/4196756509 shutdown_connections 2026-03-09T15:00:38.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.234+0000 7f44b9d14700 1 --2- 
192.168.123.105:0/4196756509 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f44a006c7a0 0x7f44a006ec50 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.234+0000 7f44b9d14700 1 --2- 192.168.123.105:0/4196756509 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f44b41024f0 0x7f44b4197db0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.234+0000 7f44b9d14700 1 --2- 192.168.123.105:0/4196756509 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f44b41036f0 0x7f44b41982f0 unknown :-1 s=CLOSED pgs=300 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.234+0000 7f44b9d14700 1 -- 192.168.123.105:0/4196756509 >> 192.168.123.105:0/4196756509 conn(0x7f44b40fdac0 msgr2=0x7f44b4106920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:38.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.234+0000 7f44b9d14700 1 -- 192.168.123.105:0/4196756509 shutdown_connections 2026-03-09T15:00:38.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.234+0000 7f44b9d14700 1 -- 192.168.123.105:0/4196756509 wait complete. 
2026-03-09T15:00:38.239 INFO:tasks.workunit.client.1.vm09.stdout:7/130: rename d3/db/d15/d1e to d3/db/d25 0 2026-03-09T15:00:38.239 INFO:tasks.workunit.client.1.vm09.stdout:7/131: write d3/fd [853976,41090] 0 2026-03-09T15:00:38.244 INFO:tasks.workunit.client.1.vm09.stdout:7/132: dwrite d3/f16 [0,4194304] 0 2026-03-09T15:00:38.254 INFO:tasks.workunit.client.1.vm09.stdout:7/133: dwrite f1 [0,4194304] 0 2026-03-09T15:00:38.266 INFO:tasks.workunit.client.1.vm09.stdout:0/162: rmdir da/d30 39 2026-03-09T15:00:38.271 INFO:tasks.workunit.client.1.vm09.stdout:1/85: rename d8/l16 to d8/l1d 0 2026-03-09T15:00:38.275 INFO:tasks.workunit.client.1.vm09.stdout:3/133: link d3/f9 d3/d4/d18/d2b/f32 0 2026-03-09T15:00:38.275 INFO:tasks.workunit.client.1.vm09.stdout:3/134: read d3/f29 [293684,90344] 0 2026-03-09T15:00:38.295 INFO:tasks.workunit.client.1.vm09.stdout:1/86: rmdir d8/d10 39 2026-03-09T15:00:38.299 INFO:tasks.workunit.client.1.vm09.stdout:8/129: dwrite f8 [4194304,4194304] 0 2026-03-09T15:00:38.299 INFO:tasks.workunit.client.1.vm09.stdout:8/130: chown df/c11 2 1 2026-03-09T15:00:38.300 INFO:tasks.workunit.client.1.vm09.stdout:8/131: truncate df/f12 315209 0 2026-03-09T15:00:38.303 INFO:tasks.workunit.client.1.vm09.stdout:2/152: dwrite df/d20/f24 [0,4194304] 0 2026-03-09T15:00:38.305 INFO:tasks.workunit.client.1.vm09.stdout:8/132: dread fe [0,4194304] 0 2026-03-09T15:00:38.306 INFO:tasks.workunit.client.1.vm09.stdout:2/153: chown df/d1f/c25 181202258 1 2026-03-09T15:00:38.308 INFO:tasks.workunit.client.1.vm09.stdout:5/130: link d2/l18 d2/d4/d21/l32 0 2026-03-09T15:00:38.309 INFO:tasks.workunit.client.1.vm09.stdout:9/122: creat d1/f2c x:0 0 0 2026-03-09T15:00:38.309 INFO:tasks.workunit.client.1.vm09.stdout:0/163: creat da/d30/f38 x:0 0 0 2026-03-09T15:00:38.309 INFO:tasks.workunit.client.1.vm09.stdout:9/123: dread - d1/f2c zero size 2026-03-09T15:00:38.311 INFO:tasks.workunit.client.1.vm09.stdout:0/164: dread da/dc/d10/f29 [0,4194304] 0 2026-03-09T15:00:38.312 
INFO:tasks.workunit.client.1.vm09.stdout:0/165: readlink da/dc/l2a 0 2026-03-09T15:00:38.312 INFO:tasks.workunit.client.1.vm09.stdout:0/166: fdatasync da/dc/fe 0 2026-03-09T15:00:38.314 INFO:tasks.workunit.client.1.vm09.stdout:0/167: fsync da/dc/d10/f16 0 2026-03-09T15:00:38.323 INFO:tasks.workunit.client.1.vm09.stdout:3/135: creat d3/d4/d18/d2b/d31/f33 x:0 0 0 2026-03-09T15:00:38.329 INFO:tasks.workunit.client.1.vm09.stdout:5/131: rmdir d2/d4/d21 39 2026-03-09T15:00:38.334 INFO:tasks.workunit.client.1.vm09.stdout:6/151: rename d6/db/d10/f2b to d6/df/f31 0 2026-03-09T15:00:38.336 INFO:tasks.workunit.client.1.vm09.stdout:7/134: link d3/fd d3/f26 0 2026-03-09T15:00:38.337 INFO:tasks.workunit.client.1.vm09.stdout:7/135: chown d3/db/d25/f22 146522 1 2026-03-09T15:00:38.340 INFO:tasks.workunit.client.1.vm09.stdout:9/124: creat d1/d7/d9/f2d x:0 0 0 2026-03-09T15:00:38.344 INFO:tasks.workunit.client.1.vm09.stdout:7/136: dwrite d3/f8 [0,4194304] 0 2026-03-09T15:00:38.347 INFO:tasks.workunit.client.1.vm09.stdout:9/125: dread - d1/d7/d1e/f20 zero size 2026-03-09T15:00:38.349 INFO:tasks.workunit.client.1.vm09.stdout:7/137: sync 2026-03-09T15:00:38.349 INFO:tasks.workunit.client.1.vm09.stdout:9/126: dwrite d1/f1f [0,4194304] 0 2026-03-09T15:00:38.356 INFO:tasks.workunit.client.1.vm09.stdout:0/168: mknod da/dc/d22/c39 0 2026-03-09T15:00:38.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.356+0000 7fceb85e7700 1 -- 192.168.123.105:0/2459581027 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0071a90 msgr2=0x7fceb0071ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:38.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.356+0000 7fceb85e7700 1 --2- 192.168.123.105:0/2459581027 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0071a90 0x7fceb0071ea0 secure :-1 s=READY pgs=301 cs=0 l=1 rev1=1 crypto rx=0x7fceac009b00 tx=0x7fceac009e10 comp rx=0 tx=0).stop 2026-03-09T15:00:38.358 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.357+0000 7fceb85e7700 1 -- 192.168.123.105:0/2459581027 shutdown_connections 2026-03-09T15:00:38.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.357+0000 7fceb85e7700 1 --2- 192.168.123.105:0/2459581027 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fceb0072470 0x7fceb010beb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.357+0000 7fceb85e7700 1 --2- 192.168.123.105:0/2459581027 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0071a90 0x7fceb0071ea0 unknown :-1 s=CLOSED pgs=301 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.357+0000 7fceb85e7700 1 -- 192.168.123.105:0/2459581027 >> 192.168.123.105:0/2459581027 conn(0x7fceb006d1a0 msgr2=0x7fceb006f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:38.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.357+0000 7fceb85e7700 1 -- 192.168.123.105:0/2459581027 shutdown_connections 2026-03-09T15:00:38.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.357+0000 7fceb85e7700 1 -- 192.168.123.105:0/2459581027 wait complete. 
2026-03-09T15:00:38.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.358+0000 7fceb85e7700 1 Processor -- start 2026-03-09T15:00:38.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.358+0000 7fceb85e7700 1 -- start start 2026-03-09T15:00:38.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.358+0000 7fceb85e7700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fceb0071a90 0x7fceb0116c80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:38.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.358+0000 7fceb85e7700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0072470 0x7fceb01171c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:38.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.358+0000 7fceb85e7700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fceb01177e0 con 0x7fceb0072470 2026-03-09T15:00:38.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.358+0000 7fceb85e7700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fceb0117920 con 0x7fceb0071a90 2026-03-09T15:00:38.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.359+0000 7fceb5b82700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0072470 0x7fceb01171c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:38.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.359+0000 7fceb5b82700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0072470 0x7fceb01171c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:54034/0 (socket says 192.168.123.105:54034) 2026-03-09T15:00:38.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.359+0000 7fceb5b82700 1 -- 192.168.123.105:0/79555445 learned_addr learned my addr 192.168.123.105:0/79555445 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:00:38.360 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.359+0000 7fceb5b82700 1 -- 192.168.123.105:0/79555445 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fceb0071a90 msgr2=0x7fceb0116c80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:38.360 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.359+0000 7fceb5b82700 1 --2- 192.168.123.105:0/79555445 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fceb0071a90 0x7fceb0116c80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.360 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.359+0000 7fceb5b82700 1 -- 192.168.123.105:0/79555445 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fceac0097e0 con 0x7fceb0072470 2026-03-09T15:00:38.360 INFO:tasks.workunit.client.1.vm09.stdout:1/87: truncate d8/fa 695252 0 2026-03-09T15:00:38.360 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.359+0000 7fceb5b82700 1 --2- 192.168.123.105:0/79555445 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0072470 0x7fceb01171c0 secure :-1 s=READY pgs=302 cs=0 l=1 rev1=1 crypto rx=0x7fcea800bf40 tx=0x7fcea800bf70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:38.360 INFO:tasks.workunit.client.1.vm09.stdout:0/169: dread - da/dc/d10/f2d zero size 2026-03-09T15:00:38.361 INFO:tasks.workunit.client.1.vm09.stdout:3/136: chown d3/d4/ld 52494 1 2026-03-09T15:00:38.361 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.360+0000 
7fcea77fe700 1 -- 192.168.123.105:0/79555445 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcea800cb40 con 0x7fceb0072470 2026-03-09T15:00:38.361 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.360+0000 7fcea77fe700 1 -- 192.168.123.105:0/79555445 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fcea800cca0 con 0x7fceb0072470 2026-03-09T15:00:38.362 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.360+0000 7fcea77fe700 1 -- 192.168.123.105:0/79555445 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcea8012720 con 0x7fceb0072470 2026-03-09T15:00:38.362 INFO:tasks.workunit.client.1.vm09.stdout:4/113: rename l5 to db/d12/d16/l24 0 2026-03-09T15:00:38.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.361+0000 7fceb85e7700 1 -- 192.168.123.105:0/79555445 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fceb01b2bf0 con 0x7fceb0072470 2026-03-09T15:00:38.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.361+0000 7fceb85e7700 1 -- 192.168.123.105:0/79555445 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fceb01b3110 con 0x7fceb0072470 2026-03-09T15:00:38.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.361+0000 7fceb85e7700 1 -- 192.168.123.105:0/79555445 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fceb0110c20 con 0x7fceb0072470 2026-03-09T15:00:38.366 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.365+0000 7fcea77fe700 1 -- 192.168.123.105:0/79555445 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fcea8012880 con 0x7fceb0072470 2026-03-09T15:00:38.380 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.365+0000 
7fcea77fe700 1 --2- 192.168.123.105:0/79555445 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fce9c06c4d0 0x7fce9c06e980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:38.380 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.365+0000 7fcea77fe700 1 -- 192.168.123.105:0/79555445 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fcea808bc20 con 0x7fceb0072470 2026-03-09T15:00:38.380 INFO:tasks.workunit.client.1.vm09.stdout:4/114: chown db/f21 487128596 1 2026-03-09T15:00:38.381 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.365+0000 7fceb6383700 1 --2- 192.168.123.105:0/79555445 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fce9c06c4d0 0x7fce9c06e980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:38.381 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.367+0000 7fceb6383700 1 --2- 192.168.123.105:0/79555445 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fce9c06c4d0 0x7fce9c06e980 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fceac00b5c0 tx=0x7fceac009f90 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:38.381 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.368+0000 7fcea77fe700 1 -- 192.168.123.105:0/79555445 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fcea8059eb0 con 0x7fceb0072470 2026-03-09T15:00:38.383 INFO:tasks.workunit.client.1.vm09.stdout:0/170: dread da/dc/d22/f25 [0,4194304] 0 2026-03-09T15:00:38.384 INFO:tasks.workunit.client.1.vm09.stdout:5/132: symlink d2/d4/d21/l33 0 2026-03-09T15:00:38.384 INFO:tasks.workunit.client.1.vm09.stdout:6/152: mknod d6/d20/d24/c32 0 
2026-03-09T15:00:38.386 INFO:tasks.workunit.client.1.vm09.stdout:6/153: chown d6/d20/d2a/l2d 2170432 1 2026-03-09T15:00:38.387 INFO:tasks.workunit.client.1.vm09.stdout:2/154: getdents df/d1f 0 2026-03-09T15:00:38.390 INFO:tasks.workunit.client.1.vm09.stdout:8/133: truncate f8 1086662 0 2026-03-09T15:00:38.393 INFO:tasks.workunit.client.1.vm09.stdout:0/171: mkdir da/dc/d10/d3a 0 2026-03-09T15:00:38.393 INFO:tasks.workunit.client.1.vm09.stdout:0/172: chown f3 4 1 2026-03-09T15:00:38.395 INFO:tasks.workunit.client.1.vm09.stdout:8/134: dread df/f1a [0,4194304] 0 2026-03-09T15:00:38.395 INFO:tasks.workunit.client.1.vm09.stdout:5/133: write d2/f14 [4610458,97928] 0 2026-03-09T15:00:38.398 INFO:tasks.workunit.client.1.vm09.stdout:7/138: fsync d3/f26 0 2026-03-09T15:00:38.402 INFO:tasks.workunit.client.1.vm09.stdout:6/154: rename d6/d20/d2a/l2d to d6/d20/d24/l33 0 2026-03-09T15:00:38.410 INFO:tasks.workunit.client.1.vm09.stdout:3/137: dwrite d3/d4/f26 [0,4194304] 0 2026-03-09T15:00:38.410 INFO:tasks.workunit.client.1.vm09.stdout:9/127: dwrite d1/d7/d9/f27 [0,4194304] 0 2026-03-09T15:00:38.415 INFO:tasks.workunit.client.1.vm09.stdout:0/173: creat da/dc/d22/f3b x:0 0 0 2026-03-09T15:00:38.416 INFO:tasks.workunit.client.1.vm09.stdout:0/174: stat da/d30 0 2026-03-09T15:00:38.416 INFO:tasks.workunit.client.1.vm09.stdout:4/115: truncate db/f14 3766117 0 2026-03-09T15:00:38.417 INFO:tasks.workunit.client.1.vm09.stdout:9/128: truncate d1/d7/d1e/f20 704407 0 2026-03-09T15:00:38.418 INFO:tasks.workunit.client.1.vm09.stdout:9/129: write d1/d7/d9/f1d [726568,114278] 0 2026-03-09T15:00:38.420 INFO:tasks.workunit.client.1.vm09.stdout:4/116: dread f3 [0,4194304] 0 2026-03-09T15:00:38.435 INFO:tasks.workunit.client.1.vm09.stdout:2/155: creat df/d20/d29/f31 x:0 0 0 2026-03-09T15:00:38.435 INFO:tasks.workunit.client.1.vm09.stdout:5/134: creat d2/f34 x:0 0 0 2026-03-09T15:00:38.437 INFO:tasks.workunit.client.1.vm09.stdout:7/139: dwrite d3/db/d15/f23 [4194304,4194304] 0 
2026-03-09T15:00:38.443 INFO:tasks.workunit.client.1.vm09.stdout:6/155: creat d6/d20/f34 x:0 0 0 2026-03-09T15:00:38.444 INFO:tasks.workunit.client.1.vm09.stdout:6/156: dread d6/df/f31 [0,4194304] 0 2026-03-09T15:00:38.444 INFO:tasks.workunit.client.1.vm09.stdout:6/157: truncate d6/d20/f34 1047640 0 2026-03-09T15:00:38.446 INFO:tasks.workunit.client.1.vm09.stdout:0/175: mkdir da/dc/d1c/d3c 0 2026-03-09T15:00:38.457 INFO:tasks.workunit.client.1.vm09.stdout:5/135: symlink d2/d4/d21/l35 0 2026-03-09T15:00:38.466 INFO:tasks.workunit.client.1.vm09.stdout:6/158: dread d6/df/d23/f2f [0,4194304] 0 2026-03-09T15:00:38.466 INFO:tasks.workunit.client.1.vm09.stdout:0/176: rename f3 to da/d30/f3d 0 2026-03-09T15:00:38.466 INFO:tasks.workunit.client.1.vm09.stdout:0/177: fsync f7 0 2026-03-09T15:00:38.473 INFO:tasks.workunit.client.1.vm09.stdout:5/136: mkdir d2/d4/d21/d36 0 2026-03-09T15:00:38.473 INFO:tasks.workunit.client.1.vm09.stdout:5/137: dread - d2/d4/f24 zero size 2026-03-09T15:00:38.480 INFO:tasks.workunit.client.1.vm09.stdout:6/159: chown d6/db/ld 91071923 1 2026-03-09T15:00:38.482 INFO:tasks.workunit.client.1.vm09.stdout:0/178: mknod da/dc/d1c/c3e 0 2026-03-09T15:00:38.482 INFO:tasks.workunit.client.1.vm09.stdout:0/179: fdatasync da/dc/f17 0 2026-03-09T15:00:38.483 INFO:tasks.workunit.client.1.vm09.stdout:0/180: rename da/d30/d36 to da/d30/d36/d3f 22 2026-03-09T15:00:38.484 INFO:tasks.workunit.client.1.vm09.stdout:4/117: creat db/d12/f25 x:0 0 0 2026-03-09T15:00:38.486 INFO:tasks.workunit.client.1.vm09.stdout:2/156: link df/l18 df/d20/d2e/l32 0 2026-03-09T15:00:38.488 INFO:tasks.workunit.client.1.vm09.stdout:5/138: mkdir d2/d37 0 2026-03-09T15:00:38.493 INFO:tasks.workunit.client.1.vm09.stdout:6/160: rmdir d6/df/d23 39 2026-03-09T15:00:38.518 INFO:tasks.workunit.client.1.vm09.stdout:6/161: chown d6/db 58 1 2026-03-09T15:00:38.518 INFO:tasks.workunit.client.1.vm09.stdout:0/181: mknod da/dc/d1c/c40 0 2026-03-09T15:00:38.518 INFO:tasks.workunit.client.1.vm09.stdout:4/118: 
creat db/d12/d16/f26 x:0 0 0 2026-03-09T15:00:38.518 INFO:tasks.workunit.client.1.vm09.stdout:5/139: creat d2/f38 x:0 0 0 2026-03-09T15:00:38.518 INFO:tasks.workunit.client.1.vm09.stdout:5/140: dread - d2/f38 zero size 2026-03-09T15:00:38.519 INFO:tasks.workunit.client.1.vm09.stdout:0/182: mknod da/d30/c41 0 2026-03-09T15:00:38.519 INFO:tasks.workunit.client.1.vm09.stdout:5/141: rename d2/d4/c19 to d2/c39 0 2026-03-09T15:00:38.519 INFO:tasks.workunit.client.1.vm09.stdout:5/142: truncate d2/f34 516043 0 2026-03-09T15:00:38.519 INFO:tasks.workunit.client.1.vm09.stdout:6/162: symlink d6/df/l35 0 2026-03-09T15:00:38.519 INFO:tasks.workunit.client.1.vm09.stdout:6/163: chown d6 0 1 2026-03-09T15:00:38.519 INFO:tasks.workunit.client.1.vm09.stdout:2/157: dread f4 [4194304,4194304] 0 2026-03-09T15:00:38.519 INFO:tasks.workunit.client.1.vm09.stdout:2/158: write df/f16 [3079334,119140] 0 2026-03-09T15:00:38.519 INFO:tasks.workunit.client.1.vm09.stdout:2/159: fsync df/f28 0 2026-03-09T15:00:38.520 INFO:tasks.workunit.client.1.vm09.stdout:0/183: unlink f8 0 2026-03-09T15:00:38.521 INFO:tasks.workunit.client.1.vm09.stdout:6/164: dread d6/df/f16 [0,4194304] 0 2026-03-09T15:00:38.521 INFO:tasks.workunit.client.1.vm09.stdout:6/165: dread - d6/db/d10/f1c zero size 2026-03-09T15:00:38.525 INFO:tasks.workunit.client.1.vm09.stdout:5/143: creat d2/d4/d21/f3a x:0 0 0 2026-03-09T15:00:38.526 INFO:tasks.workunit.client.1.vm09.stdout:5/144: write d2/d4/f16 [5234534,130944] 0 2026-03-09T15:00:38.536 INFO:tasks.workunit.client.1.vm09.stdout:3/138: read d3/f9 [458716,27739] 0 2026-03-09T15:00:38.543 INFO:tasks.workunit.client.1.vm09.stdout:1/88: write d8/f19 [162284,119269] 0 2026-03-09T15:00:38.543 INFO:tasks.workunit.client.1.vm09.stdout:2/160: read df/f1c [3349662,109126] 0 2026-03-09T15:00:38.544 INFO:tasks.workunit.client.1.vm09.stdout:6/166: creat d6/d20/f36 x:0 0 0 2026-03-09T15:00:38.553 INFO:tasks.workunit.client.1.vm09.stdout:3/139: creat d3/d4/d18/d2b/d31/f34 x:0 0 0 
2026-03-09T15:00:38.556 INFO:tasks.workunit.client.1.vm09.stdout:5/145: rename d2/d4/f24 to d2/d4/d21/d36/f3b 0 2026-03-09T15:00:38.559 INFO:tasks.workunit.client.1.vm09.stdout:1/89: chown d8/d10/d1c 693092 1 2026-03-09T15:00:38.566 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.564+0000 7fceb85e7700 1 -- 192.168.123.105:0/79555445 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fceb004ea50 con 0x7fceb0072470 2026-03-09T15:00:38.578 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.566+0000 7fcea77fe700 1 -- 192.168.123.105:0/79555445 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+558 (secure 0 0 0) 0x7fcea8059a40 con 0x7fceb0072470 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 6 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4 2026-03-09T15:00:38.579 
INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 14 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:00:38.579 INFO:tasks.workunit.client.1.vm09.stdout:2/161: dread df/f1d [0,4194304] 0 2026-03-09T15:00:38.579 INFO:tasks.workunit.client.1.vm09.stdout:2/162: write f0 [1107195,50355] 0 2026-03-09T15:00:38.579 INFO:tasks.workunit.client.1.vm09.stdout:0/184: getdents da/dc/d1c/d3c 0 2026-03-09T15:00:38.579 INFO:tasks.workunit.client.1.vm09.stdout:0/185: dwrite da/dc/f17 [0,4194304] 0 2026-03-09T15:00:38.579 INFO:tasks.workunit.client.1.vm09.stdout:0/186: chown da/l15 329868 1 2026-03-09T15:00:38.579 INFO:tasks.workunit.client.1.vm09.stdout:0/187: write da/dc/fe [1155344,18598] 0 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.570+0000 7fcea57fa700 1 -- 192.168.123.105:0/79555445 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fce9c06c4d0 msgr2=0x7fce9c06e980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.570+0000 7fcea57fa700 1 --2- 192.168.123.105:0/79555445 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fce9c06c4d0 0x7fce9c06e980 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fceac00b5c0 tx=0x7fceac009f90 comp rx=0 tx=0).stop 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.570+0000 7fcea57fa700 1 -- 192.168.123.105:0/79555445 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0072470 msgr2=0x7fceb01171c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:38.579 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.570+0000 7fcea57fa700 1 --2- 192.168.123.105:0/79555445 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0072470 0x7fceb01171c0 secure :-1 s=READY pgs=302 cs=0 l=1 rev1=1 crypto rx=0x7fcea800bf40 tx=0x7fcea800bf70 comp rx=0 tx=0).stop 2026-03-09T15:00:38.579 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.571+0000 7fcea57fa700 1 -- 192.168.123.105:0/79555445 shutdown_connections 2026-03-09T15:00:38.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.571+0000 7fcea57fa700 1 --2- 192.168.123.105:0/79555445 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fce9c06c4d0 0x7fce9c06e980 unknown :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.571+0000 7fcea57fa700 1 --2- 192.168.123.105:0/79555445 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fceb0071a90 0x7fceb0116c80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.571+0000 7fcea57fa700 1 --2- 192.168.123.105:0/79555445 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fceb0072470 0x7fceb01171c0 unknown :-1 s=CLOSED pgs=302 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.571+0000 7fcea57fa700 1 -- 192.168.123.105:0/79555445 >> 192.168.123.105:0/79555445 conn(0x7fceb006d1a0 msgr2=0x7fceb010b590 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:38.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.571+0000 7fcea57fa700 1 -- 192.168.123.105:0/79555445 shutdown_connections 2026-03-09T15:00:38.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.571+0000 7fcea57fa700 1 -- 192.168.123.105:0/79555445 wait complete. 
2026-03-09T15:00:38.584 INFO:tasks.workunit.client.1.vm09.stdout:6/167: creat d6/d20/d2a/f37 x:0 0 0 2026-03-09T15:00:38.584 INFO:tasks.workunit.client.1.vm09.stdout:4/119: getdents db 0 2026-03-09T15:00:38.597 INFO:tasks.workunit.client.1.vm09.stdout:2/163: creat df/f33 x:0 0 0 2026-03-09T15:00:38.603 INFO:tasks.workunit.client.1.vm09.stdout:6/168: mkdir d6/d20/d38 0 2026-03-09T15:00:38.609 INFO:tasks.workunit.client.1.vm09.stdout:1/90: symlink d8/d10/d1c/l1e 0 2026-03-09T15:00:38.630 INFO:tasks.workunit.client.1.vm09.stdout:2/164: symlink df/d20/d2e/l34 0 2026-03-09T15:00:38.633 INFO:tasks.workunit.client.1.vm09.stdout:4/120: creat db/d12/f27 x:0 0 0 2026-03-09T15:00:38.635 INFO:tasks.workunit.client.1.vm09.stdout:2/165: dwrite f3 [0,4194304] 0 2026-03-09T15:00:38.643 INFO:tasks.workunit.client.1.vm09.stdout:6/169: creat d6/f39 x:0 0 0 2026-03-09T15:00:38.647 INFO:tasks.workunit.client.1.vm09.stdout:4/121: dwrite db/fe [0,4194304] 0 2026-03-09T15:00:38.656 INFO:tasks.workunit.client.1.vm09.stdout:2/166: dwrite df/f13 [0,4194304] 0 2026-03-09T15:00:38.670 INFO:tasks.workunit.client.1.vm09.stdout:6/170: symlink d6/df/d23/l3a 0 2026-03-09T15:00:38.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.679+0000 7f6519c3e700 1 -- 192.168.123.105:0/3616470247 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6514071e60 msgr2=0x7f6514072270 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:38.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.679+0000 7f6519c3e700 1 --2- 192.168.123.105:0/3616470247 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6514071e60 0x7f6514072270 secure :-1 s=READY pgs=303 cs=0 l=1 rev1=1 crypto rx=0x7f6504009b00 tx=0x7f6504009e10 comp rx=0 tx=0).stop 2026-03-09T15:00:38.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.679+0000 7f6519c3e700 1 -- 192.168.123.105:0/3616470247 shutdown_connections 2026-03-09T15:00:38.681 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.679+0000 7f6519c3e700 1 --2- 192.168.123.105:0/3616470247 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6514072840 0x7f6514107df0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.679+0000 7f6519c3e700 1 --2- 192.168.123.105:0/3616470247 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6514071e60 0x7f6514072270 unknown :-1 s=CLOSED pgs=303 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.679+0000 7f6519c3e700 1 -- 192.168.123.105:0/3616470247 >> 192.168.123.105:0/3616470247 conn(0x7f651406d1a0 msgr2=0x7f651406f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:38.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.679+0000 7f6519c3e700 1 -- 192.168.123.105:0/3616470247 shutdown_connections 2026-03-09T15:00:38.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.680+0000 7f6519c3e700 1 -- 192.168.123.105:0/3616470247 wait complete. 
2026-03-09T15:00:38.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.680+0000 7f6519c3e700 1 Processor -- start 2026-03-09T15:00:38.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.680+0000 7f6519c3e700 1 -- start start 2026-03-09T15:00:38.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.680+0000 7f6519c3e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6514071e60 0x7f651419e600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:38.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.680+0000 7f6519c3e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6514072840 0x7f651419eb40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:38.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.680+0000 7f6519c3e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f651419f160 con 0x7f6514071e60 2026-03-09T15:00:38.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.680+0000 7f6519c3e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f651419f2a0 con 0x7f6514072840 2026-03-09T15:00:38.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.683+0000 7f6513fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6514072840 0x7f651419eb40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:38.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.683+0000 7f6513fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6514072840 0x7f651419eb40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:58136/0 (socket says 192.168.123.105:58136) 2026-03-09T15:00:38.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.683+0000 7f6513fff700 1 -- 192.168.123.105:0/2317550515 learned_addr learned my addr 192.168.123.105:0/2317550515 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:00:38.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.683+0000 7f6513fff700 1 -- 192.168.123.105:0/2317550515 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6514071e60 msgr2=0x7f651419e600 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:38.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.683+0000 7f6513fff700 1 --2- 192.168.123.105:0/2317550515 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6514071e60 0x7f651419e600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:38.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.683+0000 7f6513fff700 1 -- 192.168.123.105:0/2317550515 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f65040097e0 con 0x7f6514072840 2026-03-09T15:00:38.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.684+0000 7f6513fff700 1 --2- 192.168.123.105:0/2317550515 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6514072840 0x7f651419eb40 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f650800eb10 tx=0x7f650800eed0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:38.686 INFO:tasks.workunit.client.1.vm09.stdout:6/171: mkdir d6/d20/d2a/d3b 0 2026-03-09T15:00:38.687 INFO:tasks.workunit.client.1.vm09.stdout:6/172: chown d6/d20/c26 2751 1 2026-03-09T15:00:38.688 INFO:tasks.workunit.client.1.vm09.stdout:4/122: link db/lc db/d19/d23/l28 0 2026-03-09T15:00:38.695 INFO:tasks.workunit.client.1.vm09.stdout:6/173: dwrite d6/f25 
[0,4194304] 0 2026-03-09T15:00:38.695 INFO:tasks.workunit.client.1.vm09.stdout:4/123: dwrite db/f21 [0,4194304] 0 2026-03-09T15:00:38.701 INFO:tasks.workunit.client.1.vm09.stdout:6/174: symlink d6/d20/l3c 0 2026-03-09T15:00:38.719 INFO:tasks.workunit.client.1.vm09.stdout:6/175: mkdir d6/d20/d2a/d3d 0 2026-03-09T15:00:38.719 INFO:tasks.workunit.client.1.vm09.stdout:6/176: dread f0 [0,4194304] 0 2026-03-09T15:00:38.729 INFO:tasks.workunit.client.1.vm09.stdout:8/135: read f6 [761107,106120] 0 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:2/167: rmdir df/d20/d29 39 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:4/124: dread db/f14 [0,4194304] 0 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:4/125: write db/d12/f1a [177155,97732] 0 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:8/136: creat df/f26 x:0 0 0 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:2/168: mkdir df/d1f/d35 0 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:4/126: creat db/f29 x:0 0 0 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:4/127: write db/d12/f27 [118415,54378] 0 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:2/169: chown df/d20/d29/f31 2604859 1 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:8/137: symlink df/d1c/d1d/l27 0 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:4/128: creat db/d12/d16/f2a x:0 0 0 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:4/129: dread - db/d12/f1b zero size 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:2/170: dread f3 [0,4194304] 0 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:4/130: write f3 [2696029,126886] 0 2026-03-09T15:00:38.744 INFO:tasks.workunit.client.1.vm09.stdout:8/138: creat df/d24/f28 x:0 0 0 2026-03-09T15:00:38.745 INFO:tasks.workunit.client.1.vm09.stdout:4/131: unlink db/d12/f1a 0 2026-03-09T15:00:38.746 
INFO:tasks.workunit.client.1.vm09.stdout:2/171: dwrite df/f23 [0,4194304] 0 2026-03-09T15:00:38.756 INFO:tasks.workunit.client.1.vm09.stdout:2/172: rmdir df/d1f 39 2026-03-09T15:00:38.757 INFO:tasks.workunit.client.1.vm09.stdout:2/173: dread df/f1d [0,4194304] 0 2026-03-09T15:00:38.758 INFO:tasks.workunit.client.1.vm09.stdout:4/132: creat db/d12/f2b x:0 0 0 2026-03-09T15:00:38.760 INFO:tasks.workunit.client.1.vm09.stdout:4/133: symlink db/d19/d23/l2c 0 2026-03-09T15:00:38.762 INFO:tasks.workunit.client.1.vm09.stdout:2/174: rmdir df/d1f/d35 0 2026-03-09T15:00:38.763 INFO:tasks.workunit.client.1.vm09.stdout:4/134: symlink db/d12/l2d 0 2026-03-09T15:00:38.763 INFO:tasks.workunit.client.1.vm09.stdout:4/135: stat db/fe 0 2026-03-09T15:00:38.764 INFO:tasks.workunit.client.1.vm09.stdout:4/136: write db/fe [1767099,57730] 0 2026-03-09T15:00:38.832 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.829+0000 7f6511ffb700 1 -- 192.168.123.105:0/2317550515 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f650800cca0 con 0x7f6514072840 2026-03-09T15:00:38.832 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.829+0000 7f6519c3e700 1 -- 192.168.123.105:0/2317550515 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f65141a3d50 con 0x7f6514072840 2026-03-09T15:00:38.832 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.829+0000 7f6519c3e700 1 -- 192.168.123.105:0/2317550515 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f65141a42a0 con 0x7f6514072840 2026-03-09T15:00:38.832 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.829+0000 7f6511ffb700 1 -- 192.168.123.105:0/2317550515 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f650800ce00 con 0x7f6514072840 2026-03-09T15:00:38.832 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.831+0000 
7f6511ffb700 1 -- 192.168.123.105:0/2317550515 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f65080189c0 con 0x7f6514072840 2026-03-09T15:00:38.832 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.831+0000 7f6519c3e700 1 -- 192.168.123.105:0/2317550515 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f651404ea50 con 0x7f6514072840 2026-03-09T15:00:38.834 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.832+0000 7f6511ffb700 1 -- 192.168.123.105:0/2317550515 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f6508018b20 con 0x7f6514072840 2026-03-09T15:00:38.834 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.833+0000 7f6511ffb700 1 --2- 192.168.123.105:0/2317550515 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f64fc06c4d0 0x7f64fc06e980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:00:38.834 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.833+0000 7f6511ffb700 1 -- 192.168.123.105:0/2317550515 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f6508014070 con 0x7f6514072840 2026-03-09T15:00:38.834 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.833+0000 7f6518c3c700 1 --2- 192.168.123.105:0/2317550515 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f64fc06c4d0 0x7f64fc06e980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:00:38.835 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.834+0000 7f6518c3c700 1 --2- 192.168.123.105:0/2317550515 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f64fc06c4d0 0x7f64fc06e980 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto 
rx=0x7f6504005cb0 tx=0x7f6504005c20 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:00:38.838 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:38.836+0000 7f6511ffb700 1 -- 192.168.123.105:0/2317550515 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f650805ac70 con 0x7f6514072840 2026-03-09T15:00:38.901 INFO:tasks.workunit.client.1.vm09.stdout:9/130: truncate d1/d7/d1e/f20 10040 0 2026-03-09T15:00:38.901 INFO:tasks.workunit.client.1.vm09.stdout:9/131: stat d1 0 2026-03-09T15:00:38.901 INFO:tasks.workunit.client.1.vm09.stdout:9/132: chown d1/d7 2956 1 2026-03-09T15:00:38.901 INFO:tasks.workunit.client.1.vm09.stdout:7/140: truncate f1 2932964 0 2026-03-09T15:00:38.904 INFO:tasks.workunit.client.1.vm09.stdout:7/141: unlink d3/l6 0 2026-03-09T15:00:38.904 INFO:tasks.workunit.client.1.vm09.stdout:7/142: chown d3/f16 927886 1 2026-03-09T15:00:38.909 INFO:tasks.workunit.client.1.vm09.stdout:9/133: dwrite d1/f24 [0,4194304] 0 2026-03-09T15:00:38.912 INFO:tasks.workunit.client.1.vm09.stdout:7/143: link d3/d1d/l21 d3/d1d/l27 0 2026-03-09T15:00:38.912 INFO:tasks.workunit.client.1.vm09.stdout:7/144: dread d3/db/fe [0,4194304] 0 2026-03-09T15:00:38.922 INFO:tasks.workunit.client.1.vm09.stdout:7/145: dwrite d3/f26 [0,4194304] 0 2026-03-09T15:00:38.935 INFO:tasks.workunit.client.1.vm09.stdout:7/146: dwrite d3/db/d25/f22 [0,4194304] 0 2026-03-09T15:00:38.937 INFO:tasks.workunit.client.1.vm09.stdout:5/146: rename d2/d4/d21 to d2/d37/d3c 0 2026-03-09T15:00:38.941 INFO:tasks.workunit.client.1.vm09.stdout:1/91: rename d8/fe to d8/d1b/f1f 0 2026-03-09T15:00:38.941 INFO:tasks.workunit.client.1.vm09.stdout:7/147: dread d3/f26 [0,4194304] 0 2026-03-09T15:00:38.943 INFO:tasks.workunit.client.1.vm09.stdout:7/148: write d3/db/d25/f22 [3276460,111220] 0 2026-03-09T15:00:38.952 INFO:tasks.workunit.client.1.vm09.stdout:6/177: rename 
d6/df/d23/l3a to d6/db/d10/l3e 0 2026-03-09T15:00:38.952 INFO:tasks.workunit.client.1.vm09.stdout:1/92: unlink d8/d10/d1c/l1e 0 2026-03-09T15:00:38.955 INFO:tasks.workunit.client.1.vm09.stdout:7/149: dread d3/f26 [0,4194304] 0 2026-03-09T15:00:38.974 INFO:tasks.workunit.client.1.vm09.stdout:4/137: rename db/d19/d23/l28 to db/d19/l2e 0 2026-03-09T15:00:38.976 INFO:tasks.workunit.client.1.vm09.stdout:7/150: mkdir d3/d28 0 2026-03-09T15:00:38.980 INFO:tasks.workunit.client.1.vm09.stdout:7/151: dread d3/db/d25/f22 [0,4194304] 0 2026-03-09T15:00:38.982 INFO:tasks.workunit.client.1.vm09.stdout:9/134: rename d1/d7/d9 to d1/d7/d1e/d2b/d2e 0 2026-03-09T15:00:38.985 INFO:tasks.workunit.client.1.vm09.stdout:4/138: unlink db/d12/l15 0 2026-03-09T15:00:38.986 INFO:tasks.workunit.client.1.vm09.stdout:3/140: dwrite d3/d4/d18/f1c [0,4194304] 0 2026-03-09T15:00:38.995 INFO:tasks.workunit.client.1.vm09.stdout:3/141: dwrite d3/d4/d18/d2b/d31/f33 [0,4194304] 0 2026-03-09T15:00:38.998 INFO:tasks.workunit.client.1.vm09.stdout:0/188: truncate da/dc/f17 3642892 0 2026-03-09T15:00:39.000 INFO:tasks.workunit.client.1.vm09.stdout:4/139: mkdir db/d2f 0 2026-03-09T15:00:39.006 INFO:tasks.workunit.client.1.vm09.stdout:2/175: getdents df/d20/d2e 0 2026-03-09T15:00:39.007 INFO:tasks.workunit.client.1.vm09.stdout:7/152: creat d3/d28/f29 x:0 0 0 2026-03-09T15:00:39.014 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.012+0000 7f6519c3e700 1 -- 192.168.123.105:0/2317550515 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f65141a4550 con 0x7f64fc06c4d0 2026-03-09T15:00:39.014 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:38 vm05.local ceph-mon[50611]: from='client.24385 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:39.014 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:38 vm05.local ceph-mon[50611]: 
from='client.24389 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:39.019 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:38 vm05.local ceph-mon[50611]: from='client.14592 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:39.019 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:38 vm05.local ceph-mon[50611]: from='client.? 192.168.123.105:0/79555445' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:00:39.020 INFO:tasks.workunit.client.1.vm09.stdout:3/142: mknod d3/d4/c35 0 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) mgr", 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [], 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "0/2 daemons upgraded", 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc image on host vm09", 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.015+0000 7f6511ffb700 1 -- 192.168.123.105:0/2317550515 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+434 (secure 0 0 0) 0x7f65141a4550 con 0x7f64fc06c4d0 2026-03-09T15:00:39.020 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.018+0000 7f6519c3e700 1 -- 192.168.123.105:0/2317550515 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f64fc06c4d0 msgr2=0x7f64fc06e980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.018+0000 7f6519c3e700 1 --2- 192.168.123.105:0/2317550515 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f64fc06c4d0 0x7f64fc06e980 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7f6504005cb0 tx=0x7f6504005c20 comp rx=0 tx=0).stop 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.018+0000 7f6519c3e700 1 -- 192.168.123.105:0/2317550515 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6514072840 msgr2=0x7f651419eb40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.018+0000 7f6519c3e700 1 --2- 192.168.123.105:0/2317550515 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6514072840 0x7f651419eb40 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f650800eb10 tx=0x7f650800eed0 comp rx=0 tx=0).stop 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.018+0000 7f6519c3e700 1 -- 192.168.123.105:0/2317550515 shutdown_connections 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.018+0000 7f6519c3e700 1 --2- 192.168.123.105:0/2317550515 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f64fc06c4d0 0x7f64fc06e980 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.018+0000 7f6519c3e700 1 --2- 192.168.123.105:0/2317550515 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6514071e60 0x7f651419e600 unknown :-1 s=CLOSED pgs=0 
cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.018+0000 7f6519c3e700 1 --2- 192.168.123.105:0/2317550515 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6514072840 0x7f651419eb40 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.018+0000 7f6519c3e700 1 -- 192.168.123.105:0/2317550515 >> 192.168.123.105:0/2317550515 conn(0x7f651406d1a0 msgr2=0x7f651410d080 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.018+0000 7f6519c3e700 1 -- 192.168.123.105:0/2317550515 shutdown_connections 2026-03-09T15:00:39.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:00:39.018+0000 7f6519c3e700 1 -- 192.168.123.105:0/2317550515 wait complete. 2026-03-09T15:00:39.023 INFO:tasks.workunit.client.1.vm09.stdout:3/143: mkdir d3/d4/d18/d2b/d36 0 2026-03-09T15:00:39.023 INFO:tasks.workunit.client.1.vm09.stdout:3/144: chown d3/d4/f26 23448 1 2026-03-09T15:00:39.030 INFO:tasks.workunit.client.1.vm09.stdout:3/145: symlink d3/d4/d18/d2b/l37 0 2026-03-09T15:00:39.030 INFO:tasks.workunit.client.1.vm09.stdout:3/146: fsync d3/d4/f8 0 2026-03-09T15:00:39.035 INFO:tasks.workunit.client.1.vm09.stdout:7/153: dread d3/f9 [0,4194304] 0 2026-03-09T15:00:39.038 INFO:tasks.workunit.client.1.vm09.stdout:3/147: fdatasync d3/d4/d18/d2b/d31/f33 0 2026-03-09T15:00:39.039 INFO:tasks.workunit.client.1.vm09.stdout:0/189: dread da/f12 [0,4194304] 0 2026-03-09T15:00:39.043 INFO:tasks.workunit.client.1.vm09.stdout:4/140: rename db/ld to db/d12/l30 0 2026-03-09T15:00:39.044 INFO:tasks.workunit.client.1.vm09.stdout:2/176: link c2 df/c36 0 2026-03-09T15:00:39.046 INFO:tasks.workunit.client.1.vm09.stdout:4/141: dread - db/d12/f25 zero size 2026-03-09T15:00:39.048 INFO:tasks.workunit.client.1.vm09.stdout:3/148: 
dread d3/f6 [0,4194304] 0 2026-03-09T15:00:39.048 INFO:tasks.workunit.client.1.vm09.stdout:9/135: dread d1/d7/d1e/d2b/d2e/f19 [0,4194304] 0 2026-03-09T15:00:39.048 INFO:tasks.workunit.client.1.vm09.stdout:3/149: fdatasync d3/ff 0 2026-03-09T15:00:39.050 INFO:tasks.workunit.client.1.vm09.stdout:7/154: fsync d3/f26 0 2026-03-09T15:00:39.051 INFO:tasks.workunit.client.1.vm09.stdout:0/190: dwrite da/dc/d10/f2d [0,4194304] 0 2026-03-09T15:00:39.055 INFO:tasks.workunit.client.1.vm09.stdout:9/136: dwrite d1/d7/d1e/d2b/d2e/f1d [0,4194304] 0 2026-03-09T15:00:39.065 INFO:tasks.workunit.client.1.vm09.stdout:9/137: dread d1/d7/d1e/d2b/d2e/f1d [0,4194304] 0 2026-03-09T15:00:39.068 INFO:tasks.workunit.client.1.vm09.stdout:4/142: mknod db/c31 0 2026-03-09T15:00:39.075 INFO:tasks.workunit.client.1.vm09.stdout:7/155: creat d3/d28/f2a x:0 0 0 2026-03-09T15:00:39.078 INFO:tasks.workunit.client.1.vm09.stdout:2/177: truncate df/f17 1167255 0 2026-03-09T15:00:39.082 INFO:tasks.workunit.client.1.vm09.stdout:2/178: dread - df/d20/f2b zero size 2026-03-09T15:00:39.082 INFO:tasks.workunit.client.1.vm09.stdout:2/179: write f0 [2869672,61528] 0 2026-03-09T15:00:39.082 INFO:tasks.workunit.client.1.vm09.stdout:9/138: creat d1/d7/d1e/d2b/f2f x:0 0 0 2026-03-09T15:00:39.082 INFO:tasks.workunit.client.1.vm09.stdout:4/143: mkdir db/d19/d32 0 2026-03-09T15:00:39.082 INFO:tasks.workunit.client.1.vm09.stdout:9/139: write d1/d7/d1e/d2b/d2e/f23 [13089,126678] 0 2026-03-09T15:00:39.082 INFO:tasks.workunit.client.1.vm09.stdout:3/150: symlink d3/d4/d18/d2b/d36/l38 0 2026-03-09T15:00:39.086 INFO:tasks.workunit.client.1.vm09.stdout:0/191: rename da/dc/d1c/l24 to da/dc/d10/d3a/l42 0 2026-03-09T15:00:39.088 INFO:tasks.workunit.client.1.vm09.stdout:3/151: dread d3/d4/d18/d2b/d31/f33 [0,4194304] 0 2026-03-09T15:00:39.102 INFO:tasks.workunit.client.1.vm09.stdout:3/152: dwrite d3/ff [0,4194304] 0 2026-03-09T15:00:39.105 INFO:tasks.workunit.client.1.vm09.stdout:3/153: chown d3/d4/d18/d2b/f32 926061328 1 
2026-03-09T15:00:39.107 INFO:tasks.workunit.client.1.vm09.stdout:0/192: rmdir da/dc/d22 39 2026-03-09T15:00:39.111 INFO:tasks.workunit.client.1.vm09.stdout:3/154: dwrite d3/d4/f26 [0,4194304] 0 2026-03-09T15:00:39.128 INFO:tasks.workunit.client.1.vm09.stdout:4/144: link db/d19/c22 db/d19/d23/c33 0 2026-03-09T15:00:39.130 INFO:tasks.workunit.client.1.vm09.stdout:3/155: mkdir d3/d4/d18/d2b/d39 0 2026-03-09T15:00:39.132 INFO:tasks.workunit.client.1.vm09.stdout:0/193: mknod da/dc/d1c/d3c/c43 0 2026-03-09T15:00:39.132 INFO:tasks.workunit.client.1.vm09.stdout:0/194: fdatasync da/fb 0 2026-03-09T15:00:39.133 INFO:tasks.workunit.client.1.vm09.stdout:4/145: symlink db/d12/d16/l34 0 2026-03-09T15:00:39.137 INFO:tasks.workunit.client.1.vm09.stdout:0/195: mkdir da/dc/d1c/d3c/d44 0 2026-03-09T15:00:39.142 INFO:tasks.workunit.client.1.vm09.stdout:4/146: mkdir db/d19/d35 0 2026-03-09T15:00:39.142 INFO:tasks.workunit.client.1.vm09.stdout:8/139: getdents df/d1c/d1d 0 2026-03-09T15:00:39.142 INFO:tasks.workunit.client.1.vm09.stdout:4/147: creat db/d12/d16/f36 x:0 0 0 2026-03-09T15:00:39.144 INFO:tasks.workunit.client.1.vm09.stdout:8/140: creat df/d24/f29 x:0 0 0 2026-03-09T15:00:39.145 INFO:tasks.workunit.client.1.vm09.stdout:8/141: readlink df/l19 0 2026-03-09T15:00:39.145 INFO:tasks.workunit.client.1.vm09.stdout:8/142: fsync df/f12 0 2026-03-09T15:00:39.147 INFO:tasks.workunit.client.1.vm09.stdout:8/143: stat lc 0 2026-03-09T15:00:39.148 INFO:tasks.workunit.client.1.vm09.stdout:4/148: creat db/d12/f37 x:0 0 0 2026-03-09T15:00:39.149 INFO:tasks.workunit.client.1.vm09.stdout:8/144: rmdir df/d1f 39 2026-03-09T15:00:39.149 INFO:tasks.workunit.client.1.vm09.stdout:4/149: truncate db/d12/f2b 428690 0 2026-03-09T15:00:39.149 INFO:tasks.workunit.client.1.vm09.stdout:8/145: chown df 49390 1 2026-03-09T15:00:39.150 INFO:tasks.workunit.client.1.vm09.stdout:4/150: fsync db/fe 0 2026-03-09T15:00:39.152 INFO:tasks.workunit.client.1.vm09.stdout:8/146: fsync df/d1f/f21 0 2026-03-09T15:00:39.152 
INFO:tasks.workunit.client.1.vm09.stdout:4/151: creat db/d19/f38 x:0 0 0 2026-03-09T15:00:39.158 INFO:tasks.workunit.client.1.vm09.stdout:4/152: dwrite db/d19/f38 [0,4194304] 0 2026-03-09T15:00:39.288 INFO:tasks.workunit.client.1.vm09.stdout:9/140: fsync d1/d7/d1e/f20 0 2026-03-09T15:00:39.288 INFO:tasks.workunit.client.1.vm09.stdout:9/141: chown d1/d7/d1e/f20 105 1 2026-03-09T15:00:39.289 INFO:tasks.workunit.client.1.vm09.stdout:9/142: write d1/f24 [1750923,99033] 0 2026-03-09T15:00:39.291 INFO:tasks.workunit.client.1.vm09.stdout:9/143: write d1/d7/d1e/d2b/d2e/f16 [1856243,67162] 0 2026-03-09T15:00:39.298 INFO:tasks.workunit.client.1.vm09.stdout:5/147: rmdir d2/d37/d3c 39 2026-03-09T15:00:39.303 INFO:tasks.workunit.client.1.vm09.stdout:5/148: creat d2/f3d x:0 0 0 2026-03-09T15:00:39.307 INFO:tasks.workunit.client.1.vm09.stdout:5/149: chown d2/d37/d3c/l33 15714 1 2026-03-09T15:00:39.310 INFO:tasks.workunit.client.1.vm09.stdout:5/150: dread d2/d4/f1f [0,4194304] 0 2026-03-09T15:00:39.314 INFO:tasks.workunit.client.1.vm09.stdout:5/151: dread d2/f29 [0,4194304] 0 2026-03-09T15:00:39.317 INFO:tasks.workunit.client.1.vm09.stdout:6/178: truncate d6/d20/f34 131986 0 2026-03-09T15:00:39.328 INFO:tasks.workunit.client.1.vm09.stdout:1/93: rmdir d8/d10/d1c 0 2026-03-09T15:00:39.332 INFO:tasks.workunit.client.1.vm09.stdout:6/179: write d6/d20/f27 [3340164,129406] 0 2026-03-09T15:00:39.334 INFO:tasks.workunit.client.1.vm09.stdout:5/152: mknod d2/d4/c3e 0 2026-03-09T15:00:39.334 INFO:tasks.workunit.client.1.vm09.stdout:5/153: fsync d2/d37/d3c/f2f 0 2026-03-09T15:00:39.344 INFO:tasks.workunit.client.1.vm09.stdout:5/154: dwrite d2/d4/f1f [0,4194304] 0 2026-03-09T15:00:39.346 INFO:tasks.workunit.client.1.vm09.stdout:5/155: truncate d2/f38 829388 0 2026-03-09T15:00:39.347 INFO:tasks.workunit.client.1.vm09.stdout:5/156: chown d2/d4/l26 486853735 1 2026-03-09T15:00:39.348 INFO:tasks.workunit.client.1.vm09.stdout:5/157: write d2/d37/d3c/f3a [822779,22876] 0 2026-03-09T15:00:39.349 
INFO:tasks.workunit.client.1.vm09.stdout:5/158: write d2/d37/d3c/f2f [298522,99939] 0 2026-03-09T15:00:39.351 INFO:tasks.workunit.client.1.vm09.stdout:5/159: symlink d2/d37/d3c/d36/l3f 0 2026-03-09T15:00:39.354 INFO:tasks.workunit.client.1.vm09.stdout:5/160: dread d2/d4/f16 [0,4194304] 0 2026-03-09T15:00:39.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:38 vm09.local ceph-mon[59673]: from='client.24385 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:39.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:38 vm09.local ceph-mon[59673]: from='client.24389 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:39.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:38 vm09.local ceph-mon[59673]: from='client.14592 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:39.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:38 vm09.local ceph-mon[59673]: from='client.? 
192.168.123.105:0/79555445' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:00:39.370 INFO:tasks.workunit.client.1.vm09.stdout:7/156: truncate d3/db/d25/f22 2182633 0 2026-03-09T15:00:39.372 INFO:tasks.workunit.client.1.vm09.stdout:7/157: write d3/d28/f2a [611767,130158] 0 2026-03-09T15:00:39.373 INFO:tasks.workunit.client.1.vm09.stdout:2/180: dwrite df/f1d [0,4194304] 0 2026-03-09T15:00:39.376 INFO:tasks.workunit.client.1.vm09.stdout:7/158: truncate d3/d28/f29 108336 0 2026-03-09T15:00:39.377 INFO:tasks.workunit.client.1.vm09.stdout:3/156: rename d3/d4/d18 to d3/d3a 0 2026-03-09T15:00:39.378 INFO:tasks.workunit.client.1.vm09.stdout:7/159: creat d3/d28/f2b x:0 0 0 2026-03-09T15:00:39.382 INFO:tasks.workunit.client.1.vm09.stdout:7/160: dread d3/db/fe [0,4194304] 0 2026-03-09T15:00:39.384 INFO:tasks.workunit.client.1.vm09.stdout:3/157: dread d3/f29 [0,4194304] 0 2026-03-09T15:00:39.386 INFO:tasks.workunit.client.1.vm09.stdout:7/161: dwrite d3/d28/f2b [0,4194304] 0 2026-03-09T15:00:39.388 INFO:tasks.workunit.client.1.vm09.stdout:4/153: rmdir db/d12/d16 39 2026-03-09T15:00:39.388 INFO:tasks.workunit.client.1.vm09.stdout:8/147: write df/d1c/f20 [1599376,39166] 0 2026-03-09T15:00:39.390 INFO:tasks.workunit.client.1.vm09.stdout:3/158: creat d3/f3b x:0 0 0 2026-03-09T15:00:39.390 INFO:tasks.workunit.client.1.vm09.stdout:7/162: symlink d3/db/d25/l2c 0 2026-03-09T15:00:39.391 INFO:tasks.workunit.client.1.vm09.stdout:3/159: chown d3/d4/f1a 46429 1 2026-03-09T15:00:39.392 INFO:tasks.workunit.client.1.vm09.stdout:3/160: truncate d3/d4/f8 5123301 0 2026-03-09T15:00:39.394 INFO:tasks.workunit.client.1.vm09.stdout:3/161: write d3/d4/f26 [601810,129472] 0 2026-03-09T15:00:39.395 INFO:tasks.workunit.client.1.vm09.stdout:7/163: fsync d3/d1d/f11 0 2026-03-09T15:00:39.395 INFO:tasks.workunit.client.1.vm09.stdout:0/196: rename da/c33 to da/dc/c45 0 2026-03-09T15:00:39.396 INFO:tasks.workunit.client.1.vm09.stdout:0/197: chown da/dc/d10/c26 30090741 1 
2026-03-09T15:00:39.398 INFO:tasks.workunit.client.1.vm09.stdout:7/164: truncate d3/d28/f2b 4519203 0 2026-03-09T15:00:39.398 INFO:tasks.workunit.client.1.vm09.stdout:3/162: write d3/d3a/d2b/d31/f33 [97674,119055] 0 2026-03-09T15:00:39.399 INFO:tasks.workunit.client.1.vm09.stdout:9/144: rename d1/d7/d1e/d2b/d2e/f27 to d1/d7/d1e/d2b/f30 0 2026-03-09T15:00:39.399 INFO:tasks.workunit.client.1.vm09.stdout:2/181: rename df to df/d2d/d37 22 2026-03-09T15:00:39.402 INFO:tasks.workunit.client.1.vm09.stdout:7/165: mkdir d3/d1d/d2d 0 2026-03-09T15:00:39.403 INFO:tasks.workunit.client.1.vm09.stdout:0/198: write da/dc/d22/f3b [999465,32347] 0 2026-03-09T15:00:39.403 INFO:tasks.workunit.client.1.vm09.stdout:7/166: stat d3/d1d/d2d 0 2026-03-09T15:00:39.403 INFO:tasks.workunit.client.1.vm09.stdout:8/148: creat df/f2a x:0 0 0 2026-03-09T15:00:39.407 INFO:tasks.workunit.client.1.vm09.stdout:4/154: rename db/d12/d16/l18 to db/d2f/l39 0 2026-03-09T15:00:39.407 INFO:tasks.workunit.client.1.vm09.stdout:2/182: readlink df/l12 0 2026-03-09T15:00:39.408 INFO:tasks.workunit.client.1.vm09.stdout:7/167: mkdir d3/d28/d2e 0 2026-03-09T15:00:39.409 INFO:tasks.workunit.client.1.vm09.stdout:7/168: stat d3/d28/f2a 0 2026-03-09T15:00:39.411 INFO:tasks.workunit.client.1.vm09.stdout:8/149: rename df/d1f/f21 to df/d1c/d1d/f2b 0 2026-03-09T15:00:39.411 INFO:tasks.workunit.client.1.vm09.stdout:3/163: creat d3/d3a/d2b/d39/f3c x:0 0 0 2026-03-09T15:00:39.415 INFO:tasks.workunit.client.1.vm09.stdout:0/199: rename da/dc/d10/d3a to da/dc/d1c/d46 0 2026-03-09T15:00:39.420 INFO:tasks.workunit.client.1.vm09.stdout:8/150: mkdir df/d1c/d2c 0 2026-03-09T15:00:39.420 INFO:tasks.workunit.client.1.vm09.stdout:8/151: stat df/f23 0 2026-03-09T15:00:39.422 INFO:tasks.workunit.client.1.vm09.stdout:7/169: creat d3/d28/d2e/f2f x:0 0 0 2026-03-09T15:00:39.425 INFO:tasks.workunit.client.1.vm09.stdout:3/164: rename d3/d3a/l1e to d3/d3a/d2b/l3d 0 2026-03-09T15:00:39.430 INFO:tasks.workunit.client.1.vm09.stdout:3/165: chown 
d3/d4/l2e 1 1 2026-03-09T15:00:39.430 INFO:tasks.workunit.client.1.vm09.stdout:0/200: rmdir da/dc/d1c 39 2026-03-09T15:00:39.430 INFO:tasks.workunit.client.1.vm09.stdout:3/166: read - d3/f3b zero size 2026-03-09T15:00:39.430 INFO:tasks.workunit.client.1.vm09.stdout:8/152: dwrite df/d24/f29 [0,4194304] 0 2026-03-09T15:00:39.430 INFO:tasks.workunit.client.1.vm09.stdout:2/183: truncate df/f28 1873762 0 2026-03-09T15:00:39.430 INFO:tasks.workunit.client.1.vm09.stdout:1/94: write d8/fa [658173,127209] 0 2026-03-09T15:00:39.435 INFO:tasks.workunit.client.1.vm09.stdout:6/180: truncate d6/f17 1883474 0 2026-03-09T15:00:39.435 INFO:tasks.workunit.client.1.vm09.stdout:0/201: rename da/dc/fe to da/dc/d22/f47 0 2026-03-09T15:00:39.435 INFO:tasks.workunit.client.1.vm09.stdout:5/161: truncate d2/f38 801907 0 2026-03-09T15:00:39.436 INFO:tasks.workunit.client.1.vm09.stdout:7/170: dwrite d3/d28/f2a [0,4194304] 0 2026-03-09T15:00:39.439 INFO:tasks.workunit.client.1.vm09.stdout:7/171: chown d3/db/d15/l1c 47 1 2026-03-09T15:00:39.439 INFO:tasks.workunit.client.1.vm09.stdout:6/181: fdatasync d6/d20/d2a/f37 0 2026-03-09T15:00:39.443 INFO:tasks.workunit.client.1.vm09.stdout:3/167: stat d3/d4/c10 0 2026-03-09T15:00:39.445 INFO:tasks.workunit.client.1.vm09.stdout:7/172: truncate d3/d28/d2e/f2f 27399 0 2026-03-09T15:00:39.457 INFO:tasks.workunit.client.1.vm09.stdout:1/95: dwrite d8/f19 [0,4194304] 0 2026-03-09T15:00:39.467 INFO:tasks.workunit.client.1.vm09.stdout:9/145: read d1/d7/f13 [873991,79078] 0 2026-03-09T15:00:39.474 INFO:tasks.workunit.client.1.vm09.stdout:8/153: fsync fe 0 2026-03-09T15:00:39.475 INFO:tasks.workunit.client.1.vm09.stdout:6/182: mknod d6/db/d10/c3f 0 2026-03-09T15:00:39.480 INFO:tasks.workunit.client.1.vm09.stdout:7/173: link d3/d28/f2b d3/d1d/f30 0 2026-03-09T15:00:39.488 INFO:tasks.workunit.client.1.vm09.stdout:0/202: dwrite da/f12 [0,4194304] 0 2026-03-09T15:00:39.488 INFO:tasks.workunit.client.1.vm09.stdout:8/154: dread df/f14 [0,4194304] 0 
2026-03-09T15:00:39.488 INFO:tasks.workunit.client.1.vm09.stdout:0/203: fdatasync da/dc/f28 0 2026-03-09T15:00:39.488 INFO:tasks.workunit.client.1.vm09.stdout:0/204: write da/f12 [3923741,9076] 0 2026-03-09T15:00:39.488 INFO:tasks.workunit.client.1.vm09.stdout:7/174: dwrite d3/f26 [4194304,4194304] 0 2026-03-09T15:00:39.490 INFO:tasks.workunit.client.1.vm09.stdout:7/175: write d3/d28/d2e/f2f [193714,6109] 0 2026-03-09T15:00:39.493 INFO:tasks.workunit.client.1.vm09.stdout:6/183: creat d6/df/f40 x:0 0 0 2026-03-09T15:00:39.497 INFO:tasks.workunit.client.1.vm09.stdout:1/96: rename d8/l1d to d8/l20 0 2026-03-09T15:00:39.501 INFO:tasks.workunit.client.1.vm09.stdout:8/155: fdatasync df/d1c/d1d/f2b 0 2026-03-09T15:00:39.503 INFO:tasks.workunit.client.1.vm09.stdout:0/205: write da/dc/d10/f29 [4322425,123491] 0 2026-03-09T15:00:39.503 INFO:tasks.workunit.client.1.vm09.stdout:8/156: write df/f23 [455052,34578] 0 2026-03-09T15:00:39.504 INFO:tasks.workunit.client.1.vm09.stdout:8/157: write df/f2a [937862,99901] 0 2026-03-09T15:00:39.505 INFO:tasks.workunit.client.1.vm09.stdout:8/158: dread - df/f26 zero size 2026-03-09T15:00:39.505 INFO:tasks.workunit.client.1.vm09.stdout:6/184: dwrite d6/db/f1f [4194304,4194304] 0 2026-03-09T15:00:39.509 INFO:tasks.workunit.client.1.vm09.stdout:8/159: write df/f12 [110547,18078] 0 2026-03-09T15:00:39.509 INFO:tasks.workunit.client.1.vm09.stdout:0/206: chown da/l2e 116275 1 2026-03-09T15:00:39.513 INFO:tasks.workunit.client.1.vm09.stdout:1/97: unlink d8/cb 0 2026-03-09T15:00:39.515 INFO:tasks.workunit.client.1.vm09.stdout:1/98: chown d8/d10/f15 25445976 1 2026-03-09T15:00:39.515 INFO:tasks.workunit.client.1.vm09.stdout:6/185: link d6/lc d6/d20/d38/l41 0 2026-03-09T15:00:39.515 INFO:tasks.workunit.client.1.vm09.stdout:0/207: mknod da/dc/d1c/c48 0 2026-03-09T15:00:39.515 INFO:tasks.workunit.client.1.vm09.stdout:6/186: write d6/d20/f36 [399668,104340] 0 2026-03-09T15:00:39.515 INFO:tasks.workunit.client.1.vm09.stdout:1/99: creat d8/d1b/f21 x:0 0 
0 2026-03-09T15:00:39.517 INFO:tasks.workunit.client.1.vm09.stdout:1/100: write d8/f19 [779344,72255] 0 2026-03-09T15:00:39.519 INFO:tasks.workunit.client.1.vm09.stdout:8/160: dwrite fa [0,4194304] 0 2026-03-09T15:00:39.531 INFO:tasks.workunit.client.1.vm09.stdout:1/101: mkdir d8/d22 0 2026-03-09T15:00:39.531 INFO:tasks.workunit.client.1.vm09.stdout:8/161: mkdir df/d2d 0 2026-03-09T15:00:39.531 INFO:tasks.workunit.client.1.vm09.stdout:3/168: sync 2026-03-09T15:00:39.533 INFO:tasks.workunit.client.1.vm09.stdout:3/169: chown d3/d4/f26 1105624 1 2026-03-09T15:00:39.533 INFO:tasks.workunit.client.1.vm09.stdout:3/170: dread - d3/d3a/d2b/d31/f34 zero size 2026-03-09T15:00:39.537 INFO:tasks.workunit.client.1.vm09.stdout:1/102: dwrite d8/d10/f15 [0,4194304] 0 2026-03-09T15:00:39.543 INFO:tasks.workunit.client.1.vm09.stdout:0/208: dwrite da/dc/d10/f1e [0,4194304] 0 2026-03-09T15:00:39.557 INFO:tasks.workunit.client.1.vm09.stdout:7/176: sync 2026-03-09T15:00:39.557 INFO:tasks.workunit.client.1.vm09.stdout:1/103: sync 2026-03-09T15:00:39.559 INFO:tasks.workunit.client.1.vm09.stdout:7/177: mknod d3/db/d25/c31 0 2026-03-09T15:00:39.565 INFO:tasks.workunit.client.1.vm09.stdout:0/209: dread da/dc/d22/f3b [0,4194304] 0 2026-03-09T15:00:39.617 INFO:tasks.workunit.client.1.vm09.stdout:8/162: dread df/f2a [0,4194304] 0 2026-03-09T15:00:39.617 INFO:tasks.workunit.client.1.vm09.stdout:9/146: getdents d1/d7/d1e/d2b 0 2026-03-09T15:00:39.619 INFO:tasks.workunit.client.1.vm09.stdout:4/155: getdents db/d2f 0 2026-03-09T15:00:39.625 INFO:tasks.workunit.client.1.vm09.stdout:9/147: mknod d1/c31 0 2026-03-09T15:00:39.626 INFO:tasks.workunit.client.1.vm09.stdout:7/178: rmdir d3/d28/d2e 39 2026-03-09T15:00:39.627 INFO:tasks.workunit.client.1.vm09.stdout:8/163: link df/d1c/d1d/l27 df/d1c/l2e 0 2026-03-09T15:00:39.630 INFO:tasks.workunit.client.1.vm09.stdout:4/156: getdents db 0 2026-03-09T15:00:39.631 INFO:tasks.workunit.client.1.vm09.stdout:4/157: chown db/f14 186 1 2026-03-09T15:00:39.636 
INFO:tasks.workunit.client.1.vm09.stdout:4/158: dread db/f14 [0,4194304] 0 2026-03-09T15:00:39.638 INFO:tasks.workunit.client.1.vm09.stdout:4/159: dread db/d12/f2b [0,4194304] 0 2026-03-09T15:00:39.639 INFO:tasks.workunit.client.1.vm09.stdout:7/179: creat d3/f32 x:0 0 0 2026-03-09T15:00:39.639 INFO:tasks.workunit.client.1.vm09.stdout:4/160: chown db/d12/f27 858 1 2026-03-09T15:00:39.639 INFO:tasks.workunit.client.1.vm09.stdout:7/180: chown d3/d1d/d2d 71 1 2026-03-09T15:00:39.641 INFO:tasks.workunit.client.1.vm09.stdout:8/164: dread df/d1c/f20 [0,4194304] 0 2026-03-09T15:00:39.647 INFO:tasks.workunit.client.1.vm09.stdout:7/181: sync 2026-03-09T15:00:39.648 INFO:tasks.workunit.client.1.vm09.stdout:4/161: dread db/d19/f38 [0,4194304] 0 2026-03-09T15:00:39.649 INFO:tasks.workunit.client.1.vm09.stdout:8/165: dwrite df/f12 [0,4194304] 0 2026-03-09T15:00:39.651 INFO:tasks.workunit.client.1.vm09.stdout:4/162: write db/d12/d16/f26 [106849,36243] 0 2026-03-09T15:00:39.651 INFO:tasks.workunit.client.1.vm09.stdout:8/166: readlink df/l1b 0 2026-03-09T15:00:39.651 INFO:tasks.workunit.client.1.vm09.stdout:8/167: dread - df/d24/f28 zero size 2026-03-09T15:00:39.652 INFO:tasks.workunit.client.1.vm09.stdout:4/163: dread - db/d12/f37 zero size 2026-03-09T15:00:39.661 INFO:tasks.workunit.client.1.vm09.stdout:2/184: rename df/f28 to df/d1f/f38 0 2026-03-09T15:00:39.666 INFO:tasks.workunit.client.1.vm09.stdout:6/187: truncate d6/f17 2593612 0 2026-03-09T15:00:39.667 INFO:tasks.workunit.client.1.vm09.stdout:6/188: chown d6/d20/d2a/d3d 2 1 2026-03-09T15:00:39.670 INFO:tasks.workunit.client.1.vm09.stdout:8/168: creat df/d2d/f2f x:0 0 0 2026-03-09T15:00:39.679 INFO:tasks.workunit.client.1.vm09.stdout:9/148: rename d1/d7/d1e/d2b/d2e/f23 to d1/d7/d1e/d2b/f32 0 2026-03-09T15:00:39.680 INFO:tasks.workunit.client.1.vm09.stdout:2/185: creat df/d1f/f39 x:0 0 0 2026-03-09T15:00:39.680 INFO:tasks.workunit.client.1.vm09.stdout:4/164: getdents db/d2f 0 2026-03-09T15:00:39.682 
INFO:tasks.workunit.client.1.vm09.stdout:9/149: creat d1/d7/d1e/d2b/f33 x:0 0 0 2026-03-09T15:00:39.684 INFO:tasks.workunit.client.1.vm09.stdout:2/186: symlink df/d1f/l3a 0 2026-03-09T15:00:39.687 INFO:tasks.workunit.client.1.vm09.stdout:4/165: mknod db/d2f/c3a 0 2026-03-09T15:00:39.687 INFO:tasks.workunit.client.1.vm09.stdout:9/150: creat d1/d7/d1e/f34 x:0 0 0 2026-03-09T15:00:39.687 INFO:tasks.workunit.client.1.vm09.stdout:4/166: dread - db/d12/f25 zero size 2026-03-09T15:00:39.687 INFO:tasks.workunit.client.1.vm09.stdout:6/189: link d6/d20/f36 d6/db/f42 0 2026-03-09T15:00:39.689 INFO:tasks.workunit.client.1.vm09.stdout:9/151: unlink d1/d7/d1e/d2b/f2f 0 2026-03-09T15:00:39.699 INFO:tasks.workunit.client.1.vm09.stdout:4/167: dwrite db/d12/f25 [0,4194304] 0 2026-03-09T15:00:39.699 INFO:tasks.workunit.client.1.vm09.stdout:8/169: fdatasync df/f12 0 2026-03-09T15:00:39.700 INFO:tasks.workunit.client.1.vm09.stdout:8/170: readlink df/l18 0 2026-03-09T15:00:39.701 INFO:tasks.workunit.client.1.vm09.stdout:8/171: chown df/d1c/d1d 2961 1 2026-03-09T15:00:39.701 INFO:tasks.workunit.client.1.vm09.stdout:9/152: dread d1/d7/d1e/d2b/d2e/f19 [0,4194304] 0 2026-03-09T15:00:39.708 INFO:tasks.workunit.client.1.vm09.stdout:4/168: dread db/d12/f27 [0,4194304] 0 2026-03-09T15:00:39.713 INFO:tasks.workunit.client.1.vm09.stdout:9/153: symlink d1/l35 0 2026-03-09T15:00:39.713 INFO:tasks.workunit.client.1.vm09.stdout:4/169: mkdir db/d19/d32/d3b 0 2026-03-09T15:00:39.713 INFO:tasks.workunit.client.1.vm09.stdout:9/154: write d1/d7/d1e/f20 [47361,128485] 0 2026-03-09T15:00:39.717 INFO:tasks.workunit.client.1.vm09.stdout:9/155: write d1/d7/d1e/f2a [1026645,26200] 0 2026-03-09T15:00:39.719 INFO:tasks.workunit.client.1.vm09.stdout:9/156: symlink d1/d7/d1e/l36 0 2026-03-09T15:00:39.725 INFO:tasks.workunit.client.1.vm09.stdout:9/157: dwrite d1/d7/d1e/d2b/f32 [0,4194304] 0 2026-03-09T15:00:39.733 INFO:tasks.workunit.client.1.vm09.stdout:9/158: write d1/d7/d1e/d2b/d2e/f19 [3442885,121339] 0 
2026-03-09T15:00:39.736 INFO:tasks.workunit.client.1.vm09.stdout:9/159: rename d1/d7/d1e/d2b/f33 to d1/d7/d1e/d2b/f37 0 2026-03-09T15:00:39.781 INFO:tasks.workunit.client.1.vm09.stdout:2/187: dread f5 [0,4194304] 0 2026-03-09T15:00:39.784 INFO:tasks.workunit.client.1.vm09.stdout:2/188: mkdir df/d3b 0 2026-03-09T15:00:39.795 INFO:tasks.workunit.client.1.vm09.stdout:2/189: sync 2026-03-09T15:00:39.797 INFO:tasks.workunit.client.1.vm09.stdout:2/190: dread f5 [0,4194304] 0 2026-03-09T15:00:39.799 INFO:tasks.workunit.client.1.vm09.stdout:2/191: write df/f1b [2014805,61013] 0 2026-03-09T15:00:39.799 INFO:tasks.workunit.client.1.vm09.stdout:2/192: chown df/d20 293602 1 2026-03-09T15:00:39.842 INFO:tasks.workunit.client.1.vm09.stdout:1/104: rename d8/l20 to d8/l23 0 2026-03-09T15:00:39.844 INFO:tasks.workunit.client.1.vm09.stdout:1/105: mkdir d8/d10/d24 0 2026-03-09T15:00:39.845 INFO:tasks.workunit.client.1.vm09.stdout:1/106: rename d8/d10 to d8/d10/d24/d25 22 2026-03-09T15:00:39.845 INFO:tasks.workunit.client.1.vm09.stdout:1/107: write d8/d10/f15 [2007238,7733] 0 2026-03-09T15:00:39.846 INFO:tasks.workunit.client.1.vm09.stdout:1/108: readlink d8/d10/l11 0 2026-03-09T15:00:39.852 INFO:tasks.workunit.client.1.vm09.stdout:1/109: link d8/d10/l11 d8/d1b/l26 0 2026-03-09T15:00:39.856 INFO:tasks.workunit.client.1.vm09.stdout:1/110: symlink d8/d22/l27 0 2026-03-09T15:00:39.856 INFO:tasks.workunit.client.1.vm09.stdout:1/111: read - d8/d10/f1a zero size 2026-03-09T15:00:39.859 INFO:tasks.workunit.client.1.vm09.stdout:1/112: creat d8/d1b/f28 x:0 0 0 2026-03-09T15:00:39.860 INFO:tasks.workunit.client.1.vm09.stdout:1/113: write d8/d10/f13 [595714,65642] 0 2026-03-09T15:00:39.861 INFO:tasks.workunit.client.1.vm09.stdout:0/210: getdents da/dc/d1c 0 2026-03-09T15:00:39.864 INFO:tasks.workunit.client.1.vm09.stdout:1/114: rename d8/d1b/f28 to d8/d10/f29 0 2026-03-09T15:00:39.872 INFO:tasks.workunit.client.1.vm09.stdout:1/115: fsync d8/d10/f13 0 2026-03-09T15:00:39.877 
INFO:tasks.workunit.client.1.vm09.stdout:0/211: link da/dc/c1a da/d30/c49 0 2026-03-09T15:00:39.883 INFO:tasks.workunit.client.1.vm09.stdout:0/212: creat da/dc/d10/f4a x:0 0 0 2026-03-09T15:00:39.884 INFO:tasks.workunit.client.1.vm09.stdout:0/213: stat da/d30 0 2026-03-09T15:00:39.885 INFO:tasks.workunit.client.1.vm09.stdout:0/214: fdatasync da/dc/f28 0 2026-03-09T15:00:39.886 INFO:tasks.workunit.client.1.vm09.stdout:3/171: creat d3/f3e x:0 0 0 2026-03-09T15:00:39.886 INFO:tasks.workunit.client.1.vm09.stdout:3/172: readlink d3/d4/l2a 0 2026-03-09T15:00:39.887 INFO:tasks.workunit.client.1.vm09.stdout:1/116: creat d8/d10/d24/f2a x:0 0 0 2026-03-09T15:00:39.890 INFO:tasks.workunit.client.1.vm09.stdout:7/182: dwrite d3/d1d/f11 [0,4194304] 0 2026-03-09T15:00:39.892 INFO:tasks.workunit.client.1.vm09.stdout:1/117: fsync d8/d10/f13 0 2026-03-09T15:00:39.901 INFO:tasks.workunit.client.1.vm09.stdout:0/215: creat da/f4b x:0 0 0 2026-03-09T15:00:39.905 INFO:tasks.workunit.client.1.vm09.stdout:1/118: symlink d8/d22/l2b 0 2026-03-09T15:00:39.908 INFO:tasks.workunit.client.1.vm09.stdout:7/183: dread d3/d28/d2e/f2f [0,4194304] 0 2026-03-09T15:00:39.910 INFO:tasks.workunit.client.1.vm09.stdout:0/216: dread da/dc/d10/f2d [0,4194304] 0 2026-03-09T15:00:39.912 INFO:tasks.workunit.client.1.vm09.stdout:0/217: dread - da/dc/d10/f4a zero size 2026-03-09T15:00:39.913 INFO:tasks.workunit.client.1.vm09.stdout:0/218: fdatasync da/d30/f38 0 2026-03-09T15:00:39.919 INFO:tasks.workunit.client.1.vm09.stdout:7/184: dwrite d3/f12 [4194304,4194304] 0 2026-03-09T15:00:39.923 INFO:tasks.workunit.client.1.vm09.stdout:7/185: creat d3/d1d/f33 x:0 0 0 2026-03-09T15:00:39.923 INFO:tasks.workunit.client.1.vm09.stdout:7/186: stat d3/d1d/f33 0 2026-03-09T15:00:39.933 INFO:tasks.workunit.client.1.vm09.stdout:0/219: creat da/f4c x:0 0 0 2026-03-09T15:00:39.933 INFO:tasks.workunit.client.1.vm09.stdout:0/220: fsync da/dc/d10/f29 0 2026-03-09T15:00:39.941 INFO:tasks.workunit.client.1.vm09.stdout:7/187: unlink 
d3/d28/d2e/f2f 0 2026-03-09T15:00:39.951 INFO:tasks.workunit.client.1.vm09.stdout:7/188: sync 2026-03-09T15:00:39.960 INFO:tasks.workunit.client.1.vm09.stdout:7/189: link l2 d3/d28/l34 0 2026-03-09T15:00:39.962 INFO:tasks.workunit.client.1.vm09.stdout:8/172: write f1 [4580315,33224] 0 2026-03-09T15:00:39.963 INFO:tasks.workunit.client.1.vm09.stdout:6/190: dwrite d6/df/f16 [0,4194304] 0 2026-03-09T15:00:39.975 INFO:tasks.workunit.client.1.vm09.stdout:4/170: rmdir db/d19 39 2026-03-09T15:00:39.978 INFO:tasks.workunit.client.1.vm09.stdout:6/191: unlink d6/df/f31 0 2026-03-09T15:00:39.978 INFO:tasks.workunit.client.1.vm09.stdout:6/192: readlink d6/d20/l3c 0 2026-03-09T15:00:39.988 INFO:tasks.workunit.client.1.vm09.stdout:9/160: rmdir d1 39 2026-03-09T15:00:39.991 INFO:tasks.workunit.client.1.vm09.stdout:6/193: creat d6/d20/d2a/d3d/f43 x:0 0 0 2026-03-09T15:00:39.993 INFO:tasks.workunit.client.1.vm09.stdout:4/171: sync 2026-03-09T15:00:39.996 INFO:tasks.workunit.client.1.vm09.stdout:9/161: symlink d1/d7/d1e/d2b/l38 0 2026-03-09T15:00:39.996 INFO:tasks.workunit.client.1.vm09.stdout:9/162: stat d1/d7/d1e/d2b/d2e/f19 0 2026-03-09T15:00:39.998 INFO:tasks.workunit.client.1.vm09.stdout:4/172: symlink db/d2f/l3c 0 2026-03-09T15:00:39.999 INFO:tasks.workunit.client.1.vm09.stdout:9/163: sync 2026-03-09T15:00:40.002 INFO:tasks.workunit.client.1.vm09.stdout:9/164: unlink d1/d7/lc 0 2026-03-09T15:00:40.002 INFO:tasks.workunit.client.1.vm09.stdout:9/165: stat d1/d7/d1e/d2b/d2e/f12 0 2026-03-09T15:00:40.006 INFO:tasks.workunit.client.1.vm09.stdout:9/166: chown d1/f4 1 1 2026-03-09T15:00:40.007 INFO:tasks.workunit.client.1.vm09.stdout:9/167: sync 2026-03-09T15:00:40.011 INFO:tasks.workunit.client.1.vm09.stdout:9/168: dwrite d1/d7/d1e/d2b/d2e/f2d [0,4194304] 0 2026-03-09T15:00:40.013 INFO:tasks.workunit.client.1.vm09.stdout:9/169: chown d1/l35 1 1 2026-03-09T15:00:40.023 INFO:tasks.workunit.client.1.vm09.stdout:4/173: creat db/d12/f3d x:0 0 0 2026-03-09T15:00:40.026 
INFO:tasks.workunit.client.1.vm09.stdout:2/193: write df/f17 [1840057,57356] 0 2026-03-09T15:00:40.026 INFO:tasks.workunit.client.1.vm09.stdout:4/174: chown db/d19/d35 78374172 1 2026-03-09T15:00:40.030 INFO:tasks.workunit.client.1.vm09.stdout:2/194: creat df/d2d/f3c x:0 0 0 2026-03-09T15:00:40.035 INFO:tasks.workunit.client.1.vm09.stdout:2/195: symlink df/d3b/l3d 0 2026-03-09T15:00:40.035 INFO:tasks.workunit.client.1.vm09.stdout:2/196: write df/f16 [734704,14501] 0 2026-03-09T15:00:40.037 INFO:tasks.workunit.client.1.vm09.stdout:9/170: fdatasync d1/d7/d1e/d2b/d2e/f2d 0 2026-03-09T15:00:40.046 INFO:tasks.workunit.client.1.vm09.stdout:2/197: dread df/d20/f24 [0,4194304] 0 2026-03-09T15:00:40.046 INFO:tasks.workunit.client.1.vm09.stdout:2/198: stat df/f1d 0 2026-03-09T15:00:40.047 INFO:tasks.workunit.client.1.vm09.stdout:9/171: mknod d1/d7/d1e/c39 0 2026-03-09T15:00:40.047 INFO:tasks.workunit.client.1.vm09.stdout:2/199: truncate df/d1f/f39 930910 0 2026-03-09T15:00:40.050 INFO:tasks.workunit.client.1.vm09.stdout:2/200: fdatasync f4 0 2026-03-09T15:00:40.050 INFO:tasks.workunit.client.1.vm09.stdout:9/172: creat d1/d7/d1e/d2b/d2e/f3a x:0 0 0 2026-03-09T15:00:40.051 INFO:tasks.workunit.client.1.vm09.stdout:1/119: rmdir d8/d1b 39 2026-03-09T15:00:40.051 INFO:tasks.workunit.client.1.vm09.stdout:1/120: dread - d8/d10/f1a zero size 2026-03-09T15:00:40.054 INFO:tasks.workunit.client.1.vm09.stdout:2/201: dwrite df/d20/d29/f31 [0,4194304] 0 2026-03-09T15:00:40.059 INFO:tasks.workunit.client.1.vm09.stdout:6/194: unlink l2 0 2026-03-09T15:00:40.059 INFO:tasks.workunit.client.1.vm09.stdout:9/173: mknod d1/d7/d1e/c3b 0 2026-03-09T15:00:40.064 INFO:tasks.workunit.client.1.vm09.stdout:1/121: chown d8/f17 133151745 1 2026-03-09T15:00:40.069 INFO:tasks.workunit.client.1.vm09.stdout:1/122: chown d8/d10/f29 46553209 1 2026-03-09T15:00:40.070 INFO:tasks.workunit.client.1.vm09.stdout:1/123: dread - d8/d10/f29 zero size 2026-03-09T15:00:40.070 
INFO:tasks.workunit.client.1.vm09.stdout:6/195: mkdir d6/d20/d44 0 2026-03-09T15:00:40.070 INFO:tasks.workunit.client.1.vm09.stdout:3/173: dwrite d3/d3a/f1d [0,4194304] 0 2026-03-09T15:00:40.073 INFO:tasks.workunit.client.1.vm09.stdout:9/174: symlink d1/d7/l3c 0 2026-03-09T15:00:40.073 INFO:tasks.workunit.client.1.vm09.stdout:6/196: dread d6/d20/f36 [0,4194304] 0 2026-03-09T15:00:40.075 INFO:tasks.workunit.client.1.vm09.stdout:3/174: write d3/d4/f8 [589087,5760] 0 2026-03-09T15:00:40.079 INFO:tasks.workunit.client.1.vm09.stdout:6/197: sync 2026-03-09T15:00:40.081 INFO:tasks.workunit.client.1.vm09.stdout:0/221: write da/dc/d22/f25 [253773,105779] 0 2026-03-09T15:00:40.081 INFO:tasks.workunit.client.1.vm09.stdout:1/124: rename d8 to d8/d1b/d2c 22 2026-03-09T15:00:40.082 INFO:tasks.workunit.client.1.vm09.stdout:9/175: dread d1/d7/d1e/d2b/f32 [0,4194304] 0 2026-03-09T15:00:40.084 INFO:tasks.workunit.client.1.vm09.stdout:9/176: write d1/d7/d1e/f20 [701746,117173] 0 2026-03-09T15:00:40.085 INFO:tasks.workunit.client.1.vm09.stdout:9/177: write d1/d7/d1e/d2b/d2e/f12 [4942974,128520] 0 2026-03-09T15:00:40.091 INFO:tasks.workunit.client.1.vm09.stdout:5/162: link d2/lf d2/l40 0 2026-03-09T15:00:40.099 INFO:tasks.workunit.client.1.vm09.stdout:7/190: dwrite f1 [0,4194304] 0 2026-03-09T15:00:40.109 INFO:tasks.workunit.client.1.vm09.stdout:1/125: symlink d8/d10/l2d 0 2026-03-09T15:00:40.110 INFO:tasks.workunit.client.1.vm09.stdout:8/173: truncate df/d24/f29 3004283 0 2026-03-09T15:00:40.111 INFO:tasks.workunit.client.1.vm09.stdout:0/222: rmdir da/dc/d22 39 2026-03-09T15:00:40.114 INFO:tasks.workunit.client.1.vm09.stdout:9/178: mknod d1/d7/d1e/c3d 0 2026-03-09T15:00:40.116 INFO:tasks.workunit.client.1.vm09.stdout:3/175: link d3/d3a/d2b/d31/f34 d3/d3a/d2b/d31/f3f 0 2026-03-09T15:00:40.117 INFO:tasks.workunit.client.1.vm09.stdout:4/175: getdents db/d2f 0 2026-03-09T15:00:40.118 INFO:tasks.workunit.client.1.vm09.stdout:4/176: dread - db/d12/d16/f36 zero size 2026-03-09T15:00:40.118 
INFO:tasks.workunit.client.1.vm09.stdout:4/177: fdatasync f3 0 2026-03-09T15:00:40.124 INFO:tasks.workunit.client.1.vm09.stdout:5/163: unlink d2/d37/d3c/d36/f3b 0 2026-03-09T15:00:40.128 INFO:tasks.workunit.client.1.vm09.stdout:5/164: sync 2026-03-09T15:00:40.128 INFO:tasks.workunit.client.1.vm09.stdout:5/165: chown d2/l3 24 1 2026-03-09T15:00:40.131 INFO:tasks.workunit.client.1.vm09.stdout:7/191: chown d3/d1d/l21 1646410 1 2026-03-09T15:00:40.133 INFO:tasks.workunit.client.1.vm09.stdout:1/126: mkdir d8/d1b/d2e 0 2026-03-09T15:00:40.140 INFO:tasks.workunit.client.1.vm09.stdout:0/223: symlink da/dc/d1c/d46/l4d 0 2026-03-09T15:00:40.142 INFO:tasks.workunit.client.1.vm09.stdout:9/179: creat d1/d7/f3e x:0 0 0 2026-03-09T15:00:40.143 INFO:tasks.workunit.client.1.vm09.stdout:9/180: write d1/d7/d1e/d2b/d2e/f16 [102721,3108] 0 2026-03-09T15:00:40.148 INFO:tasks.workunit.client.1.vm09.stdout:1/127: dread d8/d1b/f1f [4194304,4194304] 0 2026-03-09T15:00:40.149 INFO:tasks.workunit.client.1.vm09.stdout:1/128: chown d8/fa 1637220 1 2026-03-09T15:00:40.149 INFO:tasks.workunit.client.1.vm09.stdout:3/176: creat d3/d3a/d2b/d31/f40 x:0 0 0 2026-03-09T15:00:40.154 INFO:tasks.workunit.client.1.vm09.stdout:1/129: dread d8/d10/f13 [0,4194304] 0 2026-03-09T15:00:40.159 INFO:tasks.workunit.client.1.vm09.stdout:4/178: dwrite db/d19/f38 [0,4194304] 0 2026-03-09T15:00:40.162 INFO:tasks.workunit.client.1.vm09.stdout:2/202: write df/d1f/f38 [291558,13013] 0 2026-03-09T15:00:40.163 INFO:tasks.workunit.client.1.vm09.stdout:2/203: chown f0 667 1 2026-03-09T15:00:40.172 INFO:tasks.workunit.client.1.vm09.stdout:0/224: unlink da/d30/c31 0 2026-03-09T15:00:40.172 INFO:tasks.workunit.client.1.vm09.stdout:9/181: creat d1/d7/d1e/d2b/f3f x:0 0 0 2026-03-09T15:00:40.173 INFO:tasks.workunit.client.1.vm09.stdout:9/182: readlink d1/d7/l1c 0 2026-03-09T15:00:40.173 INFO:tasks.workunit.client.1.vm09.stdout:3/177: unlink d3/d4/l25 0 2026-03-09T15:00:40.178 INFO:tasks.workunit.client.1.vm09.stdout:4/179: dwrite 
db/d12/f37 [0,4194304] 0 2026-03-09T15:00:40.182 INFO:tasks.workunit.client.1.vm09.stdout:9/183: dwrite d1/f4 [0,4194304] 0 2026-03-09T15:00:40.189 INFO:tasks.workunit.client.1.vm09.stdout:9/184: truncate d1/d7/d1e/d2b/d2e/f16 4899303 0 2026-03-09T15:00:40.193 INFO:tasks.workunit.client.1.vm09.stdout:6/198: getdents d6/d20/d38 0 2026-03-09T15:00:40.216 INFO:tasks.workunit.client.1.vm09.stdout:5/166: symlink d2/d4/l41 0 2026-03-09T15:00:40.216 INFO:tasks.workunit.client.1.vm09.stdout:1/130: creat d8/d10/f2f x:0 0 0 2026-03-09T15:00:40.216 INFO:tasks.workunit.client.1.vm09.stdout:1/131: dread d8/f19 [0,4194304] 0 2026-03-09T15:00:40.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:39 vm05.local ceph-mon[50611]: from='client.24397 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:39 vm05.local ceph-mon[50611]: pgmap v146: 65 pgs: 65 active+clean; 327 MiB data, 1.4 GiB used, 119 GiB / 120 GiB avail; 1.9 MiB/s rd, 27 MiB/s wr, 186 op/s 2026-03-09T15:00:40.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:39 vm09.local ceph-mon[59673]: from='client.24397 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:00:40.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:39 vm09.local ceph-mon[59673]: pgmap v146: 65 pgs: 65 active+clean; 327 MiB data, 1.4 GiB used, 119 GiB / 120 GiB avail; 1.9 MiB/s rd, 27 MiB/s wr, 186 op/s 2026-03-09T15:00:40.800 INFO:tasks.workunit.client.1.vm09.stdout:8/174: rename f6 to df/f30 0 2026-03-09T15:00:40.805 INFO:tasks.workunit.client.1.vm09.stdout:2/204: creat df/d1f/f3e x:0 0 0 2026-03-09T15:00:40.807 INFO:tasks.workunit.client.1.vm09.stdout:0/225: fsync da/dc/d10/f2d 0 2026-03-09T15:00:40.809 INFO:tasks.workunit.client.1.vm09.stdout:4/180: creat db/d19/d32/f3e x:0 0 0 2026-03-09T15:00:40.813 
INFO:tasks.workunit.client.1.vm09.stdout:7/192: dwrite d3/f9 [0,4194304] 0 2026-03-09T15:00:40.814 INFO:tasks.workunit.client.1.vm09.stdout:9/185: mkdir d1/d7/d1e/d2b/d40 0 2026-03-09T15:00:40.815 INFO:tasks.workunit.client.1.vm09.stdout:4/181: dread db/d12/f25 [0,4194304] 0 2026-03-09T15:00:40.824 INFO:tasks.workunit.client.1.vm09.stdout:3/178: rename d3/d4/l2e to d3/d3a/d2b/l41 0 2026-03-09T15:00:40.825 INFO:tasks.workunit.client.1.vm09.stdout:8/175: write df/f30 [704966,110145] 0 2026-03-09T15:00:40.826 INFO:tasks.workunit.client.1.vm09.stdout:9/186: mknod d1/d7/d1e/d2b/d2e/c41 0 2026-03-09T15:00:40.828 INFO:tasks.workunit.client.1.vm09.stdout:8/176: read - df/d1c/d1d/f2b zero size 2026-03-09T15:00:40.830 INFO:tasks.workunit.client.1.vm09.stdout:4/182: dwrite db/d12/f37 [0,4194304] 0 2026-03-09T15:00:40.833 INFO:tasks.workunit.client.1.vm09.stdout:5/167: rename d2/l40 to d2/l42 0 2026-03-09T15:00:40.833 INFO:tasks.workunit.client.1.vm09.stdout:1/132: getdents d8/d1b/d2e 0 2026-03-09T15:00:40.835 INFO:tasks.workunit.client.1.vm09.stdout:5/168: fsync d2/d37/d3c/f2f 0 2026-03-09T15:00:40.839 INFO:tasks.workunit.client.1.vm09.stdout:0/226: rename da/dc/d22/f25 to da/d30/d36/f4e 0 2026-03-09T15:00:40.845 INFO:tasks.workunit.client.1.vm09.stdout:4/183: write db/d12/d16/f36 [922183,114340] 0 2026-03-09T15:00:40.846 INFO:tasks.workunit.client.1.vm09.stdout:1/133: dwrite d8/d1b/f21 [0,4194304] 0 2026-03-09T15:00:40.847 INFO:tasks.workunit.client.1.vm09.stdout:3/179: unlink d3/d4/c11 0 2026-03-09T15:00:40.850 INFO:tasks.workunit.client.1.vm09.stdout:2/205: creat df/d20/f3f x:0 0 0 2026-03-09T15:00:40.850 INFO:tasks.workunit.client.1.vm09.stdout:0/227: dread da/dc/d22/f3b [0,4194304] 0 2026-03-09T15:00:40.850 INFO:tasks.workunit.client.1.vm09.stdout:2/206: chown le 0 1 2026-03-09T15:00:40.850 INFO:tasks.workunit.client.1.vm09.stdout:9/187: dread d1/d7/d1e/d2b/d2e/f2d [0,4194304] 0 2026-03-09T15:00:40.851 INFO:tasks.workunit.client.1.vm09.stdout:2/207: fsync f4 0 
2026-03-09T15:00:40.857 INFO:tasks.workunit.client.1.vm09.stdout:8/177: link df/d2d/f2f df/d1f/f31 0 2026-03-09T15:00:40.857 INFO:tasks.workunit.client.1.vm09.stdout:7/193: creat d3/d28/f35 x:0 0 0 2026-03-09T15:00:40.857 INFO:tasks.workunit.client.1.vm09.stdout:8/178: stat df/f30 0 2026-03-09T15:00:40.858 INFO:tasks.workunit.client.1.vm09.stdout:4/184: symlink db/d19/d32/l3f 0 2026-03-09T15:00:40.858 INFO:tasks.workunit.client.1.vm09.stdout:8/179: chown fe 0 1 2026-03-09T15:00:40.859 INFO:tasks.workunit.client.1.vm09.stdout:8/180: write df/f12 [332725,57113] 0 2026-03-09T15:00:40.861 INFO:tasks.workunit.client.1.vm09.stdout:9/188: write d1/d7/d1e/d2b/f37 [830672,64619] 0 2026-03-09T15:00:40.864 INFO:tasks.workunit.client.1.vm09.stdout:2/208: mkdir df/d20/d29/d40 0 2026-03-09T15:00:40.864 INFO:tasks.workunit.client.1.vm09.stdout:1/134: dwrite d8/d10/d24/f2a [0,4194304] 0 2026-03-09T15:00:40.867 INFO:tasks.workunit.client.1.vm09.stdout:2/209: read df/d1f/f39 [25431,126489] 0 2026-03-09T15:00:40.869 INFO:tasks.workunit.client.1.vm09.stdout:7/194: creat d3/d28/d2e/f36 x:0 0 0 2026-03-09T15:00:40.878 INFO:tasks.workunit.client.1.vm09.stdout:8/181: creat df/d24/f32 x:0 0 0 2026-03-09T15:00:40.880 INFO:tasks.workunit.client.1.vm09.stdout:9/189: creat d1/d7/d1e/d2b/f42 x:0 0 0 2026-03-09T15:00:40.885 INFO:tasks.workunit.client.1.vm09.stdout:2/210: dwrite df/d20/f2b [0,4194304] 0 2026-03-09T15:00:40.885 INFO:tasks.workunit.client.1.vm09.stdout:8/182: write df/d2d/f2f [216973,2543] 0 2026-03-09T15:00:40.887 INFO:tasks.workunit.client.1.vm09.stdout:4/185: getdents db/d12 0 2026-03-09T15:00:40.893 INFO:tasks.workunit.client.1.vm09.stdout:1/135: sync 2026-03-09T15:00:40.895 INFO:tasks.workunit.client.1.vm09.stdout:4/186: dwrite db/d12/d16/f26 [0,4194304] 0 2026-03-09T15:00:40.903 INFO:tasks.workunit.client.1.vm09.stdout:8/183: mkdir df/d1c/d2c/d33 0 2026-03-09T15:00:40.903 INFO:tasks.workunit.client.1.vm09.stdout:1/136: write d8/d1b/f1f [6543226,16409] 0 
2026-03-09T15:00:40.904 INFO:tasks.workunit.client.1.vm09.stdout:8/184: truncate df/f14 4721717 0 2026-03-09T15:00:40.904 INFO:tasks.workunit.client.1.vm09.stdout:2/211: link df/f16 df/d2d/f41 0 2026-03-09T15:00:40.904 INFO:tasks.workunit.client.1.vm09.stdout:2/212: write df/f14 [1363024,126329] 0 2026-03-09T15:00:40.912 INFO:tasks.workunit.client.1.vm09.stdout:8/185: link fe df/f34 0 2026-03-09T15:00:40.918 INFO:tasks.workunit.client.1.vm09.stdout:1/137: dwrite d8/f17 [4194304,4194304] 0 2026-03-09T15:00:40.922 INFO:tasks.workunit.client.1.vm09.stdout:2/213: dwrite df/d1f/f38 [0,4194304] 0 2026-03-09T15:00:40.928 INFO:tasks.workunit.client.1.vm09.stdout:4/187: dwrite db/f21 [0,4194304] 0 2026-03-09T15:00:40.928 INFO:tasks.workunit.client.1.vm09.stdout:2/214: write df/f16 [2835372,128936] 0 2026-03-09T15:00:40.930 INFO:tasks.workunit.client.1.vm09.stdout:1/138: write d8/ff [924966,128265] 0 2026-03-09T15:00:40.932 INFO:tasks.workunit.client.1.vm09.stdout:1/139: rmdir d8/d1b 39 2026-03-09T15:00:40.932 INFO:tasks.workunit.client.1.vm09.stdout:4/188: symlink db/l40 0 2026-03-09T15:00:40.932 INFO:tasks.workunit.client.1.vm09.stdout:2/215: truncate df/d20/f24 228171 0 2026-03-09T15:00:40.933 INFO:tasks.workunit.client.1.vm09.stdout:1/140: chown d8/f17 23162329 1 2026-03-09T15:00:40.936 INFO:tasks.workunit.client.1.vm09.stdout:1/141: rmdir d8/d1b/d2e 0 2026-03-09T15:00:40.938 INFO:tasks.workunit.client.1.vm09.stdout:1/142: fsync d8/d10/f1a 0 2026-03-09T15:00:40.941 INFO:tasks.workunit.client.1.vm09.stdout:1/143: symlink d8/l30 0 2026-03-09T15:00:40.967 INFO:tasks.workunit.client.1.vm09.stdout:1/144: sync 2026-03-09T15:00:40.967 INFO:tasks.workunit.client.1.vm09.stdout:1/145: readlink d8/d22/l27 0 2026-03-09T15:00:40.968 INFO:tasks.workunit.client.1.vm09.stdout:1/146: symlink d8/d10/d24/l31 0 2026-03-09T15:00:40.974 INFO:tasks.workunit.client.1.vm09.stdout:1/147: dwrite d8/d10/f2f [0,4194304] 0 2026-03-09T15:00:40.981 INFO:tasks.workunit.client.1.vm09.stdout:1/148: mkdir 
d8/d22/d32 0 2026-03-09T15:00:40.981 INFO:tasks.workunit.client.1.vm09.stdout:1/149: readlink d8/d10/d24/l31 0 2026-03-09T15:00:41.008 INFO:tasks.workunit.client.1.vm09.stdout:6/199: write d6/d20/f34 [147044,55493] 0 2026-03-09T15:00:41.013 INFO:tasks.workunit.client.1.vm09.stdout:6/200: fdatasync d6/df/d23/f2f 0 2026-03-09T15:00:41.015 INFO:tasks.workunit.client.1.vm09.stdout:0/228: rename da/d30/d36/f4e to da/dc/d1c/d3c/f4f 0 2026-03-09T15:00:41.016 INFO:tasks.workunit.client.1.vm09.stdout:0/229: chown da/dc/d10/c26 403 1 2026-03-09T15:00:41.022 INFO:tasks.workunit.client.1.vm09.stdout:7/195: rename d3/f12 to d3/d1d/f37 0 2026-03-09T15:00:41.022 INFO:tasks.workunit.client.1.vm09.stdout:7/196: write d3/d28/f35 [665503,13584] 0 2026-03-09T15:00:41.023 INFO:tasks.workunit.client.1.vm09.stdout:0/230: mknod da/dc/d1c/d3c/c50 0 2026-03-09T15:00:41.025 INFO:tasks.workunit.client.1.vm09.stdout:3/180: write d3/d3a/d2b/d31/f34 [364354,17858] 0 2026-03-09T15:00:41.027 INFO:tasks.workunit.client.1.vm09.stdout:3/181: write d3/d3a/f1d [4603387,57337] 0 2026-03-09T15:00:41.027 INFO:tasks.workunit.client.1.vm09.stdout:5/169: truncate d2/d4/f23 1689967 0 2026-03-09T15:00:41.030 INFO:tasks.workunit.client.1.vm09.stdout:0/231: dwrite f7 [4194304,4194304] 0 2026-03-09T15:00:41.033 INFO:tasks.workunit.client.1.vm09.stdout:3/182: dread d3/d4/f26 [0,4194304] 0 2026-03-09T15:00:41.034 INFO:tasks.workunit.client.1.vm09.stdout:3/183: fdatasync d3/d4/f1a 0 2026-03-09T15:00:41.034 INFO:tasks.workunit.client.1.vm09.stdout:9/190: rename d1/d7/d1e/d2b/f37 to d1/d7/d1e/d2b/d40/f43 0 2026-03-09T15:00:41.036 INFO:tasks.workunit.client.1.vm09.stdout:0/232: write da/dc/d1c/d3c/f4f [982356,129381] 0 2026-03-09T15:00:41.037 INFO:tasks.workunit.client.1.vm09.stdout:9/191: write d1/d7/d1e/d2b/d2e/f19 [1793369,76528] 0 2026-03-09T15:00:41.039 INFO:tasks.workunit.client.1.vm09.stdout:0/233: truncate da/dc/d10/f16 981928 0 2026-03-09T15:00:41.040 INFO:tasks.workunit.client.1.vm09.stdout:0/234: read - 
da/f4b zero size 2026-03-09T15:00:41.041 INFO:tasks.workunit.client.1.vm09.stdout:7/197: link d3/l18 d3/d1d/l38 0 2026-03-09T15:00:41.042 INFO:tasks.workunit.client.1.vm09.stdout:0/235: readlink da/dc/d1c/d46/l4d 0 2026-03-09T15:00:41.042 INFO:tasks.workunit.client.1.vm09.stdout:3/184: link d3/d4/l5 d3/d3a/d2b/l42 0 2026-03-09T15:00:41.042 INFO:tasks.workunit.client.1.vm09.stdout:7/198: fdatasync d3/d28/d2e/f36 0 2026-03-09T15:00:41.048 INFO:tasks.workunit.client.1.vm09.stdout:3/185: mknod d3/d3a/c43 0 2026-03-09T15:00:41.049 INFO:tasks.workunit.client.1.vm09.stdout:0/236: creat da/dc/d1c/d3c/d44/f51 x:0 0 0 2026-03-09T15:00:41.049 INFO:tasks.workunit.client.1.vm09.stdout:9/192: getdents d1 0 2026-03-09T15:00:41.054 INFO:tasks.workunit.client.1.vm09.stdout:9/193: symlink d1/d7/d1e/d2b/l44 0 2026-03-09T15:00:41.054 INFO:tasks.workunit.client.1.vm09.stdout:0/237: dwrite da/f4b [0,4194304] 0 2026-03-09T15:00:41.056 INFO:tasks.workunit.client.1.vm09.stdout:9/194: creat d1/d7/f45 x:0 0 0 2026-03-09T15:00:41.057 INFO:tasks.workunit.client.1.vm09.stdout:0/238: write da/dc/d1c/d3c/d44/f51 [133840,8671] 0 2026-03-09T15:00:41.057 INFO:tasks.workunit.client.1.vm09.stdout:9/195: truncate d1/f2c 334013 0 2026-03-09T15:00:41.058 INFO:tasks.workunit.client.1.vm09.stdout:3/186: link d3/f9 d3/d3a/d2b/d36/f44 0 2026-03-09T15:00:41.067 INFO:tasks.workunit.client.1.vm09.stdout:3/187: dwrite d3/f3b [0,4194304] 0 2026-03-09T15:00:41.068 INFO:tasks.workunit.client.1.vm09.stdout:0/239: dread da/dc/d10/f2d [0,4194304] 0 2026-03-09T15:00:41.070 INFO:tasks.workunit.client.1.vm09.stdout:1/150: write d8/d10/d24/f2a [4720788,45164] 0 2026-03-09T15:00:41.072 INFO:tasks.workunit.client.1.vm09.stdout:1/151: chown d8/f17 127621 1 2026-03-09T15:00:41.085 INFO:tasks.workunit.client.1.vm09.stdout:0/240: creat da/dc/d1c/d46/f52 x:0 0 0 2026-03-09T15:00:41.086 INFO:tasks.workunit.client.1.vm09.stdout:0/241: rmdir da/dc/d1c/d3c 39 2026-03-09T15:00:41.089 INFO:tasks.workunit.client.1.vm09.stdout:7/199: 
dread d3/d1d/f37 [0,4194304] 0 2026-03-09T15:00:41.090 INFO:tasks.workunit.client.1.vm09.stdout:7/200: readlink l2 0 2026-03-09T15:00:41.090 INFO:tasks.workunit.client.1.vm09.stdout:7/201: dread - d3/f32 zero size 2026-03-09T15:00:41.094 INFO:tasks.workunit.client.1.vm09.stdout:7/202: link d3/d1d/l21 d3/d1d/d2d/l39 0 2026-03-09T15:00:41.094 INFO:tasks.workunit.client.1.vm09.stdout:7/203: readlink d3/db/d15/l1c 0 2026-03-09T15:00:41.098 INFO:tasks.workunit.client.1.vm09.stdout:7/204: rmdir d3/d28/d2e 39 2026-03-09T15:00:41.100 INFO:tasks.workunit.client.1.vm09.stdout:0/242: sync 2026-03-09T15:00:41.105 INFO:tasks.workunit.client.1.vm09.stdout:0/243: write da/dc/d1c/d3c/d44/f51 [772804,27403] 0 2026-03-09T15:00:41.106 INFO:tasks.workunit.client.1.vm09.stdout:0/244: dread - da/f4c zero size 2026-03-09T15:00:41.108 INFO:tasks.workunit.client.1.vm09.stdout:0/245: unlink da/dc/d10/f1e 0 2026-03-09T15:00:41.114 INFO:tasks.workunit.client.1.vm09.stdout:0/246: dwrite da/f4c [0,4194304] 0 2026-03-09T15:00:41.117 INFO:tasks.workunit.client.1.vm09.stdout:0/247: write da/dc/d22/f3b [1283966,18718] 0 2026-03-09T15:00:41.120 INFO:tasks.workunit.client.1.vm09.stdout:0/248: creat da/dc/d22/f53 x:0 0 0 2026-03-09T15:00:41.121 INFO:tasks.workunit.client.1.vm09.stdout:0/249: write da/dc/d10/f11 [138725,65888] 0 2026-03-09T15:00:41.122 INFO:tasks.workunit.client.1.vm09.stdout:0/250: chown da/dc/f28 5 1 2026-03-09T15:00:41.127 INFO:tasks.workunit.client.1.vm09.stdout:0/251: symlink da/dc/d1c/d3c/d44/l54 0 2026-03-09T15:00:41.132 INFO:tasks.workunit.client.1.vm09.stdout:0/252: dwrite da/f4c [0,4194304] 0 2026-03-09T15:00:41.145 INFO:tasks.workunit.client.1.vm09.stdout:9/196: dread d1/d7/f13 [0,4194304] 0 2026-03-09T15:00:41.147 INFO:tasks.workunit.client.1.vm09.stdout:9/197: link d1/f29 d1/d7/d1e/f46 0 2026-03-09T15:00:41.151 INFO:tasks.workunit.client.1.vm09.stdout:9/198: symlink d1/d7/d1e/d2b/l47 0 2026-03-09T15:00:41.153 INFO:tasks.workunit.client.1.vm09.stdout:9/199: rename 
d1/d7/d1e/d2b/l38 to d1/l48 0 2026-03-09T15:00:41.180 INFO:tasks.workunit.client.1.vm09.stdout:8/186: dwrite df/f1a [0,4194304] 0 2026-03-09T15:00:41.182 INFO:tasks.workunit.client.1.vm09.stdout:8/187: fdatasync df/d24/f32 0 2026-03-09T15:00:41.214 INFO:tasks.workunit.client.1.vm09.stdout:4/189: dwrite db/f14 [0,4194304] 0 2026-03-09T15:00:41.214 INFO:tasks.workunit.client.1.vm09.stdout:4/190: chown db/c31 131 1 2026-03-09T15:00:41.220 INFO:tasks.workunit.client.1.vm09.stdout:4/191: readlink db/lc 0 2026-03-09T15:00:41.224 INFO:tasks.workunit.client.1.vm09.stdout:1/152: rmdir d8/d10 39 2026-03-09T15:00:41.225 INFO:tasks.workunit.client.1.vm09.stdout:2/216: dwrite df/f1c [4194304,4194304] 0 2026-03-09T15:00:41.226 INFO:tasks.workunit.client.1.vm09.stdout:2/217: truncate df/d2d/f3c 496225 0 2026-03-09T15:00:41.226 INFO:tasks.workunit.client.1.vm09.stdout:2/218: readlink df/l12 0 2026-03-09T15:00:41.231 INFO:tasks.workunit.client.1.vm09.stdout:4/192: sync 2026-03-09T15:00:41.237 INFO:tasks.workunit.client.1.vm09.stdout:1/153: dread - d8/d10/f12 zero size 2026-03-09T15:00:41.241 INFO:tasks.workunit.client.1.vm09.stdout:4/193: rename db/d12/d16/l34 to db/d2f/l41 0 2026-03-09T15:00:41.241 INFO:tasks.workunit.client.1.vm09.stdout:1/154: mknod d8/d10/c33 0 2026-03-09T15:00:41.242 INFO:tasks.workunit.client.1.vm09.stdout:1/155: write d8/d10/f15 [3929789,49958] 0 2026-03-09T15:00:41.243 INFO:tasks.workunit.client.1.vm09.stdout:1/156: read d8/f17 [7551058,74450] 0 2026-03-09T15:00:41.243 INFO:tasks.workunit.client.1.vm09.stdout:4/194: truncate f4 3356027 0 2026-03-09T15:00:41.244 INFO:tasks.workunit.client.1.vm09.stdout:1/157: truncate d8/d10/f13 918898 0 2026-03-09T15:00:41.245 INFO:tasks.workunit.client.1.vm09.stdout:1/158: fdatasync d8/ff 0 2026-03-09T15:00:41.248 INFO:tasks.workunit.client.1.vm09.stdout:5/170: rmdir d2/d4 39 2026-03-09T15:00:41.252 INFO:tasks.workunit.client.1.vm09.stdout:4/195: write db/d12/f27 [109697,116914] 0 2026-03-09T15:00:41.252 
INFO:tasks.workunit.client.1.vm09.stdout:6/201: dwrite d6/db/f42 [0,4194304] 0 2026-03-09T15:00:41.255 INFO:tasks.workunit.client.1.vm09.stdout:1/159: dwrite d8/fa [0,4194304] 0 2026-03-09T15:00:41.255 INFO:tasks.workunit.client.1.vm09.stdout:1/160: dread - d8/d10/f12 zero size 2026-03-09T15:00:41.265 INFO:tasks.workunit.client.1.vm09.stdout:3/188: dwrite d3/f29 [0,4194304] 0 2026-03-09T15:00:41.265 INFO:tasks.workunit.client.1.vm09.stdout:7/205: write d3/d1d/f30 [1392081,29032] 0 2026-03-09T15:00:41.265 INFO:tasks.workunit.client.1.vm09.stdout:7/206: stat d3/db/c20 0 2026-03-09T15:00:41.266 INFO:tasks.workunit.client.1.vm09.stdout:5/171: creat d2/d37/f43 x:0 0 0 2026-03-09T15:00:41.270 INFO:tasks.workunit.client.1.vm09.stdout:1/161: mknod d8/d1b/c34 0 2026-03-09T15:00:41.271 INFO:tasks.workunit.client.1.vm09.stdout:3/189: creat d3/d3a/d2b/d31/f45 x:0 0 0 2026-03-09T15:00:41.271 INFO:tasks.workunit.client.1.vm09.stdout:6/202: mkdir d6/d20/d44/d45 0 2026-03-09T15:00:41.272 INFO:tasks.workunit.client.1.vm09.stdout:0/253: truncate da/dc/d10/f16 59440 0 2026-03-09T15:00:41.274 INFO:tasks.workunit.client.1.vm09.stdout:3/190: write d3/d3a/d2b/d39/f3c [399573,109226] 0 2026-03-09T15:00:41.277 INFO:tasks.workunit.client.1.vm09.stdout:7/207: unlink d3/d28/f2b 0 2026-03-09T15:00:41.279 INFO:tasks.workunit.client.1.vm09.stdout:7/208: dread - d3/f32 zero size 2026-03-09T15:00:41.279 INFO:tasks.workunit.client.1.vm09.stdout:5/172: fsync d2/f29 0 2026-03-09T15:00:41.279 INFO:tasks.workunit.client.1.vm09.stdout:4/196: link db/d19/d23/c33 db/d2f/c42 0 2026-03-09T15:00:41.281 INFO:tasks.workunit.client.1.vm09.stdout:6/203: fdatasync f0 0 2026-03-09T15:00:41.281 INFO:tasks.workunit.client.1.vm09.stdout:6/204: chown d6/d20/d44/d45 3683 1 2026-03-09T15:00:41.281 INFO:tasks.workunit.client.1.vm09.stdout:1/162: write d8/f17 [9029234,32874] 0 2026-03-09T15:00:41.283 INFO:tasks.workunit.client.1.vm09.stdout:3/191: creat d3/d3a/d2b/d36/f46 x:0 0 0 2026-03-09T15:00:41.288 
INFO:tasks.workunit.client.1.vm09.stdout:7/209: symlink d3/d1d/l3a 0 2026-03-09T15:00:41.291 INFO:tasks.workunit.client.1.vm09.stdout:5/173: chown d2/d4/c1c 5 1 2026-03-09T15:00:41.291 INFO:tasks.workunit.client.1.vm09.stdout:1/163: write d8/ff [1532343,20496] 0 2026-03-09T15:00:41.294 INFO:tasks.workunit.client.1.vm09.stdout:1/164: symlink d8/d22/l35 0 2026-03-09T15:00:41.296 INFO:tasks.workunit.client.1.vm09.stdout:5/174: rmdir d2/d4 39 2026-03-09T15:00:41.296 INFO:tasks.workunit.client.1.vm09.stdout:6/205: dwrite d6/d20/d2a/d3d/f43 [0,4194304] 0 2026-03-09T15:00:41.297 INFO:tasks.workunit.client.1.vm09.stdout:0/254: dwrite da/dc/d10/f29 [4194304,4194304] 0 2026-03-09T15:00:41.303 INFO:tasks.workunit.client.1.vm09.stdout:6/206: fsync d6/df/d23/f29 0 2026-03-09T15:00:41.308 INFO:tasks.workunit.client.1.vm09.stdout:7/210: rename d3/db/d25/c31 to d3/db/d15/c3b 0 2026-03-09T15:00:41.310 INFO:tasks.workunit.client.1.vm09.stdout:8/188: truncate df/f23 315756 0 2026-03-09T15:00:41.312 INFO:tasks.workunit.client.1.vm09.stdout:6/207: mkdir d6/d20/d2a/d3d/d46 0 2026-03-09T15:00:41.314 INFO:tasks.workunit.client.1.vm09.stdout:1/165: dwrite d8/fa [0,4194304] 0 2026-03-09T15:00:41.314 INFO:tasks.workunit.client.1.vm09.stdout:0/255: creat da/dc/d22/f55 x:0 0 0 2026-03-09T15:00:41.314 INFO:tasks.workunit.client.1.vm09.stdout:9/200: stat d1/d7/d1e/d2b/f42 0 2026-03-09T15:00:41.314 INFO:tasks.workunit.client.1.vm09.stdout:9/201: write d1/d7/d1e/d2b/f3f [999141,95444] 0 2026-03-09T15:00:41.315 INFO:tasks.workunit.client.1.vm09.stdout:9/202: dread - d1/d7/d1e/d2b/d2e/f3a zero size 2026-03-09T15:00:41.326 INFO:tasks.workunit.client.1.vm09.stdout:5/175: rename d2/c20 to d2/d37/d3c/c44 0 2026-03-09T15:00:41.328 INFO:tasks.workunit.client.1.vm09.stdout:1/166: mknod d8/d1b/c36 0 2026-03-09T15:00:41.328 INFO:tasks.workunit.client.1.vm09.stdout:8/189: stat df/d1c/l2e 0 2026-03-09T15:00:41.331 INFO:tasks.workunit.client.1.vm09.stdout:4/197: write db/d12/d16/f2a [3031,75357] 0 
2026-03-09T15:00:41.332 INFO:tasks.workunit.client.1.vm09.stdout:4/198: fdatasync db/d12/f3d 0 2026-03-09T15:00:41.333 INFO:tasks.workunit.client.1.vm09.stdout:9/203: rmdir d1/d7/d1e 39 2026-03-09T15:00:41.333 INFO:tasks.workunit.client.1.vm09.stdout:5/176: fsync d2/d4/f1f 0 2026-03-09T15:00:41.334 INFO:tasks.workunit.client.1.vm09.stdout:6/208: symlink d6/l47 0 2026-03-09T15:00:41.339 INFO:tasks.workunit.client.1.vm09.stdout:8/190: fdatasync df/f30 0 2026-03-09T15:00:41.343 INFO:tasks.workunit.client.1.vm09.stdout:9/204: dread d1/d7/f13 [0,4194304] 0 2026-03-09T15:00:41.343 INFO:tasks.workunit.client.1.vm09.stdout:0/256: symlink da/dc/d1c/l56 0 2026-03-09T15:00:41.343 INFO:tasks.workunit.client.1.vm09.stdout:0/257: stat da 0 2026-03-09T15:00:41.343 INFO:tasks.workunit.client.1.vm09.stdout:6/209: mknod d6/d20/c48 0 2026-03-09T15:00:41.343 INFO:tasks.workunit.client.1.vm09.stdout:6/210: chown d6/d20/d38 1051374 1 2026-03-09T15:00:41.343 INFO:tasks.workunit.client.1.vm09.stdout:6/211: write d6/d20/f36 [3621522,112417] 0 2026-03-09T15:00:41.343 INFO:tasks.workunit.client.1.vm09.stdout:4/199: creat db/d19/d35/f43 x:0 0 0 2026-03-09T15:00:41.344 INFO:tasks.workunit.client.1.vm09.stdout:8/191: rmdir df/d1f 39 2026-03-09T15:00:41.348 INFO:tasks.workunit.client.1.vm09.stdout:1/167: sync 2026-03-09T15:00:41.349 INFO:tasks.workunit.client.1.vm09.stdout:0/258: mkdir da/d57 0 2026-03-09T15:00:41.351 INFO:tasks.workunit.client.1.vm09.stdout:0/259: write da/f4c [2184237,28216] 0 2026-03-09T15:00:41.358 INFO:tasks.workunit.client.1.vm09.stdout:7/211: dread d3/db/d25/f22 [0,4194304] 0 2026-03-09T15:00:41.360 INFO:tasks.workunit.client.1.vm09.stdout:1/168: dwrite d8/d1b/f1f [8388608,4194304] 0 2026-03-09T15:00:41.360 INFO:tasks.workunit.client.1.vm09.stdout:1/169: chown d8/l30 118 1 2026-03-09T15:00:41.360 INFO:tasks.workunit.client.1.vm09.stdout:8/192: creat df/d1f/f35 x:0 0 0 2026-03-09T15:00:41.361 INFO:tasks.workunit.client.1.vm09.stdout:2/219: write df/d1f/f39 [1111537,40268] 
0 2026-03-09T15:00:41.361 INFO:tasks.workunit.client.1.vm09.stdout:2/220: stat f5 0 2026-03-09T15:00:41.364 INFO:tasks.workunit.client.1.vm09.stdout:6/212: link d6/df/d23/f2f d6/d20/d24/f49 0 2026-03-09T15:00:41.365 INFO:tasks.workunit.client.1.vm09.stdout:1/170: chown d8/d22/l2b 26251887 1 2026-03-09T15:00:41.365 INFO:tasks.workunit.client.1.vm09.stdout:2/221: dread - df/d20/f3f zero size 2026-03-09T15:00:41.371 INFO:tasks.workunit.client.1.vm09.stdout:6/213: creat d6/d20/d44/f4a x:0 0 0 2026-03-09T15:00:41.374 INFO:tasks.workunit.client.1.vm09.stdout:7/212: mknod d3/d28/c3c 0 2026-03-09T15:00:41.374 INFO:tasks.workunit.client.1.vm09.stdout:2/222: unlink df/d20/f2b 0 2026-03-09T15:00:41.375 INFO:tasks.workunit.client.1.vm09.stdout:2/223: chown df/d1f/f38 20876109 1 2026-03-09T15:00:41.376 INFO:tasks.workunit.client.1.vm09.stdout:1/171: dwrite d8/d10/f1a [0,4194304] 0 2026-03-09T15:00:41.377 INFO:tasks.workunit.client.1.vm09.stdout:0/260: dread da/dc/f17 [0,4194304] 0 2026-03-09T15:00:41.377 INFO:tasks.workunit.client.1.vm09.stdout:1/172: chown d8/d10 17997 1 2026-03-09T15:00:41.380 INFO:tasks.workunit.client.1.vm09.stdout:0/261: write f5 [3399218,123136] 0 2026-03-09T15:00:41.381 INFO:tasks.workunit.client.1.vm09.stdout:0/262: chown da/dc/d1c/c3e 40 1 2026-03-09T15:00:41.382 INFO:tasks.workunit.client.1.vm09.stdout:6/214: sync 2026-03-09T15:00:41.388 INFO:tasks.workunit.client.1.vm09.stdout:6/215: dwrite f0 [4194304,4194304] 0 2026-03-09T15:00:41.394 INFO:tasks.workunit.client.1.vm09.stdout:2/224: link df/d1f/f38 df/f42 0 2026-03-09T15:00:41.395 INFO:tasks.workunit.client.1.vm09.stdout:0/263: dread da/dc/d10/f2d [0,4194304] 0 2026-03-09T15:00:41.395 INFO:tasks.workunit.client.1.vm09.stdout:6/216: symlink d6/d20/d2a/l4b 0 2026-03-09T15:00:41.396 INFO:tasks.workunit.client.1.vm09.stdout:6/217: dread - d6/d20/d44/f4a zero size 2026-03-09T15:00:41.397 INFO:tasks.workunit.client.1.vm09.stdout:0/264: symlink da/dc/d1c/d46/l58 0 2026-03-09T15:00:41.397 
INFO:tasks.workunit.client.1.vm09.stdout:0/265: dread - da/d30/f38 zero size 2026-03-09T15:00:41.397 INFO:tasks.workunit.client.1.vm09.stdout:6/218: creat d6/d20/d44/d45/f4c x:0 0 0 2026-03-09T15:00:41.405 INFO:tasks.workunit.client.1.vm09.stdout:6/219: rename d6/l47 to d6/db/d10/l4d 0 2026-03-09T15:00:41.405 INFO:tasks.workunit.client.1.vm09.stdout:6/220: fsync d6/f39 0 2026-03-09T15:00:41.410 INFO:tasks.workunit.client.1.vm09.stdout:5/177: dread d2/d4/f1f [0,4194304] 0 2026-03-09T15:00:41.410 INFO:tasks.workunit.client.1.vm09.stdout:5/178: stat d2/d4/c2d 0 2026-03-09T15:00:41.413 INFO:tasks.workunit.client.1.vm09.stdout:6/221: dread d6/d20/d2a/d3d/f43 [0,4194304] 0 2026-03-09T15:00:41.415 INFO:tasks.workunit.client.1.vm09.stdout:6/222: mkdir d6/d20/d38/d4e 0 2026-03-09T15:00:41.438 INFO:tasks.workunit.client.1.vm09.stdout:5/179: write d2/f29 [1599539,75956] 0 2026-03-09T15:00:41.439 INFO:tasks.workunit.client.1.vm09.stdout:3/192: getdents d3/d3a/d2b/d36 0 2026-03-09T15:00:41.440 INFO:tasks.workunit.client.1.vm09.stdout:3/193: rename d3/d3a/d2b to d3/d3a/d2b/d39/d47 22 2026-03-09T15:00:41.443 INFO:tasks.workunit.client.1.vm09.stdout:5/180: mkdir d2/d37/d3c/d36/d45 0 2026-03-09T15:00:41.444 INFO:tasks.workunit.client.1.vm09.stdout:0/266: dwrite da/dc/d10/f16 [0,4194304] 0 2026-03-09T15:00:41.449 INFO:tasks.workunit.client.1.vm09.stdout:3/194: dwrite d3/d3a/d2b/d31/f40 [0,4194304] 0 2026-03-09T15:00:41.457 INFO:tasks.workunit.client.1.vm09.stdout:0/267: creat da/d57/f59 x:0 0 0 2026-03-09T15:00:41.460 INFO:tasks.workunit.client.1.vm09.stdout:5/181: link d2/d37/d3c/c28 d2/d37/d3c/c46 0 2026-03-09T15:00:41.462 INFO:tasks.workunit.client.1.vm09.stdout:0/268: fsync da/dc/f17 0 2026-03-09T15:00:41.464 INFO:tasks.workunit.client.1.vm09.stdout:0/269: write da/dc/d10/f29 [5618033,40890] 0 2026-03-09T15:00:41.467 INFO:tasks.workunit.client.1.vm09.stdout:0/270: unlink da/d30/c49 0 2026-03-09T15:00:41.473 INFO:tasks.workunit.client.1.vm09.stdout:3/195: dread d3/d4/f8 
[0,4194304] 0 2026-03-09T15:00:41.477 INFO:tasks.workunit.client.1.vm09.stdout:3/196: fdatasync d3/d3a/d2b/d31/f34 0 2026-03-09T15:00:41.482 INFO:tasks.workunit.client.1.vm09.stdout:3/197: dwrite d3/d3a/d2b/d31/f34 [0,4194304] 0 2026-03-09T15:00:41.490 INFO:tasks.workunit.client.1.vm09.stdout:2/225: read df/d20/f24 [37437,58221] 0 2026-03-09T15:00:41.490 INFO:tasks.workunit.client.1.vm09.stdout:1/173: rmdir d8 39 2026-03-09T15:00:41.491 INFO:tasks.workunit.client.1.vm09.stdout:2/226: symlink df/d3b/l43 0 2026-03-09T15:00:41.494 INFO:tasks.workunit.client.1.vm09.stdout:1/174: creat d8/d1b/f37 x:0 0 0 2026-03-09T15:00:41.497 INFO:tasks.workunit.client.1.vm09.stdout:1/175: rename d8/d10/c14 to d8/d22/c38 0 2026-03-09T15:00:41.501 INFO:tasks.workunit.client.1.vm09.stdout:2/227: dread f5 [0,4194304] 0 2026-03-09T15:00:41.503 INFO:tasks.workunit.client.1.vm09.stdout:1/176: dread d8/f19 [0,4194304] 0 2026-03-09T15:00:41.504 INFO:tasks.workunit.client.1.vm09.stdout:2/228: chown df/d1f/c2c 0 1 2026-03-09T15:00:41.504 INFO:tasks.workunit.client.1.vm09.stdout:3/198: dread d3/f9 [0,4194304] 0 2026-03-09T15:00:41.507 INFO:tasks.workunit.client.1.vm09.stdout:3/199: mkdir d3/d3a/d2b/d39/d48 0 2026-03-09T15:00:41.508 INFO:tasks.workunit.client.1.vm09.stdout:1/177: mkdir d8/d22/d32/d39 0 2026-03-09T15:00:41.509 INFO:tasks.workunit.client.1.vm09.stdout:3/200: write d3/d3a/f1c [1088889,129163] 0 2026-03-09T15:00:41.510 INFO:tasks.workunit.client.1.vm09.stdout:1/178: creat d8/d22/f3a x:0 0 0 2026-03-09T15:00:41.512 INFO:tasks.workunit.client.1.vm09.stdout:2/229: dread df/d1f/f39 [0,4194304] 0 2026-03-09T15:00:41.515 INFO:tasks.workunit.client.1.vm09.stdout:3/201: truncate d3/f6 4724424 0 2026-03-09T15:00:41.518 INFO:tasks.workunit.client.1.vm09.stdout:1/179: dwrite d8/d10/f13 [0,4194304] 0 2026-03-09T15:00:41.519 INFO:tasks.workunit.client.1.vm09.stdout:1/180: truncate d8/d10/f29 492574 0 2026-03-09T15:00:41.522 INFO:tasks.workunit.client.1.vm09.stdout:1/181: write d8/d1b/f21 
[884162,27198] 0 2026-03-09T15:00:41.523 INFO:tasks.workunit.client.1.vm09.stdout:3/202: write d3/d3a/d2b/d31/f45 [355777,54218] 0 2026-03-09T15:00:41.527 INFO:tasks.workunit.client.1.vm09.stdout:2/230: unlink df/d1f/f3e 0 2026-03-09T15:00:41.530 INFO:tasks.workunit.client.1.vm09.stdout:3/203: dwrite d3/ff [4194304,4194304] 0 2026-03-09T15:00:41.531 INFO:tasks.workunit.client.1.vm09.stdout:1/182: link d8/d10/f15 d8/d10/f3b 0 2026-03-09T15:00:41.531 INFO:tasks.workunit.client.1.vm09.stdout:2/231: mknod df/d20/c44 0 2026-03-09T15:00:41.533 INFO:tasks.workunit.client.1.vm09.stdout:1/183: readlink d8/d1b/l26 0 2026-03-09T15:00:41.533 INFO:tasks.workunit.client.1.vm09.stdout:3/204: rename d3/d3a/l24 to d3/l49 0 2026-03-09T15:00:41.537 INFO:tasks.workunit.client.1.vm09.stdout:1/184: dread d8/f19 [0,4194304] 0 2026-03-09T15:00:41.538 INFO:tasks.workunit.client.1.vm09.stdout:2/232: symlink df/d20/d2e/l45 0 2026-03-09T15:00:41.538 INFO:tasks.workunit.client.1.vm09.stdout:2/233: chown df/d1f/c22 12 1 2026-03-09T15:00:41.538 INFO:tasks.workunit.client.1.vm09.stdout:3/205: truncate d3/d3a/d2b/d39/f3c 949212 0 2026-03-09T15:00:41.540 INFO:tasks.workunit.client.1.vm09.stdout:2/234: fdatasync df/f16 0 2026-03-09T15:00:41.543 INFO:tasks.workunit.client.1.vm09.stdout:3/206: mkdir d3/d3a/d2b/d31/d4a 0 2026-03-09T15:00:41.546 INFO:tasks.workunit.client.1.vm09.stdout:1/185: link d8/d22/l27 d8/d22/d32/d39/l3c 0 2026-03-09T15:00:41.547 INFO:tasks.workunit.client.1.vm09.stdout:1/186: write d8/d10/f3b [1413405,81031] 0 2026-03-09T15:00:41.549 INFO:tasks.workunit.client.1.vm09.stdout:3/207: dread d3/f29 [0,4194304] 0 2026-03-09T15:00:41.551 INFO:tasks.workunit.client.1.vm09.stdout:2/235: sync 2026-03-09T15:00:41.552 INFO:tasks.workunit.client.1.vm09.stdout:2/236: fsync df/f1b 0 2026-03-09T15:00:41.552 INFO:tasks.workunit.client.1.vm09.stdout:1/187: dwrite d8/d1b/f21 [0,4194304] 0 2026-03-09T15:00:41.556 INFO:tasks.workunit.client.1.vm09.stdout:2/237: creat df/d3b/f46 x:0 0 0 
2026-03-09T15:00:41.557 INFO:tasks.workunit.client.1.vm09.stdout:2/238: mkdir df/d1f/d47 0 2026-03-09T15:00:41.558 INFO:tasks.workunit.client.1.vm09.stdout:3/208: dread d3/d4/f8 [0,4194304] 0 2026-03-09T15:00:41.559 INFO:tasks.workunit.client.1.vm09.stdout:2/239: write df/d2d/f3c [674332,87579] 0 2026-03-09T15:00:41.560 INFO:tasks.workunit.client.1.vm09.stdout:3/209: readlink d3/d3a/d2b/l37 0 2026-03-09T15:00:41.563 INFO:tasks.workunit.client.1.vm09.stdout:2/240: fdatasync df/d1f/f38 0 2026-03-09T15:00:41.563 INFO:tasks.workunit.client.1.vm09.stdout:2/241: write df/d3b/f46 [887207,69470] 0 2026-03-09T15:00:41.565 INFO:tasks.workunit.client.1.vm09.stdout:1/188: creat d8/f3d x:0 0 0 2026-03-09T15:00:41.567 INFO:tasks.workunit.client.1.vm09.stdout:1/189: mknod d8/d10/d24/c3e 0 2026-03-09T15:00:41.568 INFO:tasks.workunit.client.1.vm09.stdout:1/190: write d8/d1b/f21 [2363457,97423] 0 2026-03-09T15:00:41.568 INFO:tasks.workunit.client.1.vm09.stdout:3/210: symlink d3/d3a/d2b/d39/d48/l4b 0 2026-03-09T15:00:41.571 INFO:tasks.workunit.client.1.vm09.stdout:3/211: rename d3/c2f to d3/d3a/d2b/d39/c4c 0 2026-03-09T15:00:41.583 INFO:tasks.workunit.client.1.vm09.stdout:3/212: dread d3/d3a/d2b/d31/f33 [0,4194304] 0 2026-03-09T15:00:41.586 INFO:tasks.workunit.client.1.vm09.stdout:3/213: mknod d3/d3a/d2b/c4d 0 2026-03-09T15:00:41.621 INFO:tasks.workunit.client.1.vm09.stdout:9/205: truncate d1/d7/d1e/d2b/d2e/f16 862443 0 2026-03-09T15:00:41.623 INFO:tasks.workunit.client.1.vm09.stdout:9/206: rmdir d1/d7/d1e 39 2026-03-09T15:00:41.625 INFO:tasks.workunit.client.1.vm09.stdout:8/193: write df/d24/f29 [3858968,15319] 0 2026-03-09T15:00:41.628 INFO:tasks.workunit.client.1.vm09.stdout:9/207: mknod d1/d7/d1e/d2b/d2e/c49 0 2026-03-09T15:00:41.632 INFO:tasks.workunit.client.1.vm09.stdout:8/194: sync 2026-03-09T15:00:41.632 INFO:tasks.workunit.client.1.vm09.stdout:9/208: sync 2026-03-09T15:00:41.632 INFO:tasks.workunit.client.1.vm09.stdout:9/209: chown d1/d7/l1c 734992 1 2026-03-09T15:00:41.634 
INFO:tasks.workunit.client.1.vm09.stdout:8/195: rename df/f14 to df/d1c/d1d/f36 0 2026-03-09T15:00:41.637 INFO:tasks.workunit.client.1.vm09.stdout:8/196: mkdir df/d24/d37 0 2026-03-09T15:00:41.638 INFO:tasks.workunit.client.1.vm09.stdout:9/210: dwrite d1/f24 [0,4194304] 0 2026-03-09T15:00:41.642 INFO:tasks.workunit.client.1.vm09.stdout:9/211: fsync d1/d7/d1e/f2a 0 2026-03-09T15:00:41.643 INFO:tasks.workunit.client.1.vm09.stdout:8/197: mkdir df/d38 0 2026-03-09T15:00:41.646 INFO:tasks.workunit.client.1.vm09.stdout:8/198: getdents df/d38 0 2026-03-09T15:00:41.646 INFO:tasks.workunit.client.1.vm09.stdout:8/199: truncate df/d1f/f31 1151105 0 2026-03-09T15:00:41.648 INFO:tasks.workunit.client.1.vm09.stdout:8/200: creat df/d38/f39 x:0 0 0 2026-03-09T15:00:41.652 INFO:tasks.workunit.client.1.vm09.stdout:8/201: dread fe [0,4194304] 0 2026-03-09T15:00:41.655 INFO:tasks.workunit.client.1.vm09.stdout:7/213: truncate d3/d28/f2a 431153 0 2026-03-09T15:00:41.656 INFO:tasks.workunit.client.1.vm09.stdout:7/214: mkdir d3/d3d 0 2026-03-09T15:00:41.684 INFO:tasks.workunit.client.1.vm09.stdout:6/223: mkdir d6/db/d10/d4f 0 2026-03-09T15:00:41.712 INFO:tasks.workunit.client.1.vm09.stdout:4/200: dwrite f4 [0,4194304] 0 2026-03-09T15:00:41.714 INFO:tasks.workunit.client.1.vm09.stdout:4/201: write db/f21 [2629552,66803] 0 2026-03-09T15:00:41.727 INFO:tasks.workunit.client.1.vm09.stdout:4/202: mkdir db/d19/d23/d44 0 2026-03-09T15:00:41.727 INFO:tasks.workunit.client.1.vm09.stdout:4/203: stat db/d19/d32 0 2026-03-09T15:00:41.733 INFO:tasks.workunit.client.1.vm09.stdout:5/182: dwrite d2/f38 [0,4194304] 0 2026-03-09T15:00:41.736 INFO:tasks.workunit.client.1.vm09.stdout:7/215: mknod d3/d1d/d2d/c3e 0 2026-03-09T15:00:41.736 INFO:tasks.workunit.client.1.vm09.stdout:0/271: truncate da/dc/d10/f16 1929696 0 2026-03-09T15:00:41.738 INFO:tasks.workunit.client.1.vm09.stdout:5/183: read d2/d4/fd [4193797,103353] 0 2026-03-09T15:00:41.746 INFO:tasks.workunit.client.1.vm09.stdout:0/272: chown da/dc/c1a 
390277472 1 2026-03-09T15:00:41.747 INFO:tasks.workunit.client.1.vm09.stdout:4/204: dread f3 [0,4194304] 0 2026-03-09T15:00:41.747 INFO:tasks.workunit.client.1.vm09.stdout:0/273: dread - da/dc/d1c/d46/f52 zero size 2026-03-09T15:00:41.748 INFO:tasks.workunit.client.1.vm09.stdout:4/205: write f4 [3321227,24029] 0 2026-03-09T15:00:41.748 INFO:tasks.workunit.client.1.vm09.stdout:5/184: creat d2/f47 x:0 0 0 2026-03-09T15:00:41.749 INFO:tasks.workunit.client.1.vm09.stdout:0/274: mknod da/dc/d22/c5a 0 2026-03-09T15:00:41.751 INFO:tasks.workunit.client.1.vm09.stdout:0/275: mkdir da/dc/d1c/d46/d5b 0 2026-03-09T15:00:41.752 INFO:tasks.workunit.client.1.vm09.stdout:5/185: mknod d2/c48 0 2026-03-09T15:00:41.753 INFO:tasks.workunit.client.1.vm09.stdout:0/276: symlink da/d30/l5c 0 2026-03-09T15:00:41.755 INFO:tasks.workunit.client.1.vm09.stdout:0/277: symlink da/dc/d1c/l5d 0 2026-03-09T15:00:41.756 INFO:tasks.workunit.client.1.vm09.stdout:5/186: rename d2/l3 to d2/l49 0 2026-03-09T15:00:41.757 INFO:tasks.workunit.client.1.vm09.stdout:4/206: dwrite db/d12/d16/f36 [0,4194304] 0 2026-03-09T15:00:41.757 INFO:tasks.workunit.client.1.vm09.stdout:0/278: write da/d30/f3d [7525821,114033] 0 2026-03-09T15:00:41.760 INFO:tasks.workunit.client.1.vm09.stdout:5/187: write d2/f22 [3645377,91855] 0 2026-03-09T15:00:41.766 INFO:tasks.workunit.client.1.vm09.stdout:1/191: getdents d8/d22/d32 0 2026-03-09T15:00:41.769 INFO:tasks.workunit.client.1.vm09.stdout:4/207: rename f7 to db/d19/d23/d44/f45 0 2026-03-09T15:00:41.769 INFO:tasks.workunit.client.1.vm09.stdout:1/192: write d8/d10/d24/f2a [814944,81483] 0 2026-03-09T15:00:41.770 INFO:tasks.workunit.client.1.vm09.stdout:4/208: fdatasync db/d19/d32/f3e 0 2026-03-09T15:00:41.772 INFO:tasks.workunit.client.1.vm09.stdout:4/209: chown db/d19/d35/f43 181736 1 2026-03-09T15:00:41.772 INFO:tasks.workunit.client.1.vm09.stdout:5/188: dwrite d2/d4/f1d [4194304,4194304] 0 2026-03-09T15:00:41.775 INFO:tasks.workunit.client.1.vm09.stdout:4/210: fsync db/f14 0 
2026-03-09T15:00:41.776 INFO:tasks.workunit.client.1.vm09.stdout:5/189: chown d2/d4/l1a 4766 1 2026-03-09T15:00:41.776 INFO:tasks.workunit.client.1.vm09.stdout:4/211: stat db/d12/f3d 0 2026-03-09T15:00:41.776 INFO:tasks.workunit.client.1.vm09.stdout:5/190: write d2/f47 [593378,74850] 0 2026-03-09T15:00:41.782 INFO:tasks.workunit.client.1.vm09.stdout:4/212: creat db/d12/d16/f46 x:0 0 0 2026-03-09T15:00:41.782 INFO:tasks.workunit.client.1.vm09.stdout:5/191: creat d2/d37/d3c/d36/f4a x:0 0 0 2026-03-09T15:00:41.783 INFO:tasks.workunit.client.1.vm09.stdout:1/193: symlink d8/l3f 0 2026-03-09T15:00:41.783 INFO:tasks.workunit.client.1.vm09.stdout:4/213: chown db/d19/d23/c33 8 1 2026-03-09T15:00:41.784 INFO:tasks.workunit.client.1.vm09.stdout:4/214: readlink db/d19/d32/l3f 0 2026-03-09T15:00:41.788 INFO:tasks.workunit.client.1.vm09.stdout:5/192: creat d2/d37/d3c/f4b x:0 0 0 2026-03-09T15:00:41.788 INFO:tasks.workunit.client.1.vm09.stdout:5/193: dread - d2/f3d zero size 2026-03-09T15:00:41.789 INFO:tasks.workunit.client.1.vm09.stdout:4/215: rename db/d19/c22 to db/d19/d32/c47 0 2026-03-09T15:00:41.791 INFO:tasks.workunit.client.1.vm09.stdout:5/194: mkdir d2/d37/d3c/d36/d4c 0 2026-03-09T15:00:41.795 INFO:tasks.workunit.client.1.vm09.stdout:1/194: dwrite d8/d22/f3a [0,4194304] 0 2026-03-09T15:00:41.795 INFO:tasks.workunit.client.1.vm09.stdout:4/216: dread db/d12/f37 [0,4194304] 0 2026-03-09T15:00:41.797 INFO:tasks.workunit.client.1.vm09.stdout:4/217: truncate db/d12/f27 381543 0 2026-03-09T15:00:41.801 INFO:tasks.workunit.client.1.vm09.stdout:5/195: mknod d2/d4/c4d 0 2026-03-09T15:00:41.804 INFO:tasks.workunit.client.1.vm09.stdout:5/196: rename d2/d4/f31 to d2/d37/d3c/f4e 0 2026-03-09T15:00:41.805 INFO:tasks.workunit.client.1.vm09.stdout:4/218: creat db/d19/d32/d3b/f48 x:0 0 0 2026-03-09T15:00:41.805 INFO:tasks.workunit.client.1.vm09.stdout:1/195: dread d8/d10/f13 [0,4194304] 0 2026-03-09T15:00:41.806 INFO:tasks.workunit.client.1.vm09.stdout:4/219: chown db/d2f/l41 7168920 1 
2026-03-09T15:00:41.808 INFO:tasks.workunit.client.1.vm09.stdout:4/220: readlink db/d12/l30 0 2026-03-09T15:00:41.811 INFO:tasks.workunit.client.1.vm09.stdout:5/197: dwrite d2/f2e [0,4194304] 0 2026-03-09T15:00:41.818 INFO:tasks.workunit.client.1.vm09.stdout:5/198: write d2/d37/d3c/f4e [985595,124815] 0 2026-03-09T15:00:41.818 INFO:tasks.workunit.client.1.vm09.stdout:4/221: dwrite db/f21 [0,4194304] 0 2026-03-09T15:00:41.823 INFO:tasks.workunit.client.1.vm09.stdout:4/222: creat db/d19/d32/d3b/f49 x:0 0 0 2026-03-09T15:00:41.823 INFO:tasks.workunit.client.1.vm09.stdout:5/199: dwrite d2/f29 [0,4194304] 0 2026-03-09T15:00:41.824 INFO:tasks.workunit.client.1.vm09.stdout:5/200: stat d2/d4/c3e 0 2026-03-09T15:00:41.826 INFO:tasks.workunit.client.1.vm09.stdout:4/223: symlink db/d19/d23/d44/l4a 0 2026-03-09T15:00:41.889 INFO:tasks.workunit.client.1.vm09.stdout:2/242: getdents df/d3b 0 2026-03-09T15:00:41.892 INFO:tasks.workunit.client.1.vm09.stdout:2/243: creat df/d20/d2e/f48 x:0 0 0 2026-03-09T15:00:41.893 INFO:tasks.workunit.client.1.vm09.stdout:2/244: write df/f1b [5079832,79907] 0 2026-03-09T15:00:41.907 INFO:tasks.workunit.client.1.vm09.stdout:3/214: chown d3/d3a/d2b/d39/c4c 97903282 1 2026-03-09T15:00:41.907 INFO:tasks.workunit.client.1.vm09.stdout:3/215: write d3/f3b [1033069,69202] 0 2026-03-09T15:00:41.915 INFO:tasks.workunit.client.1.vm09.stdout:3/216: dwrite d3/d3a/f1c [0,4194304] 0 2026-03-09T15:00:41.918 INFO:tasks.workunit.client.1.vm09.stdout:3/217: dwrite d3/d3a/d2b/d36/f46 [0,4194304] 0 2026-03-09T15:00:41.928 INFO:tasks.workunit.client.1.vm09.stdout:3/218: rename d3/d3a/d2b/c2d to d3/d3a/d2b/d36/c4e 0 2026-03-09T15:00:41.940 INFO:tasks.workunit.client.1.vm09.stdout:9/212: truncate d1/d7/d1e/d2b/f30 3363853 0 2026-03-09T15:00:41.940 INFO:tasks.workunit.client.1.vm09.stdout:9/213: write d1/d7/d1e/d2b/f42 [402842,37637] 0 2026-03-09T15:00:41.944 INFO:tasks.workunit.client.1.vm09.stdout:9/214: dwrite d1/f28 [0,4194304] 0 2026-03-09T15:00:41.953 
INFO:tasks.workunit.client.1.vm09.stdout:9/215: truncate d1/d7/d1e/d2b/d2e/f1d 4689111 0 2026-03-09T15:00:41.953 INFO:tasks.workunit.client.1.vm09.stdout:9/216: mknod d1/d7/d1e/d2b/c4a 0 2026-03-09T15:00:41.953 INFO:tasks.workunit.client.1.vm09.stdout:9/217: mkdir d1/d7/d1e/d2b/d4b 0 2026-03-09T15:00:41.953 INFO:tasks.workunit.client.1.vm09.stdout:9/218: fdatasync d1/d7/f3e 0 2026-03-09T15:00:41.953 INFO:tasks.workunit.client.1.vm09.stdout:9/219: dread - d1/d7/d1e/d2b/d2e/f3a zero size 2026-03-09T15:00:41.969 INFO:tasks.workunit.client.1.vm09.stdout:3/219: sync 2026-03-09T15:00:41.971 INFO:tasks.workunit.client.1.vm09.stdout:3/220: symlink d3/d3a/d2b/d31/l4f 0 2026-03-09T15:00:41.972 INFO:tasks.workunit.client.1.vm09.stdout:3/221: chown d3/l23 3323480 1 2026-03-09T15:00:41.973 INFO:tasks.workunit.client.1.vm09.stdout:3/222: symlink d3/d3a/d2b/d39/l50 0 2026-03-09T15:00:41.976 INFO:tasks.workunit.client.1.vm09.stdout:3/223: dread d3/d3a/d2b/d31/f34 [0,4194304] 0 2026-03-09T15:00:41.977 INFO:tasks.workunit.client.1.vm09.stdout:3/224: write d3/d3a/d2b/d36/f46 [1110680,52664] 0 2026-03-09T15:00:41.979 INFO:tasks.workunit.client.1.vm09.stdout:3/225: creat d3/d3a/d2b/d36/f51 x:0 0 0 2026-03-09T15:00:41.982 INFO:tasks.workunit.client.1.vm09.stdout:3/226: dread d3/f29 [0,4194304] 0 2026-03-09T15:00:41.983 INFO:tasks.workunit.client.1.vm09.stdout:3/227: mkdir d3/d3a/d2b/d39/d48/d52 0 2026-03-09T15:00:41.986 INFO:tasks.workunit.client.1.vm09.stdout:3/228: rmdir d3/d3a/d2b/d39/d48/d52 0 2026-03-09T15:00:41.987 INFO:tasks.workunit.client.1.vm09.stdout:3/229: mkdir d3/d3a/d2b/d53 0 2026-03-09T15:00:41.987 INFO:tasks.workunit.client.1.vm09.stdout:3/230: chown d3/f3b 21678 1 2026-03-09T15:00:41.988 INFO:tasks.workunit.client.1.vm09.stdout:3/231: mkdir d3/d3a/d54 0 2026-03-09T15:00:41.989 INFO:tasks.workunit.client.1.vm09.stdout:3/232: chown d3 782 1 2026-03-09T15:00:41.992 INFO:tasks.workunit.client.1.vm09.stdout:3/233: dwrite d3/d4/f1b [0,4194304] 0 2026-03-09T15:00:41.993 
INFO:tasks.workunit.client.1.vm09.stdout:3/234: chown d3/d4/c10 16081 1 2026-03-09T15:00:41.993 INFO:tasks.workunit.client.1.vm09.stdout:3/235: read d3/d3a/f1c [2446830,107929] 0 2026-03-09T15:00:41.994 INFO:tasks.workunit.client.1.vm09.stdout:3/236: chown d3/d4/c14 1002 1 2026-03-09T15:00:41.994 INFO:tasks.workunit.client.1.vm09.stdout:3/237: stat d3/d4/f8 0 2026-03-09T15:00:41.999 INFO:tasks.workunit.client.1.vm09.stdout:3/238: link d3/d3a/d2b/d31/f3f d3/d4/f55 0 2026-03-09T15:00:42.001 INFO:tasks.workunit.client.1.vm09.stdout:3/239: dread d3/d4/f55 [0,4194304] 0 2026-03-09T15:00:42.005 INFO:tasks.workunit.client.1.vm09.stdout:8/202: rmdir df/d38 39 2026-03-09T15:00:42.008 INFO:tasks.workunit.client.1.vm09.stdout:8/203: symlink df/d1f/l3a 0 2026-03-09T15:00:42.008 INFO:tasks.workunit.client.1.vm09.stdout:8/204: truncate df/d1f/f35 651703 0 2026-03-09T15:00:42.011 INFO:tasks.workunit.client.1.vm09.stdout:3/240: dwrite d3/d3a/d2b/d31/f33 [0,4194304] 0 2026-03-09T15:00:42.013 INFO:tasks.workunit.client.1.vm09.stdout:8/205: mknod df/d38/c3b 0 2026-03-09T15:00:42.015 INFO:tasks.workunit.client.1.vm09.stdout:8/206: rmdir df/d1c/d2c 39 2026-03-09T15:00:42.015 INFO:tasks.workunit.client.1.vm09.stdout:3/241: truncate d3/d3a/d2b/d36/f44 1596795 0 2026-03-09T15:00:42.017 INFO:tasks.workunit.client.1.vm09.stdout:8/207: chown df/d1c/d2c/d33 1 1 2026-03-09T15:00:42.017 INFO:tasks.workunit.client.1.vm09.stdout:3/242: unlink d3/d4/f55 0 2026-03-09T15:00:42.023 INFO:tasks.workunit.client.1.vm09.stdout:8/208: dwrite df/d1c/d1d/f36 [4194304,4194304] 0 2026-03-09T15:00:42.030 INFO:tasks.workunit.client.1.vm09.stdout:7/216: write d3/d28/f2a [632548,13957] 0 2026-03-09T15:00:42.036 INFO:tasks.workunit.client.1.vm09.stdout:7/217: creat d3/f3f x:0 0 0 2026-03-09T15:00:42.039 INFO:tasks.workunit.client.1.vm09.stdout:7/218: symlink d3/db/d25/l40 0 2026-03-09T15:00:42.060 INFO:tasks.workunit.client.1.vm09.stdout:7/219: dread d3/d28/f2a [0,4194304] 0 2026-03-09T15:00:42.061 
INFO:tasks.workunit.client.1.vm09.stdout:7/220: truncate d3/db/fe 1066817 0 2026-03-09T15:00:42.064 INFO:tasks.workunit.client.1.vm09.stdout:7/221: symlink d3/l41 0 2026-03-09T15:00:42.065 INFO:tasks.workunit.client.1.vm09.stdout:6/224: dwrite d6/d20/d24/f49 [0,4194304] 0 2026-03-09T15:00:42.066 INFO:tasks.workunit.client.1.vm09.stdout:6/225: chown d6 20577036 1 2026-03-09T15:00:42.066 INFO:tasks.workunit.client.1.vm09.stdout:7/222: chown d3/d1d/l21 158 1 2026-03-09T15:00:42.066 INFO:tasks.workunit.client.1.vm09.stdout:7/223: fdatasync d3/d28/f29 0 2026-03-09T15:00:42.067 INFO:tasks.workunit.client.1.vm09.stdout:6/226: write d6/d20/f36 [1706497,88630] 0 2026-03-09T15:00:42.070 INFO:tasks.workunit.client.1.vm09.stdout:6/227: write d6/db/d10/f2c [1538430,130448] 0 2026-03-09T15:00:42.070 INFO:tasks.workunit.client.1.vm09.stdout:7/224: creat d3/db/f42 x:0 0 0 2026-03-09T15:00:42.073 INFO:tasks.workunit.client.1.vm09.stdout:7/225: dwrite d3/f9 [0,4194304] 0 2026-03-09T15:00:42.077 INFO:tasks.workunit.client.1.vm09.stdout:7/226: link d3/l41 d3/l43 0 2026-03-09T15:00:42.087 INFO:tasks.workunit.client.1.vm09.stdout:7/227: dread f1 [0,4194304] 0 2026-03-09T15:00:42.089 INFO:tasks.workunit.client.1.vm09.stdout:7/228: mkdir d3/d28/d2e/d44 0 2026-03-09T15:00:42.090 INFO:tasks.workunit.client.1.vm09.stdout:7/229: mknod d3/db/d25/c45 0 2026-03-09T15:00:42.090 INFO:tasks.workunit.client.1.vm09.stdout:7/230: chown d3/d28/d2e 0 1 2026-03-09T15:00:42.093 INFO:tasks.workunit.client.1.vm09.stdout:7/231: mkdir d3/db/d46 0 2026-03-09T15:00:42.094 INFO:tasks.workunit.client.1.vm09.stdout:7/232: mkdir d3/db/d25/d47 0 2026-03-09T15:00:42.095 INFO:tasks.workunit.client.1.vm09.stdout:7/233: chown d3/d1d/f33 2424339 1 2026-03-09T15:00:42.097 INFO:tasks.workunit.client.1.vm09.stdout:7/234: dread d3/db/d25/f22 [0,4194304] 0 2026-03-09T15:00:42.097 INFO:tasks.workunit.client.1.vm09.stdout:7/235: fsync d3/fd 0 2026-03-09T15:00:42.100 INFO:tasks.workunit.client.1.vm09.stdout:7/236: write 
d3/d1d/f37 [4759002,113537] 0 2026-03-09T15:00:42.101 INFO:tasks.workunit.client.1.vm09.stdout:7/237: fsync f1 0 2026-03-09T15:00:42.101 INFO:tasks.workunit.client.1.vm09.stdout:7/238: write d3/db/d15/f23 [3991700,28496] 0 2026-03-09T15:00:42.122 INFO:tasks.workunit.client.1.vm09.stdout:7/239: dread d3/db/d15/f23 [0,4194304] 0 2026-03-09T15:00:42.122 INFO:tasks.workunit.client.1.vm09.stdout:7/240: chown d3/db/f42 1910 1 2026-03-09T15:00:42.125 INFO:tasks.workunit.client.1.vm09.stdout:7/241: dwrite d3/db/f42 [0,4194304] 0 2026-03-09T15:00:42.133 INFO:tasks.workunit.client.1.vm09.stdout:7/242: dwrite d3/d1d/f33 [0,4194304] 0 2026-03-09T15:00:42.134 INFO:tasks.workunit.client.1.vm09.stdout:7/243: read d3/fd [1257794,66114] 0 2026-03-09T15:00:42.134 INFO:tasks.workunit.client.1.vm09.stdout:7/244: write d3/d1d/f37 [3741743,88436] 0 2026-03-09T15:00:42.160 INFO:tasks.workunit.client.1.vm09.stdout:7/245: sync 2026-03-09T15:00:42.162 INFO:tasks.workunit.client.1.vm09.stdout:7/246: symlink d3/db/d15/l48 0 2026-03-09T15:00:42.200 INFO:tasks.workunit.client.1.vm09.stdout:5/201: fsync d2/d37/d3c/f4b 0 2026-03-09T15:00:42.203 INFO:tasks.workunit.client.1.vm09.stdout:5/202: unlink d2/d4/l26 0 2026-03-09T15:00:42.204 INFO:tasks.workunit.client.1.vm09.stdout:5/203: fdatasync d2/d4/f16 0 2026-03-09T15:00:42.208 INFO:tasks.workunit.client.1.vm09.stdout:5/204: creat d2/f4f x:0 0 0 2026-03-09T15:00:42.209 INFO:tasks.workunit.client.1.vm09.stdout:1/196: truncate d8/d1b/f21 722048 0 2026-03-09T15:00:42.209 INFO:tasks.workunit.client.1.vm09.stdout:1/197: fsync d8/d1b/f37 0 2026-03-09T15:00:42.211 INFO:tasks.workunit.client.1.vm09.stdout:4/224: truncate db/d12/f37 3730672 0 2026-03-09T15:00:42.212 INFO:tasks.workunit.client.1.vm09.stdout:1/198: chown d8/l23 28 1 2026-03-09T15:00:42.214 INFO:tasks.workunit.client.1.vm09.stdout:5/205: mknod d2/d4/c50 0 2026-03-09T15:00:42.217 INFO:tasks.workunit.client.1.vm09.stdout:1/199: dwrite d8/d1b/f1f [0,4194304] 0 2026-03-09T15:00:42.217 
INFO:tasks.workunit.client.1.vm09.stdout:1/200: readlink d8/l3f 0 2026-03-09T15:00:42.217 INFO:tasks.workunit.client.1.vm09.stdout:1/201: dread - d8/f3d zero size 2026-03-09T15:00:42.232 INFO:tasks.workunit.client.1.vm09.stdout:1/202: mkdir d8/d22/d40 0 2026-03-09T15:00:42.233 INFO:tasks.workunit.client.1.vm09.stdout:5/206: mkdir d2/d37/d3c/d36/d4c/d51 0 2026-03-09T15:00:42.234 INFO:tasks.workunit.client.1.vm09.stdout:4/225: dread db/d19/d23/d44/f45 [0,4194304] 0 2026-03-09T15:00:42.235 INFO:tasks.workunit.client.1.vm09.stdout:1/203: creat d8/d1b/f41 x:0 0 0 2026-03-09T15:00:42.236 INFO:tasks.workunit.client.1.vm09.stdout:1/204: read d8/d10/f13 [3028768,55778] 0 2026-03-09T15:00:42.237 INFO:tasks.workunit.client.1.vm09.stdout:4/226: dread db/fe [0,4194304] 0 2026-03-09T15:00:42.238 INFO:tasks.workunit.client.1.vm09.stdout:4/227: fdatasync db/d19/d35/f43 0 2026-03-09T15:00:42.241 INFO:tasks.workunit.client.1.vm09.stdout:5/207: symlink d2/d37/l52 0 2026-03-09T15:00:42.241 INFO:tasks.workunit.client.1.vm09.stdout:5/208: chown c1 4959 1 2026-03-09T15:00:42.244 INFO:tasks.workunit.client.1.vm09.stdout:5/209: dread d2/d4/f1f [0,4194304] 0 2026-03-09T15:00:42.244 INFO:tasks.workunit.client.1.vm09.stdout:2/245: truncate df/f17 1225579 0 2026-03-09T15:00:42.249 INFO:tasks.workunit.client.1.vm09.stdout:1/205: creat d8/f42 x:0 0 0 2026-03-09T15:00:42.253 INFO:tasks.workunit.client.1.vm09.stdout:5/210: sync 2026-03-09T15:00:42.256 INFO:tasks.workunit.client.1.vm09.stdout:3/243: getdents d3/d3a 0 2026-03-09T15:00:42.256 INFO:tasks.workunit.client.1.vm09.stdout:9/220: truncate d1/d7/d1e/d2b/d2e/f16 811368 0 2026-03-09T15:00:42.257 INFO:tasks.workunit.client.1.vm09.stdout:4/228: getdents db/d19/d32 0 2026-03-09T15:00:42.258 INFO:tasks.workunit.client.1.vm09.stdout:2/246: creat df/d20/f49 x:0 0 0 2026-03-09T15:00:42.259 INFO:tasks.workunit.client.1.vm09.stdout:3/244: chown d3/d3a/d2b/d31/f3f 114812 1 2026-03-09T15:00:42.261 INFO:tasks.workunit.client.1.vm09.stdout:2/247: sync 
2026-03-09T15:00:42.263 INFO:tasks.workunit.client.1.vm09.stdout:4/229: symlink db/d19/d23/d44/l4b 0 2026-03-09T15:00:42.263 INFO:tasks.workunit.client.1.vm09.stdout:2/248: chown df/d1f/c25 5097133 1 2026-03-09T15:00:42.264 INFO:tasks.workunit.client.1.vm09.stdout:3/245: dwrite d3/f3b [0,4194304] 0 2026-03-09T15:00:42.269 INFO:tasks.workunit.client.1.vm09.stdout:9/221: rename d1/l35 to d1/d7/d1e/d2b/d2e/l4c 0 2026-03-09T15:00:42.270 INFO:tasks.workunit.client.1.vm09.stdout:2/249: dwrite f4 [4194304,4194304] 0 2026-03-09T15:00:42.272 INFO:tasks.workunit.client.1.vm09.stdout:3/246: symlink d3/d3a/l56 0 2026-03-09T15:00:42.272 INFO:tasks.workunit.client.1.vm09.stdout:2/250: truncate f0 4765440 0 2026-03-09T15:00:42.273 INFO:tasks.workunit.client.1.vm09.stdout:4/230: symlink db/d19/d35/l4c 0 2026-03-09T15:00:42.275 INFO:tasks.workunit.client.1.vm09.stdout:4/231: dread f3 [0,4194304] 0 2026-03-09T15:00:42.277 INFO:tasks.workunit.client.1.vm09.stdout:4/232: chown db/d19/d23 371863 1 2026-03-09T15:00:42.280 INFO:tasks.workunit.client.1.vm09.stdout:2/251: write f5 [3605024,87138] 0 2026-03-09T15:00:42.280 INFO:tasks.workunit.client.1.vm09.stdout:4/233: mknod db/d12/d16/c4d 0 2026-03-09T15:00:42.282 INFO:tasks.workunit.client.1.vm09.stdout:3/247: link d3/d4/f1a d3/d4/f57 0 2026-03-09T15:00:42.285 INFO:tasks.workunit.client.1.vm09.stdout:2/252: dwrite df/f1b [0,4194304] 0 2026-03-09T15:00:42.288 INFO:tasks.workunit.client.1.vm09.stdout:3/248: rename d3/f6 to d3/d3a/d54/f58 0 2026-03-09T15:00:42.290 INFO:tasks.workunit.client.1.vm09.stdout:4/234: dwrite db/d19/d35/f43 [0,4194304] 0 2026-03-09T15:00:42.292 INFO:tasks.workunit.client.1.vm09.stdout:4/235: fdatasync db/d19/d32/d3b/f48 0 2026-03-09T15:00:42.294 INFO:tasks.workunit.client.1.vm09.stdout:4/236: fdatasync db/f14 0 2026-03-09T15:00:42.298 INFO:tasks.workunit.client.1.vm09.stdout:2/253: creat df/f4a x:0 0 0 2026-03-09T15:00:42.299 INFO:tasks.workunit.client.1.vm09.stdout:4/237: creat db/d19/d35/f4e x:0 0 0 
2026-03-09T15:00:42.300 INFO:tasks.workunit.client.1.vm09.stdout:2/254: fsync df/d1f/f39 0
2026-03-09T15:00:42.303 INFO:tasks.workunit.client.1.vm09.stdout:2/255: truncate df/d20/f24 563766 0
2026-03-09T15:00:42.306 INFO:tasks.workunit.client.1.vm09.stdout:2/256: rmdir df/d1f 39
2026-03-09T15:00:42.310 INFO:tasks.workunit.client.1.vm09.stdout:8/209: dwrite f8 [0,4194304] 0
2026-03-09T15:00:42.318 INFO:tasks.workunit.client.1.vm09.stdout:8/210: mknod df/d24/c3c 0
2026-03-09T15:00:42.319 INFO:tasks.workunit.client.1.vm09.stdout:5/211: dread d2/d37/d3c/f3a [0,4194304] 0
2026-03-09T15:00:42.322 INFO:tasks.workunit.client.1.vm09.stdout:5/212: mkdir d2/d37/d53 0
2026-03-09T15:00:42.322 INFO:tasks.workunit.client.1.vm09.stdout:5/213: dread - d2/f4f zero size
2026-03-09T15:00:42.334 INFO:tasks.workunit.client.1.vm09.stdout:5/214: readlink d2/d37/d3c/l33 0
2026-03-09T15:00:42.334 INFO:tasks.workunit.client.1.vm09.stdout:6/228: write d6/f17 [2890148,33085] 0
2026-03-09T15:00:42.334 INFO:tasks.workunit.client.1.vm09.stdout:5/215: truncate d2/d37/d3c/f4e 1531092 0
2026-03-09T15:00:42.334 INFO:tasks.workunit.client.1.vm09.stdout:6/229: dread - d6/db/d10/f1c zero size
2026-03-09T15:00:42.334 INFO:tasks.workunit.client.1.vm09.stdout:7/247: getdents d3/db/d25 0
2026-03-09T15:00:42.334 INFO:tasks.workunit.client.1.vm09.stdout:0/279: truncate da/dc/d10/f16 1482211 0
2026-03-09T15:00:42.334 INFO:tasks.workunit.client.1.vm09.stdout:0/280: stat da/dc/d22/c39 0
2026-03-09T15:00:42.334 INFO:tasks.workunit.client.1.vm09.stdout:0/281: readlink da/l2c 0
2026-03-09T15:00:42.336 INFO:tasks.workunit.client.1.vm09.stdout:0/282: rmdir da/dc/d1c 39
2026-03-09T15:00:42.339 INFO:tasks.workunit.client.1.vm09.stdout:0/283: dwrite da/dc/d22/f55 [0,4194304] 0
2026-03-09T15:00:42.352 INFO:tasks.workunit.client.1.vm09.stdout:8/211: sync
2026-03-09T15:00:42.353 INFO:tasks.workunit.client.1.vm09.stdout:0/284: dwrite da/dc/d1c/d3c/d44/f51 [0,4194304] 0
2026-03-09T15:00:42.356 INFO:tasks.workunit.client.1.vm09.stdout:0/285: mknod da/dc/d1c/d3c/c5e 0
2026-03-09T15:00:42.356 INFO:tasks.workunit.client.1.vm09.stdout:0/286: write da/d30/f38 [842727,18931] 0
2026-03-09T15:00:42.358 INFO:tasks.workunit.client.1.vm09.stdout:7/248: sync
2026-03-09T15:00:42.358 INFO:tasks.workunit.client.1.vm09.stdout:6/230: sync
2026-03-09T15:00:42.358 INFO:tasks.workunit.client.1.vm09.stdout:6/231: dread - d6/df/f40 zero size
2026-03-09T15:00:42.360 INFO:tasks.workunit.client.1.vm09.stdout:7/249: symlink d3/db/d25/l49 0
2026-03-09T15:00:42.361 INFO:tasks.workunit.client.1.vm09.stdout:6/232: mknod d6/d20/d2a/d3d/c50 0
2026-03-09T15:00:42.370 INFO:tasks.workunit.client.1.vm09.stdout:0/287: dread da/dc/d10/f11 [0,4194304] 0
2026-03-09T15:00:42.371 INFO:tasks.workunit.client.1.vm09.stdout:0/288: dread - da/d57/f59 zero size
2026-03-09T15:00:42.373 INFO:tasks.workunit.client.1.vm09.stdout:6/233: dwrite d6/db/d10/f1c [0,4194304] 0
2026-03-09T15:00:42.379 INFO:tasks.workunit.client.1.vm09.stdout:6/234: sync
2026-03-09T15:00:42.381 INFO:tasks.workunit.client.1.vm09.stdout:7/250: dread d3/fd [0,4194304] 0
2026-03-09T15:00:42.382 INFO:tasks.workunit.client.1.vm09.stdout:7/251: chown d3/db/d15 9022130 1
2026-03-09T15:00:42.382 INFO:tasks.workunit.client.1.vm09.stdout:7/252: truncate d3/d28/f29 804597 0
2026-03-09T15:00:42.383 INFO:tasks.workunit.client.1.vm09.stdout:7/253: chown d3/db/d25/c45 24975 1
2026-03-09T15:00:42.387 INFO:tasks.workunit.client.1.vm09.stdout:6/235: symlink d6/d20/d24/l51 0
2026-03-09T15:00:42.391 INFO:tasks.workunit.client.1.vm09.stdout:6/236: dread - d6/d20/d44/d45/f4c zero size
2026-03-09T15:00:42.391 INFO:tasks.workunit.client.1.vm09.stdout:0/289: getdents da/d30/d36 0
2026-03-09T15:00:42.391 INFO:tasks.workunit.client.1.vm09.stdout:0/290: dwrite da/d30/f38 [0,4194304] 0
2026-03-09T15:00:42.393 INFO:tasks.workunit.client.1.vm09.stdout:0/291: sync
2026-03-09T15:00:42.396 INFO:tasks.workunit.client.1.vm09.stdout:0/292: dwrite da/dc/d10/f2d [4194304,4194304] 0
2026-03-09T15:00:42.403 INFO:tasks.workunit.client.1.vm09.stdout:7/254: link d3/d1d/l21 d3/db/d15/l4a 0
2026-03-09T15:00:42.407 INFO:tasks.workunit.client.1.vm09.stdout:0/293: unlink da/l1f 0
2026-03-09T15:00:42.407 INFO:tasks.workunit.client.1.vm09.stdout:6/237: getdents d6/d20/d2a/d3b 0
2026-03-09T15:00:42.407 INFO:tasks.workunit.client.1.vm09.stdout:6/238: fsync d6/d20/d44/f4a 0
2026-03-09T15:00:42.407 INFO:tasks.workunit.client.1.vm09.stdout:7/255: creat d3/d3d/f4b x:0 0 0
2026-03-09T15:00:42.409 INFO:tasks.workunit.client.1.vm09.stdout:7/256: stat d3/db/d15/f23 0
2026-03-09T15:00:42.412 INFO:tasks.workunit.client.1.vm09.stdout:0/294: getdents da/dc 0
2026-03-09T15:00:42.413 INFO:tasks.workunit.client.1.vm09.stdout:0/295: chown da/dc/d1c/d46 6 1
2026-03-09T15:00:42.413 INFO:tasks.workunit.client.1.vm09.stdout:7/257: symlink d3/d28/d2e/l4c 0
2026-03-09T15:00:42.413 INFO:tasks.workunit.client.1.vm09.stdout:6/239: creat d6/d20/f52 x:0 0 0
2026-03-09T15:00:42.414 INFO:tasks.workunit.client.1.vm09.stdout:6/240: chown d6/d20/c26 40567900 1
2026-03-09T15:00:42.415 INFO:tasks.workunit.client.1.vm09.stdout:7/258: creat d3/db/f4d x:0 0 0
2026-03-09T15:00:42.417 INFO:tasks.workunit.client.1.vm09.stdout:7/259: symlink d3/d28/d2e/d44/l4e 0
2026-03-09T15:00:42.418 INFO:tasks.workunit.client.1.vm09.stdout:7/260: write f1 [378789,4133] 0
2026-03-09T15:00:42.419 INFO:tasks.workunit.client.1.vm09.stdout:7/261: write d3/f9 [4590892,25546] 0
2026-03-09T15:00:42.420 INFO:tasks.workunit.client.1.vm09.stdout:7/262: stat d3/db/d15 0
2026-03-09T15:00:42.422 INFO:tasks.workunit.client.1.vm09.stdout:6/241: dwrite d6/db/d10/f2c [0,4194304] 0
2026-03-09T15:00:42.437 INFO:tasks.workunit.client.1.vm09.stdout:6/242: rmdir d6/df/d23 39
2026-03-09T15:00:42.437 INFO:tasks.workunit.client.1.vm09.stdout:6/243: fsync d6/d20/f27 0
2026-03-09T15:00:42.440 INFO:tasks.workunit.client.1.vm09.stdout:6/244: write d6/db/f42 [3758096,8781] 0
2026-03-09T15:00:42.441 INFO:tasks.workunit.client.1.vm09.stdout:7/263: truncate d3/fd 844524 0
2026-03-09T15:00:42.441 INFO:tasks.workunit.client.1.vm09.stdout:5/216: getdents d2/d37/d3c/d36/d4c 0
2026-03-09T15:00:42.441 INFO:tasks.workunit.client.1.vm09.stdout:6/245: readlink d6/l22 0
2026-03-09T15:00:42.441 INFO:tasks.workunit.client.1.vm09.stdout:7/264: truncate d3/f16 4937682 0
2026-03-09T15:00:42.442 INFO:tasks.workunit.client.1.vm09.stdout:6/246: write d6/df/f16 [4810365,65424] 0
2026-03-09T15:00:42.446 INFO:tasks.workunit.client.1.vm09.stdout:7/265: dwrite d3/d28/d2e/f36 [0,4194304] 0
2026-03-09T15:00:42.448 INFO:tasks.workunit.client.1.vm09.stdout:7/266: read d3/d28/d2e/f36 [1671515,64718] 0
2026-03-09T15:00:42.448 INFO:tasks.workunit.client.1.vm09.stdout:7/267: chown l2 724 1
2026-03-09T15:00:42.455 INFO:tasks.workunit.client.1.vm09.stdout:1/206: truncate d8/d10/f13 642439 0
2026-03-09T15:00:42.456 INFO:tasks.workunit.client.1.vm09.stdout:6/247: readlink d6/db/d10/l4d 0
2026-03-09T15:00:42.459 INFO:tasks.workunit.client.1.vm09.stdout:7/268: dwrite d3/f5 [0,4194304] 0
2026-03-09T15:00:42.476 INFO:tasks.workunit.client.1.vm09.stdout:5/217: symlink d2/d37/d3c/d36/d4c/d51/l54 0
2026-03-09T15:00:42.476 INFO:tasks.workunit.client.1.vm09.stdout:6/248: mknod d6/d20/d44/d45/c53 0
2026-03-09T15:00:42.477 INFO:tasks.workunit.client.1.vm09.stdout:6/249: write d6/d20/d44/d45/f4c [808008,6661] 0
2026-03-09T15:00:42.479 INFO:tasks.workunit.client.1.vm09.stdout:6/250: chown d6/d20/d2a/d3d/d46 251 1
2026-03-09T15:00:42.480 INFO:tasks.workunit.client.1.vm09.stdout:6/251: stat d6/d20/d24/c32 0
2026-03-09T15:00:42.482 INFO:tasks.workunit.client.1.vm09.stdout:5/218: mkdir d2/d37/d3c/d55 0
2026-03-09T15:00:42.483 INFO:tasks.workunit.client.1.vm09.stdout:7/269: dwrite d3/d28/d2e/f36 [0,4194304] 0
2026-03-09T15:00:42.491 INFO:tasks.workunit.client.1.vm09.stdout:6/252: readlink d6/db/ld 0
2026-03-09T15:00:42.498 INFO:tasks.workunit.client.1.vm09.stdout:7/270: mkdir d3/d4f 0
2026-03-09T15:00:42.502 INFO:tasks.workunit.client.1.vm09.stdout:5/219: dread d2/d4/f23 [0,4194304] 0
2026-03-09T15:00:42.503 INFO:tasks.workunit.client.1.vm09.stdout:7/271: rename d3/d28/f2a to d3/db/d25/f50 0
2026-03-09T15:00:42.504 INFO:tasks.workunit.client.1.vm09.stdout:7/272: chown d3/db/fe 128429354 1
2026-03-09T15:00:42.509 INFO:tasks.workunit.client.1.vm09.stdout:6/253: link d6/df/d23/c28 d6/d20/c54 0
2026-03-09T15:00:42.514 INFO:tasks.workunit.client.1.vm09.stdout:6/254: truncate d6/db/d10/f2c 4369104 0
2026-03-09T15:00:42.514 INFO:tasks.workunit.client.1.vm09.stdout:7/273: link d3/db/f4d d3/d3d/f51 0
2026-03-09T15:00:42.514 INFO:tasks.workunit.client.1.vm09.stdout:5/220: link d2/f4f d2/f56 0
2026-03-09T15:00:42.514 INFO:tasks.workunit.client.1.vm09.stdout:7/274: symlink d3/d1d/d2d/l52 0
2026-03-09T15:00:42.514 INFO:tasks.workunit.client.1.vm09.stdout:5/221: write d2/d37/f43 [914139,58441] 0
2026-03-09T15:00:42.514 INFO:tasks.workunit.client.1.vm09.stdout:6/255: mkdir d6/d20/d38/d4e/d55 0
2026-03-09T15:00:42.518 INFO:tasks.workunit.client.1.vm09.stdout:7/275: unlink d3/d1d/d2d/l52 0
2026-03-09T15:00:42.521 INFO:tasks.workunit.client.1.vm09.stdout:6/256: dread - d6/db/d10/f19 zero size
2026-03-09T15:00:42.522 INFO:tasks.workunit.client.1.vm09.stdout:7/276: rename d3/db/d25/d47 to d3/d4f/d53 0
2026-03-09T15:00:42.525 INFO:tasks.workunit.client.1.vm09.stdout:6/257: mkdir d6/d20/d38/d56 0
2026-03-09T15:00:42.525 INFO:tasks.workunit.client.1.vm09.stdout:5/222: dwrite d2/f56 [0,4194304] 0
2026-03-09T15:00:42.527 INFO:tasks.workunit.client.1.vm09.stdout:7/277: mknod d3/db/c54 0
2026-03-09T15:00:42.528 INFO:tasks.workunit.client.1.vm09.stdout:7/278: readlink d3/l17 0
2026-03-09T15:00:42.539 INFO:tasks.workunit.client.1.vm09.stdout:5/223: creat d2/d37/d3c/d55/f57 x:0 0 0
2026-03-09T15:00:42.540 INFO:tasks.workunit.client.1.vm09.stdout:6/258: mkdir d6/d20/d2a/d57 0
2026-03-09T15:00:42.541 INFO:tasks.workunit.client.1.vm09.stdout:6/259: readlink d6/db/d10/l4d 0
2026-03-09T15:00:42.544 INFO:tasks.workunit.client.1.vm09.stdout:5/224: creat d2/d37/d3c/d55/f58 x:0 0 0
2026-03-09T15:00:42.561 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:42 vm05.local ceph-mon[50611]: pgmap v147: 65 pgs: 65 active+clean; 565 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 5.3 MiB/s rd, 63 MiB/s wr, 252 op/s
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:3/249: write d3/d4/f57 [619968,83484] 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:6/260: mknod d6/df/d23/c58 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:7/279: mknod d3/db/c55 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:7/280: fsync d3/f8 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:4/238: truncate db/d19/d23/d44/f45 520784 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:6/261: dread d6/db/d10/f2c [0,4194304] 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:2/257: truncate df/f23 169254 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:9/222: creat d1/d7/d1e/d2b/d40/f4d x:0 0 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:3/250: creat d3/d3a/d54/f59 x:0 0 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:9/223: write d1/d7/d1e/d2b/f3f [832401,13166] 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:6/262: fsync d6/df/d23/f2f 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:2/258: write df/f14 [2505076,59993] 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:8/212: truncate fe 4004700 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:6/263: write d6/d20/d44/f4a [993137,69067] 0
2026-03-09T15:00:42.562 INFO:tasks.workunit.client.1.vm09.stdout:4/239: creat db/d12/d16/f4f x:0 0 0
2026-03-09T15:00:42.567 INFO:tasks.workunit.client.1.vm09.stdout:7/281: dread d3/db/fe [0,4194304] 0
2026-03-09T15:00:42.573 INFO:tasks.workunit.client.1.vm09.stdout:3/251: symlink d3/d3a/d2b/d36/l5a 0
2026-03-09T15:00:42.573 INFO:tasks.workunit.client.1.vm09.stdout:0/296: truncate da/dc/d10/f11 3366512 0
2026-03-09T15:00:42.576 INFO:tasks.workunit.client.1.vm09.stdout:0/297: truncate da/dc/d1c/d3c/f4f 1130521 0
2026-03-09T15:00:42.579 INFO:tasks.workunit.client.1.vm09.stdout:8/213: write df/d1c/f20 [2113522,80223] 0
2026-03-09T15:00:42.580 INFO:tasks.workunit.client.1.vm09.stdout:8/214: write df/d1f/f35 [656306,98831] 0
2026-03-09T15:00:42.584 INFO:tasks.workunit.client.1.vm09.stdout:6/264: creat d6/d20/f59 x:0 0 0
2026-03-09T15:00:42.584 INFO:tasks.workunit.client.1.vm09.stdout:6/265: write d6/df/d23/f29 [982052,84213] 0
2026-03-09T15:00:42.584 INFO:tasks.workunit.client.1.vm09.stdout:6/266: chown d6/f39 3050 1
2026-03-09T15:00:42.586 INFO:tasks.workunit.client.1.vm09.stdout:4/240: creat db/d12/f50 x:0 0 0
2026-03-09T15:00:42.588 INFO:tasks.workunit.client.1.vm09.stdout:7/282: symlink d3/db/d25/l56 0
2026-03-09T15:00:42.589 INFO:tasks.workunit.client.1.vm09.stdout:5/225: truncate d2/d4/f1f 652373 0
2026-03-09T15:00:42.590 INFO:tasks.workunit.client.1.vm09.stdout:5/226: read d2/d4/f1d [2715761,21141] 0
2026-03-09T15:00:42.590 INFO:tasks.workunit.client.1.vm09.stdout:3/252: mkdir d3/d5b 0
2026-03-09T15:00:42.592 INFO:tasks.workunit.client.1.vm09.stdout:2/259: symlink df/d20/l4b 0
2026-03-09T15:00:42.598 INFO:tasks.workunit.client.1.vm09.stdout:7/283: sync
2026-03-09T15:00:42.598 INFO:tasks.workunit.client.1.vm09.stdout:7/284: read - d3/f32 zero size
2026-03-09T15:00:42.599 INFO:tasks.workunit.client.1.vm09.stdout:7/285: chown d3/d28/f35 227 1
2026-03-09T15:00:42.601 INFO:tasks.workunit.client.1.vm09.stdout:7/286: write f1 [4232596,125882] 0
2026-03-09T15:00:42.602 INFO:tasks.workunit.client.1.vm09.stdout:8/215: symlink df/d24/l3d 0
2026-03-09T15:00:42.603 INFO:tasks.workunit.client.1.vm09.stdout:7/287: sync
2026-03-09T15:00:42.605 INFO:tasks.workunit.client.1.vm09.stdout:6/267: creat d6/d20/d38/d4e/f5a x:0 0 0
2026-03-09T15:00:42.607 INFO:tasks.workunit.client.1.vm09.stdout:4/241: mknod db/d19/d32/c51 0
2026-03-09T15:00:42.612 INFO:tasks.workunit.client.1.vm09.stdout:5/227: creat d2/d37/d3c/d36/d4c/d51/f59 x:0 0 0
2026-03-09T15:00:42.612 INFO:tasks.workunit.client.1.vm09.stdout:2/260: creat df/d20/d2e/f4c x:0 0 0
2026-03-09T15:00:42.614 INFO:tasks.workunit.client.1.vm09.stdout:9/224: link d1/d7/ce d1/d7/d1e/c4e 0
2026-03-09T15:00:42.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:42 vm09.local ceph-mon[59673]: pgmap v147: 65 pgs: 65 active+clean; 565 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 5.3 MiB/s rd, 63 MiB/s wr, 252 op/s
2026-03-09T15:00:42.617 INFO:tasks.workunit.client.1.vm09.stdout:0/298: symlink da/l5f 0
2026-03-09T15:00:42.618 INFO:tasks.workunit.client.1.vm09.stdout:2/261: dread f4 [4194304,4194304] 0
2026-03-09T15:00:42.619 INFO:tasks.workunit.client.1.vm09.stdout:2/262: readlink df/l19 0
2026-03-09T15:00:42.629 INFO:tasks.workunit.client.1.vm09.stdout:7/288: creat d3/d1d/d2d/f57 x:0 0 0
2026-03-09T15:00:42.630 INFO:tasks.workunit.client.1.vm09.stdout:6/268: mkdir d6/df/d23/d5b 0
2026-03-09T15:00:42.633 INFO:tasks.workunit.client.1.vm09.stdout:4/242: mkdir db/d19/d52 0
2026-03-09T15:00:42.635 INFO:tasks.workunit.client.1.vm09.stdout:7/289: dwrite d3/f8 [4194304,4194304] 0
2026-03-09T15:00:42.636 INFO:tasks.workunit.client.1.vm09.stdout:3/253: symlink d3/d3a/d2b/d31/d4a/l5c 0
2026-03-09T15:00:42.636 INFO:tasks.workunit.client.1.vm09.stdout:6/269: dwrite d6/d20/d44/f4a [0,4194304] 0
2026-03-09T15:00:42.640 INFO:tasks.workunit.client.1.vm09.stdout:3/254: read - d3/f3e zero size
2026-03-09T15:00:42.647 INFO:tasks.workunit.client.1.vm09.stdout:6/270: dread d6/d20/d44/f4a [0,4194304] 0
2026-03-09T15:00:42.650 INFO:tasks.workunit.client.1.vm09.stdout:6/271: write d6/d20/d44/d45/f4c [1446140,32598] 0
2026-03-09T15:00:42.652 INFO:tasks.workunit.client.1.vm09.stdout:6/272: write d6/f39 [598540,116484] 0
2026-03-09T15:00:42.659 INFO:tasks.workunit.client.1.vm09.stdout:2/263: dwrite f3 [4194304,4194304] 0
2026-03-09T15:00:42.660 INFO:tasks.workunit.client.1.vm09.stdout:8/216: symlink df/l3e 0
2026-03-09T15:00:42.661 INFO:tasks.workunit.client.1.vm09.stdout:4/243: mkdir db/d19/d35/d53 0
2026-03-09T15:00:42.661 INFO:tasks.workunit.client.1.vm09.stdout:8/217: readlink df/d1f/l3a 0
2026-03-09T15:00:42.671 INFO:tasks.workunit.client.1.vm09.stdout:2/264: dread df/f14 [0,4194304] 0
2026-03-09T15:00:42.672 INFO:tasks.workunit.client.1.vm09.stdout:1/207: truncate d8/d10/f3b 6301201 0
2026-03-09T15:00:42.673 INFO:tasks.workunit.client.1.vm09.stdout:1/208: write d8/d10/d24/f2a [3757691,84977] 0
2026-03-09T15:00:42.674 INFO:tasks.workunit.client.1.vm09.stdout:3/255: symlink d3/d3a/d2b/d31/d4a/l5d 0
2026-03-09T15:00:42.676 INFO:tasks.workunit.client.1.vm09.stdout:3/256: dread d3/d3a/d2b/d31/f3f [0,4194304] 0
2026-03-09T15:00:42.684 INFO:tasks.workunit.client.1.vm09.stdout:7/290: fsync d3/db/f4d 0
2026-03-09T15:00:42.687 INFO:tasks.workunit.client.1.vm09.stdout:8/218: rmdir df/d1c/d2c 39
2026-03-09T15:00:42.693 INFO:tasks.workunit.client.1.vm09.stdout:5/228: creat d2/f5a x:0 0 0
2026-03-09T15:00:42.694 INFO:tasks.workunit.client.1.vm09.stdout:3/257: symlink d3/d3a/d2b/d31/d4a/l5e 0
2026-03-09T15:00:42.696 INFO:tasks.workunit.client.1.vm09.stdout:6/273: creat d6/d20/d38/d4e/d55/f5c x:0 0 0
2026-03-09T15:00:42.697 INFO:tasks.workunit.client.1.vm09.stdout:6/274: fsync d6/df/d23/f2f 0
2026-03-09T15:00:42.697 INFO:tasks.workunit.client.1.vm09.stdout:6/275: chown d6/db/d10/f1c 30075194 1
2026-03-09T15:00:42.698 INFO:tasks.workunit.client.1.vm09.stdout:6/276: stat d6/db/d10/f19 0
2026-03-09T15:00:42.699 INFO:tasks.workunit.client.1.vm09.stdout:6/277: chown d6/d20/d24/c32 2 1
2026-03-09T15:00:42.699 INFO:tasks.workunit.client.1.vm09.stdout:9/225: rmdir d1/d7/d1e/d2b/d4b 0
2026-03-09T15:00:42.703 INFO:tasks.workunit.client.1.vm09.stdout:0/299: truncate da/fb 2600156 0
2026-03-09T15:00:42.703 INFO:tasks.workunit.client.1.vm09.stdout:0/300: fdatasync da/d30/f38 0
2026-03-09T15:00:42.706 INFO:tasks.workunit.client.1.vm09.stdout:8/219: mkdir df/d24/d3f 0
2026-03-09T15:00:42.710 INFO:tasks.workunit.client.1.vm09.stdout:1/209: symlink d8/d22/d32/d39/l43 0
2026-03-09T15:00:42.711 INFO:tasks.workunit.client.1.vm09.stdout:5/229: readlink d2/d37/d3c/l32 0
2026-03-09T15:00:42.712 INFO:tasks.workunit.client.1.vm09.stdout:5/230: write d2/f47 [265594,85964] 0
2026-03-09T15:00:42.714 INFO:tasks.workunit.client.1.vm09.stdout:3/258: creat d3/d3a/d2b/d39/d48/f5f x:0 0 0
2026-03-09T15:00:42.717 INFO:tasks.workunit.client.1.vm09.stdout:7/291: mknod d3/d28/c58 0
2026-03-09T15:00:42.719 INFO:tasks.workunit.client.1.vm09.stdout:6/278: creat d6/d20/d2a/f5d x:0 0 0
2026-03-09T15:00:42.719 INFO:tasks.workunit.client.1.vm09.stdout:5/231: dwrite d2/d37/d3c/d36/f4a [0,4194304] 0
2026-03-09T15:00:42.720 INFO:tasks.workunit.client.1.vm09.stdout:5/232: chown d2/c1e 5948 1
2026-03-09T15:00:42.720 INFO:tasks.workunit.client.1.vm09.stdout:0/301: rename da/d57/f59 to da/d57/f60 0
2026-03-09T15:00:42.724 INFO:tasks.workunit.client.1.vm09.stdout:8/220: creat df/d1f/f40 x:0 0 0
2026-03-09T15:00:42.724 INFO:tasks.workunit.client.1.vm09.stdout:0/302: stat da/dc/d10/f2d 0
2026-03-09T15:00:42.724 INFO:tasks.workunit.client.1.vm09.stdout:5/233: chown d2/d37/d3c/l35 298 1
2026-03-09T15:00:42.724 INFO:tasks.workunit.client.1.vm09.stdout:6/279: fsync d6/d20/f34 0
2026-03-09T15:00:42.724 INFO:tasks.workunit.client.1.vm09.stdout:6/280: chown d6/d20/d2a/d57 104023 1
2026-03-09T15:00:42.729 INFO:tasks.workunit.client.1.vm09.stdout:2/265: truncate df/f16 1063371 0
2026-03-09T15:00:42.729 INFO:tasks.workunit.client.1.vm09.stdout:1/210: creat d8/d10/f44 x:0 0 0
2026-03-09T15:00:42.734 INFO:tasks.workunit.client.1.vm09.stdout:3/259: write d3/d4/f1b [59496,119252] 0
2026-03-09T15:00:42.738 INFO:tasks.workunit.client.1.vm09.stdout:9/226: mkdir d1/d4f 0
2026-03-09T15:00:42.743 INFO:tasks.workunit.client.1.vm09.stdout:0/303: dread da/dc/d10/f2d [0,4194304] 0
2026-03-09T15:00:42.747 INFO:tasks.workunit.client.1.vm09.stdout:1/211: unlink d8/d1b/f1f 0
2026-03-09T15:00:42.750 INFO:tasks.workunit.client.1.vm09.stdout:3/260: mkdir d3/d60 0
2026-03-09T15:00:42.751 INFO:tasks.workunit.client.1.vm09.stdout:2/266: dwrite df/d20/d2e/f4c [0,4194304] 0
2026-03-09T15:00:42.753 INFO:tasks.workunit.client.1.vm09.stdout:8/221: dread df/d1c/d1d/f36 [0,4194304] 0
2026-03-09T15:00:42.753 INFO:tasks.workunit.client.1.vm09.stdout:4/244: rmdir db/d12/d16 39
2026-03-09T15:00:42.754 INFO:tasks.workunit.client.1.vm09.stdout:8/222: chown df/d1c/d1d/f36 31213577 1
2026-03-09T15:00:42.767 INFO:tasks.workunit.client.1.vm09.stdout:0/304: dread da/dc/d10/f29 [4194304,4194304] 0
2026-03-09T15:00:42.768 INFO:tasks.workunit.client.1.vm09.stdout:0/305: read - da/d57/f60 zero size
2026-03-09T15:00:42.775 INFO:tasks.workunit.client.1.vm09.stdout:1/212: mkdir d8/d10/d24/d45 0
2026-03-09T15:00:42.778 INFO:tasks.workunit.client.1.vm09.stdout:7/292: rmdir d3/d4f/d53 0
2026-03-09T15:00:42.782 INFO:tasks.workunit.client.1.vm09.stdout:1/213: dwrite d8/d10/f2f [0,4194304] 0
2026-03-09T15:00:42.791 INFO:tasks.workunit.client.1.vm09.stdout:0/306: mkdir da/dc/d61 0
2026-03-09T15:00:42.791 INFO:tasks.workunit.client.1.vm09.stdout:5/234: rename d2/d37/d3c/c30 to d2/d37/c5b 0
2026-03-09T15:00:42.791 INFO:tasks.workunit.client.1.vm09.stdout:2/267: symlink df/d1f/l4d 0
2026-03-09T15:00:42.793 INFO:tasks.workunit.client.1.vm09.stdout:6/281: link d6/db/d10/f2c d6/d20/d2a/f5e 0
2026-03-09T15:00:42.797 INFO:tasks.workunit.client.1.vm09.stdout:9/227: rename d1/d7/d1e/d2b/d2e/c1b to d1/d7/d1e/d2b/d2e/c50 0
2026-03-09T15:00:42.801 INFO:tasks.workunit.client.1.vm09.stdout:7/293: dwrite d3/f16 [0,4194304] 0
2026-03-09T15:00:42.805 INFO:tasks.workunit.client.1.vm09.stdout:4/245: dread db/d12/d16/f36 [0,4194304] 0
2026-03-09T15:00:42.805 INFO:tasks.workunit.client.1.vm09.stdout:7/294: fsync d3/d28/d2e/f36 0
2026-03-09T15:00:42.809 INFO:tasks.workunit.client.1.vm09.stdout:5/235: dwrite d2/f56 [0,4194304] 0
2026-03-09T15:00:42.821 INFO:tasks.workunit.client.1.vm09.stdout:9/228: dwrite d1/d7/d1e/f34 [0,4194304] 0
2026-03-09T15:00:42.825 INFO:tasks.workunit.client.1.vm09.stdout:4/246: dread db/d19/d35/f43 [0,4194304] 0
2026-03-09T15:00:42.827 INFO:tasks.workunit.client.1.vm09.stdout:2/268: truncate fb 4293426 0
2026-03-09T15:00:42.841 INFO:tasks.workunit.client.1.vm09.stdout:1/214: rename d8/l23 to d8/d22/d32/d39/l46 0
2026-03-09T15:00:42.845 INFO:tasks.workunit.client.1.vm09.stdout:9/229: dread d1/d7/d1e/f46 [0,4194304] 0
2026-03-09T15:00:42.858 INFO:tasks.workunit.client.1.vm09.stdout:4/247: creat db/d12/d16/f54 x:0 0 0
2026-03-09T15:00:42.866 INFO:tasks.workunit.client.1.vm09.stdout:9/230: creat d1/d7/d1e/d2b/f51 x:0 0 0
2026-03-09T15:00:42.867 INFO:tasks.workunit.client.1.vm09.stdout:8/223: rmdir df/d24 39
2026-03-09T15:00:42.867 INFO:tasks.workunit.client.1.vm09.stdout:8/224: fsync df/d2d/f2f 0
2026-03-09T15:00:42.867 INFO:tasks.workunit.client.1.vm09.stdout:7/295: symlink d3/d1d/l59 0
2026-03-09T15:00:42.867 INFO:tasks.workunit.client.1.vm09.stdout:5/236: mkdir d2/d37/d3c/d36/d45/d5c 0
2026-03-09T15:00:42.867 INFO:tasks.workunit.client.1.vm09.stdout:5/237: chown d2/d37 421733710 1
2026-03-09T15:00:42.868 INFO:tasks.workunit.client.1.vm09.stdout:5/238: write d2/d37/d3c/d55/f57 [979016,82973] 0
2026-03-09T15:00:42.869 INFO:tasks.workunit.client.1.vm09.stdout:4/248: mknod db/d19/c55 0
2026-03-09T15:00:42.870 INFO:tasks.workunit.client.1.vm09.stdout:4/249: write db/f21 [4396323,5812] 0
2026-03-09T15:00:42.873 INFO:tasks.workunit.client.1.vm09.stdout:7/296: dread d3/f16 [0,4194304] 0
2026-03-09T15:00:42.873 INFO:tasks.workunit.client.1.vm09.stdout:4/250: write db/d12/d16/f46 [325117,93670] 0
2026-03-09T15:00:42.877 INFO:tasks.workunit.client.1.vm09.stdout:6/282: link d6/df/l30 d6/d20/d2a/d57/l5f 0
2026-03-09T15:00:42.877 INFO:tasks.workunit.client.1.vm09.stdout:5/239: dwrite d2/f56 [4194304,4194304] 0
2026-03-09T15:00:42.882 INFO:tasks.workunit.client.1.vm09.stdout:9/231: sync
2026-03-09T15:00:42.887 INFO:tasks.workunit.client.1.vm09.stdout:8/225: creat df/d1c/d1d/f41 x:0 0 0
2026-03-09T15:00:42.888 INFO:tasks.workunit.client.1.vm09.stdout:8/226: fdatasync df/f1a 0
2026-03-09T15:00:42.888 INFO:tasks.workunit.client.1.vm09.stdout:9/232: chown d1/d7/d1e/d2b/d2e/f1d 1 1
2026-03-09T15:00:42.889 INFO:tasks.workunit.client.1.vm09.stdout:2/269: rmdir df/d20/d29/d40 0
2026-03-09T15:00:42.895 INFO:tasks.workunit.client.1.vm09.stdout:7/297: rmdir d3/d28/d2e/d44 39
2026-03-09T15:00:42.899 INFO:tasks.workunit.client.1.vm09.stdout:3/261: truncate d3/f29 331845 0
2026-03-09T15:00:42.899 INFO:tasks.workunit.client.1.vm09.stdout:8/227: mkdir df/d2d/d42 0
2026-03-09T15:00:42.899 INFO:tasks.workunit.client.1.vm09.stdout:8/228: fsync df/d1f/f35 0
2026-03-09T15:00:42.899 INFO:tasks.workunit.client.1.vm09.stdout:9/233: chown d1/f1f 814 1
2026-03-09T15:00:42.899 INFO:tasks.workunit.client.1.vm09.stdout:8/229: fsync fa 0
2026-03-09T15:00:42.900 INFO:tasks.workunit.client.1.vm09.stdout:8/230: read fe [1341219,37557] 0
2026-03-09T15:00:42.901 INFO:tasks.workunit.client.1.vm09.stdout:2/270: rmdir df/d2d 39
2026-03-09T15:00:42.903 INFO:tasks.workunit.client.1.vm09.stdout:4/251: mknod db/d12/c56 0
2026-03-09T15:00:42.903 INFO:tasks.workunit.client.1.vm09.stdout:4/252: readlink db/d2f/l41 0
2026-03-09T15:00:42.904 INFO:tasks.workunit.client.1.vm09.stdout:4/253: dread - db/d19/d32/f3e zero size
2026-03-09T15:00:42.906 INFO:tasks.workunit.client.1.vm09.stdout:9/234: dread d1/d7/d1e/d2b/f42 [0,4194304] 0
2026-03-09T15:00:42.907 INFO:tasks.workunit.client.1.vm09.stdout:4/254: write f4 [4998792,117214] 0
2026-03-09T15:00:42.907 INFO:tasks.workunit.client.1.vm09.stdout:7/298: write d3/d1d/f30 [963967,75890] 0
2026-03-09T15:00:42.910 INFO:tasks.workunit.client.1.vm09.stdout:5/240: rename d2/c1e to d2/d4/c5d 0
2026-03-09T15:00:42.911 INFO:tasks.workunit.client.1.vm09.stdout:7/299: unlink d3/d1d/l59 0
2026-03-09T15:00:42.912 INFO:tasks.workunit.client.1.vm09.stdout:0/307: symlink da/dc/d1c/d46/d5b/l62 0
2026-03-09T15:00:42.915 INFO:tasks.workunit.client.1.vm09.stdout:3/262: link d3/d4/c10 d3/d3a/d2b/d39/d48/c61 0
2026-03-09T15:00:42.917 INFO:tasks.workunit.client.1.vm09.stdout:8/231: mknod df/d24/d37/c43 0
2026-03-09T15:00:42.917 INFO:tasks.workunit.client.1.vm09.stdout:3/263: write d3/d3a/d2b/d36/f51 [892081,79369] 0
2026-03-09T15:00:42.917 INFO:tasks.workunit.client.1.vm09.stdout:8/232: write fa [3300433,25502] 0
2026-03-09T15:00:42.920 INFO:tasks.workunit.client.1.vm09.stdout:7/300: chown d3/db/d15/l4a 129659 1
2026-03-09T15:00:42.920 INFO:tasks.workunit.client.1.vm09.stdout:4/255: getdents db/d19/d52 0
2026-03-09T15:00:42.920 INFO:tasks.workunit.client.1.vm09.stdout:4/256: chown db/d12/f1b 1 1
2026-03-09T15:00:42.925 INFO:tasks.workunit.client.1.vm09.stdout:2/271: getdents df 0
2026-03-09T15:00:42.925 INFO:tasks.workunit.client.1.vm09.stdout:7/301: sync
2026-03-09T15:00:42.925 INFO:tasks.workunit.client.1.vm09.stdout:2/272: fsync df/f4a 0
2026-03-09T15:00:42.926 INFO:tasks.workunit.client.1.vm09.stdout:9/235: dwrite d1/d7/d1e/d2b/d40/f43 [0,4194304] 0
2026-03-09T15:00:42.938 INFO:tasks.workunit.client.1.vm09.stdout:0/308: dread da/dc/d10/f29 [4194304,4194304] 0
2026-03-09T15:00:42.939 INFO:tasks.workunit.client.1.vm09.stdout:7/302: stat d3/db/d15/c3b 0
2026-03-09T15:00:42.944 INFO:tasks.workunit.client.1.vm09.stdout:5/241: rename d2/d37/d3c/f2f to d2/f5e 0
2026-03-09T15:00:42.944 INFO:tasks.workunit.client.1.vm09.stdout:5/242: dread d2/d37/d3c/f3a [0,4194304] 0
2026-03-09T15:00:42.944 INFO:tasks.workunit.client.1.vm09.stdout:2/273: rmdir df/d20/d29 39
2026-03-09T15:00:42.946 INFO:tasks.workunit.client.1.vm09.stdout:2/274: chown le 272286 1
2026-03-09T15:00:42.949 INFO:tasks.workunit.client.1.vm09.stdout:4/257: mknod db/d19/d35/d53/c57 0
2026-03-09T15:00:42.949 INFO:tasks.workunit.client.1.vm09.stdout:9/236: mkdir d1/d4f/d52 0
2026-03-09T15:00:42.949 INFO:tasks.workunit.client.1.vm09.stdout:8/233: dread df/f2a [0,4194304] 0
2026-03-09T15:00:42.949 INFO:tasks.workunit.client.1.vm09.stdout:2/275: write df/d20/d2e/f4c [857270,1551] 0
2026-03-09T15:00:42.950 INFO:tasks.workunit.client.1.vm09.stdout:2/276: chown df 2 1
2026-03-09T15:00:42.957 INFO:tasks.workunit.client.1.vm09.stdout:5/243: dread d2/f29 [0,4194304] 0
2026-03-09T15:00:42.957 INFO:tasks.workunit.client.1.vm09.stdout:5/244: fsync d2/f14 0
2026-03-09T15:00:42.958 INFO:tasks.workunit.client.1.vm09.stdout:0/309: dread da/d30/f3d [8388608,4194304] 0
2026-03-09T15:00:42.958 INFO:tasks.workunit.client.1.vm09.stdout:7/303: link d3/f8 d3/d3d/f5a 0
2026-03-09T15:00:42.959 INFO:tasks.workunit.client.1.vm09.stdout:5/245: write d2/d37/d3c/f4b [631537,92083] 0
2026-03-09T15:00:42.960 INFO:tasks.workunit.client.1.vm09.stdout:2/277: creat df/d3b/f4e x:0 0 0
2026-03-09T15:00:42.961 INFO:tasks.workunit.client.1.vm09.stdout:4/258: getdents db/d19/d52 0
2026-03-09T15:00:42.961 INFO:tasks.workunit.client.1.vm09.stdout:8/234: dwrite df/d38/f39 [0,4194304] 0
2026-03-09T15:00:42.966 INFO:tasks.workunit.client.1.vm09.stdout:5/246: unlink d2/f5a 0
2026-03-09T15:00:42.966 INFO:tasks.workunit.client.1.vm09.stdout:7/304: dread d3/d28/f29 [0,4194304] 0
2026-03-09T15:00:42.970 INFO:tasks.workunit.client.1.vm09.stdout:7/305: chown d3/d28/f35 2284 1
2026-03-09T15:00:42.971 INFO:tasks.workunit.client.1.vm09.stdout:2/278: rename df/f1b to df/d2d/f4f 0
2026-03-09T15:00:42.974 INFO:tasks.workunit.client.1.vm09.stdout:4/259: creat db/d19/d32/d3b/f58 x:0 0 0
2026-03-09T15:00:42.976 INFO:tasks.workunit.client.1.vm09.stdout:4/260: readlink l8 0
2026-03-09T15:00:42.976 INFO:tasks.workunit.client.1.vm09.stdout:8/235: creat df/d1c/d1d/f44 x:0 0 0
2026-03-09T15:00:42.976 INFO:tasks.workunit.client.1.vm09.stdout:9/237: getdents d1 0
2026-03-09T15:00:42.979 INFO:tasks.workunit.client.1.vm09.stdout:2/279: sync
2026-03-09T15:00:42.979 INFO:tasks.workunit.client.1.vm09.stdout:5/247: creat d2/d37/d3c/d36/d4c/d51/f5f x:0 0 0
2026-03-09T15:00:42.979 INFO:tasks.workunit.client.1.vm09.stdout:4/261: write db/d19/f38 [3133809,82010] 0
2026-03-09T15:00:42.979 INFO:tasks.workunit.client.1.vm09.stdout:8/236: readlink df/d1c/l2e 0
2026-03-09T15:00:42.981 INFO:tasks.workunit.client.1.vm09.stdout:4/262: sync
2026-03-09T15:00:42.983 INFO:tasks.workunit.client.1.vm09.stdout:7/306: dwrite d3/d1d/f37 [4194304,4194304] 0
2026-03-09T15:00:42.984 INFO:tasks.workunit.client.1.vm09.stdout:4/263: write db/d19/d32/d3b/f58 [57650,117118] 0
2026-03-09T15:00:42.987 INFO:tasks.workunit.client.1.vm09.stdout:4/264: write db/d19/d32/f3e [600369,89936] 0
2026-03-09T15:00:43.001 INFO:tasks.workunit.client.1.vm09.stdout:8/237: write fe [1165184,107163] 0
2026-03-09T15:00:43.003 INFO:tasks.workunit.client.1.vm09.stdout:9/238: creat d1/d4f/d52/f53 x:0 0 0
2026-03-09T15:00:43.014 INFO:tasks.workunit.client.1.vm09.stdout:8/238: unlink df/d24/d37/c43 0
2026-03-09T15:00:43.016 INFO:tasks.workunit.client.1.vm09.stdout:8/239: chown df/d1c/d1d/f41 24223 1
2026-03-09T15:00:43.017 INFO:tasks.workunit.client.1.vm09.stdout:7/307: creat d3/db/d46/f5b x:0 0 0
2026-03-09T15:00:43.018 INFO:tasks.workunit.client.1.vm09.stdout:4/265: rename l9 to db/d19/d23/l59 0
2026-03-09T15:00:43.018 INFO:tasks.workunit.client.1.vm09.stdout:8/240: write df/f12 [4722711,81840] 0
2026-03-09T15:00:43.022 INFO:tasks.workunit.client.1.vm09.stdout:9/239: dwrite d1/d7/f13 [0,4194304] 0
2026-03-09T15:00:43.024 INFO:tasks.workunit.client.1.vm09.stdout:9/240: dread d1/d7/d1e/f22 [0,4194304] 0
2026-03-09T15:00:43.024 INFO:tasks.workunit.client.1.vm09.stdout:6/283: truncate d6/f39 388382 0
2026-03-09T15:00:43.031 INFO:tasks.workunit.client.1.vm09.stdout:8/241: creat df/d2d/d42/f45 x:0 0 0
2026-03-09T15:00:43.033 INFO:tasks.workunit.client.1.vm09.stdout:4/266: creat db/d12/f5a x:0 0 0
2026-03-09T15:00:43.033 INFO:tasks.workunit.client.1.vm09.stdout:9/241: getdents d1/d7 0
2026-03-09T15:00:43.034 INFO:tasks.workunit.client.1.vm09.stdout:4/267: write db/d12/d16/f2a [34324,50040] 0
2026-03-09T15:00:43.039 INFO:tasks.workunit.client.1.vm09.stdout:9/242: symlink d1/d4f/l54 0
2026-03-09T15:00:43.040 INFO:tasks.workunit.client.1.vm09.stdout:8/242: write df/f26 [850941,115023] 0
2026-03-09T15:00:43.043 INFO:tasks.workunit.client.1.vm09.stdout:8/243: truncate f8 4133729 0
2026-03-09T15:00:43.046 INFO:tasks.workunit.client.1.vm09.stdout:8/244: stat df/d1c/d2c/d33 0
2026-03-09T15:00:43.053 INFO:tasks.workunit.client.1.vm09.stdout:8/245: dwrite df/f2a [0,4194304] 0
2026-03-09T15:00:43.063 INFO:tasks.workunit.client.1.vm09.stdout:1/215: dwrite d8/d10/f3b [0,4194304] 0
2026-03-09T15:00:43.065 INFO:tasks.workunit.client.1.vm09.stdout:1/216: chown d8/d10/d24 8784 1
2026-03-09T15:00:43.071 INFO:tasks.workunit.client.1.vm09.stdout:8/246: unlink f1 0
2026-03-09T15:00:43.074 INFO:tasks.workunit.client.1.vm09.stdout:0/310: truncate da/dc/d22/f55 2444924 0
2026-03-09T15:00:43.077 INFO:tasks.workunit.client.1.vm09.stdout:0/311: truncate da/dc/d1c/d46/f52 912158 0
2026-03-09T15:00:43.077 INFO:tasks.workunit.client.1.vm09.stdout:1/217: symlink d8/d10/d24/l47 0
2026-03-09T15:00:43.078 INFO:tasks.workunit.client.1.vm09.stdout:1/218: chown d8/l3f 3147464 1
2026-03-09T15:00:43.081 INFO:tasks.workunit.client.1.vm09.stdout:8/247: dread df/d1f/f31 [0,4194304] 0
2026-03-09T15:00:43.082 INFO:tasks.workunit.client.1.vm09.stdout:8/248: write df/d2d/d42/f45 [1026106,23304] 0
2026-03-09T15:00:43.082 INFO:tasks.workunit.client.1.vm09.stdout:0/312: mkdir da/dc/d1c/d46/d63 0
2026-03-09T15:00:43.083 INFO:tasks.workunit.client.1.vm09.stdout:8/249: truncate df/d1f/f35 1515678 0
2026-03-09T15:00:43.091 INFO:tasks.workunit.client.1.vm09.stdout:8/250: dread df/d1f/f35 [0,4194304] 0
2026-03-09T15:00:43.092 INFO:tasks.workunit.client.1.vm09.stdout:8/251: fsync df/f26 0
2026-03-09T15:00:43.097 INFO:tasks.workunit.client.1.vm09.stdout:0/313: dread f5 [0,4194304] 0
2026-03-09T15:00:43.102 INFO:tasks.workunit.client.1.vm09.stdout:0/314: mkdir da/dc/d22/d64 0
2026-03-09T15:00:43.106 INFO:tasks.workunit.client.1.vm09.stdout:0/315: link da/dc/c35 da/d30/d36/c65 0
2026-03-09T15:00:43.106 INFO:tasks.workunit.client.1.vm09.stdout:2/280: dread df/d20/d29/f31 [0,4194304] 0
2026-03-09T15:00:43.107 INFO:tasks.workunit.client.1.vm09.stdout:0/316: rmdir da/d30/d36 39
2026-03-09T15:00:43.107 INFO:tasks.workunit.client.1.vm09.stdout:0/317: chown da/d57 1557 1
2026-03-09T15:00:43.111 INFO:tasks.workunit.client.1.vm09.stdout:2/281: mkdir df/d2d/d50 0
2026-03-09T15:00:43.111 INFO:tasks.workunit.client.1.vm09.stdout:2/282: fdatasync f0 0
2026-03-09T15:00:43.113 INFO:tasks.workunit.client.1.vm09.stdout:2/283: chown c2 105026976 1
2026-03-09T15:00:43.114 INFO:tasks.workunit.client.1.vm09.stdout:0/318: dwrite da/f12 [0,4194304] 0
2026-03-09T15:00:43.118 INFO:tasks.workunit.client.1.vm09.stdout:0/319: write da/dc/d1c/d3c/d44/f51 [4062957,54152] 0
2026-03-09T15:00:43.124 INFO:tasks.workunit.client.1.vm09.stdout:5/248: truncate d2/f2e 3241886 0
2026-03-09T15:00:43.128 INFO:tasks.workunit.client.1.vm09.stdout:5/249: dwrite d2/f34 [0,4194304] 0
2026-03-09T15:00:43.129 INFO:tasks.workunit.client.1.vm09.stdout:7/308: write d3/fd [341620,85561] 0
2026-03-09T15:00:43.132 INFO:tasks.workunit.client.1.vm09.stdout:6/284: truncate d6/d20/f36 507077 0
2026-03-09T15:00:43.137 INFO:tasks.workunit.client.1.vm09.stdout:9/243: write d1/d7/d1e/d2b/f30 [1041876,93975] 0
2026-03-09T15:00:43.140 INFO:tasks.workunit.client.1.vm09.stdout:3/264: truncate d3/f29 1283952 0
2026-03-09T15:00:43.141 INFO:tasks.workunit.client.1.vm09.stdout:4/268: dwrite db/d12/f25 [0,4194304] 0
2026-03-09T15:00:43.161 INFO:tasks.workunit.client.1.vm09.stdout:5/250: symlink d2/d37/d3c/l60 0
2026-03-09T15:00:43.161 INFO:tasks.workunit.client.1.vm09.stdout:1/219: write d8/d10/f13 [1579664,41737] 0
2026-03-09T15:00:43.162 INFO:tasks.workunit.client.1.vm09.stdout:8/252: truncate df/f26 597769 0
2026-03-09T15:00:43.163 INFO:tasks.workunit.client.1.vm09.stdout:0/320: creat da/dc/d61/f66 x:0 0 0
2026-03-09T15:00:43.163 INFO:tasks.workunit.client.1.vm09.stdout:9/244: unlink d1/f2c 0
2026-03-09T15:00:43.164 INFO:tasks.workunit.client.1.vm09.stdout:9/245: read - d1/d7/d1e/d2b/d2e/f3a zero size
2026-03-09T15:00:43.165 INFO:tasks.workunit.client.1.vm09.stdout:0/321: stat da/dc/d1c/c3e 0
2026-03-09T15:00:43.169 INFO:tasks.workunit.client.1.vm09.stdout:6/285: rename d6/d20/f34 to d6/d20/d24/f60 0
2026-03-09T15:00:43.170 INFO:tasks.workunit.client.1.vm09.stdout:6/286: fdatasync d6/d20/f59 0
2026-03-09T15:00:43.172 INFO:tasks.workunit.client.1.vm09.stdout:9/246: dread d1/d7/d1e/f20 [0,4194304] 0
2026-03-09T15:00:43.181 INFO:tasks.workunit.client.1.vm09.stdout:9/247: dwrite d1/d7/d1e/d2b/f3f [0,4194304] 0
2026-03-09T15:00:43.184 INFO:tasks.workunit.client.1.vm09.stdout:4/269: mkdir db/d12/d16/d5b 0
2026-03-09T15:00:43.184 INFO:tasks.workunit.client.1.vm09.stdout:2/284: link df/d2d/f41 df/d20/d29/f51 0
2026-03-09T15:00:43.185 INFO:tasks.workunit.client.1.vm09.stdout:6/287: dread d6/df/d23/f2f [0,4194304] 0
2026-03-09T15:00:43.188 INFO:tasks.workunit.client.1.vm09.stdout:2/285: chown df/d20/l4b 67 1
2026-03-09T15:00:43.202 INFO:tasks.workunit.client.1.vm09.stdout:1/220: mkdir d8/d10/d24/d48 0
2026-03-09T15:00:43.207 INFO:tasks.workunit.client.1.vm09.stdout:8/253: unlink df/d2d/d42/f45 0
2026-03-09T15:00:43.208 INFO:tasks.workunit.client.1.vm09.stdout:1/221: dwrite d8/f3d [0,4194304] 0
2026-03-09T15:00:43.219 INFO:tasks.workunit.client.1.vm09.stdout:9/248: mknod d1/d7/d1e/d2b/d40/c55 0
2026-03-09T15:00:43.219 INFO:tasks.workunit.client.1.vm09.stdout:4/270: rmdir db 39
2026-03-09T15:00:43.220 INFO:tasks.workunit.client.1.vm09.stdout:1/222: dwrite d8/f19 [0,4194304] 0
2026-03-09T15:00:43.224 INFO:tasks.workunit.client.1.vm09.stdout:6/288: creat d6/d20/d2a/f61 x:0 0 0
2026-03-09T15:00:43.224 INFO:tasks.workunit.client.1.vm09.stdout:1/223: fdatasync d8/d10/f2f 0
2026-03-09T15:00:43.228 INFO:tasks.workunit.client.1.vm09.stdout:1/224: write d8/d10/f44 [255836,103492] 0
2026-03-09T15:00:43.241 INFO:tasks.workunit.client.1.vm09.stdout:6/289: write d6/d20/d2a/f37 [1024401,10075] 0
2026-03-09T15:00:43.242 INFO:tasks.workunit.client.1.vm09.stdout:8/254: rename df/d1c/d2c to df/d2d/d46 0
2026-03-09T15:00:43.242 INFO:tasks.workunit.client.1.vm09.stdout:8/255: write df/f34 [888743,122381] 0
2026-03-09T15:00:43.242 INFO:tasks.workunit.client.1.vm09.stdout:0/322: truncate da/fb 487314 0
2026-03-09T15:00:43.242 INFO:tasks.workunit.client.1.vm09.stdout:9/249: chown d1/d7/d1e/d2b/d2e/c50 31676 1
2026-03-09T15:00:43.242 INFO:tasks.workunit.client.1.vm09.stdout:0/323: stat da/dc/d22/d64 0
2026-03-09T15:00:43.242 INFO:tasks.workunit.client.1.vm09.stdout:8/256: dread - df/d24/f28 zero size
2026-03-09T15:00:43.242 INFO:tasks.workunit.client.1.vm09.stdout:8/257: chown df/d24 7 1
2026-03-09T15:00:43.242 INFO:tasks.workunit.client.1.vm09.stdout:5/251: creat d2/f61 x:0 0 0
2026-03-09T15:00:43.246 INFO:tasks.workunit.client.1.vm09.stdout:8/258: readlink df/l3e 0
2026-03-09T15:00:43.247 INFO:tasks.workunit.client.1.vm09.stdout:2/286: creat df/d20/f52 x:0 0 0
2026-03-09T15:00:43.248 INFO:tasks.workunit.client.1.vm09.stdout:1/225: symlink d8/d10/d24/d45/l49 0
2026-03-09T15:00:43.248 INFO:tasks.workunit.client.1.vm09.stdout:0/324: rename da/f4b to da/dc/d1c/d3c/d44/f67 0
2026-03-09T15:00:43.248 INFO:tasks.workunit.client.1.vm09.stdout:9/250: mkdir d1/d7/d1e/d2b/d2e/d56 0
2026-03-09T15:00:43.252 INFO:tasks.workunit.client.1.vm09.stdout:3/265: dread d3/f29 [0,4194304] 0
2026-03-09T15:00:43.256 INFO:tasks.workunit.client.1.vm09.stdout:2/287: read df/d1f/f39 [918926,79384] 0
2026-03-09T15:00:43.256 INFO:tasks.workunit.client.1.vm09.stdout:8/259: symlink df/d24/d3f/l47 0
2026-03-09T15:00:43.258 INFO:tasks.workunit.client.1.vm09.stdout:0/325: dwrite da/d30/f38 [0,4194304] 0
2026-03-09T15:00:43.259 INFO:tasks.workunit.client.1.vm09.stdout:0/326: readlink da/d30/l5c 0 2026-03-09T15:00:43.259 INFO:tasks.workunit.client.1.vm09.stdout:3/266: dwrite d3/d3a/f1d [4194304,4194304] 0 2026-03-09T15:00:43.260 INFO:tasks.workunit.client.1.vm09.stdout:2/288: write df/d20/f49 [265086,64452] 0 2026-03-09T15:00:43.265 INFO:tasks.workunit.client.1.vm09.stdout:5/252: getdents d2/d4 0 2026-03-09T15:00:43.272 INFO:tasks.workunit.client.1.vm09.stdout:6/290: sync 2026-03-09T15:00:43.279 INFO:tasks.workunit.client.1.vm09.stdout:8/260: creat df/d24/d37/f48 x:0 0 0 2026-03-09T15:00:43.280 INFO:tasks.workunit.client.1.vm09.stdout:8/261: chown fe 3865676 1 2026-03-09T15:00:43.280 INFO:tasks.workunit.client.1.vm09.stdout:0/327: creat da/dc/d1c/d3c/d44/f68 x:0 0 0 2026-03-09T15:00:43.283 INFO:tasks.workunit.client.1.vm09.stdout:3/267: rename d3/d4 to d3/d3a/d2b/d31/d4a/d62 0 2026-03-09T15:00:43.292 INFO:tasks.workunit.client.1.vm09.stdout:4/271: link db/d2f/c42 db/c5c 0 2026-03-09T15:00:43.292 INFO:tasks.workunit.client.1.vm09.stdout:2/289: chown df/f33 1829815 1 2026-03-09T15:00:43.292 INFO:tasks.workunit.client.1.vm09.stdout:4/272: readlink db/d2f/l3c 0 2026-03-09T15:00:43.292 INFO:tasks.workunit.client.1.vm09.stdout:5/253: creat d2/d37/d3c/d36/d4c/d51/f62 x:0 0 0 2026-03-09T15:00:43.292 INFO:tasks.workunit.client.1.vm09.stdout:8/262: unlink fa 0 2026-03-09T15:00:43.293 INFO:tasks.workunit.client.1.vm09.stdout:4/273: truncate db/d12/f27 1227147 0 2026-03-09T15:00:43.293 INFO:tasks.workunit.client.1.vm09.stdout:8/263: chown df/d1c/d1d/f36 14135545 1 2026-03-09T15:00:43.293 INFO:tasks.workunit.client.1.vm09.stdout:5/254: fdatasync d2/d37/d3c/d55/f57 0 2026-03-09T15:00:43.293 INFO:tasks.workunit.client.1.vm09.stdout:3/268: creat d3/d3a/d2b/d31/d4a/f63 x:0 0 0 2026-03-09T15:00:43.294 INFO:tasks.workunit.client.1.vm09.stdout:2/290: mkdir df/d20/d29/d53 0 2026-03-09T15:00:43.295 INFO:tasks.workunit.client.1.vm09.stdout:0/328: mknod da/dc/d1c/d46/d63/c69 0 
2026-03-09T15:00:43.297 INFO:tasks.workunit.client.1.vm09.stdout:8/264: creat df/d1c/d1d/f49 x:0 0 0 2026-03-09T15:00:43.298 INFO:tasks.workunit.client.1.vm09.stdout:4/274: dwrite db/d12/f3d [0,4194304] 0 2026-03-09T15:00:43.300 INFO:tasks.workunit.client.1.vm09.stdout:7/309: truncate d3/f26 699172 0 2026-03-09T15:00:43.306 INFO:tasks.workunit.client.1.vm09.stdout:2/291: creat df/d20/d2e/f54 x:0 0 0 2026-03-09T15:00:43.309 INFO:tasks.workunit.client.1.vm09.stdout:5/255: dread d2/f4f [4194304,4194304] 0 2026-03-09T15:00:43.316 INFO:tasks.workunit.client.1.vm09.stdout:4/275: mkdir db/d2f/d5d 0 2026-03-09T15:00:43.318 INFO:tasks.workunit.client.1.vm09.stdout:2/292: creat df/d1f/f55 x:0 0 0 2026-03-09T15:00:43.319 INFO:tasks.workunit.client.1.vm09.stdout:9/251: dread d1/f24 [0,4194304] 0 2026-03-09T15:00:43.320 INFO:tasks.workunit.client.1.vm09.stdout:8/265: dwrite df/d24/f32 [0,4194304] 0 2026-03-09T15:00:43.320 INFO:tasks.workunit.client.1.vm09.stdout:3/269: getdents d3/d3a/d2b/d31 0 2026-03-09T15:00:43.325 INFO:tasks.workunit.client.1.vm09.stdout:0/329: dread da/f12 [0,4194304] 0 2026-03-09T15:00:43.328 INFO:tasks.workunit.client.1.vm09.stdout:2/293: rmdir df/d2d/d50 0 2026-03-09T15:00:43.328 INFO:tasks.workunit.client.1.vm09.stdout:9/252: creat d1/d7/d1e/d2b/d40/f57 x:0 0 0 2026-03-09T15:00:43.329 INFO:tasks.workunit.client.1.vm09.stdout:1/226: truncate d8/f19 3284173 0 2026-03-09T15:00:43.329 INFO:tasks.workunit.client.1.vm09.stdout:4/276: link l6 db/d19/d35/l5e 0 2026-03-09T15:00:43.329 INFO:tasks.workunit.client.1.vm09.stdout:8/266: symlink df/d24/l4a 0 2026-03-09T15:00:43.329 INFO:tasks.workunit.client.1.vm09.stdout:0/330: fsync da/dc/d10/f29 0 2026-03-09T15:00:43.329 INFO:tasks.workunit.client.1.vm09.stdout:4/277: stat db/d19/d35 0 2026-03-09T15:00:43.330 INFO:tasks.workunit.client.1.vm09.stdout:1/227: chown d8/fa 0 1 2026-03-09T15:00:43.330 INFO:tasks.workunit.client.1.vm09.stdout:3/270: dread - d3/d3a/d2b/d31/d4a/d62/f16 zero size 2026-03-09T15:00:43.330 
INFO:tasks.workunit.client.1.vm09.stdout:2/294: chown df/d1f/f55 44936 1 2026-03-09T15:00:43.331 INFO:tasks.workunit.client.1.vm09.stdout:9/253: mkdir d1/d58 0 2026-03-09T15:00:43.331 INFO:tasks.workunit.client.1.vm09.stdout:4/278: mkdir db/d19/d35/d5f 0 2026-03-09T15:00:43.334 INFO:tasks.workunit.client.1.vm09.stdout:3/271: read d3/d3a/d2b/d39/f3c [703341,113357] 0 2026-03-09T15:00:43.336 INFO:tasks.workunit.client.1.vm09.stdout:0/331: rename f5 to da/dc/d1c/d46/d5b/f6a 0 2026-03-09T15:00:43.336 INFO:tasks.workunit.client.1.vm09.stdout:0/332: chown da/dc 210648262 1 2026-03-09T15:00:43.338 INFO:tasks.workunit.client.1.vm09.stdout:1/228: mknod d8/d22/d40/c4a 0 2026-03-09T15:00:43.338 INFO:tasks.workunit.client.1.vm09.stdout:4/279: dread db/d12/f3d [0,4194304] 0 2026-03-09T15:00:43.338 INFO:tasks.workunit.client.1.vm09.stdout:3/272: truncate d3/d3a/d2b/d31/d4a/d62/f1b 4898021 0 2026-03-09T15:00:43.339 INFO:tasks.workunit.client.1.vm09.stdout:2/295: creat df/d1f/d47/f56 x:0 0 0 2026-03-09T15:00:43.347 INFO:tasks.workunit.client.1.vm09.stdout:1/229: chown d8/d22/l27 724578 1 2026-03-09T15:00:43.351 INFO:tasks.workunit.client.1.vm09.stdout:9/254: dwrite d1/d7/d1e/d2b/f42 [0,4194304] 0 2026-03-09T15:00:43.352 INFO:tasks.workunit.client.1.vm09.stdout:3/273: dwrite d3/d3a/d2b/d31/d4a/f63 [0,4194304] 0 2026-03-09T15:00:43.353 INFO:tasks.workunit.client.1.vm09.stdout:1/230: fsync d8/d10/f2f 0 2026-03-09T15:00:43.354 INFO:tasks.workunit.client.1.vm09.stdout:0/333: truncate da/dc/d22/f53 645724 0 2026-03-09T15:00:43.356 INFO:tasks.workunit.client.1.vm09.stdout:1/231: chown d8/d10/d24/l47 3559658 1 2026-03-09T15:00:43.359 INFO:tasks.workunit.client.1.vm09.stdout:6/291: dwrite d6/d20/d2a/f5e [0,4194304] 0 2026-03-09T15:00:43.359 INFO:tasks.workunit.client.1.vm09.stdout:7/310: truncate d3/f26 1148877 0 2026-03-09T15:00:43.361 INFO:tasks.workunit.client.1.vm09.stdout:6/292: readlink d6/d20/l3c 0 2026-03-09T15:00:43.374 INFO:tasks.workunit.client.1.vm09.stdout:9/255: dwrite 
d1/d7/d1e/d2b/d2e/f3a [0,4194304] 0 2026-03-09T15:00:43.378 INFO:tasks.workunit.client.1.vm09.stdout:5/256: truncate d2/f4f 761557 0 2026-03-09T15:00:43.381 INFO:tasks.workunit.client.1.vm09.stdout:2/296: dread f5 [4194304,4194304] 0 2026-03-09T15:00:43.389 INFO:tasks.workunit.client.1.vm09.stdout:5/257: dwrite d2/d37/d3c/f4e [0,4194304] 0 2026-03-09T15:00:43.408 INFO:tasks.workunit.client.1.vm09.stdout:8/267: truncate df/d1c/f20 1213797 0 2026-03-09T15:00:43.410 INFO:tasks.workunit.client.1.vm09.stdout:5/258: read d2/d37/f43 [667345,53570] 0 2026-03-09T15:00:43.411 INFO:tasks.workunit.client.1.vm09.stdout:0/334: creat da/dc/d1c/d46/d5b/f6b x:0 0 0 2026-03-09T15:00:43.411 INFO:tasks.workunit.client.1.vm09.stdout:7/311: rmdir d3/d1d/d2d 39 2026-03-09T15:00:43.411 INFO:tasks.workunit.client.1.vm09.stdout:9/256: creat d1/d7/d1e/d2b/d40/f59 x:0 0 0 2026-03-09T15:00:43.411 INFO:tasks.workunit.client.1.vm09.stdout:6/293: dwrite d6/db/f1f [0,4194304] 0 2026-03-09T15:00:43.411 INFO:tasks.workunit.client.1.vm09.stdout:7/312: write d3/d1d/d2d/f57 [326634,98525] 0 2026-03-09T15:00:43.411 INFO:tasks.workunit.client.1.vm09.stdout:9/257: creat d1/d7/d1e/f5a x:0 0 0 2026-03-09T15:00:43.411 INFO:tasks.workunit.client.1.vm09.stdout:7/313: mkdir d3/db/d25/d5c 0 2026-03-09T15:00:43.411 INFO:tasks.workunit.client.1.vm09.stdout:6/294: chown d6/d20/d24/l33 2284636 1 2026-03-09T15:00:43.411 INFO:tasks.workunit.client.1.vm09.stdout:9/258: mknod d1/d7/d1e/d2b/c5b 0 2026-03-09T15:00:43.412 INFO:tasks.workunit.client.1.vm09.stdout:6/295: stat d6/df 0 2026-03-09T15:00:43.414 INFO:tasks.workunit.client.1.vm09.stdout:8/268: rename df/d24/f29 to df/f4b 0 2026-03-09T15:00:43.414 INFO:tasks.workunit.client.1.vm09.stdout:0/335: read da/dc/d10/f29 [4076275,42646] 0 2026-03-09T15:00:43.424 INFO:tasks.workunit.client.1.vm09.stdout:6/296: dwrite d6/df/f16 [0,4194304] 0 2026-03-09T15:00:43.424 INFO:tasks.workunit.client.1.vm09.stdout:8/269: read df/f26 [178574,51582] 0 2026-03-09T15:00:43.429 
INFO:tasks.workunit.client.1.vm09.stdout:9/259: dwrite d1/d7/d1e/d2b/f51 [0,4194304] 0 2026-03-09T15:00:43.430 INFO:tasks.workunit.client.1.vm09.stdout:9/260: readlink d1/l26 0 2026-03-09T15:00:43.430 INFO:tasks.workunit.client.1.vm09.stdout:9/261: chown d1/f28 1165 1 2026-03-09T15:00:43.431 INFO:tasks.workunit.client.1.vm09.stdout:2/297: dread f3 [0,4194304] 0 2026-03-09T15:00:43.441 INFO:tasks.workunit.client.1.vm09.stdout:6/297: rmdir d6/d20/d2a/d57 39 2026-03-09T15:00:43.441 INFO:tasks.workunit.client.1.vm09.stdout:7/314: creat d3/f5d x:0 0 0 2026-03-09T15:00:43.446 INFO:tasks.workunit.client.1.vm09.stdout:8/270: rename df/d1f/l3a to df/d1f/l4c 0 2026-03-09T15:00:43.451 INFO:tasks.workunit.client.1.vm09.stdout:0/336: dread da/dc/f28 [0,4194304] 0 2026-03-09T15:00:43.451 INFO:tasks.workunit.client.1.vm09.stdout:2/298: symlink df/d1f/l57 0 2026-03-09T15:00:43.453 INFO:tasks.workunit.client.1.vm09.stdout:9/262: write d1/d7/d1e/d2b/d2e/f1d [1092027,9053] 0 2026-03-09T15:00:43.455 INFO:tasks.workunit.client.1.vm09.stdout:9/263: dread d1/f24 [0,4194304] 0 2026-03-09T15:00:43.469 INFO:tasks.workunit.client.1.vm09.stdout:8/271: mknod df/d24/c4d 0 2026-03-09T15:00:43.469 INFO:tasks.workunit.client.1.vm09.stdout:0/337: fdatasync da/dc/f17 0 2026-03-09T15:00:43.469 INFO:tasks.workunit.client.1.vm09.stdout:9/264: rename d1/d7/d1e/d2b/l44 to d1/d4f/d52/l5c 0 2026-03-09T15:00:43.469 INFO:tasks.workunit.client.1.vm09.stdout:7/315: link f1 d3/db/d25/d5c/f5e 0 2026-03-09T15:00:43.469 INFO:tasks.workunit.client.1.vm09.stdout:6/298: mknod d6/df/d23/d5b/c62 0 2026-03-09T15:00:43.470 INFO:tasks.workunit.client.1.vm09.stdout:6/299: dread - d6/d20/f59 zero size 2026-03-09T15:00:43.471 INFO:tasks.workunit.client.1.vm09.stdout:8/272: mknod df/d2d/d42/c4e 0 2026-03-09T15:00:43.473 INFO:tasks.workunit.client.1.vm09.stdout:0/338: mkdir da/dc/d1c/d3c/d44/d6c 0 2026-03-09T15:00:43.476 INFO:tasks.workunit.client.1.vm09.stdout:0/339: write da/dc/f28 [2693389,20017] 0 2026-03-09T15:00:43.488 
INFO:tasks.workunit.client.1.vm09.stdout:2/299: mkdir df/d58 0 2026-03-09T15:00:43.491 INFO:tasks.workunit.client.1.vm09.stdout:9/265: creat d1/d7/d1e/f5d x:0 0 0 2026-03-09T15:00:43.498 INFO:tasks.workunit.client.1.vm09.stdout:8/273: mkdir df/d2d/d4f 0 2026-03-09T15:00:43.499 INFO:tasks.workunit.client.1.vm09.stdout:8/274: fdatasync df/d1f/f31 0 2026-03-09T15:00:43.499 INFO:tasks.workunit.client.1.vm09.stdout:8/275: dread - df/d1c/d1d/f44 zero size 2026-03-09T15:00:43.499 INFO:tasks.workunit.client.1.vm09.stdout:0/340: sync 2026-03-09T15:00:43.500 INFO:tasks.workunit.client.1.vm09.stdout:0/341: fsync f7 0 2026-03-09T15:00:43.505 INFO:tasks.workunit.client.1.vm09.stdout:6/300: mknod d6/db/d10/c63 0 2026-03-09T15:00:43.505 INFO:tasks.workunit.client.1.vm09.stdout:2/300: dwrite df/d20/d2e/f4c [0,4194304] 0 2026-03-09T15:00:43.508 INFO:tasks.workunit.client.1.vm09.stdout:9/266: mkdir d1/d7/d1e/d2b/d2e/d56/d5e 0 2026-03-09T15:00:43.518 INFO:tasks.workunit.client.1.vm09.stdout:2/301: dwrite df/d1f/f38 [0,4194304] 0 2026-03-09T15:00:43.519 INFO:tasks.workunit.client.1.vm09.stdout:0/342: creat da/dc/d1c/f6d x:0 0 0 2026-03-09T15:00:43.519 INFO:tasks.workunit.client.1.vm09.stdout:9/267: creat d1/d7/d1e/d2b/f5f x:0 0 0 2026-03-09T15:00:43.525 INFO:tasks.workunit.client.1.vm09.stdout:9/268: dread d1/d7/d1e/d2b/f42 [0,4194304] 0 2026-03-09T15:00:43.531 INFO:tasks.workunit.client.1.vm09.stdout:0/343: dread da/d30/f38 [0,4194304] 0 2026-03-09T15:00:43.532 INFO:tasks.workunit.client.1.vm09.stdout:2/302: creat df/d20/d2e/f59 x:0 0 0 2026-03-09T15:00:43.532 INFO:tasks.workunit.client.1.vm09.stdout:6/301: dread d6/d20/d44/d45/f4c [0,4194304] 0 2026-03-09T15:00:43.535 INFO:tasks.workunit.client.1.vm09.stdout:6/302: stat d6/l9 0 2026-03-09T15:00:43.537 INFO:tasks.workunit.client.1.vm09.stdout:2/303: dwrite df/d20/d2e/f4c [0,4194304] 0 2026-03-09T15:00:43.539 INFO:tasks.workunit.client.1.vm09.stdout:2/304: truncate df/d20/d2e/f4c 5018489 0 2026-03-09T15:00:43.541 
INFO:tasks.workunit.client.1.vm09.stdout:9/269: symlink d1/d7/d1e/l60 0 2026-03-09T15:00:43.547 INFO:tasks.workunit.client.1.vm09.stdout:0/344: rmdir da/dc/d22 39 2026-03-09T15:00:43.551 INFO:tasks.workunit.client.1.vm09.stdout:4/280: dwrite db/d12/f37 [0,4194304] 0 2026-03-09T15:00:43.557 INFO:tasks.workunit.client.1.vm09.stdout:3/274: dwrite d3/f29 [0,4194304] 0 2026-03-09T15:00:43.568 INFO:tasks.workunit.client.1.vm09.stdout:1/232: dread d8/ff [0,4194304] 0 2026-03-09T15:00:43.569 INFO:tasks.workunit.client.1.vm09.stdout:1/233: write d8/d1b/f41 [448542,46519] 0 2026-03-09T15:00:43.575 INFO:tasks.workunit.client.1.vm09.stdout:9/270: symlink d1/d7/d1e/d2b/d2e/l61 0 2026-03-09T15:00:43.575 INFO:tasks.workunit.client.1.vm09.stdout:9/271: fdatasync d1/f1f 0 2026-03-09T15:00:43.577 INFO:tasks.workunit.client.1.vm09.stdout:2/305: dwrite df/d3b/f4e [0,4194304] 0 2026-03-09T15:00:43.578 INFO:tasks.workunit.client.1.vm09.stdout:5/259: write d2/d4/fd [4596052,110951] 0 2026-03-09T15:00:43.581 INFO:tasks.workunit.client.1.vm09.stdout:1/234: mknod d8/d10/c4b 0 2026-03-09T15:00:43.581 INFO:tasks.workunit.client.1.vm09.stdout:3/275: creat d3/d3a/d2b/f64 x:0 0 0 2026-03-09T15:00:43.581 INFO:tasks.workunit.client.1.vm09.stdout:9/272: mknod d1/d7/c62 0 2026-03-09T15:00:43.593 INFO:tasks.workunit.client.1.vm09.stdout:2/306: mknod df/d20/d2e/c5a 0 2026-03-09T15:00:43.594 INFO:tasks.workunit.client.1.vm09.stdout:3/276: creat d3/d3a/d2b/f65 x:0 0 0 2026-03-09T15:00:43.596 INFO:tasks.workunit.client.1.vm09.stdout:0/345: fsync da/dc/d1c/d3c/d44/f67 0 2026-03-09T15:00:43.597 INFO:tasks.workunit.client.1.vm09.stdout:3/277: creat d3/d3a/d2b/f66 x:0 0 0 2026-03-09T15:00:43.597 INFO:tasks.workunit.client.1.vm09.stdout:1/235: dwrite d8/d10/f29 [0,4194304] 0 2026-03-09T15:00:43.599 INFO:tasks.workunit.client.1.vm09.stdout:2/307: dread df/d20/d29/f31 [0,4194304] 0 2026-03-09T15:00:43.599 INFO:tasks.workunit.client.1.vm09.stdout:2/308: chown c9 4264 1 2026-03-09T15:00:43.602 
INFO:tasks.workunit.client.1.vm09.stdout:3/278: read d3/d3a/d2b/d36/f51 [234595,48511] 0 2026-03-09T15:00:43.603 INFO:tasks.workunit.client.1.vm09.stdout:3/279: chown d3/d3a/c28 4 1 2026-03-09T15:00:43.608 INFO:tasks.workunit.client.1.vm09.stdout:5/260: sync 2026-03-09T15:00:43.609 INFO:tasks.workunit.client.1.vm09.stdout:1/236: rename d8/d22/f3a to d8/d22/f4c 0 2026-03-09T15:00:43.609 INFO:tasks.workunit.client.1.vm09.stdout:0/346: rename da/dc/c1a to da/dc/d22/d64/c6e 0 2026-03-09T15:00:43.609 INFO:tasks.workunit.client.1.vm09.stdout:3/280: symlink d3/d3a/d2b/d31/l67 0 2026-03-09T15:00:43.614 INFO:tasks.workunit.client.1.vm09.stdout:2/309: creat df/f5b x:0 0 0 2026-03-09T15:00:43.615 INFO:tasks.workunit.client.1.vm09.stdout:5/261: rename d2/l42 to d2/d37/d3c/d36/d4c/l63 0 2026-03-09T15:00:43.617 INFO:tasks.workunit.client.1.vm09.stdout:1/237: getdents d8 0 2026-03-09T15:00:43.617 INFO:tasks.workunit.client.1.vm09.stdout:2/310: symlink df/d20/d29/d53/l5c 0 2026-03-09T15:00:43.619 INFO:tasks.workunit.client.1.vm09.stdout:1/238: mknod d8/d1b/c4d 0 2026-03-09T15:00:43.619 INFO:tasks.workunit.client.1.vm09.stdout:2/311: mkdir df/d1f/d47/d5d 0 2026-03-09T15:00:43.619 INFO:tasks.workunit.client.1.vm09.stdout:3/281: dread d3/d3a/d2b/d36/f44 [0,4194304] 0 2026-03-09T15:00:43.624 INFO:tasks.workunit.client.1.vm09.stdout:1/239: chown d8/d22/l27 39240 1 2026-03-09T15:00:43.626 INFO:tasks.workunit.client.1.vm09.stdout:5/262: sync 2026-03-09T15:00:43.627 INFO:tasks.workunit.client.1.vm09.stdout:2/312: write df/d20/d2e/f48 [765656,67474] 0 2026-03-09T15:00:43.629 INFO:tasks.workunit.client.1.vm09.stdout:1/240: write d8/d10/d24/f2a [50750,116791] 0 2026-03-09T15:00:43.630 INFO:tasks.workunit.client.1.vm09.stdout:1/241: chown d8/d22/d40 17741 1 2026-03-09T15:00:43.631 INFO:tasks.workunit.client.1.vm09.stdout:5/263: write d2/f38 [3070139,78107] 0 2026-03-09T15:00:43.632 INFO:tasks.workunit.client.1.vm09.stdout:3/282: dwrite d3/f3b [4194304,4194304] 0 2026-03-09T15:00:43.634 
INFO:tasks.workunit.client.1.vm09.stdout:5/264: truncate d2/d37/d3c/d55/f58 248022 0 2026-03-09T15:00:43.642 INFO:tasks.workunit.client.1.vm09.stdout:5/265: chown d2/d4/c1c 841192 1 2026-03-09T15:00:43.642 INFO:tasks.workunit.client.1.vm09.stdout:1/242: rename d8/d10/c33 to d8/d10/d24/d45/c4e 0 2026-03-09T15:00:43.642 INFO:tasks.workunit.client.1.vm09.stdout:5/266: symlink d2/d37/d3c/d36/d45/l64 0 2026-03-09T15:00:43.642 INFO:tasks.workunit.client.1.vm09.stdout:3/283: creat d3/d3a/d2b/d36/f68 x:0 0 0 2026-03-09T15:00:43.644 INFO:tasks.workunit.client.1.vm09.stdout:2/313: creat df/d1f/d47/d5d/f5e x:0 0 0 2026-03-09T15:00:43.647 INFO:tasks.workunit.client.1.vm09.stdout:1/243: rename d8/d10/d24/l47 to d8/d10/l4f 0 2026-03-09T15:00:43.647 INFO:tasks.workunit.client.1.vm09.stdout:2/314: chown c8 16831459 1 2026-03-09T15:00:43.648 INFO:tasks.workunit.client.1.vm09.stdout:5/267: link d2/d4/c11 d2/d4/c65 0 2026-03-09T15:00:43.649 INFO:tasks.workunit.client.1.vm09.stdout:2/315: mkdir df/d20/d29/d53/d5f 0 2026-03-09T15:00:43.650 INFO:tasks.workunit.client.1.vm09.stdout:3/284: rename d3/d3a/d2b/d31/d4a/d62/ld to d3/d3a/d2b/d31/d4a/l69 0 2026-03-09T15:00:43.651 INFO:tasks.workunit.client.1.vm09.stdout:5/268: creat d2/d37/d3c/d36/d45/f66 x:0 0 0 2026-03-09T15:00:43.651 INFO:tasks.workunit.client.1.vm09.stdout:1/244: rename d8/d22/d32 to d8/d50 0 2026-03-09T15:00:43.652 INFO:tasks.workunit.client.1.vm09.stdout:3/285: mkdir d3/d3a/d2b/d39/d6a 0 2026-03-09T15:00:43.656 INFO:tasks.workunit.client.1.vm09.stdout:3/286: creat d3/d3a/f6b x:0 0 0 2026-03-09T15:00:43.657 INFO:tasks.workunit.client.1.vm09.stdout:5/269: sync 2026-03-09T15:00:43.661 INFO:tasks.workunit.client.1.vm09.stdout:5/270: dread d2/f29 [0,4194304] 0 2026-03-09T15:00:43.666 INFO:tasks.workunit.client.1.vm09.stdout:2/316: rename df/f1c to df/d1f/d47/f60 0 2026-03-09T15:00:43.666 INFO:tasks.workunit.client.1.vm09.stdout:3/287: chown d3/d3a/d2b/l41 989902 1 2026-03-09T15:00:43.671 
INFO:tasks.workunit.client.1.vm09.stdout:5/271: dwrite d2/d37/d3c/f3a [0,4194304] 0 2026-03-09T15:00:43.680 INFO:tasks.workunit.client.1.vm09.stdout:5/272: dread d2/f34 [0,4194304] 0 2026-03-09T15:00:43.699 INFO:tasks.workunit.client.1.vm09.stdout:4/281: fdatasync db/d12/f37 0 2026-03-09T15:00:43.699 INFO:tasks.workunit.client.1.vm09.stdout:4/282: dread - db/d19/d35/f4e zero size 2026-03-09T15:00:43.704 INFO:tasks.workunit.client.1.vm09.stdout:4/283: dread db/d12/f27 [0,4194304] 0 2026-03-09T15:00:43.705 INFO:tasks.workunit.client.1.vm09.stdout:8/276: unlink df/f4b 0 2026-03-09T15:00:43.709 INFO:tasks.workunit.client.1.vm09.stdout:8/277: write df/d1f/f40 [714029,34517] 0 2026-03-09T15:00:43.709 INFO:tasks.workunit.client.1.vm09.stdout:8/278: write df/d24/f28 [776761,7264] 0 2026-03-09T15:00:43.709 INFO:tasks.workunit.client.1.vm09.stdout:4/284: dread db/d12/f37 [0,4194304] 0 2026-03-09T15:00:43.715 INFO:tasks.workunit.client.1.vm09.stdout:4/285: dread db/d12/f3d [0,4194304] 0 2026-03-09T15:00:43.721 INFO:tasks.workunit.client.1.vm09.stdout:4/286: creat db/d12/d16/f60 x:0 0 0 2026-03-09T15:00:43.731 INFO:tasks.workunit.client.1.vm09.stdout:2/317: getdents df/d1f 0 2026-03-09T15:00:43.744 INFO:tasks.workunit.client.1.vm09.stdout:0/347: rmdir da/dc/d1c/d3c/d44 39 2026-03-09T15:00:43.744 INFO:tasks.workunit.client.1.vm09.stdout:2/318: dwrite df/f33 [0,4194304] 0 2026-03-09T15:00:43.758 INFO:tasks.workunit.client.1.vm09.stdout:2/319: creat df/d3b/f61 x:0 0 0 2026-03-09T15:00:43.759 INFO:tasks.workunit.client.1.vm09.stdout:7/316: dwrite d3/d3d/f5a [4194304,4194304] 0 2026-03-09T15:00:43.762 INFO:tasks.workunit.client.1.vm09.stdout:2/320: write df/d2d/f3c [775131,125227] 0 2026-03-09T15:00:43.762 INFO:tasks.workunit.client.1.vm09.stdout:0/348: creat da/d30/f6f x:0 0 0 2026-03-09T15:00:43.775 INFO:tasks.workunit.client.1.vm09.stdout:0/349: sync 2026-03-09T15:00:43.778 INFO:tasks.workunit.client.1.vm09.stdout:0/350: symlink da/dc/d1c/d3c/d44/d6c/l70 0 
2026-03-09T15:00:43.783 INFO:tasks.workunit.client.1.vm09.stdout:0/351: link da/dc/d10/f2d da/dc/d1c/d3c/d44/f71 0 2026-03-09T15:00:43.787 INFO:tasks.workunit.client.1.vm09.stdout:3/288: dwrite d3/f29 [4194304,4194304] 0 2026-03-09T15:00:43.787 INFO:tasks.workunit.client.1.vm09.stdout:6/303: truncate f0 1501237 0 2026-03-09T15:00:43.788 INFO:tasks.workunit.client.1.vm09.stdout:0/352: sync 2026-03-09T15:00:43.788 INFO:tasks.workunit.client.1.vm09.stdout:6/304: sync 2026-03-09T15:00:43.789 INFO:tasks.workunit.client.1.vm09.stdout:0/353: chown da/dc/d1c/d3c 36309477 1 2026-03-09T15:00:43.790 INFO:tasks.workunit.client.1.vm09.stdout:3/289: rmdir d3/d3a/d2b/d31/d4a/d62 39 2026-03-09T15:00:43.796 INFO:tasks.workunit.client.1.vm09.stdout:2/321: dread fb [0,4194304] 0 2026-03-09T15:00:43.802 INFO:tasks.workunit.client.1.vm09.stdout:6/305: dread d6/d20/d2a/f37 [0,4194304] 0 2026-03-09T15:00:43.803 INFO:tasks.workunit.client.1.vm09.stdout:6/306: truncate d6/d20/f52 249870 0 2026-03-09T15:00:43.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:43 vm05.local ceph-mon[50611]: pgmap v148: 65 pgs: 65 active+clean; 565 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 4.5 MiB/s rd, 62 MiB/s wr, 178 op/s 2026-03-09T15:00:43.813 INFO:tasks.workunit.client.1.vm09.stdout:6/307: dread d6/d20/d44/f4a [0,4194304] 0 2026-03-09T15:00:43.813 INFO:tasks.workunit.client.1.vm09.stdout:9/273: chown d1/d7/d1e/d2b/f3f 9116950 1 2026-03-09T15:00:43.818 INFO:tasks.workunit.client.1.vm09.stdout:3/290: symlink d3/d3a/d2b/d39/d6a/l6c 0 2026-03-09T15:00:43.819 INFO:tasks.workunit.client.1.vm09.stdout:3/291: dread - d3/d3a/f6b zero size 2026-03-09T15:00:43.820 INFO:tasks.workunit.client.1.vm09.stdout:3/292: readlink d3/d3a/d2b/d31/d4a/l5c 0 2026-03-09T15:00:43.825 INFO:tasks.workunit.client.1.vm09.stdout:9/274: dwrite d1/d7/f3e [0,4194304] 0 2026-03-09T15:00:43.832 INFO:tasks.workunit.client.1.vm09.stdout:6/308: creat d6/d20/d2a/d3d/d46/f64 x:0 0 0 2026-03-09T15:00:43.837 
INFO:tasks.workunit.client.1.vm09.stdout:6/309: truncate d6/d20/d2a/f5e 4687903 0 2026-03-09T15:00:43.844 INFO:tasks.workunit.client.1.vm09.stdout:9/275: mkdir d1/d7/d1e/d2b/d2e/d56/d63 0 2026-03-09T15:00:43.848 INFO:tasks.workunit.client.1.vm09.stdout:6/310: dwrite d6/d20/f59 [0,4194304] 0 2026-03-09T15:00:43.848 INFO:tasks.workunit.client.1.vm09.stdout:3/293: creat d3/d5b/f6d x:0 0 0 2026-03-09T15:00:43.849 INFO:tasks.workunit.client.1.vm09.stdout:3/294: write d3/f3b [8464378,117764] 0 2026-03-09T15:00:43.854 INFO:tasks.workunit.client.1.vm09.stdout:9/276: unlink d1/d7/d1e/d2b/d40/f59 0 2026-03-09T15:00:43.863 INFO:tasks.workunit.client.1.vm09.stdout:6/311: mkdir d6/d20/d38/d56/d65 0 2026-03-09T15:00:43.864 INFO:tasks.workunit.client.1.vm09.stdout:9/277: rmdir d1/d7/d1e/d2b/d2e/d56/d63 0 2026-03-09T15:00:43.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:43 vm09.local ceph-mon[59673]: pgmap v148: 65 pgs: 65 active+clean; 565 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 4.5 MiB/s rd, 62 MiB/s wr, 178 op/s 2026-03-09T15:00:43.869 INFO:tasks.workunit.client.1.vm09.stdout:6/312: rename d6/d20/f36 to d6/db/f66 0 2026-03-09T15:00:43.871 INFO:tasks.workunit.client.1.vm09.stdout:6/313: chown d6/d20/d2a/d3d 63142 1 2026-03-09T15:00:43.871 INFO:tasks.workunit.client.1.vm09.stdout:6/314: read d6/d20/f52 [180557,8289] 0 2026-03-09T15:00:43.873 INFO:tasks.workunit.client.1.vm09.stdout:3/295: dread d3/d3a/d2b/d31/f33 [0,4194304] 0 2026-03-09T15:00:43.884 INFO:tasks.workunit.client.1.vm09.stdout:9/278: dread d1/f28 [0,4194304] 0 2026-03-09T15:00:43.889 INFO:tasks.workunit.client.1.vm09.stdout:3/296: dread d3/d3a/d2b/d31/f40 [0,4194304] 0 2026-03-09T15:00:43.889 INFO:tasks.workunit.client.1.vm09.stdout:3/297: write d3/d3a/d2b/f65 [248860,17764] 0 2026-03-09T15:00:43.890 INFO:tasks.workunit.client.1.vm09.stdout:3/298: readlink d3/d3a/d2b/d31/d4a/l5e 0 2026-03-09T15:00:43.894 INFO:tasks.workunit.client.1.vm09.stdout:9/279: creat d1/d4f/f64 x:0 0 0 
2026-03-09T15:00:43.905 INFO:tasks.workunit.client.1.vm09.stdout:9/280: symlink d1/d7/d1e/d2b/d2e/d56/l65 0 2026-03-09T15:00:43.906 INFO:tasks.workunit.client.1.vm09.stdout:9/281: chown d1/d7/d1e/d2b/d2e/f2d 29487 1 2026-03-09T15:00:43.912 INFO:tasks.workunit.client.1.vm09.stdout:9/282: mknod d1/d7/d1e/d2b/c66 0 2026-03-09T15:00:43.912 INFO:tasks.workunit.client.1.vm09.stdout:9/283: write d1/d7/d1e/d2b/f51 [2344963,42984] 0 2026-03-09T15:00:43.913 INFO:tasks.workunit.client.1.vm09.stdout:9/284: stat d1/f4 0 2026-03-09T15:00:43.921 INFO:tasks.workunit.client.1.vm09.stdout:2/322: getdents df/d1f/d47/d5d 0 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:2/323: chown df/d2d/f3c 1930 1 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:2/324: dwrite df/d20/f52 [0,4194304] 0 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:2/325: symlink df/d2d/l62 0 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:1/245: rmdir d8/d10 39 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:8/279: dwrite df/f26 [0,4194304] 0 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:8/280: creat df/d24/d3f/f50 x:0 0 0 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:8/281: readlink df/l3e 0 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:2/326: truncate f0 4519392 0 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:2/327: creat df/d20/d29/d53/d5f/f63 x:0 0 0 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:8/282: creat df/f51 x:0 0 0 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:8/283: write fe [549836,108216] 0 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:1/246: getdents d8/d10 0 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:1/247: symlink d8/d22/l51 0 2026-03-09T15:00:43.956 INFO:tasks.workunit.client.1.vm09.stdout:1/248: creat d8/d22/d40/f52 x:0 0 0 2026-03-09T15:00:43.958 
INFO:tasks.workunit.client.1.vm09.stdout:1/249: truncate d8/f42 749308 0 2026-03-09T15:00:43.960 INFO:tasks.workunit.client.1.vm09.stdout:1/250: getdents d8/d10/d24/d45 0 2026-03-09T15:00:43.992 INFO:tasks.workunit.client.1.vm09.stdout:1/251: sync 2026-03-09T15:00:43.994 INFO:tasks.workunit.client.1.vm09.stdout:1/252: write d8/d10/f2f [1446888,116788] 0 2026-03-09T15:00:43.995 INFO:tasks.workunit.client.1.vm09.stdout:1/253: chown d8/d10/l2d 193 1 2026-03-09T15:00:43.997 INFO:tasks.workunit.client.1.vm09.stdout:4/287: rmdir db/d12/d16 39 2026-03-09T15:00:44.001 INFO:tasks.workunit.client.1.vm09.stdout:4/288: truncate db/d12/f5a 569578 0 2026-03-09T15:00:44.003 INFO:tasks.workunit.client.1.vm09.stdout:1/254: unlink d8/d50/d39/l46 0 2026-03-09T15:00:44.010 INFO:tasks.workunit.client.1.vm09.stdout:1/255: creat d8/d10/d24/d45/f53 x:0 0 0 2026-03-09T15:00:44.020 INFO:tasks.workunit.client.1.vm09.stdout:0/354: write da/dc/d10/f11 [4409434,90874] 0 2026-03-09T15:00:44.020 INFO:tasks.workunit.client.1.vm09.stdout:1/256: unlink d8/d10/f3b 0 2026-03-09T15:00:44.020 INFO:tasks.workunit.client.1.vm09.stdout:7/317: dwrite f1 [0,4194304] 0 2026-03-09T15:00:44.020 INFO:tasks.workunit.client.1.vm09.stdout:5/273: truncate d2/f4f 718883 0 2026-03-09T15:00:44.024 INFO:tasks.workunit.client.1.vm09.stdout:5/274: truncate d2/d37/d3c/d55/f58 766550 0 2026-03-09T15:00:44.026 INFO:tasks.workunit.client.1.vm09.stdout:4/289: link db/d19/d32/l3f db/d19/l61 0 2026-03-09T15:00:44.029 INFO:tasks.workunit.client.1.vm09.stdout:0/355: mknod da/dc/d1c/d3c/d44/c72 0 2026-03-09T15:00:44.030 INFO:tasks.workunit.client.1.vm09.stdout:4/290: dread - db/d12/d16/f4f zero size 2026-03-09T15:00:44.036 INFO:tasks.workunit.client.1.vm09.stdout:7/318: dread d3/db/d25/f22 [0,4194304] 0 2026-03-09T15:00:44.036 INFO:tasks.workunit.client.1.vm09.stdout:1/257: creat d8/d1b/f54 x:0 0 0 2026-03-09T15:00:44.039 INFO:tasks.workunit.client.1.vm09.stdout:5/275: mkdir d2/d37/d67 0 2026-03-09T15:00:44.043 
INFO:tasks.workunit.client.1.vm09.stdout:0/356: write da/d57/f60 [135988,16862] 0
2026-03-09T15:00:44.054 INFO:tasks.workunit.client.1.vm09.stdout:7/319: rename d3/d28/d2e to d3/db/d15/d5f 0
2026-03-09T15:00:44.059 INFO:tasks.workunit.client.1.vm09.stdout:5/276: creat d2/d37/d3c/d55/f68 x:0 0 0
2026-03-09T15:00:44.059 INFO:tasks.workunit.client.1.vm09.stdout:5/277: fdatasync d2/d37/d3c/d36/d4c/d51/f62 0
2026-03-09T15:00:44.066 INFO:tasks.workunit.client.1.vm09.stdout:4/291: mknod db/c62 0
2026-03-09T15:00:44.067 INFO:tasks.workunit.client.1.vm09.stdout:5/278: dwrite d2/d37/d3c/d36/d45/f66 [0,4194304] 0
2026-03-09T15:00:44.080 INFO:tasks.workunit.client.1.vm09.stdout:4/292: dread f4 [0,4194304] 0
2026-03-09T15:00:44.093 INFO:tasks.workunit.client.1.vm09.stdout:4/293: sync
2026-03-09T15:00:44.094 INFO:tasks.workunit.client.1.vm09.stdout:4/294: chown db/d2f/c3a 1864 1
2026-03-09T15:00:44.094 INFO:tasks.workunit.client.1.vm09.stdout:4/295: fsync db/d19/d32/d3b/f48 0
2026-03-09T15:00:44.095 INFO:tasks.workunit.client.1.vm09.stdout:4/296: fsync db/d12/d16/f2a 0
2026-03-09T15:00:44.096 INFO:tasks.workunit.client.1.vm09.stdout:4/297: fsync db/d19/d35/f4e 0
2026-03-09T15:00:44.101 INFO:tasks.workunit.client.1.vm09.stdout:6/315: write d6/db/d10/f19 [600436,115584] 0
2026-03-09T15:00:44.113 INFO:tasks.workunit.client.1.vm09.stdout:3/299: write d3/f9 [296449,34698] 0
2026-03-09T15:00:44.124 INFO:tasks.workunit.client.1.vm09.stdout:9/285: truncate d1/d7/d1e/d2b/f30 2149922 0
2026-03-09T15:00:44.126 INFO:tasks.workunit.client.1.vm09.stdout:7/320: mknod d3/d1d/c60 0
2026-03-09T15:00:44.129 INFO:tasks.workunit.client.1.vm09.stdout:6/316: creat d6/d20/d24/f67 x:0 0 0
2026-03-09T15:00:44.134 INFO:tasks.workunit.client.1.vm09.stdout:1/258: link d8/l9 d8/d10/d24/d48/l55 0
2026-03-09T15:00:44.135 INFO:tasks.workunit.client.1.vm09.stdout:6/317: dwrite d6/d20/d38/d4e/f5a [0,4194304] 0
2026-03-09T15:00:44.140 INFO:tasks.workunit.client.1.vm09.stdout:6/318: dread d6/df/d23/f2f [0,4194304] 0
2026-03-09T15:00:44.144 INFO:tasks.workunit.client.1.vm09.stdout:6/319: stat d6/l9 0
2026-03-09T15:00:44.144 INFO:tasks.workunit.client.1.vm09.stdout:6/320: readlink d6/la 0
2026-03-09T15:00:44.150 INFO:tasks.workunit.client.1.vm09.stdout:5/279: dread d2/d4/f16 [0,4194304] 0
2026-03-09T15:00:44.152 INFO:tasks.workunit.client.1.vm09.stdout:5/280: chown d2/d37/d3c/d55/f57 1620 1
2026-03-09T15:00:44.154 INFO:tasks.workunit.client.1.vm09.stdout:1/259: dread d8/f17 [0,4194304] 0
2026-03-09T15:00:44.154 INFO:tasks.workunit.client.1.vm09.stdout:1/260: write d8/d1b/f41 [874921,39110] 0
2026-03-09T15:00:44.157 INFO:tasks.workunit.client.1.vm09.stdout:3/300: creat d3/d60/f6e x:0 0 0
2026-03-09T15:00:44.160 INFO:tasks.workunit.client.1.vm09.stdout:1/261: dwrite d8/d10/f29 [0,4194304] 0
2026-03-09T15:00:44.162 INFO:tasks.workunit.client.1.vm09.stdout:7/321: mkdir d3/d61 0
2026-03-09T15:00:44.164 INFO:tasks.workunit.client.1.vm09.stdout:1/262: readlink d8/d10/l11 0
2026-03-09T15:00:44.176 INFO:tasks.workunit.client.1.vm09.stdout:5/281: dwrite d2/f47 [0,4194304] 0
2026-03-09T15:00:44.185 INFO:tasks.workunit.client.1.vm09.stdout:3/301: creat d3/d3a/d2b/d39/d6a/f6f x:0 0 0
2026-03-09T15:00:44.189 INFO:tasks.workunit.client.1.vm09.stdout:1/263: read d8/d1b/f21 [711753,82796] 0
2026-03-09T15:00:44.189 INFO:tasks.workunit.client.1.vm09.stdout:5/282: chown d2/f2e 0 1
2026-03-09T15:00:44.190 INFO:tasks.workunit.client.1.vm09.stdout:5/283: readlink d2/d37/d3c/d36/d45/l64 0
2026-03-09T15:00:44.192 INFO:tasks.workunit.client.1.vm09.stdout:7/322: rename d3/db/d15/f23 to d3/db/d15/d5f/d44/f62 0
2026-03-09T15:00:44.194 INFO:tasks.workunit.client.1.vm09.stdout:1/264: read d8/fa [233081,125876] 0
2026-03-09T15:00:44.196 INFO:tasks.workunit.client.1.vm09.stdout:5/284: symlink d2/d37/d3c/l69 0
2026-03-09T15:00:44.197 INFO:tasks.workunit.client.1.vm09.stdout:5/285: dread - d2/d37/d3c/d36/d4c/d51/f59 zero size
2026-03-09T15:00:44.197 INFO:tasks.workunit.client.1.vm09.stdout:7/323: unlink l2 0
2026-03-09T15:00:44.200 INFO:tasks.workunit.client.1.vm09.stdout:1/265: readlink d8/l3f 0
2026-03-09T15:00:44.208 INFO:tasks.workunit.client.1.vm09.stdout:5/286: dwrite d2/d37/d3c/d36/d45/f66 [0,4194304] 0
2026-03-09T15:00:44.218 INFO:tasks.workunit.client.1.vm09.stdout:7/324: rename d3/l41 to d3/d28/l63 0
2026-03-09T15:00:44.218 INFO:tasks.workunit.client.1.vm09.stdout:7/325: readlink d3/l17 0
2026-03-09T15:00:44.224 INFO:tasks.workunit.client.1.vm09.stdout:5/287: stat d2/d4/l5 0
2026-03-09T15:00:44.231 INFO:tasks.workunit.client.1.vm09.stdout:8/284: rmdir df 39
2026-03-09T15:00:44.235 INFO:tasks.workunit.client.1.vm09.stdout:8/285: fsync df/d24/f28 0
2026-03-09T15:00:44.235 INFO:tasks.workunit.client.1.vm09.stdout:7/326: rmdir d3/d4f 0
2026-03-09T15:00:44.236 INFO:tasks.workunit.client.1.vm09.stdout:7/327: symlink d3/d3d/l64 0
2026-03-09T15:00:44.240 INFO:tasks.workunit.client.1.vm09.stdout:7/328: rmdir d3/db/d15/d5f/d44 39
2026-03-09T15:00:44.242 INFO:tasks.workunit.client.1.vm09.stdout:8/286: dwrite df/d1c/d1d/f2b [0,4194304] 0
2026-03-09T15:00:44.243 INFO:tasks.workunit.client.1.vm09.stdout:8/287: write df/d1c/d1d/f49 [981851,53632] 0
2026-03-09T15:00:44.247 INFO:tasks.workunit.client.1.vm09.stdout:5/288: sync
2026-03-09T15:00:44.250 INFO:tasks.workunit.client.1.vm09.stdout:8/288: creat df/d38/f52 x:0 0 0
2026-03-09T15:00:44.251 INFO:tasks.workunit.client.1.vm09.stdout:5/289: mknod d2/d37/d3c/d36/d45/d5c/c6a 0
2026-03-09T15:00:44.259 INFO:tasks.workunit.client.1.vm09.stdout:8/289: dwrite df/d1f/f31 [0,4194304] 0
2026-03-09T15:00:44.261 INFO:tasks.workunit.client.1.vm09.stdout:5/290: link d2/la d2/d37/d3c/d36/d45/d5c/l6b 0
2026-03-09T15:00:44.263 INFO:tasks.workunit.client.1.vm09.stdout:2/328: dwrite df/d20/f24 [0,4194304] 0
2026-03-09T15:00:44.268 INFO:tasks.workunit.client.1.vm09.stdout:8/290: creat df/d38/f53 x:0 0 0
2026-03-09T15:00:44.271 INFO:tasks.workunit.client.1.vm09.stdout:2/329: creat df/d20/d2e/f64 x:0 0 0
2026-03-09T15:00:44.274 INFO:tasks.workunit.client.1.vm09.stdout:8/291: mkdir df/d1c/d54 0
2026-03-09T15:00:44.275 INFO:tasks.workunit.client.1.vm09.stdout:2/330: rename df/f1d to df/d58/f65 0
2026-03-09T15:00:44.275 INFO:tasks.workunit.client.1.vm09.stdout:5/291: creat d2/d37/f6c x:0 0 0
2026-03-09T15:00:44.276 INFO:tasks.workunit.client.1.vm09.stdout:2/331: chown df/d2d/f4f 949384308 1
2026-03-09T15:00:44.277 INFO:tasks.workunit.client.1.vm09.stdout:2/332: chown df/d20/f49 1386 1
2026-03-09T15:00:44.277 INFO:tasks.workunit.client.1.vm09.stdout:8/292: mknod df/d2d/d46/d33/c55 0
2026-03-09T15:00:44.281 INFO:tasks.workunit.client.1.vm09.stdout:2/333: dread df/f42 [0,4194304] 0
2026-03-09T15:00:44.286 INFO:tasks.workunit.client.1.vm09.stdout:8/293: unlink df/d1c/d1d/f49 0
2026-03-09T15:00:44.286 INFO:tasks.workunit.client.1.vm09.stdout:2/334: symlink df/d20/d29/l66 0
2026-03-09T15:00:44.286 INFO:tasks.workunit.client.1.vm09.stdout:5/292: creat d2/d37/f6d x:0 0 0
2026-03-09T15:00:44.286 INFO:tasks.workunit.client.1.vm09.stdout:8/294: unlink df/c11 0
2026-03-09T15:00:44.287 INFO:tasks.workunit.client.1.vm09.stdout:2/335: unlink df/d20/d29/d53/l5c 0
2026-03-09T15:00:44.287 INFO:tasks.workunit.client.1.vm09.stdout:2/336: chown c9 191622012 1
2026-03-09T15:00:44.287 INFO:tasks.workunit.client.1.vm09.stdout:5/293: chown d2/d4/l27 17227 1
2026-03-09T15:00:44.287 INFO:tasks.workunit.client.1.vm09.stdout:5/294: fdatasync d2/d37/d3c/f4b 0
2026-03-09T15:00:44.287 INFO:tasks.workunit.client.1.vm09.stdout:2/337: readlink ld 0
2026-03-09T15:00:44.289 INFO:tasks.workunit.client.1.vm09.stdout:2/338: dread - df/d20/f3f zero size
2026-03-09T15:00:44.291 INFO:tasks.workunit.client.1.vm09.stdout:2/339: write df/f5b [485466,76040] 0
2026-03-09T15:00:44.294 INFO:tasks.workunit.client.1.vm09.stdout:1/266: rmdir d8/d22 39
2026-03-09T15:00:44.303 INFO:tasks.workunit.client.1.vm09.stdout:5/295: creat d2/d37/d3c/d36/d45/f6e x:0 0 0
2026-03-09T15:00:44.303 INFO:tasks.workunit.client.1.vm09.stdout:8/295: dread df/f2a [0,4194304] 0
2026-03-09T15:00:44.303 INFO:tasks.workunit.client.1.vm09.stdout:8/296: rename df/d1c/d54 to df/d24/d56 0
2026-03-09T15:00:44.303 INFO:tasks.workunit.client.1.vm09.stdout:1/267: mkdir d8/d22/d56 0
2026-03-09T15:00:44.308 INFO:tasks.workunit.client.1.vm09.stdout:8/297: dwrite fe [0,4194304] 0
2026-03-09T15:00:44.323 INFO:tasks.workunit.client.1.vm09.stdout:8/298: creat df/d2d/f57 x:0 0 0
2026-03-09T15:00:44.327 INFO:tasks.workunit.client.1.vm09.stdout:8/299: dwrite df/f34 [0,4194304] 0
2026-03-09T15:00:44.337 INFO:tasks.workunit.client.1.vm09.stdout:0/357: write da/dc/d10/f2d [724018,87362] 0
2026-03-09T15:00:44.339 INFO:tasks.workunit.client.1.vm09.stdout:4/298: write db/d19/d35/f43 [4668968,112327] 0
2026-03-09T15:00:44.339 INFO:tasks.workunit.client.1.vm09.stdout:9/286: rmdir d1 39
2026-03-09T15:00:44.342 INFO:tasks.workunit.client.1.vm09.stdout:4/299: chown db/d19/d35/f4e 40 1
2026-03-09T15:00:44.343 INFO:tasks.workunit.client.1.vm09.stdout:1/268: rename d8/f19 to d8/f57 0
2026-03-09T15:00:44.344 INFO:tasks.workunit.client.1.vm09.stdout:8/300: creat df/d38/f58 x:0 0 0
2026-03-09T15:00:44.344 INFO:tasks.workunit.client.1.vm09.stdout:8/301: write f8 [534692,40350] 0
2026-03-09T15:00:44.346 INFO:tasks.workunit.client.1.vm09.stdout:4/300: stat db/d19/l61 0
2026-03-09T15:00:44.348 INFO:tasks.workunit.client.1.vm09.stdout:0/358: link da/dc/d1c/d3c/d44/f68 da/dc/d22/f73 0
2026-03-09T15:00:44.352 INFO:tasks.workunit.client.1.vm09.stdout:8/302: fsync df/d1f/f35 0
2026-03-09T15:00:44.356 INFO:tasks.workunit.client.1.vm09.stdout:9/287: dwrite d1/d7/d1e/d2b/d2e/f19 [0,4194304] 0
2026-03-09T15:00:44.360 INFO:tasks.workunit.client.1.vm09.stdout:0/359: mknod da/dc/d61/c74 0
2026-03-09T15:00:44.361 INFO:tasks.workunit.client.1.vm09.stdout:1/269: dwrite d8/ff [0,4194304] 0
2026-03-09T15:00:44.361 INFO:tasks.workunit.client.1.vm09.stdout:4/301: creat db/d12/d16/f63 x:0 0 0
2026-03-09T15:00:44.363 INFO:tasks.workunit.client.1.vm09.stdout:6/321: write d6/db/f42 [1088449,89541] 0
2026-03-09T15:00:44.364 INFO:tasks.workunit.client.1.vm09.stdout:6/322: chown d6/l22 0 1
2026-03-09T15:00:44.365 INFO:tasks.workunit.client.1.vm09.stdout:8/303: dwrite df/d2d/f57 [0,4194304] 0
2026-03-09T15:00:44.366 INFO:tasks.workunit.client.1.vm09.stdout:4/302: symlink db/d12/d16/l64 0
2026-03-09T15:00:44.367 INFO:tasks.workunit.client.1.vm09.stdout:0/360: write da/dc/f28 [981739,75703] 0
2026-03-09T15:00:44.374 INFO:tasks.workunit.client.1.vm09.stdout:4/303: write db/d19/d35/f4e [258488,130813] 0
2026-03-09T15:00:44.386 INFO:tasks.workunit.client.1.vm09.stdout:1/270: symlink d8/d10/d24/d45/l58 0
2026-03-09T15:00:44.386 INFO:tasks.workunit.client.1.vm09.stdout:8/304: creat df/d24/d37/f59 x:0 0 0
2026-03-09T15:00:44.388 INFO:tasks.workunit.client.1.vm09.stdout:0/361: symlink da/dc/d1c/d46/d63/l75 0
2026-03-09T15:00:44.389 INFO:tasks.workunit.client.1.vm09.stdout:0/362: stat da/dc/d1c/d3c/d44/c72 0
2026-03-09T15:00:44.391 INFO:tasks.workunit.client.1.vm09.stdout:4/304: unlink f4 0
2026-03-09T15:00:44.392 INFO:tasks.workunit.client.1.vm09.stdout:4/305: fdatasync db/d19/d32/d3b/f49 0
2026-03-09T15:00:44.397 INFO:tasks.workunit.client.1.vm09.stdout:4/306: creat db/d19/d32/f65 x:0 0 0
2026-03-09T15:00:44.397 INFO:tasks.workunit.client.1.vm09.stdout:8/305: dread df/d1c/d1d/f2b [0,4194304] 0
2026-03-09T15:00:44.400 INFO:tasks.workunit.client.1.vm09.stdout:8/306: write df/d1c/d1d/f44 [6206,73680] 0
2026-03-09T15:00:44.401 INFO:tasks.workunit.client.1.vm09.stdout:4/307: creat db/d19/d35/d5f/f66 x:0 0 0
2026-03-09T15:00:44.403 INFO:tasks.workunit.client.1.vm09.stdout:8/307: chown df/d1f/f31 834 1
2026-03-09T15:00:44.408 INFO:tasks.workunit.client.1.vm09.stdout:6/323: dread d6/db/f1f [4194304,4194304] 0
2026-03-09T15:00:44.411 INFO:tasks.workunit.client.1.vm09.stdout:4/308: link db/d12/d16/f36 db/d19/d32/f67 0
2026-03-09T15:00:44.411 INFO:tasks.workunit.client.1.vm09.stdout:4/309: read - db/d19/d32/f65 zero size
2026-03-09T15:00:44.411 INFO:tasks.workunit.client.1.vm09.stdout:8/308: dread df/f26 [0,4194304] 0
2026-03-09T15:00:44.412 INFO:tasks.workunit.client.1.vm09.stdout:6/324: unlink d6/l22 0
2026-03-09T15:00:44.413 INFO:tasks.workunit.client.1.vm09.stdout:6/325: write d6/d20/d2a/d3d/d46/f64 [723164,59423] 0
2026-03-09T15:00:44.416 INFO:tasks.workunit.client.1.vm09.stdout:0/363: dread f7 [4194304,4194304] 0
2026-03-09T15:00:44.417 INFO:tasks.workunit.client.1.vm09.stdout:1/271: sync
2026-03-09T15:00:44.418 INFO:tasks.workunit.client.1.vm09.stdout:6/326: mkdir d6/d20/d38/d56/d65/d68 0
2026-03-09T15:00:44.421 INFO:tasks.workunit.client.1.vm09.stdout:0/364: getdents da/dc/d1c 0
2026-03-09T15:00:44.421 INFO:tasks.workunit.client.1.vm09.stdout:1/272: creat d8/f59 x:0 0 0
2026-03-09T15:00:44.422 INFO:tasks.workunit.client.1.vm09.stdout:6/327: dread d6/d20/d38/d4e/f5a [0,4194304] 0
2026-03-09T15:00:44.423 INFO:tasks.workunit.client.1.vm09.stdout:0/365: mkdir da/dc/d22/d76 0
2026-03-09T15:00:44.431 INFO:tasks.workunit.client.1.vm09.stdout:3/302: dwrite d3/d3a/d54/f58 [4194304,4194304] 0
2026-03-09T15:00:44.439 INFO:tasks.workunit.client.1.vm09.stdout:0/366: dwrite da/d57/f60 [0,4194304] 0
2026-03-09T15:00:44.445 INFO:tasks.workunit.client.1.vm09.stdout:0/367: rmdir da/dc 39
2026-03-09T15:00:44.445 INFO:tasks.workunit.client.1.vm09.stdout:3/303: creat d3/d3a/d2b/d39/f70 x:0 0 0
2026-03-09T15:00:44.446 INFO:tasks.workunit.client.1.vm09.stdout:1/273: dwrite d8/f17 [0,4194304] 0
2026-03-09T15:00:44.447 INFO:tasks.workunit.client.1.vm09.stdout:3/304: write d3/f9 [1172629,58920] 0
2026-03-09T15:00:44.465 INFO:tasks.workunit.client.1.vm09.stdout:0/368: dwrite da/dc/d22/f53 [0,4194304] 0
2026-03-09T15:00:44.495 INFO:tasks.workunit.client.1.vm09.stdout:0/369: mknod da/dc/d1c/d46/c77 0
2026-03-09T15:00:44.504 INFO:tasks.workunit.client.1.vm09.stdout:0/370: mkdir da/dc/d1c/d3c/d78 0
2026-03-09T15:00:44.505 INFO:tasks.workunit.client.1.vm09.stdout:0/371: chown da/dc/d1c/d3c/d44/d6c/l70 3090 1
2026-03-09T15:00:44.506 INFO:tasks.workunit.client.1.vm09.stdout:1/274: getdents d8/d10/d24/d48 0
2026-03-09T15:00:44.508 INFO:tasks.workunit.client.1.vm09.stdout:1/275: dread d8/d10/f44 [0,4194304] 0
2026-03-09T15:00:44.509 INFO:tasks.workunit.client.1.vm09.stdout:1/276: write d8/d10/f2f [1985290,50134] 0
2026-03-09T15:00:44.509 INFO:tasks.workunit.client.1.vm09.stdout:0/372: symlink da/d30/l79 0
2026-03-09T15:00:44.510 INFO:tasks.workunit.client.1.vm09.stdout:0/373: write da/dc/f28 [4131076,123367] 0
2026-03-09T15:00:44.511 INFO:tasks.workunit.client.1.vm09.stdout:0/374: write da/dc/d1c/d3c/d44/f71 [2901209,4659] 0
2026-03-09T15:00:44.514 INFO:tasks.workunit.client.1.vm09.stdout:0/375: mkdir da/dc/d1c/d3c/d78/d7a 0
2026-03-09T15:00:44.515 INFO:tasks.workunit.client.1.vm09.stdout:0/376: fdatasync da/dc/d1c/d3c/f4f 0
2026-03-09T15:00:44.516 INFO:tasks.workunit.client.1.vm09.stdout:0/377: mkdir da/dc/d1c/d3c/d44/d6c/d7b 0
2026-03-09T15:00:44.517 INFO:tasks.workunit.client.1.vm09.stdout:0/378: truncate da/d30/f38 4389953 0
2026-03-09T15:00:44.517 INFO:tasks.workunit.client.1.vm09.stdout:0/379: fdatasync da/d57/f60 0
2026-03-09T15:00:44.517 INFO:tasks.workunit.client.1.vm09.stdout:0/380: dread - da/d30/f6f zero size
2026-03-09T15:00:44.518 INFO:tasks.workunit.client.1.vm09.stdout:0/381: dread - da/dc/d1c/d3c/d44/f68 zero size
2026-03-09T15:00:44.521 INFO:tasks.workunit.client.1.vm09.stdout:0/382: write da/dc/d1c/d3c/d44/f68 [609570,58394] 0
2026-03-09T15:00:44.552 INFO:tasks.workunit.client.1.vm09.stdout:1/277: sync
2026-03-09T15:00:44.557 INFO:tasks.workunit.client.1.vm09.stdout:1/278: read d8/d10/f44 [82261,62725] 0
2026-03-09T15:00:44.565 INFO:tasks.workunit.client.1.vm09.stdout:1/279: write d8/d22/f4c [2364836,84257] 0
2026-03-09T15:00:44.566 INFO:tasks.workunit.client.1.vm09.stdout:1/280: dread - d8/d1b/f37 zero size
2026-03-09T15:00:44.570 INFO:tasks.workunit.client.1.vm09.stdout:1/281: dwrite d8/d10/d24/f2a [0,4194304] 0
2026-03-09T15:00:44.582 INFO:tasks.workunit.client.1.vm09.stdout:1/282: chown d8/l9 1153 1
2026-03-09T15:00:44.583 INFO:tasks.workunit.client.1.vm09.stdout:1/283: write d8/d10/f29 [4661327,109179] 0
2026-03-09T15:00:44.589 INFO:tasks.workunit.client.1.vm09.stdout:5/296: rmdir d2/d37/d3c/d36/d45 39
2026-03-09T15:00:44.592 INFO:tasks.workunit.client.1.vm09.stdout:2/340: write f5 [1445952,62097] 0
2026-03-09T15:00:44.595 INFO:tasks.workunit.client.1.vm09.stdout:5/297: mknod d2/d37/c6f 0
2026-03-09T15:00:44.613 INFO:tasks.workunit.client.1.vm09.stdout:2/341: dwrite df/d58/f65 [0,4194304] 0
2026-03-09T15:00:44.613 INFO:tasks.workunit.client.1.vm09.stdout:2/342: fsync df/d3b/f46 0
2026-03-09T15:00:44.617 INFO:tasks.workunit.client.1.vm09.stdout:9/288: dwrite d1/d7/d1e/d2b/f3f [4194304,4194304] 0
2026-03-09T15:00:44.618 INFO:tasks.workunit.client.1.vm09.stdout:8/309: rmdir df 39
2026-03-09T15:00:44.623 INFO:tasks.workunit.client.1.vm09.stdout:9/289: chown d1/d7/cf 2 1
2026-03-09T15:00:44.632 INFO:tasks.workunit.client.1.vm09.stdout:8/310: fdatasync df/d1c/d1d/f44 0
2026-03-09T15:00:44.637 INFO:tasks.workunit.client.1.vm09.stdout:6/328: getdents d6 0
2026-03-09T15:00:44.641 INFO:tasks.workunit.client.1.vm09.stdout:9/290: dwrite d1/f24 [4194304,4194304] 0
2026-03-09T15:00:44.641 INFO:tasks.workunit.client.1.vm09.stdout:9/291: dread - d1/d4f/f64 zero size
2026-03-09T15:00:44.646 INFO:tasks.workunit.client.1.vm09.stdout:4/310: dwrite db/f29 [0,4194304] 0
2026-03-09T15:00:44.648 INFO:tasks.workunit.client.1.vm09.stdout:6/329: creat d6/d20/d38/d56/d65/d68/f69 x:0 0 0
2026-03-09T15:00:44.648 INFO:tasks.workunit.client.1.vm09.stdout:9/292: dread - d1/d7/f45 zero size
2026-03-09T15:00:44.648 INFO:tasks.workunit.client.1.vm09.stdout:4/311: write db/d19/d32/f65 [1044484,45467] 0
2026-03-09T15:00:44.652 INFO:tasks.workunit.client.1.vm09.stdout:8/311: dwrite df/d1f/f31 [4194304,4194304] 0
2026-03-09T15:00:44.660 INFO:tasks.workunit.client.1.vm09.stdout:6/330: dread d6/d20/f59 [0,4194304] 0
2026-03-09T15:00:44.670 INFO:tasks.workunit.client.1.vm09.stdout:4/312: truncate db/d12/f2b 550391 0
2026-03-09T15:00:44.671 INFO:tasks.workunit.client.1.vm09.stdout:4/313: stat db/fe 0
2026-03-09T15:00:44.674 INFO:tasks.workunit.client.1.vm09.stdout:9/293: dread d1/d7/d1e/d2b/d2e/f1d [0,4194304] 0
2026-03-09T15:00:44.676 INFO:tasks.workunit.client.1.vm09.stdout:9/294: write d1/d7/d1e/d2b/f3f [3617047,48700] 0
2026-03-09T15:00:44.677 INFO:tasks.workunit.client.1.vm09.stdout:2/343: dread df/f13 [0,4194304] 0
2026-03-09T15:00:44.677 INFO:tasks.workunit.client.1.vm09.stdout:9/295: read - d1/d7/d1e/d2b/f5f zero size
2026-03-09T15:00:44.678 INFO:tasks.workunit.client.1.vm09.stdout:8/312: dwrite df/d1c/d1d/f2b [4194304,4194304] 0
2026-03-09T15:00:44.690 INFO:tasks.workunit.client.1.vm09.stdout:2/344: unlink df/d1f/c2c 0
2026-03-09T15:00:44.690 INFO:tasks.workunit.client.1.vm09.stdout:4/314: dwrite db/d19/f38 [0,4194304] 0
2026-03-09T15:00:44.691 INFO:tasks.workunit.client.1.vm09.stdout:9/296: getdents d1/d4f 0
2026-03-09T15:00:44.700 INFO:tasks.workunit.client.1.vm09.stdout:8/313: dwrite df/d38/f52 [0,4194304] 0
2026-03-09T15:00:44.700 INFO:tasks.workunit.client.1.vm09.stdout:4/315: dwrite db/f21 [0,4194304] 0
2026-03-09T15:00:44.701 INFO:tasks.workunit.client.1.vm09.stdout:8/314: truncate df/d24/d37/f59 744220 0
2026-03-09T15:00:44.701 INFO:tasks.workunit.client.1.vm09.stdout:4/316: write db/d12/d16/f46 [159676,77027] 0
2026-03-09T15:00:44.702 INFO:tasks.workunit.client.1.vm09.stdout:8/315: chown df/l19 0 1
2026-03-09T15:00:44.706 INFO:tasks.workunit.client.1.vm09.stdout:2/345: sync
2026-03-09T15:00:44.707 INFO:tasks.workunit.client.1.vm09.stdout:8/316: chown df/d24/d3f/f50 604 1
2026-03-09T15:00:44.709 INFO:tasks.workunit.client.1.vm09.stdout:8/317: symlink df/d24/d37/l5a 0
2026-03-09T15:00:44.718 INFO:tasks.workunit.client.1.vm09.stdout:8/318: read df/d1f/f40 [345827,121098] 0
2026-03-09T15:00:44.719 INFO:tasks.workunit.client.1.vm09.stdout:8/319: stat df/d1f 0
2026-03-09T15:00:44.719 INFO:tasks.workunit.client.1.vm09.stdout:4/317: dwrite db/d12/f5a [0,4194304] 0
2026-03-09T15:00:44.730 INFO:tasks.workunit.client.1.vm09.stdout:4/318: write db/fe [1009158,67538] 0
2026-03-09T15:00:44.737 INFO:tasks.workunit.client.1.vm09.stdout:8/320: dwrite df/f51 [0,4194304] 0
2026-03-09T15:00:44.737 INFO:tasks.workunit.client.1.vm09.stdout:4/319: getdents db/d19/d23/d44 0
2026-03-09T15:00:44.741 INFO:tasks.workunit.client.1.vm09.stdout:8/321: getdents df/d2d/d46 0
2026-03-09T15:00:44.779 INFO:tasks.workunit.client.1.vm09.stdout:9/297: fsync d1/f24 0
2026-03-09T15:00:44.780 INFO:tasks.workunit.client.1.vm09.stdout:9/298: write d1/d7/d1e/d2b/f5f [728980,42583] 0
2026-03-09T15:00:44.844 INFO:tasks.workunit.client.1.vm09.stdout:4/320: write db/d12/d16/f54 [3959,22151] 0
2026-03-09T15:00:44.944 INFO:tasks.workunit.client.1.vm09.stdout:6/331: mkdir d6/df/d6a 0
2026-03-09T15:00:44.944 INFO:tasks.workunit.client.1.vm09.stdout:6/332: truncate d6/db/d10/f19 914024 0
2026-03-09T15:00:44.954 INFO:tasks.workunit.client.1.vm09.stdout:6/333: dread d6/f17 [0,4194304] 0
2026-03-09T15:00:44.958 INFO:tasks.workunit.client.1.vm09.stdout:6/334: dread d6/df/d23/f29 [0,4194304] 0
2026-03-09T15:00:44.963 INFO:tasks.workunit.client.1.vm09.stdout:7/329: unlink d3/d28/l63 0
2026-03-09T15:00:44.966 INFO:tasks.workunit.client.1.vm09.stdout:6/335: dread d6/d20/f52 [0,4194304] 0
2026-03-09T15:00:44.968 INFO:tasks.workunit.client.1.vm09.stdout:6/336: dread d6/df/d23/f2f [0,4194304] 0
2026-03-09T15:00:44.973 INFO:tasks.workunit.client.1.vm09.stdout:7/330: mkdir d3/d1d/d65 0
2026-03-09T15:00:44.978 INFO:tasks.workunit.client.1.vm09.stdout:6/337: mknod d6/db/d10/d4f/c6b 0
2026-03-09T15:00:44.978 INFO:tasks.workunit.client.1.vm09.stdout:6/338: chown d6/d20/d44/d45 213774566 1
2026-03-09T15:00:44.982 INFO:tasks.workunit.client.1.vm09.stdout:6/339: fdatasync d6/f39 0
2026-03-09T15:00:44.983 INFO:tasks.workunit.client.1.vm09.stdout:6/340: write d6/db/d10/f2c [2880618,45469] 0
2026-03-09T15:00:44.985 INFO:tasks.workunit.client.1.vm09.stdout:6/341: stat d6/d20/d38/d4e/d55 0
2026-03-09T15:00:44.995 INFO:tasks.workunit.client.1.vm09.stdout:6/342: getdents d6/d20/d38/d56 0
2026-03-09T15:00:44.997 INFO:tasks.workunit.client.1.vm09.stdout:6/343: creat d6/d20/d24/f6c x:0 0 0
2026-03-09T15:00:44.998 INFO:tasks.workunit.client.1.vm09.stdout:6/344: write d6/df/d23/f2f [3974196,106048] 0
2026-03-09T15:00:45.001 INFO:tasks.workunit.client.1.vm09.stdout:6/345: creat d6/df/d23/f6d x:0 0 0
2026-03-09T15:00:45.053 INFO:tasks.workunit.client.1.vm09.stdout:5/298: dwrite d2/d4/f1f [0,4194304] 0
2026-03-09T15:00:45.068 INFO:tasks.workunit.client.1.vm09.stdout:5/299: sync
2026-03-09T15:00:45.088 INFO:tasks.workunit.client.1.vm09.stdout:4/321: dread db/d12/f2b [0,4194304] 0
2026-03-09T15:00:45.090 INFO:tasks.workunit.client.1.vm09.stdout:5/300: dread d2/f15 [0,4194304] 0
2026-03-09T15:00:45.093 INFO:tasks.workunit.client.1.vm09.stdout:5/301: symlink d2/l70 0
2026-03-09T15:00:45.095 INFO:tasks.workunit.client.1.vm09.stdout:4/322: dwrite db/d19/d35/f4e [0,4194304] 0
2026-03-09T15:00:45.103 INFO:tasks.workunit.client.1.vm09.stdout:2/346: truncate df/d3b/f4e 1464980 0
2026-03-09T15:00:45.118 INFO:tasks.workunit.client.1.vm09.stdout:3/305: rename d3/d3a/d2b/c4d to d3/d3a/d2b/d53/c71 0
2026-03-09T15:00:45.123 INFO:tasks.workunit.client.1.vm09.stdout:0/383: rename da/dc/d1c/d3c/d44/f68 to da/dc/d22/f7c 0
2026-03-09T15:00:45.124 INFO:tasks.workunit.client.1.vm09.stdout:0/384: dread - da/dc/d1c/f6d zero size
2026-03-09T15:00:45.127 INFO:tasks.workunit.client.1.vm09.stdout:0/385: creat da/dc/d1c/d46/f7d x:0 0 0
2026-03-09T15:00:45.129 INFO:tasks.workunit.client.1.vm09.stdout:1/284: rename d8/d22/c38 to d8/d22/d56/c5a 0
2026-03-09T15:00:45.131 INFO:tasks.workunit.client.1.vm09.stdout:1/285: mkdir d8/d50/d5b 0
2026-03-09T15:00:45.132 INFO:tasks.workunit.client.1.vm09.stdout:0/386: dwrite da/dc/d1c/d46/f52 [0,4194304] 0
2026-03-09T15:00:45.133 INFO:tasks.workunit.client.1.vm09.stdout:1/286: write d8/d10/f1a [4903919,114027] 0
2026-03-09T15:00:45.133 INFO:tasks.workunit.client.1.vm09.stdout:1/287: chown d8/d22/f4c 1553095 1
2026-03-09T15:00:45.136 INFO:tasks.workunit.client.1.vm09.stdout:1/288: dwrite d8/f59 [0,4194304] 0
2026-03-09T15:00:45.147 INFO:tasks.workunit.client.1.vm09.stdout:0/387: read da/dc/d1c/d46/d5b/f6a [890231,97156] 0
2026-03-09T15:00:45.153 INFO:tasks.workunit.client.1.vm09.stdout:9/299: rename d1/d7/d1e/d2b/d2e/f2d to d1/d7/f67 0
2026-03-09T15:00:45.156 INFO:tasks.workunit.client.1.vm09.stdout:7/331: rename d3/d3d/f4b to d3/db/d46/f66 0
2026-03-09T15:00:45.157 INFO:tasks.workunit.client.1.vm09.stdout:7/332: chown d3/db/d15/l4a 1728 1
2026-03-09T15:00:45.162 INFO:tasks.workunit.client.1.vm09.stdout:5/302: unlink d2/d4/c7 0
2026-03-09T15:00:45.163 INFO:tasks.workunit.client.1.vm09.stdout:1/289: creat d8/d10/f5c x:0 0 0
2026-03-09T15:00:45.164 INFO:tasks.workunit.client.1.vm09.stdout:0/388: link da/l15 da/dc/d22/l7e 0
2026-03-09T15:00:45.165 INFO:tasks.workunit.client.1.vm09.stdout:7/333: unlink d3/f5d 0
2026-03-09T15:00:45.168 INFO:tasks.workunit.client.1.vm09.stdout:0/389: unlink da/d30/f38 0
2026-03-09T15:00:45.168 INFO:tasks.workunit.client.1.vm09.stdout:7/334: chown d3/f26 67673749 1
2026-03-09T15:00:45.169 INFO:tasks.workunit.client.1.vm09.stdout:5/303: mknod d2/d37/d53/c71 0
2026-03-09T15:00:45.196 INFO:tasks.workunit.client.1.vm09.stdout:5/304: creat d2/d37/f72 x:0 0 0
2026-03-09T15:00:45.201 INFO:tasks.workunit.client.1.vm09.stdout:7/335: getdents d3/db/d25 0
2026-03-09T15:00:45.208 INFO:tasks.workunit.client.1.vm09.stdout:6/346: dwrite d6/f39 [0,4194304] 0
2026-03-09T15:00:45.212 INFO:tasks.workunit.client.1.vm09.stdout:5/305: creat d2/d4/f73 x:0 0 0
2026-03-09T15:00:45.220 INFO:tasks.workunit.client.1.vm09.stdout:6/347: dread d6/d20/d44/f4a [0,4194304] 0
2026-03-09T15:00:45.223 INFO:tasks.workunit.client.1.vm09.stdout:6/348: creat d6/d20/f6e x:0 0 0
2026-03-09T15:00:45.224 INFO:tasks.workunit.client.1.vm09.stdout:6/349: write d6/d20/d2a/f37 [1639970,113687] 0
2026-03-09T15:00:45.226 INFO:tasks.workunit.client.1.vm09.stdout:6/350: read d6/df/d23/f29 [931179,14221] 0
2026-03-09T15:00:45.227 INFO:tasks.workunit.client.1.vm09.stdout:8/322: rename df/d1f to df/d5b 0
2026-03-09T15:00:45.228 INFO:tasks.workunit.client.1.vm09.stdout:4/323: unlink db/c5c 0
2026-03-09T15:00:45.229 INFO:tasks.workunit.client.1.vm09.stdout:8/323: mkdir df/d5c 0
2026-03-09T15:00:45.230 INFO:tasks.workunit.client.1.vm09.stdout:6/351: dwrite d6/d20/f6e [0,4194304] 0
2026-03-09T15:00:45.233 INFO:tasks.workunit.client.1.vm09.stdout:6/352: read d6/d20/d2a/d3d/f43 [284808,63251] 0
2026-03-09T15:00:45.245 INFO:tasks.workunit.client.1.vm09.stdout:4/324: fdatasync db/d19/d23/d44/f45 0
2026-03-09T15:00:45.248 INFO:tasks.workunit.client.1.vm09.stdout:2/347: rename df/d3b to df/d58/d67 0
2026-03-09T15:00:45.257 INFO:tasks.workunit.client.1.vm09.stdout:8/324: write df/f2a [687434,28832] 0
2026-03-09T15:00:45.257 INFO:tasks.workunit.client.1.vm09.stdout:4/325: dwrite db/d19/f38 [0,4194304] 0
2026-03-09T15:00:45.257 INFO:tasks.workunit.client.1.vm09.stdout:6/353: mkdir d6/d20/d38/d56/d65/d68/d6f 0
2026-03-09T15:00:45.257 INFO:tasks.workunit.client.1.vm09.stdout:4/326: dread db/d19/d23/d44/f45 [0,4194304] 0
2026-03-09T15:00:45.257 INFO:tasks.workunit.client.1.vm09.stdout:4/327: dwrite db/fe [0,4194304] 0
2026-03-09T15:00:45.264 INFO:tasks.workunit.client.1.vm09.stdout:8/325: symlink df/d1c/d1d/l5d 0
2026-03-09T15:00:45.264 INFO:tasks.workunit.client.1.vm09.stdout:2/348: dwrite f3 [0,4194304] 0
2026-03-09T15:00:45.269 INFO:tasks.workunit.client.1.vm09.stdout:3/306: dwrite d3/d3a/d2b/d39/f3c [0,4194304] 0
2026-03-09T15:00:45.275 INFO:tasks.workunit.client.1.vm09.stdout:8/326: rmdir df/d2d 39
2026-03-09T15:00:45.276 INFO:tasks.workunit.client.1.vm09.stdout:8/327: chown df/d24/d37/f48 870 1
2026-03-09T15:00:45.280 INFO:tasks.workunit.client.1.vm09.stdout:2/349: rmdir df/d20/d29/d53/d5f 39
2026-03-09T15:00:45.284 INFO:tasks.workunit.client.1.vm09.stdout:3/307: creat d3/d3a/d2b/f72 x:0 0 0
2026-03-09T15:00:45.287 INFO:tasks.workunit.client.1.vm09.stdout:4/328: dread db/f14 [0,4194304] 0
2026-03-09T15:00:45.291 INFO:tasks.workunit.client.1.vm09.stdout:3/308: mkdir d3/d3a/d73 0
2026-03-09T15:00:45.292 INFO:tasks.workunit.client.1.vm09.stdout:3/309: write d3/d3a/d2b/d39/f3c [176680,107954] 0
2026-03-09T15:00:45.293 INFO:tasks.workunit.client.1.vm09.stdout:3/310: read - d3/d3a/f6b zero size
2026-03-09T15:00:45.295 INFO:tasks.workunit.client.1.vm09.stdout:2/350: truncate df/d20/d29/f51 1081136 0
2026-03-09T15:00:45.301 INFO:tasks.workunit.client.1.vm09.stdout:2/351: stat le 0
2026-03-09T15:00:45.301 INFO:tasks.workunit.client.1.vm09.stdout:2/352: truncate df/d20/d2e/f64 666599 0
2026-03-09T15:00:45.301 INFO:tasks.workunit.client.1.vm09.stdout:2/353: chown c6 187 1
2026-03-09T15:00:45.301 INFO:tasks.workunit.client.1.vm09.stdout:9/300: dwrite d1/d7/d1e/f20 [0,4194304] 0
2026-03-09T15:00:45.303 INFO:tasks.workunit.client.1.vm09.stdout:3/311: read d3/d3a/d2b/d31/f3f [2310646,68921] 0
2026-03-09T15:00:45.304 INFO:tasks.workunit.client.1.vm09.stdout:3/312: dread - d3/d3a/f6b zero size
2026-03-09T15:00:45.304 INFO:tasks.workunit.client.1.vm09.stdout:1/290: dwrite d8/d10/f13 [0,4194304] 0
2026-03-09T15:00:45.310 INFO:tasks.workunit.client.1.vm09.stdout:0/390: dwrite da/fb [0,4194304] 0
2026-03-09T15:00:45.315 INFO:tasks.workunit.client.1.vm09.stdout:5/306: truncate d2/d37/d3c/d55/f58 150218 0
2026-03-09T15:00:45.319 INFO:tasks.workunit.client.1.vm09.stdout:2/354: dwrite df/d1f/d47/d5d/f5e [0,4194304] 0
2026-03-09T15:00:45.319 INFO:tasks.workunit.client.1.vm09.stdout:3/313: mkdir d3/d74 0
2026-03-09T15:00:45.320 INFO:tasks.workunit.client.1.vm09.stdout:5/307: mkdir d2/d37/d3c/d36/d4c/d51/d74 0
2026-03-09T15:00:45.321 INFO:tasks.workunit.client.1.vm09.stdout:5/308: chown d2/f3d 25726 1
2026-03-09T15:00:45.321 INFO:tasks.workunit.client.1.vm09.stdout:2/355: mknod df/d1f/d47/c68 0
2026-03-09T15:00:45.337 INFO:tasks.workunit.client.1.vm09.stdout:2/356: symlink df/d20/d2e/l69 0
2026-03-09T15:00:45.337 INFO:tasks.workunit.client.1.vm09.stdout:5/309: creat d2/d37/f75 x:0 0 0
2026-03-09T15:00:45.342 INFO:tasks.workunit.client.1.vm09.stdout:2/357: creat df/d20/f6a x:0 0 0
2026-03-09T15:00:45.343 INFO:tasks.workunit.client.1.vm09.stdout:2/358: symlink df/l6b 0
2026-03-09T15:00:45.343 INFO:tasks.workunit.client.1.vm09.stdout:8/328: sync
2026-03-09T15:00:45.344 INFO:tasks.workunit.client.1.vm09.stdout:2/359: creat df/d1f/d47/d5d/f6c x:0 0 0
2026-03-09T15:00:45.345 INFO:tasks.workunit.client.1.vm09.stdout:8/329: unlink df/f2a 0
2026-03-09T15:00:45.345 INFO:tasks.workunit.client.1.vm09.stdout:8/330: fdatasync df/f12 0
2026-03-09T15:00:45.346 INFO:tasks.workunit.client.1.vm09.stdout:8/331: mkdir df/d24/d5e 0
2026-03-09T15:00:45.353 INFO:tasks.workunit.client.1.vm09.stdout:2/360: mkdir df/d1f/d6d 0
2026-03-09T15:00:45.353 INFO:tasks.workunit.client.1.vm09.stdout:8/332: truncate df/d38/f53 403121 0
2026-03-09T15:00:45.353 INFO:tasks.workunit.client.1.vm09.stdout:8/333: write df/d2d/f2f [1994874,95616] 0
2026-03-09T15:00:45.353 INFO:tasks.workunit.client.1.vm09.stdout:8/334: rename df/d24/d5e to df/d24/d3f/d5f 0
2026-03-09T15:00:45.353 INFO:tasks.workunit.client.1.vm09.stdout:8/335: mkdir df/d24/d37/d60 0
2026-03-09T15:00:45.355 INFO:tasks.workunit.client.1.vm09.stdout:3/314: sync
2026-03-09T15:00:45.356 INFO:tasks.workunit.client.1.vm09.stdout:9/301: sync
2026-03-09T15:00:45.360 INFO:tasks.workunit.client.1.vm09.stdout:2/361: dread df/d20/d2e/f48 [0,4194304] 0
2026-03-09T15:00:45.363 INFO:tasks.workunit.client.1.vm09.stdout:2/362: readlink df/d2d/l62 0
2026-03-09T15:00:45.363 INFO:tasks.workunit.client.1.vm09.stdout:9/302: write d1/d7/d1e/d2b/f3f [3785388,99831] 0
2026-03-09T15:00:45.363 INFO:tasks.workunit.client.1.vm09.stdout:5/310: read d2/f4f [74986,61175] 0
2026-03-09T15:00:45.365 INFO:tasks.workunit.client.1.vm09.stdout:8/336: dread df/f30 [0,4194304] 0
2026-03-09T15:00:45.365 INFO:tasks.workunit.client.1.vm09.stdout:8/337: stat df/f12 0
2026-03-09T15:00:45.367 INFO:tasks.workunit.client.1.vm09.stdout:8/338: creat df/d24/f61 x:0 0 0
2026-03-09T15:00:45.369 INFO:tasks.workunit.client.1.vm09.stdout:9/303: dread d1/d7/d1e/f20 [0,4194304] 0
2026-03-09T15:00:45.370 INFO:tasks.workunit.client.1.vm09.stdout:2/363: mkdir df/d6e 0
2026-03-09T15:00:45.370 INFO:tasks.workunit.client.1.vm09.stdout:2/364: chown fb 244241 1
2026-03-09T15:00:45.371 INFO:tasks.workunit.client.1.vm09.stdout:2/365: dread - df/d1f/d47/f56 zero size
2026-03-09T15:00:45.371 INFO:tasks.workunit.client.1.vm09.stdout:8/339: dread df/f51 [0,4194304] 0
2026-03-09T15:00:45.372 INFO:tasks.workunit.client.1.vm09.stdout:8/340: chown df/d5c 27717535 1
2026-03-09T15:00:45.372 INFO:tasks.workunit.client.1.vm09.stdout:8/341: chown df/d1c/l1e 16 1
2026-03-09T15:00:45.373 INFO:tasks.workunit.client.1.vm09.stdout:3/315: rename d3/d3a/d2b/l3d to d3/d3a/d2b/d31/d4a/d62/l75 0
2026-03-09T15:00:45.374 INFO:tasks.workunit.client.1.vm09.stdout:9/304: write d1/d7/f13 [3636928,94006] 0
2026-03-09T15:00:45.376 INFO:tasks.workunit.client.1.vm09.stdout:8/342: creat df/d24/d3f/d5f/f62 x:0 0 0
2026-03-09T15:00:45.386 INFO:tasks.workunit.client.1.vm09.stdout:9/305: chown d1/d7/d1e/d2b/f30 86969522 1
2026-03-09T15:00:45.386 INFO:tasks.workunit.client.1.vm09.stdout:8/343: dread - df/d24/d3f/f50 zero size
2026-03-09T15:00:45.386 INFO:tasks.workunit.client.1.vm09.stdout:8/344: creat df/d24/d37/f63 x:0 0 0
2026-03-09T15:00:45.386 INFO:tasks.workunit.client.1.vm09.stdout:8/345: chown df/d2d/f2f 670515 1
2026-03-09T15:00:45.386 INFO:tasks.workunit.client.1.vm09.stdout:3/316: dread d3/d3a/f1c [0,4194304] 0
2026-03-09T15:00:45.386 INFO:tasks.workunit.client.1.vm09.stdout:8/346: chown df/f30 56888773 1
2026-03-09T15:00:45.388 INFO:tasks.workunit.client.1.vm09.stdout:5/311: sync
2026-03-09T15:00:45.389 INFO:tasks.workunit.client.1.vm09.stdout:5/312: chown d2/d4/l5 0 1
2026-03-09T15:00:45.392 INFO:tasks.workunit.client.1.vm09.stdout:5/313: dread - d2/d37/d3c/d36/d45/f6e zero size
2026-03-09T15:00:45.398 INFO:tasks.workunit.client.1.vm09.stdout:6/354: dwrite d6/f25 [0,4194304] 0
2026-03-09T15:00:45.401 INFO:tasks.workunit.client.1.vm09.stdout:6/355: chown d6/d20/d2a/f5d 25353 1
2026-03-09T15:00:45.401 INFO:tasks.workunit.client.1.vm09.stdout:6/356: readlink d6/l9 0
2026-03-09T15:00:45.431 INFO:tasks.workunit.client.1.vm09.stdout:4/329: dwrite db/d12/f27 [0,4194304] 0
2026-03-09T15:00:45.433 INFO:tasks.workunit.client.1.vm09.stdout:4/330: read db/d12/d16/f46 [224129,54715] 0
2026-03-09T15:00:45.434 INFO:tasks.workunit.client.1.vm09.stdout:4/331: dread - db/d19/d35/d5f/f66 zero size
2026-03-09T15:00:45.439 INFO:tasks.workunit.client.1.vm09.stdout:1/291: dwrite d8/d1b/f21 [0,4194304] 0
2026-03-09T15:00:45.446 INFO:tasks.workunit.client.1.vm09.stdout:0/391: write da/dc/d10/f4a [546453,127809] 0
2026-03-09T15:00:45.453 INFO:tasks.workunit.client.1.vm09.stdout:4/332: chown db/d19/d35/l5e 108 1
2026-03-09T15:00:45.456 INFO:tasks.workunit.client.1.vm09.stdout:7/336: write d3/f9 [5219082,30565] 0
2026-03-09T15:00:45.469 INFO:tasks.workunit.client.1.vm09.stdout:8/347: rename df/d24/d3f to df/d38/d64 0
2026-03-09T15:00:45.472 INFO:tasks.workunit.client.1.vm09.stdout:2/366: write df/f17 [732516,2006] 0
2026-03-09T15:00:45.473 INFO:tasks.workunit.client.1.vm09.stdout:0/392: link da/dc/d1c/d46/d5b/f6b da/dc/d1c/d46/d63/f7f 0
2026-03-09T15:00:45.473 INFO:tasks.workunit.client.1.vm09.stdout:8/348: read df/d24/f32 [2127150,121601] 0
2026-03-09T15:00:45.475 INFO:tasks.workunit.client.1.vm09.stdout:0/393: truncate da/f12 4916548 0
2026-03-09T15:00:45.475 INFO:tasks.workunit.client.1.vm09.stdout:8/349: fdatasync df/d1c/d1d/f41 0
2026-03-09T15:00:45.476 INFO:tasks.workunit.client.1.vm09.stdout:9/306: dwrite d1/d7/d1e/f22 [0,4194304] 0
2026-03-09T15:00:45.493 INFO:tasks.workunit.client.1.vm09.stdout:2/367: dwrite df/d1f/d47/f60 [0,4194304] 0
2026-03-09T15:00:45.496 INFO:tasks.workunit.client.1.vm09.stdout:4/333: link db/d19/c1d db/d19/d52/c68 0
2026-03-09T15:00:45.506 INFO:tasks.workunit.client.1.vm09.stdout:9/307: dread d1/f28 [0,4194304] 0
2026-03-09T15:00:45.509 INFO:tasks.workunit.client.1.vm09.stdout:9/308: chown d1/d7/d1e/d2b/f42 2 1
2026-03-09T15:00:45.510 INFO:tasks.workunit.client.1.vm09.stdout:0/394: dread da/dc/d1c/d3c/d44/f51 [0,4194304] 0
2026-03-09T15:00:45.514 INFO:tasks.workunit.client.1.vm09.stdout:2/368: chown df/d1f/c22 1951242 1
2026-03-09T15:00:45.519 INFO:tasks.workunit.client.1.vm09.stdout:0/395: dwrite da/dc/d1c/d3c/f4f [0,4194304] 0
2026-03-09T15:00:45.519 INFO:tasks.workunit.client.1.vm09.stdout:1/292: truncate d8/f3d 674938 0
2026-03-09T15:00:45.526 INFO:tasks.workunit.client.1.vm09.stdout:1/293: dwrite d8/d22/d40/f52 [0,4194304] 0
2026-03-09T15:00:45.527 INFO:tasks.workunit.client.1.vm09.stdout:2/369: dwrite df/d1f/d47/d5d/f6c [0,4194304] 0
2026-03-09T15:00:45.529 INFO:tasks.workunit.client.1.vm09.stdout:4/334: creat db/d19/d32/d3b/f69 x:0 0 0
2026-03-09T15:00:45.529 INFO:tasks.workunit.client.1.vm09.stdout:3/317: rename d3/d3a/c43 to d3/d3a/d2b/d31/d4a/c76 0
2026-03-09T15:00:45.541 INFO:tasks.workunit.client.1.vm09.stdout:1/294: dread d8/d22/d40/f52 [0,4194304] 0
2026-03-09T15:00:45.549 INFO:tasks.workunit.client.1.vm09.stdout:9/309: symlink d1/d7/l68 0
2026-03-09T15:00:45.550 INFO:tasks.workunit.client.1.vm09.stdout:1/295: write d8/d1b/f37 [1006048,50322] 0
2026-03-09T15:00:45.550 INFO:tasks.workunit.client.1.vm09.stdout:9/310: write d1/d7/d1e/d2b/d40/f57 [917231,77592] 0
2026-03-09T15:00:45.550 INFO:tasks.workunit.client.1.vm09.stdout:7/337: getdents d3/db/d15 0
2026-03-09T15:00:45.550 INFO:tasks.workunit.client.1.vm09.stdout:0/396: mkdir da/d80 0
2026-03-09T15:00:45.551 INFO:tasks.workunit.client.1.vm09.stdout:7/338: fsync d3/db/d25/d5c/f5e 0
2026-03-09T15:00:45.555 INFO:tasks.workunit.client.1.vm09.stdout:1/296: mkdir d8/d1b/d5d 0
2026-03-09T15:00:45.556 INFO:tasks.workunit.client.1.vm09.stdout:9/311: symlink d1/d7/d1e/d2b/d2e/d56/l69 0
2026-03-09T15:00:45.562 INFO:tasks.workunit.client.1.vm09.stdout:4/335: sync
2026-03-09T15:00:45.563 INFO:tasks.workunit.client.1.vm09.stdout:1/297: symlink d8/d10/d24/l5e 0
2026-03-09T15:00:45.564 INFO:tasks.workunit.client.1.vm09.stdout:4/336: truncate db/d19/d32/f3e 1114838 0
2026-03-09T15:00:45.564 INFO:tasks.workunit.client.1.vm09.stdout:9/312: chown d1/d7/c15 66328 1
2026-03-09T15:00:45.564 INFO:tasks.workunit.client.1.vm09.stdout:1/298: chown d8/d10/f29 0 1
2026-03-09T15:00:45.565 INFO:tasks.workunit.client.1.vm09.stdout:7/339: rmdir d3/d3d 39
2026-03-09T15:00:45.567 INFO:tasks.workunit.client.1.vm09.stdout:2/370: getdents df/d20 0
2026-03-09T15:00:45.568 INFO:tasks.workunit.client.1.vm09.stdout:7/340: read d3/db/d15/d5f/f36 [175596,29439] 0
2026-03-09T15:00:45.568 INFO:tasks.workunit.client.1.vm09.stdout:0/397: sync
2026-03-09T15:00:45.570 INFO:tasks.workunit.client.1.vm09.stdout:5/314: rename d2/d37/d3c/d36/d45/d5c/l6b to d2/d37/d3c/d36/l76 0
2026-03-09T15:00:45.571 INFO:tasks.workunit.client.1.vm09.stdout:9/313: write d1/d7/f3e [3558372,79546] 0
2026-03-09T15:00:45.572 INFO:tasks.workunit.client.1.vm09.stdout:1/299: mkdir d8/d10/d24/d45/d5f 0
2026-03-09T15:00:45.574 INFO:tasks.workunit.client.1.vm09.stdout:0/398: write da/f12 [3252614,76607] 0
2026-03-09T15:00:45.579 INFO:tasks.workunit.client.1.vm09.stdout:0/399: write da/dc/d1c/d3c/d44/f67 [4701605,108959] 0
2026-03-09T15:00:45.579 INFO:tasks.workunit.client.1.vm09.stdout:6/357: rename d6/d20/d2a/d3d/d46/f64 to d6/d20/f70 0
2026-03-09T15:00:45.580 INFO:tasks.workunit.client.1.vm09.stdout:4/337: creat db/d19/d52/f6a x:0 0 0
2026-03-09T15:00:45.580 INFO:tasks.workunit.client.1.vm09.stdout:5/315: creat d2/d37/d53/f77 x:0 0 0
2026-03-09T15:00:45.584 INFO:tasks.workunit.client.1.vm09.stdout:6/358: write d6/df/d23/f2f [5232668,99434] 0
2026-03-09T15:00:45.589 INFO:tasks.workunit.client.1.vm09.stdout:0/400: dwrite da/dc/d1c/d3c/d44/f71 [8388608,4194304] 0 2026-03-09T15:00:45.599 INFO:tasks.workunit.client.1.vm09.stdout:1/300: dread d8/d10/f29 [0,4194304] 0 2026-03-09T15:00:45.599 INFO:tasks.workunit.client.1.vm09.stdout:1/301: write d8/f17 [1662044,119303] 0 2026-03-09T15:00:45.611 INFO:tasks.workunit.client.1.vm09.stdout:8/350: rename df/d1c to df/d5b/d65 0 2026-03-09T15:00:45.616 INFO:tasks.workunit.client.1.vm09.stdout:1/302: fsync d8/f57 0 2026-03-09T15:00:45.616 INFO:tasks.workunit.client.1.vm09.stdout:1/303: chown d8/d22/l2b 19457786 1 2026-03-09T15:00:45.624 INFO:tasks.workunit.client.1.vm09.stdout:3/318: write d3/d3a/d2b/d31/d4a/d62/f1b [4651660,42480] 0 2026-03-09T15:00:45.624 INFO:tasks.workunit.client.1.vm09.stdout:3/319: chown d3/d3a/d2b/d31/d4a/d62/f8 617 1 2026-03-09T15:00:45.628 INFO:tasks.workunit.client.1.vm09.stdout:1/304: dread d8/d10/f2f [0,4194304] 0 2026-03-09T15:00:45.628 INFO:tasks.workunit.client.1.vm09.stdout:1/305: readlink d8/d10/d24/d45/l58 0 2026-03-09T15:00:45.636 INFO:tasks.workunit.client.1.vm09.stdout:5/316: symlink d2/d37/l78 0 2026-03-09T15:00:45.643 INFO:tasks.workunit.client.1.vm09.stdout:3/320: creat d3/f77 x:0 0 0 2026-03-09T15:00:45.643 INFO:tasks.workunit.client.1.vm09.stdout:3/321: fsync d3/d3a/d2b/d39/d48/f5f 0 2026-03-09T15:00:45.646 INFO:tasks.workunit.client.1.vm09.stdout:4/338: creat db/d12/f6b x:0 0 0 2026-03-09T15:00:45.647 INFO:tasks.workunit.client.1.vm09.stdout:3/322: dwrite d3/d3a/d2b/d39/d6a/f6f [0,4194304] 0 2026-03-09T15:00:45.656 INFO:tasks.workunit.client.1.vm09.stdout:0/401: creat da/dc/d1c/d3c/f81 x:0 0 0 2026-03-09T15:00:45.657 INFO:tasks.workunit.client.1.vm09.stdout:5/317: dwrite d2/d4/fd [0,4194304] 0 2026-03-09T15:00:45.658 INFO:tasks.workunit.client.1.vm09.stdout:1/306: link d8/d10/f2f d8/d10/d24/d45/d5f/f60 0 2026-03-09T15:00:45.667 INFO:tasks.workunit.client.1.vm09.stdout:1/307: truncate d8/d10/f5c 192663 0 
2026-03-09T15:00:45.668 INFO:tasks.workunit.client.1.vm09.stdout:1/308: fsync d8/d1b/f21 0 2026-03-09T15:00:45.669 INFO:tasks.workunit.client.1.vm09.stdout:2/371: dwrite df/d1f/f38 [0,4194304] 0 2026-03-09T15:00:45.671 INFO:tasks.workunit.client.1.vm09.stdout:2/372: dread - df/d20/f6a zero size 2026-03-09T15:00:45.672 INFO:tasks.workunit.client.1.vm09.stdout:2/373: write df/d20/f24 [3397632,100431] 0 2026-03-09T15:00:45.681 INFO:tasks.workunit.client.1.vm09.stdout:9/314: rename d1/d7/d1e/d2b/f51 to d1/d7/f6a 0 2026-03-09T15:00:45.685 INFO:tasks.workunit.client.1.vm09.stdout:8/351: rmdir df/d5b/d65 39 2026-03-09T15:00:45.689 INFO:tasks.workunit.client.1.vm09.stdout:0/402: dwrite da/dc/d1c/d3c/d44/f51 [0,4194304] 0 2026-03-09T15:00:45.698 INFO:tasks.workunit.client.1.vm09.stdout:5/318: dread d2/d37/f43 [0,4194304] 0 2026-03-09T15:00:45.698 INFO:tasks.workunit.client.1.vm09.stdout:1/309: creat d8/d22/f61 x:0 0 0 2026-03-09T15:00:45.700 INFO:tasks.workunit.client.1.vm09.stdout:5/319: read d2/d37/d3c/f4e [3145790,126480] 0 2026-03-09T15:00:45.700 INFO:tasks.workunit.client.1.vm09.stdout:1/310: read d8/f17 [2398608,105291] 0 2026-03-09T15:00:45.701 INFO:tasks.workunit.client.1.vm09.stdout:5/320: stat d2/d37/d3c/d36/d4c 0 2026-03-09T15:00:45.704 INFO:tasks.workunit.client.1.vm09.stdout:7/341: dwrite d3/d1d/f11 [0,4194304] 0 2026-03-09T15:00:45.719 INFO:tasks.workunit.client.1.vm09.stdout:6/359: dwrite f0 [0,4194304] 0 2026-03-09T15:00:45.720 INFO:tasks.workunit.client.1.vm09.stdout:1/311: creat d8/d10/d24/d45/f62 x:0 0 0 2026-03-09T15:00:45.728 INFO:tasks.workunit.client.1.vm09.stdout:5/321: rename d2/d37/d53/f77 to d2/d37/d53/f79 0 2026-03-09T15:00:45.728 INFO:tasks.workunit.client.1.vm09.stdout:2/374: mknod df/c6f 0 2026-03-09T15:00:45.733 INFO:tasks.workunit.client.1.vm09.stdout:2/375: write df/f17 [1262632,123554] 0 2026-03-09T15:00:45.741 INFO:tasks.workunit.client.1.vm09.stdout:0/403: mknod da/dc/c82 0 2026-03-09T15:00:45.741 
INFO:tasks.workunit.client.1.vm09.stdout:0/404: write da/dc/d10/f4a [866816,37685] 0 2026-03-09T15:00:45.741 INFO:tasks.workunit.client.1.vm09.stdout:6/360: symlink d6/d20/d38/d56/d65/d68/l71 0 2026-03-09T15:00:45.741 INFO:tasks.workunit.client.1.vm09.stdout:7/342: dwrite d3/f32 [0,4194304] 0 2026-03-09T15:00:45.755 INFO:tasks.workunit.client.1.vm09.stdout:6/361: dwrite d6/db/f42 [0,4194304] 0 2026-03-09T15:00:45.756 INFO:tasks.workunit.client.1.vm09.stdout:7/343: symlink d3/db/d46/l67 0 2026-03-09T15:00:45.756 INFO:tasks.workunit.client.1.vm09.stdout:4/339: dwrite db/d12/d16/f46 [0,4194304] 0 2026-03-09T15:00:45.757 INFO:tasks.workunit.client.1.vm09.stdout:6/362: write d6/db/d10/f1c [905466,120640] 0 2026-03-09T15:00:45.757 INFO:tasks.workunit.client.1.vm09.stdout:4/340: readlink db/d19/l61 0 2026-03-09T15:00:45.757 INFO:tasks.workunit.client.1.vm09.stdout:5/322: sync 2026-03-09T15:00:45.762 INFO:tasks.workunit.client.1.vm09.stdout:4/341: readlink db/d19/d23/l2c 0 2026-03-09T15:00:45.767 INFO:tasks.workunit.client.1.vm09.stdout:0/405: creat da/dc/d22/d76/f83 x:0 0 0 2026-03-09T15:00:45.770 INFO:tasks.workunit.client.1.vm09.stdout:6/363: sync 2026-03-09T15:00:45.777 INFO:tasks.workunit.client.1.vm09.stdout:4/342: creat db/d19/d35/f6c x:0 0 0 2026-03-09T15:00:45.779 INFO:tasks.workunit.client.1.vm09.stdout:2/376: dread df/f16 [0,4194304] 0 2026-03-09T15:00:45.784 INFO:tasks.workunit.client.1.vm09.stdout:4/343: rename db/d12/f25 to db/d19/d52/f6d 0 2026-03-09T15:00:45.784 INFO:tasks.workunit.client.1.vm09.stdout:6/364: link d6/db/c2e d6/d20/d38/c72 0 2026-03-09T15:00:45.786 INFO:tasks.workunit.client.1.vm09.stdout:5/323: dread d2/d37/d3c/f3a [0,4194304] 0 2026-03-09T15:00:45.788 INFO:tasks.workunit.client.1.vm09.stdout:4/344: mknod db/d19/d35/c6e 0 2026-03-09T15:00:45.788 INFO:tasks.workunit.client.1.vm09.stdout:6/365: creat d6/d20/d2a/d57/f73 x:0 0 0 2026-03-09T15:00:45.789 INFO:tasks.workunit.client.1.vm09.stdout:5/324: unlink d2/d37/d3c/d36/l76 0 
2026-03-09T15:00:45.791 INFO:tasks.workunit.client.1.vm09.stdout:4/345: mknod db/d19/d32/c6f 0 2026-03-09T15:00:45.792 INFO:tasks.workunit.client.1.vm09.stdout:6/366: rename d6/df/d6a to d6/d74 0 2026-03-09T15:00:45.792 INFO:tasks.workunit.client.1.vm09.stdout:5/325: symlink d2/d37/d3c/d36/d4c/l7a 0 2026-03-09T15:00:45.792 INFO:tasks.workunit.client.1.vm09.stdout:2/377: dread df/d20/d29/f31 [0,4194304] 0 2026-03-09T15:00:45.793 INFO:tasks.workunit.client.1.vm09.stdout:3/323: truncate d3/d3a/d2b/d31/d4a/d62/f1a 2318897 0 2026-03-09T15:00:45.797 INFO:tasks.workunit.client.1.vm09.stdout:9/315: dwrite d1/f29 [0,4194304] 0 2026-03-09T15:00:45.799 INFO:tasks.workunit.client.1.vm09.stdout:2/378: sync 2026-03-09T15:00:45.801 INFO:tasks.workunit.client.1.vm09.stdout:3/324: rename d3/d3a/d2b/d36/f51 to d3/d3a/d2b/d31/d4a/d62/f78 0 2026-03-09T15:00:45.802 INFO:tasks.workunit.client.1.vm09.stdout:6/367: read d6/d20/f59 [2269113,67108] 0 2026-03-09T15:00:45.803 INFO:tasks.workunit.client.1.vm09.stdout:7/344: dread f1 [0,4194304] 0 2026-03-09T15:00:45.805 INFO:tasks.workunit.client.1.vm09.stdout:6/368: stat d6/d20/d24/f67 0 2026-03-09T15:00:45.805 INFO:tasks.workunit.client.1.vm09.stdout:9/316: chown d1/d7/d1e/f22 1 1 2026-03-09T15:00:45.807 INFO:tasks.workunit.client.1.vm09.stdout:2/379: dwrite df/d20/d2e/f4c [4194304,4194304] 0 2026-03-09T15:00:45.817 INFO:tasks.workunit.client.1.vm09.stdout:4/346: rename db/d19/d32/d3b/f58 to db/d19/d35/d5f/f70 0 2026-03-09T15:00:45.817 INFO:tasks.workunit.client.1.vm09.stdout:6/369: creat d6/d20/d38/d4e/f75 x:0 0 0 2026-03-09T15:00:45.820 INFO:tasks.workunit.client.1.vm09.stdout:9/317: rename d1/d7/d1e/l36 to d1/l6b 0 2026-03-09T15:00:45.822 INFO:tasks.workunit.client.1.vm09.stdout:2/380: mknod df/d20/d29/d53/c70 0 2026-03-09T15:00:45.822 INFO:tasks.workunit.client.1.vm09.stdout:2/381: chown df/d20/l2a 201355 1 2026-03-09T15:00:45.824 INFO:tasks.workunit.client.1.vm09.stdout:9/318: mknod d1/d7/d1e/d2b/d2e/c6c 0 2026-03-09T15:00:45.826 
INFO:tasks.workunit.client.1.vm09.stdout:2/382: mkdir df/d1f/d47/d71 0 2026-03-09T15:00:45.826 INFO:tasks.workunit.client.1.vm09.stdout:2/383: chown df/d20/f3f 140 1 2026-03-09T15:00:45.826 INFO:tasks.workunit.client.1.vm09.stdout:1/312: truncate d8/ff 3048845 0 2026-03-09T15:00:45.828 INFO:tasks.workunit.client.1.vm09.stdout:8/352: dwrite df/d5b/d65/f20 [0,4194304] 0 2026-03-09T15:00:45.832 INFO:tasks.workunit.client.1.vm09.stdout:1/313: mknod d8/d10/c63 0 2026-03-09T15:00:45.839 INFO:tasks.workunit.client.1.vm09.stdout:8/353: fdatasync df/d5b/f35 0 2026-03-09T15:00:45.840 INFO:tasks.workunit.client.1.vm09.stdout:9/319: dread d1/d7/d1e/f2a [0,4194304] 0 2026-03-09T15:00:45.841 INFO:tasks.workunit.client.1.vm09.stdout:1/314: dwrite d8/d1b/f37 [0,4194304] 0 2026-03-09T15:00:45.842 INFO:tasks.workunit.client.1.vm09.stdout:0/406: truncate da/dc/d1c/d3c/d44/f71 3786307 0 2026-03-09T15:00:45.843 INFO:tasks.workunit.client.1.vm09.stdout:9/320: write d1/d7/d1e/d2b/f3f [7275041,119953] 0 2026-03-09T15:00:45.843 INFO:tasks.workunit.client.1.vm09.stdout:9/321: chown d1/d7 3940468 1 2026-03-09T15:00:45.844 INFO:tasks.workunit.client.1.vm09.stdout:5/326: write d2/f56 [500230,96149] 0 2026-03-09T15:00:45.844 INFO:tasks.workunit.client.1.vm09.stdout:2/384: creat df/d20/d29/d53/d5f/f72 x:0 0 0 2026-03-09T15:00:45.846 INFO:tasks.workunit.client.1.vm09.stdout:0/407: read - da/dc/d1c/d46/f7d zero size 2026-03-09T15:00:45.847 INFO:tasks.workunit.client.1.vm09.stdout:8/354: symlink df/d38/d64/d5f/l66 0 2026-03-09T15:00:45.849 INFO:tasks.workunit.client.1.vm09.stdout:5/327: write d2/d4/fd [2235686,44466] 0 2026-03-09T15:00:45.852 INFO:tasks.workunit.client.1.vm09.stdout:9/322: dwrite d1/f1f [0,4194304] 0 2026-03-09T15:00:45.867 INFO:tasks.workunit.client.1.vm09.stdout:3/325: getdents d3/d3a/d2b/d36 0 2026-03-09T15:00:45.868 INFO:tasks.workunit.client.1.vm09.stdout:9/323: dwrite d1/f4 [0,4194304] 0 2026-03-09T15:00:45.872 INFO:tasks.workunit.client.1.vm09.stdout:7/345: truncate 
d3/d1d/f11 4742656 0 2026-03-09T15:00:45.878 INFO:tasks.workunit.client.1.vm09.stdout:4/347: dread db/d19/d35/d5f/f70 [0,4194304] 0 2026-03-09T15:00:45.884 INFO:tasks.workunit.client.1.vm09.stdout:0/408: unlink da/dc/d22/c23 0 2026-03-09T15:00:45.885 INFO:tasks.workunit.client.1.vm09.stdout:2/385: creat df/d1f/d47/f73 x:0 0 0 2026-03-09T15:00:45.885 INFO:tasks.workunit.client.1.vm09.stdout:5/328: creat d2/d37/d3c/d55/f7b x:0 0 0 2026-03-09T15:00:45.885 INFO:tasks.workunit.client.1.vm09.stdout:6/370: rmdir d6/d74 0 2026-03-09T15:00:45.885 INFO:tasks.workunit.client.1.vm09.stdout:4/348: dwrite db/d19/d35/d5f/f66 [0,4194304] 0 2026-03-09T15:00:45.886 INFO:tasks.workunit.client.1.vm09.stdout:3/326: mkdir d3/d5b/d79 0 2026-03-09T15:00:45.886 INFO:tasks.workunit.client.1.vm09.stdout:9/324: mkdir d1/d7/d1e/d2b/d2e/d56/d6d 0 2026-03-09T15:00:45.888 INFO:tasks.workunit.client.1.vm09.stdout:1/315: write d8/ff [2330377,112270] 0 2026-03-09T15:00:45.889 INFO:tasks.workunit.client.1.vm09.stdout:0/409: fdatasync da/dc/f17 0 2026-03-09T15:00:45.890 INFO:tasks.workunit.client.1.vm09.stdout:2/386: fsync df/d20/d29/f31 0 2026-03-09T15:00:45.898 INFO:tasks.workunit.client.1.vm09.stdout:3/327: dread d3/d3a/d2b/d31/d4a/d62/f1b [0,4194304] 0 2026-03-09T15:00:45.902 INFO:tasks.workunit.client.1.vm09.stdout:7/346: link d3/f16 d3/db/d15/f68 0 2026-03-09T15:00:45.904 INFO:tasks.workunit.client.1.vm09.stdout:1/316: rmdir d8/d10/d24/d48 39 2026-03-09T15:00:45.904 INFO:tasks.workunit.client.1.vm09.stdout:9/325: mkdir d1/d6e 0 2026-03-09T15:00:45.904 INFO:tasks.workunit.client.1.vm09.stdout:8/355: rename df/f12 to df/d5b/f67 0 2026-03-09T15:00:45.908 INFO:tasks.workunit.client.1.vm09.stdout:6/371: dwrite d6/f39 [0,4194304] 0 2026-03-09T15:00:45.909 INFO:tasks.workunit.client.1.vm09.stdout:6/372: fsync d6/d20/d38/d56/d65/d68/f69 0 2026-03-09T15:00:45.915 INFO:tasks.workunit.client.1.vm09.stdout:7/347: dwrite d3/db/d25/f50 [0,4194304] 0 2026-03-09T15:00:45.917 
INFO:tasks.workunit.client.1.vm09.stdout:8/356: read df/d2d/f57 [945617,103261] 0 2026-03-09T15:00:45.918 INFO:tasks.workunit.client.1.vm09.stdout:5/329: rename d2/d37/d3c/l60 to d2/d37/d67/l7c 0 2026-03-09T15:00:45.923 INFO:tasks.workunit.client.1.vm09.stdout:5/330: fdatasync d2/d37/d3c/d36/d45/f66 0 2026-03-09T15:00:45.923 INFO:tasks.workunit.client.1.vm09.stdout:5/331: chown d2/d37/f72 15 1 2026-03-09T15:00:45.925 INFO:tasks.workunit.client.1.vm09.stdout:2/387: dread df/d20/f49 [0,4194304] 0 2026-03-09T15:00:45.925 INFO:tasks.workunit.client.1.vm09.stdout:3/328: chown d3/d3a/d2b/d31/d4a/d62/l5 3281795 1 2026-03-09T15:00:45.928 INFO:tasks.workunit.client.1.vm09.stdout:3/329: write d3/d5b/f6d [862995,129499] 0 2026-03-09T15:00:45.929 INFO:tasks.workunit.client.1.vm09.stdout:0/410: mkdir da/dc/d84 0 2026-03-09T15:00:45.929 INFO:tasks.workunit.client.1.vm09.stdout:6/373: dwrite d6/f39 [0,4194304] 0 2026-03-09T15:00:45.932 INFO:tasks.workunit.client.1.vm09.stdout:1/317: dwrite d8/d22/d40/f52 [0,4194304] 0 2026-03-09T15:00:45.942 INFO:tasks.workunit.client.1.vm09.stdout:5/332: mknod d2/d37/d3c/d36/d4c/c7d 0 2026-03-09T15:00:45.942 INFO:tasks.workunit.client.1.vm09.stdout:4/349: rename db/d19/d35 to db/d19/d23/d71 0 2026-03-09T15:00:45.942 INFO:tasks.workunit.client.1.vm09.stdout:8/357: write df/d5b/d65/d1d/f36 [7623287,110085] 0 2026-03-09T15:00:45.942 INFO:tasks.workunit.client.1.vm09.stdout:2/388: mkdir df/d58/d74 0 2026-03-09T15:00:45.942 INFO:tasks.workunit.client.1.vm09.stdout:0/411: chown da/d30/d36 135801493 1 2026-03-09T15:00:45.946 INFO:tasks.workunit.client.1.vm09.stdout:1/318: mkdir d8/d22/d40/d64 0 2026-03-09T15:00:45.947 INFO:tasks.workunit.client.1.vm09.stdout:4/350: symlink db/d19/d23/l72 0 2026-03-09T15:00:45.949 INFO:tasks.workunit.client.1.vm09.stdout:2/389: chown df/c36 35897915 1 2026-03-09T15:00:45.950 INFO:tasks.workunit.client.1.vm09.stdout:9/326: rename d1/d7/l21 to d1/d58/l6f 0 2026-03-09T15:00:45.951 
INFO:tasks.workunit.client.1.vm09.stdout:3/330: rmdir d3/d3a/d73 0 2026-03-09T15:00:45.960 INFO:tasks.workunit.client.1.vm09.stdout:8/358: link df/d38/f39 df/d5b/d65/d1d/f68 0 2026-03-09T15:00:45.960 INFO:tasks.workunit.client.1.vm09.stdout:7/348: creat d3/d28/f69 x:0 0 0 2026-03-09T15:00:45.960 INFO:tasks.workunit.client.1.vm09.stdout:9/327: write d1/f1f [2108957,66292] 0 2026-03-09T15:00:45.960 INFO:tasks.workunit.client.1.vm09.stdout:4/351: dread db/d19/d23/d71/f4e [0,4194304] 0 2026-03-09T15:00:45.960 INFO:tasks.workunit.client.1.vm09.stdout:2/390: rename df/d20/d29/l66 to df/d20/l75 0 2026-03-09T15:00:45.960 INFO:tasks.workunit.client.1.vm09.stdout:4/352: chown db/d19/d32/c6f 238 1 2026-03-09T15:00:45.960 INFO:tasks.workunit.client.1.vm09.stdout:6/374: getdents d6/d20/d44/d45 0 2026-03-09T15:00:45.960 INFO:tasks.workunit.client.1.vm09.stdout:7/349: symlink d3/db/d15/l6a 0 2026-03-09T15:00:45.966 INFO:tasks.workunit.client.1.vm09.stdout:7/350: dwrite d3/d1d/d2d/f57 [0,4194304] 0 2026-03-09T15:00:45.966 INFO:tasks.workunit.client.1.vm09.stdout:2/391: mkdir df/d2d/d76 0 2026-03-09T15:00:45.967 INFO:tasks.workunit.client.1.vm09.stdout:8/359: creat df/d38/d64/d5f/f69 x:0 0 0 2026-03-09T15:00:45.968 INFO:tasks.workunit.client.1.vm09.stdout:0/412: getdents da/dc/d22/d76 0 2026-03-09T15:00:45.970 INFO:tasks.workunit.client.1.vm09.stdout:0/413: truncate da/dc/d61/f66 395933 0 2026-03-09T15:00:45.977 INFO:tasks.workunit.client.1.vm09.stdout:9/328: getdents d1/d7/d1e/d2b/d2e/d56/d5e 0 2026-03-09T15:00:45.980 INFO:tasks.workunit.client.1.vm09.stdout:0/414: readlink da/l2e 0 2026-03-09T15:00:45.980 INFO:tasks.workunit.client.1.vm09.stdout:3/331: getdents d3/d3a/d2b 0 2026-03-09T15:00:45.981 INFO:tasks.workunit.client.1.vm09.stdout:6/375: creat d6/df/d23/f76 x:0 0 0 2026-03-09T15:00:45.982 INFO:tasks.workunit.client.1.vm09.stdout:9/329: chown d1/d7/f3e 21315170 1 2026-03-09T15:00:45.982 INFO:tasks.workunit.client.1.vm09.stdout:6/376: chown d6 31 1 2026-03-09T15:00:45.983 
INFO:tasks.workunit.client.1.vm09.stdout:9/330: chown d1/d7/d1e/d2b/d40 2543 1 2026-03-09T15:00:45.984 INFO:tasks.workunit.client.1.vm09.stdout:7/351: dwrite d3/d3d/f5a [4194304,4194304] 0 2026-03-09T15:00:45.984 INFO:tasks.workunit.client.1.vm09.stdout:2/392: mknod df/d1f/c77 0 2026-03-09T15:00:45.985 INFO:tasks.workunit.client.1.vm09.stdout:8/360: symlink df/d5c/l6a 0 2026-03-09T15:00:45.991 INFO:tasks.workunit.client.1.vm09.stdout:0/415: rename da/l5f to da/dc/d1c/d46/d63/l85 0 2026-03-09T15:00:45.992 INFO:tasks.workunit.client.1.vm09.stdout:7/352: dread - d3/f3f zero size 2026-03-09T15:00:45.992 INFO:tasks.workunit.client.1.vm09.stdout:6/377: creat d6/d20/d38/d4e/d55/f77 x:0 0 0 2026-03-09T15:00:46.000 INFO:tasks.workunit.client.1.vm09.stdout:3/332: link d3/d3a/d2b/d53/c71 d3/d60/c7a 0 2026-03-09T15:00:46.000 INFO:tasks.workunit.client.1.vm09.stdout:6/378: creat d6/df/d23/f78 x:0 0 0 2026-03-09T15:00:46.000 INFO:tasks.workunit.client.1.vm09.stdout:6/379: dread - d6/d20/d2a/d57/f73 zero size 2026-03-09T15:00:46.001 INFO:tasks.workunit.client.1.vm09.stdout:6/380: write d6/df/f16 [3927417,49136] 0 2026-03-09T15:00:46.012 INFO:tasks.workunit.client.1.vm09.stdout:6/381: creat d6/d20/d2a/d3d/f79 x:0 0 0 2026-03-09T15:00:46.014 INFO:tasks.workunit.client.1.vm09.stdout:3/333: getdents d3/d3a/d2b/d39 0 2026-03-09T15:00:46.014 INFO:tasks.workunit.client.1.vm09.stdout:8/361: sync 2026-03-09T15:00:46.014 INFO:tasks.workunit.client.1.vm09.stdout:4/353: sync 2026-03-09T15:00:46.015 INFO:tasks.workunit.client.1.vm09.stdout:4/354: chown l8 125058275 1 2026-03-09T15:00:46.015 INFO:tasks.workunit.client.1.vm09.stdout:3/334: fdatasync d3/d3a/d2b/f64 0 2026-03-09T15:00:46.016 INFO:tasks.workunit.client.1.vm09.stdout:4/355: fdatasync db/d19/d23/d71/f43 0 2026-03-09T15:00:46.017 INFO:tasks.workunit.client.1.vm09.stdout:3/335: unlink d3/d3a/l56 0 2026-03-09T15:00:46.018 INFO:tasks.workunit.client.1.vm09.stdout:3/336: mkdir d3/d3a/d2b/d7b 0 2026-03-09T15:00:46.019 
INFO:tasks.workunit.client.1.vm09.stdout:8/362: link df/d5b/d65/l2e df/d24/d37/l6b 0 2026-03-09T15:00:46.020 INFO:tasks.workunit.client.1.vm09.stdout:8/363: read df/d5b/d65/f20 [1282428,53372] 0 2026-03-09T15:00:46.021 INFO:tasks.workunit.client.1.vm09.stdout:3/337: creat d3/d3a/d2b/d31/d4a/f7c x:0 0 0 2026-03-09T15:00:46.022 INFO:tasks.workunit.client.1.vm09.stdout:8/364: rmdir df/d2d/d42 39 2026-03-09T15:00:46.022 INFO:tasks.workunit.client.1.vm09.stdout:8/365: chown df/d38/d64/d5f/l66 633679 1 2026-03-09T15:00:46.024 INFO:tasks.workunit.client.1.vm09.stdout:3/338: creat d3/d3a/d2b/d39/f7d x:0 0 0 2026-03-09T15:00:46.028 INFO:tasks.workunit.client.1.vm09.stdout:3/339: dwrite d3/d3a/d2b/d31/d4a/d62/f8 [0,4194304] 0 2026-03-09T15:00:46.029 INFO:tasks.workunit.client.1.vm09.stdout:3/340: creat d3/d3a/d2b/d53/f7e x:0 0 0 2026-03-09T15:00:46.031 INFO:tasks.workunit.client.1.vm09.stdout:3/341: write d3/d3a/d2b/f72 [563976,74906] 0 2026-03-09T15:00:46.033 INFO:tasks.workunit.client.1.vm09.stdout:4/356: sync 2026-03-09T15:00:46.042 INFO:tasks.workunit.client.1.vm09.stdout:4/357: mknod db/d12/d16/c73 0 2026-03-09T15:00:46.046 INFO:tasks.workunit.client.1.vm09.stdout:3/342: link d3/f3b d3/d3a/d2b/d31/d4a/f7f 0 2026-03-09T15:00:46.047 INFO:tasks.workunit.client.1.vm09.stdout:4/358: rename db/d12/d16/l24 to db/d2f/d5d/l74 0 2026-03-09T15:00:46.048 INFO:tasks.workunit.client.1.vm09.stdout:4/359: mknod db/c75 0 2026-03-09T15:00:46.048 INFO:tasks.workunit.client.1.vm09.stdout:4/360: write db/d12/f5a [3122739,120293] 0 2026-03-09T15:00:46.049 INFO:tasks.workunit.client.1.vm09.stdout:4/361: stat db/d19/d23/d71/f43 0 2026-03-09T15:00:46.053 INFO:tasks.workunit.client.1.vm09.stdout:4/362: rename db/d19/d32 to db/d19/d52/d76 0 2026-03-09T15:00:46.055 INFO:tasks.workunit.client.1.vm09.stdout:4/363: stat db 0 2026-03-09T15:00:46.055 INFO:tasks.workunit.client.1.vm09.stdout:3/343: dread d3/d3a/d2b/d31/f33 [0,4194304] 0 2026-03-09T15:00:46.055 
INFO:tasks.workunit.client.1.vm09.stdout:3/344: stat d3/f9 0 2026-03-09T15:00:46.063 INFO:tasks.workunit.client.1.vm09.stdout:4/364: unlink db/d19/d23/l2c 0 2026-03-09T15:00:46.063 INFO:tasks.workunit.client.1.vm09.stdout:3/345: mkdir d3/d3a/d2b/d53/d80 0 2026-03-09T15:00:46.069 INFO:tasks.workunit.client.1.vm09.stdout:3/346: dwrite d3/f3e [0,4194304] 0 2026-03-09T15:00:46.070 INFO:tasks.workunit.client.1.vm09.stdout:3/347: readlink d3/d3a/d2b/d31/d4a/l5c 0 2026-03-09T15:00:46.080 INFO:tasks.workunit.client.1.vm09.stdout:5/333: write d2/d37/d3c/d55/f58 [722389,68731] 0 2026-03-09T15:00:46.081 INFO:tasks.workunit.client.1.vm09.stdout:3/348: creat d3/d3a/d2b/d39/f81 x:0 0 0 2026-03-09T15:00:46.082 INFO:tasks.workunit.client.1.vm09.stdout:5/334: stat d2/d37/d3c/d36 0 2026-03-09T15:00:46.084 INFO:tasks.workunit.client.1.vm09.stdout:4/365: truncate db/d12/d16/f36 3145038 0 2026-03-09T15:00:46.085 INFO:tasks.workunit.client.1.vm09.stdout:4/366: chown db/cf 58 1 2026-03-09T15:00:46.093 INFO:tasks.workunit.client.1.vm09.stdout:4/367: creat db/d2f/d5d/f77 x:0 0 0 2026-03-09T15:00:46.094 INFO:tasks.workunit.client.1.vm09.stdout:1/319: truncate d8/f59 3092284 0 2026-03-09T15:00:46.096 INFO:tasks.workunit.client.1.vm09.stdout:1/320: rename d8/d10/d24/d45/f53 to d8/d50/d39/f65 0 2026-03-09T15:00:46.100 INFO:tasks.workunit.client.1.vm09.stdout:4/368: getdents db/d12/d16/d5b 0 2026-03-09T15:00:46.100 INFO:tasks.workunit.client.1.vm09.stdout:3/349: dread d3/d3a/d2b/d31/d4a/f63 [0,4194304] 0 2026-03-09T15:00:46.103 INFO:tasks.workunit.client.1.vm09.stdout:5/335: getdents d2/d37 0 2026-03-09T15:00:46.110 INFO:tasks.workunit.client.1.vm09.stdout:5/336: dwrite d2/f22 [4194304,4194304] 0 2026-03-09T15:00:46.113 INFO:tasks.workunit.client.1.vm09.stdout:5/337: read d2/f34 [3658391,123842] 0 2026-03-09T15:00:46.114 INFO:tasks.workunit.client.1.vm09.stdout:5/338: chown d2/d4/fd 66174 1 2026-03-09T15:00:46.122 INFO:tasks.workunit.client.1.vm09.stdout:2/393: dread - df/d20/d2e/f59 zero size 
2026-03-09T15:00:46.123 INFO:tasks.workunit.client.1.vm09.stdout:9/331: truncate d1/d7/f13 3438984 0 2026-03-09T15:00:46.124 INFO:tasks.workunit.client.1.vm09.stdout:7/353: write d3/db/f4d [374803,12634] 0 2026-03-09T15:00:46.128 INFO:tasks.workunit.client.1.vm09.stdout:3/350: dread d3/d3a/f1c [0,4194304] 0 2026-03-09T15:00:46.129 INFO:tasks.workunit.client.1.vm09.stdout:3/351: read d3/f9 [1059578,28074] 0 2026-03-09T15:00:46.130 INFO:tasks.workunit.client.1.vm09.stdout:3/352: write d3/d3a/d2b/f65 [206059,109955] 0 2026-03-09T15:00:46.133 INFO:tasks.workunit.client.1.vm09.stdout:5/339: unlink d2/d37/f72 0 2026-03-09T15:00:46.133 INFO:tasks.workunit.client.1.vm09.stdout:2/394: rmdir df/d20 39 2026-03-09T15:00:46.135 INFO:tasks.workunit.client.1.vm09.stdout:9/332: rename d1/d7/d1e/d2b/d2e/d56/l69 to d1/d7/d1e/d2b/d40/l70 0 2026-03-09T15:00:46.136 INFO:tasks.workunit.client.1.vm09.stdout:9/333: fdatasync d1/d7/d1e/d2b/d40/f57 0 2026-03-09T15:00:46.141 INFO:tasks.workunit.client.1.vm09.stdout:5/340: sync 2026-03-09T15:00:46.147 INFO:tasks.workunit.client.1.vm09.stdout:0/416: truncate da/dc/d1c/d3c/d44/f51 3511019 0 2026-03-09T15:00:46.147 INFO:tasks.workunit.client.1.vm09.stdout:2/395: dread df/f13 [0,4194304] 0 2026-03-09T15:00:46.149 INFO:tasks.workunit.client.1.vm09.stdout:6/382: write d6/df/d23/f29 [1497668,109269] 0 2026-03-09T15:00:46.155 INFO:tasks.workunit.client.1.vm09.stdout:8/366: write df/f30 [4570437,77328] 0 2026-03-09T15:00:46.162 INFO:tasks.workunit.client.1.vm09.stdout:7/354: symlink d3/db/d15/d5f/l6b 0 2026-03-09T15:00:46.162 INFO:tasks.workunit.client.1.vm09.stdout:2/396: dwrite df/d1f/d47/d5d/f5e [0,4194304] 0 2026-03-09T15:00:46.165 INFO:tasks.workunit.client.1.vm09.stdout:2/397: sync 2026-03-09T15:00:46.167 INFO:tasks.workunit.client.1.vm09.stdout:2/398: read f0 [496725,69740] 0 2026-03-09T15:00:46.167 INFO:tasks.workunit.client.1.vm09.stdout:2/399: write df/d2d/f3c [1547648,11075] 0 2026-03-09T15:00:46.171 
INFO:tasks.workunit.client.1.vm09.stdout:9/334: rename d1/d7/d1e/d2b/d2e/ca to d1/d58/c71 0 2026-03-09T15:00:46.181 INFO:tasks.workunit.client.1.vm09.stdout:3/353: mknod d3/d3a/d2b/d31/d4a/c82 0 2026-03-09T15:00:46.181 INFO:tasks.workunit.client.1.vm09.stdout:0/417: mkdir da/dc/d1c/d46/d63/d86 0 2026-03-09T15:00:46.185 INFO:tasks.workunit.client.1.vm09.stdout:1/321: truncate d8/d1b/f37 1093460 0 2026-03-09T15:00:46.186 INFO:tasks.workunit.client.1.vm09.stdout:1/322: write d8/d10/f12 [516517,42304] 0 2026-03-09T15:00:46.187 INFO:tasks.workunit.client.1.vm09.stdout:8/367: mknod df/d24/d37/c6c 0 2026-03-09T15:00:46.187 INFO:tasks.workunit.client.1.vm09.stdout:7/355: rmdir d3/db/d46 39 2026-03-09T15:00:46.188 INFO:tasks.workunit.client.1.vm09.stdout:7/356: stat d3/db/f42 0 2026-03-09T15:00:46.189 INFO:tasks.workunit.client.1.vm09.stdout:3/354: dwrite d3/d3a/d2b/d53/f7e [0,4194304] 0 2026-03-09T15:00:46.190 INFO:tasks.workunit.client.1.vm09.stdout:3/355: dread - d3/d3a/d2b/d39/f7d zero size 2026-03-09T15:00:46.191 INFO:tasks.workunit.client.1.vm09.stdout:9/335: creat d1/d58/f72 x:0 0 0 2026-03-09T15:00:46.191 INFO:tasks.workunit.client.1.vm09.stdout:3/356: dread - d3/d3a/d54/f59 zero size 2026-03-09T15:00:46.193 INFO:tasks.workunit.client.1.vm09.stdout:9/336: chown d1/d7/d1e/d2b/d2e/l61 1368 1 2026-03-09T15:00:46.198 INFO:tasks.workunit.client.1.vm09.stdout:8/368: unlink df/d5b/d65/d1d/f36 0 2026-03-09T15:00:46.205 INFO:tasks.workunit.client.1.vm09.stdout:8/369: truncate df/f34 4926751 0 2026-03-09T15:00:46.205 INFO:tasks.workunit.client.1.vm09.stdout:2/400: chown df/d20/d29/f51 70 1 2026-03-09T15:00:46.205 INFO:tasks.workunit.client.1.vm09.stdout:6/383: mkdir d6/db/d10/d7a 0 2026-03-09T15:00:46.205 INFO:tasks.workunit.client.1.vm09.stdout:1/323: mknod d8/d10/d24/c66 0 2026-03-09T15:00:46.205 INFO:tasks.workunit.client.1.vm09.stdout:9/337: mknod d1/d7/d1e/d2b/d40/c73 0 2026-03-09T15:00:46.205 INFO:tasks.workunit.client.1.vm09.stdout:6/384: dread - d6/d20/d2a/f61 zero 
size 2026-03-09T15:00:46.206 INFO:tasks.workunit.client.1.vm09.stdout:6/385: chown d6/db/d10/l4d 862 1 2026-03-09T15:00:46.207 INFO:tasks.workunit.client.1.vm09.stdout:2/401: truncate df/d1f/f55 563285 0 2026-03-09T15:00:46.208 INFO:tasks.workunit.client.1.vm09.stdout:3/357: creat d3/d5b/d79/f83 x:0 0 0 2026-03-09T15:00:46.209 INFO:tasks.workunit.client.1.vm09.stdout:8/370: dread df/d5b/f35 [0,4194304] 0 2026-03-09T15:00:46.210 INFO:tasks.workunit.client.1.vm09.stdout:3/358: creat d3/d3a/d2b/d39/f84 x:0 0 0 2026-03-09T15:00:46.211 INFO:tasks.workunit.client.1.vm09.stdout:6/386: rename d6/db/d10/f19 to d6/d20/d38/d56/d65/f7b 0 2026-03-09T15:00:46.211 INFO:tasks.workunit.client.1.vm09.stdout:0/418: link da/c32 da/d30/c87 0 2026-03-09T15:00:46.212 INFO:tasks.workunit.client.1.vm09.stdout:9/338: creat d1/d6e/f74 x:0 0 0 2026-03-09T15:00:46.212 INFO:tasks.workunit.client.1.vm09.stdout:3/359: mknod d3/d3a/d2b/c85 0 2026-03-09T15:00:46.216 INFO:tasks.workunit.client.1.vm09.stdout:6/387: symlink d6/d20/d38/d56/l7c 0 2026-03-09T15:00:46.222 INFO:tasks.workunit.client.1.vm09.stdout:9/339: creat d1/d58/f75 x:0 0 0 2026-03-09T15:00:46.223 INFO:tasks.workunit.client.1.vm09.stdout:9/340: chown d1/d7/d1e/d2b/d2e/c49 5 1 2026-03-09T15:00:46.223 INFO:tasks.workunit.client.1.vm09.stdout:0/419: dwrite da/dc/d1c/d3c/f81 [0,4194304] 0 2026-03-09T15:00:46.233 INFO:tasks.workunit.client.1.vm09.stdout:3/360: truncate d3/d3a/f1c 3289721 0 2026-03-09T15:00:46.233 INFO:tasks.workunit.client.1.vm09.stdout:2/402: dread df/d1f/f39 [0,4194304] 0 2026-03-09T15:00:46.233 INFO:tasks.workunit.client.1.vm09.stdout:6/388: dwrite d6/d20/f27 [0,4194304] 0 2026-03-09T15:00:46.233 INFO:tasks.workunit.client.1.vm09.stdout:1/324: sync 2026-03-09T15:00:46.237 INFO:tasks.workunit.client.1.vm09.stdout:3/361: dread d3/d5b/f6d [0,4194304] 0 2026-03-09T15:00:46.237 INFO:tasks.workunit.client.1.vm09.stdout:3/362: write d3/f77 [268876,64780] 0 2026-03-09T15:00:46.238 INFO:tasks.workunit.client.1.vm09.stdout:0/420: 
unlink da/dc/d10/c26 0 2026-03-09T15:00:46.246 INFO:tasks.workunit.client.1.vm09.stdout:9/341: unlink d1/d7/ce 0 2026-03-09T15:00:46.247 INFO:tasks.workunit.client.1.vm09.stdout:8/371: getdents df/d2d/d46/d33 0 2026-03-09T15:00:46.247 INFO:tasks.workunit.client.1.vm09.stdout:1/325: rename d8/d10/f1a to d8/d10/f67 0 2026-03-09T15:00:46.248 INFO:tasks.workunit.client.1.vm09.stdout:1/326: fdatasync d8/d22/d40/f52 0 2026-03-09T15:00:46.249 INFO:tasks.workunit.client.1.vm09.stdout:3/363: dwrite d3/f3e [0,4194304] 0 2026-03-09T15:00:46.261 INFO:tasks.workunit.client.1.vm09.stdout:8/372: dwrite df/d24/d37/f63 [0,4194304] 0 2026-03-09T15:00:46.261 INFO:tasks.workunit.client.1.vm09.stdout:4/369: write db/d12/f1b [974803,77414] 0 2026-03-09T15:00:46.271 INFO:tasks.workunit.client.1.vm09.stdout:8/373: dwrite df/d5b/d65/d1d/f44 [0,4194304] 0 2026-03-09T15:00:46.281 INFO:tasks.workunit.client.1.vm09.stdout:2/403: rename df/l26 to df/d58/d74/l78 0 2026-03-09T15:00:46.281 INFO:tasks.workunit.client.1.vm09.stdout:2/404: chown df/d20/d2e/f54 867 1 2026-03-09T15:00:46.282 INFO:tasks.workunit.client.1.vm09.stdout:9/342: write d1/f28 [4579915,88355] 0 2026-03-09T15:00:46.282 INFO:tasks.workunit.client.1.vm09.stdout:2/405: write df/d58/f65 [484849,31616] 0 2026-03-09T15:00:46.290 INFO:tasks.workunit.client.1.vm09.stdout:0/421: unlink da/dc/d10/c1b 0 2026-03-09T15:00:46.291 INFO:tasks.workunit.client.1.vm09.stdout:4/370: sync 2026-03-09T15:00:46.294 INFO:tasks.workunit.client.1.vm09.stdout:3/364: mknod d3/d5b/d79/c86 0 2026-03-09T15:00:46.314 INFO:tasks.workunit.client.1.vm09.stdout:7/357: truncate d3/db/d25/f50 1475995 0 2026-03-09T15:00:46.315 INFO:tasks.workunit.client.1.vm09.stdout:7/358: fdatasync d3/d3d/f51 0 2026-03-09T15:00:46.316 INFO:tasks.workunit.client.1.vm09.stdout:0/422: creat da/dc/d1c/d3c/d78/f88 x:0 0 0 2026-03-09T15:00:46.317 INFO:tasks.workunit.client.1.vm09.stdout:0/423: fsync da/d30/f6f 0 2026-03-09T15:00:46.323 INFO:tasks.workunit.client.1.vm09.stdout:3/365: 
symlink d3/d5b/d79/l87 0 2026-03-09T15:00:46.329 INFO:tasks.workunit.client.1.vm09.stdout:5/341: symlink d2/d37/d3c/d36/d45/l7e 0 2026-03-09T15:00:46.338 INFO:tasks.workunit.client.1.vm09.stdout:8/374: dread df/f1a [0,4194304] 0 2026-03-09T15:00:46.339 INFO:tasks.workunit.client.1.vm09.stdout:8/375: truncate df/d38/f58 634141 0 2026-03-09T15:00:46.345 INFO:tasks.workunit.client.1.vm09.stdout:6/389: truncate d6/db/f42 60040 0 2026-03-09T15:00:46.355 INFO:tasks.workunit.client.1.vm09.stdout:0/424: dwrite da/dc/d1c/d46/d5b/f6a [0,4194304] 0 2026-03-09T15:00:46.361 INFO:tasks.workunit.client.1.vm09.stdout:2/406: write df/d2d/f41 [1208117,119573] 0 2026-03-09T15:00:46.370 INFO:tasks.workunit.client.1.vm09.stdout:5/342: readlink d2/d37/l52 0 2026-03-09T15:00:46.371 INFO:tasks.workunit.client.1.vm09.stdout:8/376: creat df/d2d/d46/f6d x:0 0 0 2026-03-09T15:00:46.371 INFO:tasks.workunit.client.1.vm09.stdout:9/343: truncate d1/f24 1611118 0 2026-03-09T15:00:46.373 INFO:tasks.workunit.client.1.vm09.stdout:6/390: mknod d6/d20/d24/c7d 0 2026-03-09T15:00:46.373 INFO:tasks.workunit.client.1.vm09.stdout:4/371: dwrite db/d12/d16/f26 [0,4194304] 0 2026-03-09T15:00:46.375 INFO:tasks.workunit.client.1.vm09.stdout:4/372: write db/d12/d16/f26 [5106604,31989] 0 2026-03-09T15:00:46.377 INFO:tasks.workunit.client.1.vm09.stdout:4/373: truncate db/d19/d23/d71/d5f/f70 187555 0 2026-03-09T15:00:46.380 INFO:tasks.workunit.client.1.vm09.stdout:2/407: symlink df/d58/d67/l79 0 2026-03-09T15:00:46.380 INFO:tasks.workunit.client.1.vm09.stdout:1/327: dwrite d8/f59 [0,4194304] 0 2026-03-09T15:00:46.381 INFO:tasks.workunit.client.1.vm09.stdout:1/328: truncate d8/d1b/f41 1429252 0 2026-03-09T15:00:46.388 INFO:tasks.workunit.client.1.vm09.stdout:6/391: write d6/d20/d2a/d3d/f43 [840461,60106] 0 2026-03-09T15:00:46.388 INFO:tasks.workunit.client.1.vm09.stdout:1/329: fdatasync d8/d10/f29 0 2026-03-09T15:00:46.393 INFO:tasks.workunit.client.1.vm09.stdout:8/377: link df/d2d/f2f df/d5b/d65/d1d/f6e 0 
2026-03-09T15:00:46.396 INFO:tasks.workunit.client.1.vm09.stdout:8/378: stat df/d5b/d65/f20 0 2026-03-09T15:00:46.402 INFO:tasks.workunit.client.1.vm09.stdout:0/425: dwrite da/dc/d10/f4a [0,4194304] 0 2026-03-09T15:00:46.406 INFO:tasks.workunit.client.1.vm09.stdout:6/392: read d6/d20/d2a/f37 [1592574,110002] 0 2026-03-09T15:00:46.408 INFO:tasks.workunit.client.1.vm09.stdout:4/374: mkdir db/d12/d16/d5b/d78 0 2026-03-09T15:00:46.410 INFO:tasks.workunit.client.1.vm09.stdout:3/366: dwrite d3/f3b [8388608,4194304] 0 2026-03-09T15:00:46.411 INFO:tasks.workunit.client.1.vm09.stdout:3/367: write d3/d3a/d2b/d31/d4a/d62/f8 [3701871,1886] 0 2026-03-09T15:00:46.419 INFO:tasks.workunit.client.1.vm09.stdout:7/359: getdents d3/db/d15/d5f 0 2026-03-09T15:00:46.422 INFO:tasks.workunit.client.1.vm09.stdout:8/379: creat df/d38/d64/d5f/f6f x:0 0 0 2026-03-09T15:00:46.423 INFO:tasks.workunit.client.1.vm09.stdout:4/375: sync 2026-03-09T15:00:46.423 INFO:tasks.workunit.client.1.vm09.stdout:0/426: sync 2026-03-09T15:00:46.423 INFO:tasks.workunit.client.1.vm09.stdout:6/393: mkdir d6/d20/d24/d7e 0 2026-03-09T15:00:46.427 INFO:tasks.workunit.client.1.vm09.stdout:6/394: truncate d6/d20/d38/d56/d65/f7b 1592747 0 2026-03-09T15:00:46.429 INFO:tasks.workunit.client.1.vm09.stdout:8/380: dwrite df/d2d/d46/f6d [0,4194304] 0 2026-03-09T15:00:46.432 INFO:tasks.workunit.client.1.vm09.stdout:2/408: link df/d20/d2e/l69 df/d58/d67/l7a 0 2026-03-09T15:00:46.433 INFO:tasks.workunit.client.1.vm09.stdout:4/376: dwrite db/d19/d52/d76/d3b/f48 [0,4194304] 0 2026-03-09T15:00:46.437 INFO:tasks.workunit.client.1.vm09.stdout:8/381: truncate df/d38/d64/d5f/f62 464313 0 2026-03-09T15:00:46.438 INFO:tasks.workunit.client.1.vm09.stdout:8/382: fdatasync df/d24/f28 0 2026-03-09T15:00:46.442 INFO:tasks.workunit.client.1.vm09.stdout:4/377: write db/d12/f37 [2723768,91709] 0 2026-03-09T15:00:46.446 INFO:tasks.workunit.client.1.vm09.stdout:1/330: mkdir d8/d1b/d5d/d68 0 2026-03-09T15:00:46.446 
INFO:tasks.workunit.client.1.vm09.stdout:4/378: dread - db/d12/d16/f60 zero size 2026-03-09T15:00:46.446 INFO:tasks.workunit.client.1.vm09.stdout:4/379: fsync db/d19/d52/d76/d3b/f49 0 2026-03-09T15:00:46.447 INFO:tasks.workunit.client.1.vm09.stdout:5/343: rename d2/d4/c5d to d2/c7f 0 2026-03-09T15:00:46.459 INFO:tasks.workunit.client.1.vm09.stdout:9/344: symlink d1/d7/d1e/d2b/d2e/d56/d5e/l76 0 2026-03-09T15:00:46.465 INFO:tasks.workunit.client.1.vm09.stdout:9/345: chown d1/d7/d1e/d2b/l47 2 1 2026-03-09T15:00:46.466 INFO:tasks.workunit.client.1.vm09.stdout:6/395: link d6/d20/d38/d4e/f5a d6/f7f 0 2026-03-09T15:00:46.467 INFO:tasks.workunit.client.1.vm09.stdout:2/409: link df/d1f/c22 df/d1f/d47/c7b 0 2026-03-09T15:00:46.468 INFO:tasks.workunit.client.1.vm09.stdout:4/380: link db/d2f/l3c db/d2f/d5d/l79 0 2026-03-09T15:00:46.469 INFO:tasks.workunit.client.1.vm09.stdout:1/331: sync 2026-03-09T15:00:46.470 INFO:tasks.workunit.client.1.vm09.stdout:5/344: rmdir d2/d37/d3c/d36/d4c/d51/d74 0 2026-03-09T15:00:46.472 INFO:tasks.workunit.client.1.vm09.stdout:6/396: rename d6/d20/d2a/f5d to d6/db/d10/d7a/f80 0 2026-03-09T15:00:46.473 INFO:tasks.workunit.client.1.vm09.stdout:2/410: symlink df/d20/d2e/l7c 0 2026-03-09T15:00:46.474 INFO:tasks.workunit.client.1.vm09.stdout:1/332: chown d8/d22/d56/c5a 607375 1 2026-03-09T15:00:46.474 INFO:tasks.workunit.client.1.vm09.stdout:5/345: symlink d2/d37/d3c/d55/l80 0 2026-03-09T15:00:46.478 INFO:tasks.workunit.client.1.vm09.stdout:6/397: mknod d6/db/d10/d4f/c81 0 2026-03-09T15:00:46.479 INFO:tasks.workunit.client.1.vm09.stdout:9/346: creat d1/d7/f77 x:0 0 0 2026-03-09T15:00:46.480 INFO:tasks.workunit.client.1.vm09.stdout:6/398: readlink d6/db/d10/l11 0 2026-03-09T15:00:46.480 INFO:tasks.workunit.client.1.vm09.stdout:6/399: chown d6/d20/d24 5776 1 2026-03-09T15:00:46.480 INFO:tasks.workunit.client.1.vm09.stdout:1/333: truncate d8/f3d 1571968 0 2026-03-09T15:00:46.481 INFO:tasks.workunit.client.1.vm09.stdout:1/334: write d8/fa [327381,2030] 0 
2026-03-09T15:00:46.483 INFO:tasks.workunit.client.1.vm09.stdout:9/347: dread - d1/d7/f45 zero size 2026-03-09T15:00:46.484 INFO:tasks.workunit.client.1.vm09.stdout:5/346: creat d2/d37/d53/f81 x:0 0 0 2026-03-09T15:00:46.486 INFO:tasks.workunit.client.1.vm09.stdout:1/335: dwrite d8/f17 [0,4194304] 0 2026-03-09T15:00:46.487 INFO:tasks.workunit.client.1.vm09.stdout:6/400: mknod d6/d20/d44/d45/c82 0 2026-03-09T15:00:46.499 INFO:tasks.workunit.client.1.vm09.stdout:6/401: sync 2026-03-09T15:00:46.501 INFO:tasks.workunit.client.1.vm09.stdout:9/348: mknod d1/d7/d1e/d2b/d2e/c78 0 2026-03-09T15:00:46.502 INFO:tasks.workunit.client.1.vm09.stdout:1/336: creat d8/d10/f69 x:0 0 0 2026-03-09T15:00:46.504 INFO:tasks.workunit.client.1.vm09.stdout:1/337: chown d8/d10/l2d 2733 1 2026-03-09T15:00:46.505 INFO:tasks.workunit.client.1.vm09.stdout:5/347: creat d2/d37/d3c/d36/d4c/f82 x:0 0 0 2026-03-09T15:00:46.505 INFO:tasks.workunit.client.1.vm09.stdout:1/338: chown d8/d10/f2f 1398632 1 2026-03-09T15:00:46.508 INFO:tasks.workunit.client.1.vm09.stdout:5/348: mknod d2/d37/d3c/d36/c83 0 2026-03-09T15:00:46.508 INFO:tasks.workunit.client.1.vm09.stdout:2/411: dread fb [0,4194304] 0 2026-03-09T15:00:46.512 INFO:tasks.workunit.client.1.vm09.stdout:2/412: truncate df/d58/d67/f61 268727 0 2026-03-09T15:00:46.512 INFO:tasks.workunit.client.1.vm09.stdout:2/413: fsync df/f17 0 2026-03-09T15:00:46.516 INFO:tasks.workunit.client.1.vm09.stdout:2/414: dread df/d20/d29/f31 [0,4194304] 0 2026-03-09T15:00:46.516 INFO:tasks.workunit.client.1.vm09.stdout:9/349: rename d1/d7/d1e/d2b/d2e/f1d to d1/d7/d1e/f79 0 2026-03-09T15:00:46.518 INFO:tasks.workunit.client.1.vm09.stdout:0/427: dread da/dc/d22/f53 [0,4194304] 0 2026-03-09T15:00:46.518 INFO:tasks.workunit.client.1.vm09.stdout:0/428: read - da/dc/d1c/f6d zero size 2026-03-09T15:00:46.520 INFO:tasks.workunit.client.1.vm09.stdout:7/360: write d3/f16 [3846157,52320] 0 2026-03-09T15:00:46.526 INFO:tasks.workunit.client.1.vm09.stdout:7/361: dwrite d3/f9 
[0,4194304] 0 2026-03-09T15:00:46.532 INFO:tasks.workunit.client.1.vm09.stdout:7/362: dwrite d3/f16 [0,4194304] 0 2026-03-09T15:00:46.533 INFO:tasks.workunit.client.1.vm09.stdout:1/339: rename d8/d10/l2d to d8/d1b/d5d/l6a 0 2026-03-09T15:00:46.533 INFO:tasks.workunit.client.1.vm09.stdout:2/415: rmdir df/d58/d74 39 2026-03-09T15:00:46.533 INFO:tasks.workunit.client.1.vm09.stdout:7/363: write d3/f16 [4718268,49736] 0 2026-03-09T15:00:46.537 INFO:tasks.workunit.client.1.vm09.stdout:0/429: creat da/dc/d1c/d3c/d44/f89 x:0 0 0 2026-03-09T15:00:46.538 INFO:tasks.workunit.client.1.vm09.stdout:3/368: unlink d3/d3a/f1c 0 2026-03-09T15:00:46.538 INFO:tasks.workunit.client.1.vm09.stdout:5/349: mknod d2/d37/d67/c84 0 2026-03-09T15:00:46.539 INFO:tasks.workunit.client.1.vm09.stdout:5/350: truncate d2/f3d 120676 0 2026-03-09T15:00:46.550 INFO:tasks.workunit.client.1.vm09.stdout:7/364: readlink d3/l18 0 2026-03-09T15:00:46.553 INFO:tasks.workunit.client.1.vm09.stdout:0/430: symlink da/dc/d22/d64/l8a 0 2026-03-09T15:00:46.553 INFO:tasks.workunit.client.1.vm09.stdout:1/340: getdents d8/d22/d40/d64 0 2026-03-09T15:00:46.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:46 vm05.local ceph-mon[50611]: pgmap v149: 65 pgs: 65 active+clean; 786 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 14 MiB/s rd, 96 MiB/s wr, 260 op/s 2026-03-09T15:00:46.554 INFO:tasks.workunit.client.1.vm09.stdout:1/341: write d8/d10/f12 [920704,88935] 0 2026-03-09T15:00:46.555 INFO:tasks.workunit.client.1.vm09.stdout:2/416: chown df/d58/d74/l78 47 1 2026-03-09T15:00:46.555 INFO:tasks.workunit.client.1.vm09.stdout:3/369: creat d3/d74/f88 x:0 0 0 2026-03-09T15:00:46.556 INFO:tasks.workunit.client.1.vm09.stdout:2/417: readlink df/d58/d67/l79 0 2026-03-09T15:00:46.558 INFO:tasks.workunit.client.1.vm09.stdout:3/370: write d3/d3a/d2b/f72 [1261262,50286] 0 2026-03-09T15:00:46.558 INFO:tasks.workunit.client.1.vm09.stdout:3/371: chown d3/f9 7676163 1 2026-03-09T15:00:46.559 
INFO:tasks.workunit.client.1.vm09.stdout:5/351: symlink d2/d37/d3c/d36/l85 0 2026-03-09T15:00:46.560 INFO:tasks.workunit.client.1.vm09.stdout:9/350: getdents d1 0 2026-03-09T15:00:46.561 INFO:tasks.workunit.client.1.vm09.stdout:5/352: mkdir d2/d37/d53/d86 0 2026-03-09T15:00:46.562 INFO:tasks.workunit.client.1.vm09.stdout:3/372: truncate d3/f9 354435 0 2026-03-09T15:00:46.562 INFO:tasks.workunit.client.1.vm09.stdout:3/373: dread - d3/d3a/d2b/d36/f68 zero size 2026-03-09T15:00:46.563 INFO:tasks.workunit.client.1.vm09.stdout:7/365: creat d3/d61/f6c x:0 0 0 2026-03-09T15:00:46.564 INFO:tasks.workunit.client.1.vm09.stdout:2/418: mknod df/d1f/d6d/c7d 0 2026-03-09T15:00:46.565 INFO:tasks.workunit.client.1.vm09.stdout:7/366: write d3/d1d/d2d/f57 [993455,70982] 0 2026-03-09T15:00:46.565 INFO:tasks.workunit.client.1.vm09.stdout:2/419: write df/f16 [1359296,104233] 0 2026-03-09T15:00:46.567 INFO:tasks.workunit.client.1.vm09.stdout:1/342: rename d8/d22/d40/f52 to d8/f6b 0 2026-03-09T15:00:46.571 INFO:tasks.workunit.client.1.vm09.stdout:5/353: creat d2/d37/d53/d86/f87 x:0 0 0 2026-03-09T15:00:46.572 INFO:tasks.workunit.client.1.vm09.stdout:2/420: chown df/l11 218072521 1 2026-03-09T15:00:46.572 INFO:tasks.workunit.client.1.vm09.stdout:1/343: creat d8/d10/d24/d45/f6c x:0 0 0 2026-03-09T15:00:46.573 INFO:tasks.workunit.client.1.vm09.stdout:5/354: dwrite d2/d37/d3c/d55/f68 [0,4194304] 0 2026-03-09T15:00:46.577 INFO:tasks.workunit.client.1.vm09.stdout:7/367: mknod d3/d1d/c6d 0 2026-03-09T15:00:46.578 INFO:tasks.workunit.client.1.vm09.stdout:5/355: write d2/f34 [1337750,106240] 0 2026-03-09T15:00:46.584 INFO:tasks.workunit.client.1.vm09.stdout:2/421: dwrite df/d20/d2e/f64 [0,4194304] 0 2026-03-09T15:00:46.585 INFO:tasks.workunit.client.1.vm09.stdout:2/422: dread - df/d1f/d47/f56 zero size 2026-03-09T15:00:46.586 INFO:tasks.workunit.client.1.vm09.stdout:1/344: mknod d8/d10/d24/d45/d5f/c6d 0 2026-03-09T15:00:46.590 INFO:tasks.workunit.client.1.vm09.stdout:7/368: dwrite d3/f3f 
[0,4194304] 0 2026-03-09T15:00:46.598 INFO:tasks.workunit.client.1.vm09.stdout:8/383: dwrite df/d24/f32 [0,4194304] 0 2026-03-09T15:00:46.603 INFO:tasks.workunit.client.1.vm09.stdout:7/369: rmdir d3/d1d/d2d 39 2026-03-09T15:00:46.607 INFO:tasks.workunit.client.1.vm09.stdout:4/381: truncate db/d19/d52/d76/f3e 1097709 0 2026-03-09T15:00:46.607 INFO:tasks.workunit.client.1.vm09.stdout:1/345: dwrite d8/d10/f12 [0,4194304] 0 2026-03-09T15:00:46.611 INFO:tasks.workunit.client.1.vm09.stdout:4/382: dwrite db/d2f/d5d/f77 [0,4194304] 0 2026-03-09T15:00:46.619 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:46 vm09.local ceph-mon[59673]: pgmap v149: 65 pgs: 65 active+clean; 786 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 14 MiB/s rd, 96 MiB/s wr, 260 op/s 2026-03-09T15:00:46.624 INFO:tasks.workunit.client.1.vm09.stdout:7/370: dwrite d3/f9 [0,4194304] 0 2026-03-09T15:00:46.629 INFO:tasks.workunit.client.1.vm09.stdout:1/346: fsync d8/d10/f69 0 2026-03-09T15:00:46.638 INFO:tasks.workunit.client.1.vm09.stdout:8/384: dwrite df/d5b/f40 [0,4194304] 0 2026-03-09T15:00:46.642 INFO:tasks.workunit.client.1.vm09.stdout:0/431: write da/dc/d10/f16 [1578687,19407] 0 2026-03-09T15:00:46.644 INFO:tasks.workunit.client.1.vm09.stdout:7/371: readlink d3/d1d/l3a 0 2026-03-09T15:00:46.644 INFO:tasks.workunit.client.1.vm09.stdout:9/351: fsync d1/d7/d1e/d2b/f42 0 2026-03-09T15:00:46.645 INFO:tasks.workunit.client.1.vm09.stdout:4/383: dwrite db/d12/f37 [0,4194304] 0 2026-03-09T15:00:46.647 INFO:tasks.workunit.client.1.vm09.stdout:1/347: creat d8/d22/f6e x:0 0 0 2026-03-09T15:00:46.647 INFO:tasks.workunit.client.1.vm09.stdout:9/352: write d1/f29 [3588828,35891] 0 2026-03-09T15:00:46.648 INFO:tasks.workunit.client.1.vm09.stdout:9/353: read - d1/d58/f75 zero size 2026-03-09T15:00:46.656 INFO:tasks.workunit.client.1.vm09.stdout:8/385: dwrite fe [4194304,4194304] 0 2026-03-09T15:00:46.660 INFO:tasks.workunit.client.1.vm09.stdout:3/374: dwrite d3/d3a/d2b/d31/d4a/d62/f1b [0,4194304] 0 
2026-03-09T15:00:46.669 INFO:tasks.workunit.client.1.vm09.stdout:4/384: rmdir db 39 2026-03-09T15:00:46.669 INFO:tasks.workunit.client.1.vm09.stdout:7/372: mkdir d3/db/d15/d5f/d6e 0 2026-03-09T15:00:46.669 INFO:tasks.workunit.client.1.vm09.stdout:0/432: fdatasync da/fb 0 2026-03-09T15:00:46.669 INFO:tasks.workunit.client.1.vm09.stdout:3/375: creat d3/d5b/d79/f89 x:0 0 0 2026-03-09T15:00:46.669 INFO:tasks.workunit.client.1.vm09.stdout:3/376: readlink d3/d3a/d2b/d39/d6a/l6c 0 2026-03-09T15:00:46.669 INFO:tasks.workunit.client.1.vm09.stdout:3/377: dread - d3/d5b/d79/f89 zero size 2026-03-09T15:00:46.670 INFO:tasks.workunit.client.1.vm09.stdout:3/378: truncate d3/d74/f88 19844 0 2026-03-09T15:00:46.670 INFO:tasks.workunit.client.1.vm09.stdout:3/379: chown d3/d3a/d2b/d31/d4a/d62/f8 1 1 2026-03-09T15:00:46.672 INFO:tasks.workunit.client.1.vm09.stdout:4/385: chown db/d19 9062 1 2026-03-09T15:00:46.673 INFO:tasks.workunit.client.1.vm09.stdout:4/386: chown db/d12/d16/d5b/d78 1489798 1 2026-03-09T15:00:46.679 INFO:tasks.workunit.client.1.vm09.stdout:3/380: dwrite d3/d3a/d2b/d31/f40 [0,4194304] 0 2026-03-09T15:00:46.681 INFO:tasks.workunit.client.1.vm09.stdout:7/373: rename d3/db/d25/f50 to d3/d1d/d65/f6f 0 2026-03-09T15:00:46.681 INFO:tasks.workunit.client.1.vm09.stdout:4/387: symlink db/d19/d23/d44/l7a 0 2026-03-09T15:00:46.682 INFO:tasks.workunit.client.1.vm09.stdout:8/386: getdents df/d38/d64/d5f 0 2026-03-09T15:00:46.682 INFO:tasks.workunit.client.1.vm09.stdout:0/433: creat da/dc/f8b x:0 0 0 2026-03-09T15:00:46.683 INFO:tasks.workunit.client.1.vm09.stdout:8/387: chown df/f1a 165 1 2026-03-09T15:00:46.684 INFO:tasks.workunit.client.1.vm09.stdout:4/388: chown db/d19/d23/c33 6559 1 2026-03-09T15:00:46.687 INFO:tasks.workunit.client.1.vm09.stdout:8/388: unlink df/d24/f61 0 2026-03-09T15:00:46.691 INFO:tasks.workunit.client.1.vm09.stdout:4/389: mknod db/d12/d16/d5b/d78/c7b 0 2026-03-09T15:00:46.691 INFO:tasks.workunit.client.1.vm09.stdout:0/434: fsync da/d30/f3d 0 
2026-03-09T15:00:46.691 INFO:tasks.workunit.client.1.vm09.stdout:4/390: truncate db/d12/d16/f63 259988 0 2026-03-09T15:00:46.691 INFO:tasks.workunit.client.1.vm09.stdout:8/389: mkdir df/d2d/d42/d70 0 2026-03-09T15:00:46.691 INFO:tasks.workunit.client.1.vm09.stdout:7/374: read d3/d3d/f51 [189144,77669] 0 2026-03-09T15:00:46.697 INFO:tasks.workunit.client.1.vm09.stdout:0/435: rmdir da/dc/d1c/d3c/d44 39 2026-03-09T15:00:46.699 INFO:tasks.workunit.client.1.vm09.stdout:4/391: write db/d12/f3d [612470,129142] 0 2026-03-09T15:00:46.699 INFO:tasks.workunit.client.1.vm09.stdout:9/354: dread d1/d7/d1e/d2b/d40/f57 [0,4194304] 0 2026-03-09T15:00:46.699 INFO:tasks.workunit.client.1.vm09.stdout:8/390: dwrite df/f34 [8388608,4194304] 0 2026-03-09T15:00:46.699 INFO:tasks.workunit.client.1.vm09.stdout:9/355: chown d1/d6e/f74 17645545 1 2026-03-09T15:00:46.703 INFO:tasks.workunit.client.1.vm09.stdout:8/391: symlink df/d38/d64/l71 0 2026-03-09T15:00:46.706 INFO:tasks.workunit.client.1.vm09.stdout:9/356: mknod d1/d7/d1e/d2b/d2e/d56/c7a 0 2026-03-09T15:00:46.710 INFO:tasks.workunit.client.1.vm09.stdout:4/392: getdents db/d12 0 2026-03-09T15:00:46.711 INFO:tasks.workunit.client.1.vm09.stdout:8/392: creat df/d5c/f72 x:0 0 0 2026-03-09T15:00:46.711 INFO:tasks.workunit.client.1.vm09.stdout:9/357: mknod d1/d7/d1e/d2b/d2e/d56/d5e/c7b 0 2026-03-09T15:00:46.711 INFO:tasks.workunit.client.1.vm09.stdout:3/381: dread d3/d3a/d2b/d31/f3f [0,4194304] 0 2026-03-09T15:00:46.711 INFO:tasks.workunit.client.1.vm09.stdout:4/393: mkdir db/d19/d23/d44/d7c 0 2026-03-09T15:00:46.714 INFO:tasks.workunit.client.1.vm09.stdout:3/382: write d3/d3a/d2b/d31/f40 [2282172,54023] 0 2026-03-09T15:00:46.714 INFO:tasks.workunit.client.1.vm09.stdout:3/383: fdatasync d3/d5b/d79/f89 0 2026-03-09T15:00:46.714 INFO:tasks.workunit.client.1.vm09.stdout:3/384: chown d3/d3a/d2b/f64 12488668 1 2026-03-09T15:00:46.715 INFO:tasks.workunit.client.1.vm09.stdout:9/358: truncate d1/d4f/f64 47954 0 2026-03-09T15:00:46.715 
INFO:tasks.workunit.client.1.vm09.stdout:8/393: mkdir df/d2d/d46/d73 0 2026-03-09T15:00:46.715 INFO:tasks.workunit.client.1.vm09.stdout:8/394: write df/f1a [4577612,10171] 0 2026-03-09T15:00:46.719 INFO:tasks.workunit.client.1.vm09.stdout:8/395: dwrite fe [8388608,4194304] 0 2026-03-09T15:00:46.724 INFO:tasks.workunit.client.1.vm09.stdout:4/394: mkdir db/d19/d23/d44/d7c/d7d 0 2026-03-09T15:00:46.724 INFO:tasks.workunit.client.1.vm09.stdout:8/396: write df/d38/f58 [842280,129992] 0 2026-03-09T15:00:46.724 INFO:tasks.workunit.client.1.vm09.stdout:4/395: readlink db/d19/d23/l72 0 2026-03-09T15:00:46.728 INFO:tasks.workunit.client.1.vm09.stdout:4/396: mknod db/d2f/d5d/c7e 0 2026-03-09T15:00:46.730 INFO:tasks.workunit.client.1.vm09.stdout:7/375: sync 2026-03-09T15:00:46.730 INFO:tasks.workunit.client.1.vm09.stdout:3/385: sync 2026-03-09T15:00:46.730 INFO:tasks.workunit.client.1.vm09.stdout:9/359: dwrite d1/d7/d1e/d2b/d2e/f19 [0,4194304] 0 2026-03-09T15:00:46.732 INFO:tasks.workunit.client.1.vm09.stdout:7/376: write f1 [395476,46341] 0 2026-03-09T15:00:46.744 INFO:tasks.workunit.client.1.vm09.stdout:9/360: mknod d1/d4f/c7c 0 2026-03-09T15:00:46.744 INFO:tasks.workunit.client.1.vm09.stdout:7/377: fsync d3/db/d25/f22 0 2026-03-09T15:00:46.748 INFO:tasks.workunit.client.1.vm09.stdout:7/378: symlink d3/db/d25/l70 0 2026-03-09T15:00:46.793 INFO:tasks.workunit.client.1.vm09.stdout:4/397: dread db/d12/f1b [0,4194304] 0 2026-03-09T15:00:46.819 INFO:tasks.workunit.client.1.vm09.stdout:2/423: write f4 [2676453,47630] 0 2026-03-09T15:00:46.822 INFO:tasks.workunit.client.1.vm09.stdout:5/356: dwrite d2/d4/f23 [0,4194304] 0 2026-03-09T15:00:46.824 INFO:tasks.workunit.client.1.vm09.stdout:2/424: rename df/d1f/d47/d5d/f5e to df/d1f/f7e 0 2026-03-09T15:00:46.825 INFO:tasks.workunit.client.1.vm09.stdout:2/425: truncate df/d2d/f41 2103950 0 2026-03-09T15:00:46.825 INFO:tasks.workunit.client.1.vm09.stdout:5/357: chown d2/la 907736 1 2026-03-09T15:00:46.826 
INFO:tasks.workunit.client.1.vm09.stdout:5/358: chown d2/d37/d3c/d36/d4c 11667 1 2026-03-09T15:00:46.832 INFO:tasks.workunit.client.1.vm09.stdout:2/426: dwrite df/d20/f6a [0,4194304] 0 2026-03-09T15:00:46.832 INFO:tasks.workunit.client.1.vm09.stdout:5/359: sync 2026-03-09T15:00:46.837 INFO:tasks.workunit.client.1.vm09.stdout:2/427: creat df/d1f/d6d/f7f x:0 0 0 2026-03-09T15:00:46.840 INFO:tasks.workunit.client.1.vm09.stdout:2/428: fdatasync df/d58/d67/f4e 0 2026-03-09T15:00:46.845 INFO:tasks.workunit.client.1.vm09.stdout:2/429: mknod df/d1f/d47/d5d/c80 0 2026-03-09T15:00:46.847 INFO:tasks.workunit.client.1.vm09.stdout:4/398: dread db/d19/d52/d76/f3e [0,4194304] 0 2026-03-09T15:00:46.917 INFO:tasks.workunit.client.1.vm09.stdout:1/348: write d8/f3d [1469202,102817] 0 2026-03-09T15:00:46.918 INFO:tasks.workunit.client.1.vm09.stdout:7/379: link d3/db/d25/l56 d3/db/d15/d5f/d44/l71 0 2026-03-09T15:00:46.919 INFO:tasks.workunit.client.1.vm09.stdout:1/349: creat d8/d50/d5b/f6f x:0 0 0 2026-03-09T15:00:46.922 INFO:tasks.workunit.client.1.vm09.stdout:1/350: dwrite d8/d1b/f41 [0,4194304] 0 2026-03-09T15:00:46.927 INFO:tasks.workunit.client.1.vm09.stdout:1/351: symlink d8/d10/d24/d45/l70 0 2026-03-09T15:00:46.930 INFO:tasks.workunit.client.1.vm09.stdout:1/352: link d8/d10/d24/d45/c4e d8/d22/d56/c71 0 2026-03-09T15:00:46.931 INFO:tasks.workunit.client.1.vm09.stdout:7/380: sync 2026-03-09T15:00:46.932 INFO:tasks.workunit.client.1.vm09.stdout:7/381: fdatasync d3/d1d/d2d/f57 0 2026-03-09T15:00:46.934 INFO:tasks.workunit.client.1.vm09.stdout:1/353: dwrite d8/ff [0,4194304] 0 2026-03-09T15:00:46.936 INFO:tasks.workunit.client.1.vm09.stdout:1/354: readlink d8/d50/d39/l43 0 2026-03-09T15:00:46.938 INFO:tasks.workunit.client.1.vm09.stdout:7/382: dread d3/d1d/d2d/f57 [0,4194304] 0 2026-03-09T15:00:46.943 INFO:tasks.workunit.client.1.vm09.stdout:0/436: dwrite da/dc/d22/f55 [0,4194304] 0 2026-03-09T15:00:46.943 INFO:tasks.workunit.client.1.vm09.stdout:1/355: write d8/d22/f4c 
[3500490,106414] 0 2026-03-09T15:00:46.943 INFO:tasks.workunit.client.1.vm09.stdout:1/356: fdatasync d8/fa 0 2026-03-09T15:00:46.946 INFO:tasks.workunit.client.1.vm09.stdout:8/397: write df/d5b/d65/d1d/f6e [3744767,70214] 0 2026-03-09T15:00:46.952 INFO:tasks.workunit.client.1.vm09.stdout:3/386: truncate d3/ff 5158809 0 2026-03-09T15:00:46.953 INFO:tasks.workunit.client.1.vm09.stdout:9/361: truncate d1/d7/d1e/f34 1942515 0 2026-03-09T15:00:46.960 INFO:tasks.workunit.client.1.vm09.stdout:0/437: mkdir da/dc/d8c 0 2026-03-09T15:00:46.966 INFO:tasks.workunit.client.1.vm09.stdout:4/399: truncate f3 1036349 0 2026-03-09T15:00:46.966 INFO:tasks.workunit.client.1.vm09.stdout:3/387: creat d3/d3a/d2b/d36/f8a x:0 0 0 2026-03-09T15:00:46.966 INFO:tasks.workunit.client.1.vm09.stdout:5/360: dwrite d2/d37/d3c/f4e [4194304,4194304] 0 2026-03-09T15:00:46.968 INFO:tasks.workunit.client.1.vm09.stdout:9/362: mknod d1/d7/d1e/c7d 0 2026-03-09T15:00:46.970 INFO:tasks.workunit.client.1.vm09.stdout:7/383: dwrite d3/db/d46/f5b [0,4194304] 0 2026-03-09T15:00:46.974 INFO:tasks.workunit.client.1.vm09.stdout:1/357: getdents d8/d10/d24/d45/d5f 0 2026-03-09T15:00:46.974 INFO:tasks.workunit.client.1.vm09.stdout:0/438: dwrite da/dc/d10/f29 [0,4194304] 0 2026-03-09T15:00:46.974 INFO:tasks.workunit.client.1.vm09.stdout:1/358: read d8/f17 [2430818,72917] 0 2026-03-09T15:00:46.976 INFO:tasks.workunit.client.1.vm09.stdout:1/359: write d8/d50/d5b/f6f [1025093,120517] 0 2026-03-09T15:00:46.978 INFO:tasks.workunit.client.1.vm09.stdout:3/388: truncate d3/d3a/d2b/d36/f44 1022173 0 2026-03-09T15:00:46.979 INFO:tasks.workunit.client.1.vm09.stdout:0/439: mknod da/dc/d61/c8d 0 2026-03-09T15:00:46.979 INFO:tasks.workunit.client.1.vm09.stdout:5/361: dwrite d2/d37/d3c/f3a [0,4194304] 0 2026-03-09T15:00:46.990 INFO:tasks.workunit.client.1.vm09.stdout:9/363: dwrite d1/d4f/d52/f53 [0,4194304] 0 2026-03-09T15:00:46.992 INFO:tasks.workunit.client.1.vm09.stdout:0/440: rename da/dc/d10/f11 to da/dc/d22/d76/f8e 0 
2026-03-09T15:00:46.992 INFO:tasks.workunit.client.1.vm09.stdout:0/441: write da/dc/d10/f4a [3705614,4876] 0 2026-03-09T15:00:46.993 INFO:tasks.workunit.client.1.vm09.stdout:5/362: mkdir d2/d37/d53/d86/d88 0 2026-03-09T15:00:46.993 INFO:tasks.workunit.client.1.vm09.stdout:5/363: dread - d2/d4/f73 zero size 2026-03-09T15:00:46.994 INFO:tasks.workunit.client.1.vm09.stdout:9/364: chown d1/d7/d1e/d2b/d2e/c50 9 1 2026-03-09T15:00:46.995 INFO:tasks.workunit.client.1.vm09.stdout:5/364: write d2/d37/d3c/d55/f58 [870180,120676] 0 2026-03-09T15:00:46.997 INFO:tasks.workunit.client.1.vm09.stdout:3/389: link d3/d3a/d2b/d31/d4a/d62/f26 d3/d5b/f8b 0 2026-03-09T15:00:46.999 INFO:tasks.workunit.client.1.vm09.stdout:0/442: rename da/dc/d10/c27 to da/c8f 0 2026-03-09T15:00:47.000 INFO:tasks.workunit.client.1.vm09.stdout:0/443: stat da/dc/d22/d76 0 2026-03-09T15:00:47.000 INFO:tasks.workunit.client.1.vm09.stdout:3/390: rmdir d3/d3a/d2b/d53 39 2026-03-09T15:00:47.001 INFO:tasks.workunit.client.1.vm09.stdout:0/444: write da/dc/d1c/d46/f52 [709991,124530] 0 2026-03-09T15:00:47.002 INFO:tasks.workunit.client.1.vm09.stdout:5/365: dread d2/d37/d3c/d55/f68 [0,4194304] 0 2026-03-09T15:00:47.003 INFO:tasks.workunit.client.1.vm09.stdout:3/391: chown d3/d3a/d2b/d36/f68 0 1 2026-03-09T15:00:47.007 INFO:tasks.workunit.client.1.vm09.stdout:0/445: creat da/dc/f90 x:0 0 0 2026-03-09T15:00:47.007 INFO:tasks.workunit.client.1.vm09.stdout:3/392: stat d3/d3a/d2b/d31/d4a/d62/c2c 0 2026-03-09T15:00:47.015 INFO:tasks.workunit.client.1.vm09.stdout:0/446: write da/dc/d1c/d46/f7d [1030688,75203] 0 2026-03-09T15:00:47.015 INFO:tasks.workunit.client.1.vm09.stdout:0/447: write da/dc/d22/f47 [1504053,107287] 0 2026-03-09T15:00:47.015 INFO:tasks.workunit.client.1.vm09.stdout:3/393: rmdir d3/d3a/d2b/d53 39 2026-03-09T15:00:47.016 INFO:tasks.workunit.client.1.vm09.stdout:5/366: dwrite d2/d37/d3c/d36/d45/f6e [0,4194304] 0 2026-03-09T15:00:47.017 INFO:tasks.workunit.client.1.vm09.stdout:5/367: truncate d2/f61 840355 0 
2026-03-09T15:00:47.018 INFO:tasks.workunit.client.1.vm09.stdout:5/368: mkdir d2/d37/d3c/d36/d4c/d89 0 2026-03-09T15:00:47.022 INFO:tasks.workunit.client.1.vm09.stdout:3/394: creat d3/d3a/d2b/d7b/f8c x:0 0 0 2026-03-09T15:00:47.024 INFO:tasks.workunit.client.1.vm09.stdout:3/395: readlink d3/d3a/d2b/d31/d4a/l69 0 2026-03-09T15:00:47.024 INFO:tasks.workunit.client.1.vm09.stdout:3/396: stat d3/d3a/d2b/d31/d4a/l5e 0 2026-03-09T15:00:47.027 INFO:tasks.workunit.client.1.vm09.stdout:3/397: creat d3/d3a/d2b/d7b/f8d x:0 0 0 2026-03-09T15:00:47.034 INFO:tasks.workunit.client.1.vm09.stdout:9/365: dread d1/f29 [0,4194304] 0 2026-03-09T15:00:47.038 INFO:tasks.workunit.client.1.vm09.stdout:3/398: link d3/d3a/d2b/l37 d3/d3a/d2b/d39/l8e 0 2026-03-09T15:00:47.038 INFO:tasks.workunit.client.1.vm09.stdout:3/399: chown d3/l23 63607199 1 2026-03-09T15:00:47.046 INFO:tasks.workunit.client.1.vm09.stdout:3/400: dwrite d3/d3a/d2b/d36/f46 [0,4194304] 0 2026-03-09T15:00:47.048 INFO:tasks.workunit.client.1.vm09.stdout:3/401: write d3/d74/f88 [319845,32887] 0 2026-03-09T15:00:47.050 INFO:tasks.workunit.client.1.vm09.stdout:9/366: dwrite d1/d7/d1e/f5d [0,4194304] 0 2026-03-09T15:00:47.053 INFO:tasks.workunit.client.1.vm09.stdout:9/367: symlink d1/d7/d1e/d2b/d2e/l7e 0 2026-03-09T15:00:47.054 INFO:tasks.workunit.client.1.vm09.stdout:3/402: mknod d3/d3a/d2b/d53/d80/c8f 0 2026-03-09T15:00:47.064 INFO:tasks.workunit.client.1.vm09.stdout:9/368: dwrite d1/d7/f67 [0,4194304] 0 2026-03-09T15:00:47.075 INFO:tasks.workunit.client.1.vm09.stdout:9/369: unlink d1/d7/d1e/f79 0 2026-03-09T15:00:47.082 INFO:tasks.workunit.client.1.vm09.stdout:9/370: symlink d1/d6e/l7f 0 2026-03-09T15:00:47.082 INFO:tasks.workunit.client.1.vm09.stdout:9/371: dwrite d1/d7/f3e [0,4194304] 0 2026-03-09T15:00:47.090 INFO:tasks.workunit.client.1.vm09.stdout:9/372: readlink d1/d4f/d52/l5c 0 2026-03-09T15:00:47.093 INFO:tasks.workunit.client.1.vm09.stdout:9/373: fdatasync d1/d7/d1e/d2b/f3f 0 2026-03-09T15:00:47.098 
INFO:tasks.workunit.client.1.vm09.stdout:9/374: dwrite d1/d7/d1e/d2b/d2e/f3a [4194304,4194304] 0 2026-03-09T15:00:47.103 INFO:tasks.workunit.client.1.vm09.stdout:9/375: rename d1/d7/d1e/d2b/f3f to d1/d58/f80 0 2026-03-09T15:00:47.104 INFO:tasks.workunit.client.1.vm09.stdout:9/376: creat d1/d7/d1e/d2b/f81 x:0 0 0 2026-03-09T15:00:47.105 INFO:tasks.workunit.client.1.vm09.stdout:9/377: chown d1/d7/d1e/d2b/d40 2 1 2026-03-09T15:00:47.106 INFO:tasks.workunit.client.1.vm09.stdout:9/378: symlink d1/l82 0 2026-03-09T15:00:47.108 INFO:tasks.workunit.client.1.vm09.stdout:9/379: rename d1/d7/f6a to d1/d7/f83 0 2026-03-09T15:00:47.117 INFO:tasks.workunit.client.1.vm09.stdout:9/380: chown d1/d4f/d52/l5c 2019932759 1 2026-03-09T15:00:47.118 INFO:tasks.workunit.client.1.vm09.stdout:9/381: mknod d1/d7/d1e/d2b/d40/c84 0 2026-03-09T15:00:47.118 INFO:tasks.workunit.client.1.vm09.stdout:9/382: chown d1/d58/l6f 778520 1 2026-03-09T15:00:47.118 INFO:tasks.workunit.client.1.vm09.stdout:9/383: rmdir d1/d7/d1e/d2b/d40 39 2026-03-09T15:00:47.118 INFO:tasks.workunit.client.1.vm09.stdout:9/384: symlink d1/d7/d1e/d2b/d2e/d56/d6d/l85 0 2026-03-09T15:00:47.118 INFO:tasks.workunit.client.1.vm09.stdout:9/385: rename d1/d7/c1a to d1/d7/d1e/d2b/d2e/d56/c86 0 2026-03-09T15:00:47.123 INFO:tasks.workunit.client.1.vm09.stdout:9/386: dwrite d1/d7/f67 [4194304,4194304] 0 2026-03-09T15:00:47.133 INFO:tasks.workunit.client.1.vm09.stdout:9/387: dwrite d1/f4 [4194304,4194304] 0 2026-03-09T15:00:47.144 INFO:tasks.workunit.client.1.vm09.stdout:8/398: dwrite df/d2d/f57 [0,4194304] 0 2026-03-09T15:00:47.148 INFO:tasks.workunit.client.1.vm09.stdout:9/388: dwrite d1/d58/f72 [0,4194304] 0 2026-03-09T15:00:47.153 INFO:tasks.workunit.client.1.vm09.stdout:5/369: fsync d2/d37/d3c/d36/d45/f6e 0 2026-03-09T15:00:47.155 INFO:tasks.workunit.client.1.vm09.stdout:5/370: chown d2/d37/f43 14087 1 2026-03-09T15:00:47.156 INFO:tasks.workunit.client.1.vm09.stdout:9/389: creat d1/d7/d1e/d2b/d2e/d56/d6d/f87 x:0 0 0 
2026-03-09T15:00:47.161 INFO:tasks.workunit.client.1.vm09.stdout:9/390: write d1/d7/f83 [3284717,92451] 0
2026-03-09T15:00:47.166 INFO:tasks.workunit.client.1.vm09.stdout:5/371: getdents d2/d37/d3c/d36/d4c/d51 0
2026-03-09T15:00:47.166 INFO:tasks.workunit.client.1.vm09.stdout:5/372: dwrite d2/f22 [4194304,4194304] 0
2026-03-09T15:00:47.167 INFO:tasks.workunit.client.1.vm09.stdout:5/373: chown d2/d4/f16 114774 1
2026-03-09T15:00:47.173 INFO:tasks.workunit.client.1.vm09.stdout:5/374: mknod d2/d37/d3c/d55/c8a 0
2026-03-09T15:00:47.173 INFO:tasks.workunit.client.1.vm09.stdout:5/375: truncate d2/f3d 766816 0
2026-03-09T15:00:47.173 INFO:tasks.workunit.client.1.vm09.stdout:5/376: fdatasync d2/f3d 0
2026-03-09T15:00:47.176 INFO:tasks.workunit.client.1.vm09.stdout:5/377: rename d2/d37/d3c/d36/d45/d5c/c6a to d2/d37/d3c/d36/d45/d5c/c8b 0
2026-03-09T15:00:47.188 INFO:tasks.workunit.client.1.vm09.stdout:4/400: write f3 [1406391,107824] 0
2026-03-09T15:00:47.197 INFO:tasks.workunit.client.1.vm09.stdout:9/391: dread d1/d7/d1e/d2b/d2e/f3a [0,4194304] 0
2026-03-09T15:00:47.198 INFO:tasks.workunit.client.1.vm09.stdout:9/392: truncate d1/d4f/f64 356852 0
2026-03-09T15:00:47.200 INFO:tasks.workunit.client.1.vm09.stdout:9/393: chown d1/d7/d1e/d2b/d40/l70 4895 1
2026-03-09T15:00:47.236 INFO:tasks.workunit.client.1.vm09.stdout:5/378: dread d2/f14 [0,4194304] 0
2026-03-09T15:00:47.238 INFO:tasks.workunit.client.1.vm09.stdout:5/379: creat d2/d37/d3c/d36/d45/f8c x:0 0 0
2026-03-09T15:00:47.239 INFO:tasks.workunit.client.1.vm09.stdout:5/380: write d2/d4/f16 [708762,120574] 0
2026-03-09T15:00:47.241 INFO:tasks.workunit.client.1.vm09.stdout:5/381: creat d2/d37/d53/d86/d88/f8d x:0 0 0
2026-03-09T15:00:47.241 INFO:tasks.workunit.client.1.vm09.stdout:5/382: write d2/d4/f23 [4578269,65920] 0
2026-03-09T15:00:47.242 INFO:tasks.workunit.client.1.vm09.stdout:5/383: write d2/f34 [3959726,91456] 0
2026-03-09T15:00:47.246 INFO:tasks.workunit.client.1.vm09.stdout:5/384: symlink d2/d37/d3c/d36/d45/l8e 0
2026-03-09T15:00:47.255 INFO:tasks.workunit.client.1.vm09.stdout:5/385: dwrite d2/d37/d3c/d55/f7b [0,4194304] 0
2026-03-09T15:00:47.262 INFO:tasks.workunit.client.1.vm09.stdout:7/384: write d3/d1d/f11 [3344404,18067] 0
2026-03-09T15:00:47.270 INFO:tasks.workunit.client.1.vm09.stdout:7/385: chown d3/d1d/d65/f6f 215027339 1
2026-03-09T15:00:47.270 INFO:tasks.workunit.client.1.vm09.stdout:7/386: readlink d3/d1d/l27 0
2026-03-09T15:00:47.270 INFO:tasks.workunit.client.1.vm09.stdout:7/387: chown d3/d1d/f33 12782665 1
2026-03-09T15:00:47.270 INFO:tasks.workunit.client.1.vm09.stdout:7/388: chown d3 92619 1
2026-03-09T15:00:47.324 INFO:tasks.workunit.client.1.vm09.stdout:5/386: fdatasync d2/d4/f23 0
2026-03-09T15:00:47.327 INFO:tasks.workunit.client.1.vm09.stdout:0/448: getdents da/dc/d61 0
2026-03-09T15:00:47.329 INFO:tasks.workunit.client.1.vm09.stdout:5/387: dwrite d2/d37/d3c/d55/f7b [0,4194304] 0
2026-03-09T15:00:47.336 INFO:tasks.workunit.client.1.vm09.stdout:0/449: dwrite da/dc/d1c/d3c/d44/f67 [4194304,4194304] 0
2026-03-09T15:00:47.340 INFO:tasks.workunit.client.1.vm09.stdout:5/388: unlink d2/d37/d67/l7c 0
2026-03-09T15:00:47.344 INFO:tasks.workunit.client.1.vm09.stdout:5/389: rename d2/la to d2/d37/d53/d86/d88/l8f 0
2026-03-09T15:00:47.349 INFO:tasks.workunit.client.1.vm09.stdout:1/360: write d8/d1b/f37 [419076,73834] 0
2026-03-09T15:00:47.358 INFO:tasks.workunit.client.1.vm09.stdout:3/403: dread d3/f77 [0,4194304] 0
2026-03-09T15:00:47.358 INFO:tasks.workunit.client.1.vm09.stdout:3/404: dread - d3/d3a/d2b/f64 zero size
2026-03-09T15:00:47.359 INFO:tasks.workunit.client.1.vm09.stdout:3/405: fdatasync d3/d3a/d2b/d36/f68 0
2026-03-09T15:00:47.360 INFO:tasks.workunit.client.1.vm09.stdout:3/406: chown d3/l23 1763686 1
2026-03-09T15:00:47.360 INFO:tasks.workunit.client.1.vm09.stdout:5/390: dread d2/f38 [0,4194304] 0
2026-03-09T15:00:47.361 INFO:tasks.workunit.client.1.vm09.stdout:3/407: mkdir d3/d3a/d2b/d7b/d90 0
2026-03-09T15:00:47.362 INFO:tasks.workunit.client.1.vm09.stdout:3/408: dread - d3/d3a/d2b/d36/f68 zero size
2026-03-09T15:00:47.362 INFO:tasks.workunit.client.1.vm09.stdout:3/409: stat d3/d3a/d2b/f64 0
2026-03-09T15:00:47.363 INFO:tasks.workunit.client.1.vm09.stdout:3/410: dread - d3/d3a/d2b/d39/f7d zero size
2026-03-09T15:00:47.368 INFO:tasks.workunit.client.1.vm09.stdout:3/411: dwrite d3/d3a/d2b/d31/d4a/d62/f1b [0,4194304] 0
2026-03-09T15:00:47.369 INFO:tasks.workunit.client.1.vm09.stdout:3/412: dread - d3/d3a/d2b/f64 zero size
2026-03-09T15:00:47.369 INFO:tasks.workunit.client.1.vm09.stdout:3/413: write d3/d3a/d2b/d7b/f8d [430924,29862] 0
2026-03-09T15:00:47.370 INFO:tasks.workunit.client.1.vm09.stdout:3/414: fsync d3/d3a/d2b/d39/d6a/f6f 0
2026-03-09T15:00:47.373 INFO:tasks.workunit.client.1.vm09.stdout:3/415: dread - d3/d5b/d79/f83 zero size
2026-03-09T15:00:47.385 INFO:tasks.workunit.client.1.vm09.stdout:5/391: dread d2/d4/fd [0,4194304] 0
2026-03-09T15:00:47.388 INFO:tasks.workunit.client.1.vm09.stdout:5/392: getdents d2/d37/d3c 0
2026-03-09T15:00:47.392 INFO:tasks.workunit.client.1.vm09.stdout:5/393: link d2/d37/d53/f81 d2/d37/d3c/d36/d45/d5c/f90 0
2026-03-09T15:00:47.397 INFO:tasks.workunit.client.1.vm09.stdout:6/402: dread d6/f17 [0,4194304] 0
2026-03-09T15:00:47.397 INFO:tasks.workunit.client.1.vm09.stdout:6/403: chown d6/db/d10 6862050 1
2026-03-09T15:00:47.397 INFO:tasks.workunit.client.1.vm09.stdout:6/404: stat d6/d20/f27 0
2026-03-09T15:00:47.397 INFO:tasks.workunit.client.1.vm09.stdout:0/450: sync
2026-03-09T15:00:47.397 INFO:tasks.workunit.client.1.vm09.stdout:3/416: sync
2026-03-09T15:00:47.399 INFO:tasks.workunit.client.1.vm09.stdout:2/430: dread f5 [0,4194304] 0
2026-03-09T15:00:47.399 INFO:tasks.workunit.client.1.vm09.stdout:0/451: fsync da/d30/f3d 0
2026-03-09T15:00:47.400 INFO:tasks.workunit.client.1.vm09.stdout:0/452: write da/dc/d1c/d46/d5b/f6a [5072561,97018] 0
2026-03-09T15:00:47.401 INFO:tasks.workunit.client.1.vm09.stdout:3/417: creat d3/d74/f91 x:0 0 0
2026-03-09T15:00:47.405 INFO:tasks.workunit.client.1.vm09.stdout:5/394: dread d2/d4/f16 [0,4194304] 0
2026-03-09T15:00:47.406 INFO:tasks.workunit.client.1.vm09.stdout:6/405: creat d6/f83 x:0 0 0
2026-03-09T15:00:47.408 INFO:tasks.workunit.client.1.vm09.stdout:2/431: rmdir df/d2d/d76 0
2026-03-09T15:00:47.408 INFO:tasks.workunit.client.1.vm09.stdout:8/399: truncate df/f51 43839 0
2026-03-09T15:00:47.409 INFO:tasks.workunit.client.1.vm09.stdout:6/406: chown d6/db/d10/d7a 1336716 1
2026-03-09T15:00:47.409 INFO:tasks.workunit.client.1.vm09.stdout:4/401: write db/d19/d23/d44/f45 [459427,87735] 0
2026-03-09T15:00:47.410 INFO:tasks.workunit.client.1.vm09.stdout:9/394: truncate d1/d58/f80 1433686 0
2026-03-09T15:00:47.413 INFO:tasks.workunit.client.1.vm09.stdout:9/395: read d1/d7/d1e/d2b/d2e/f19 [412660,25673] 0
2026-03-09T15:00:47.417 INFO:tasks.workunit.client.1.vm09.stdout:1/361: dread d8/fa [0,4194304] 0
2026-03-09T15:00:47.418 INFO:tasks.workunit.client.1.vm09.stdout:4/402: mkdir db/d12/d16/d5b/d78/d7f 0
2026-03-09T15:00:47.418 INFO:tasks.workunit.client.1.vm09.stdout:3/418: dwrite d3/d3a/d54/f59 [0,4194304] 0
2026-03-09T15:00:47.419 INFO:tasks.workunit.client.1.vm09.stdout:3/419: chown d3/d3a/d2b/d36/c4e 24 1
2026-03-09T15:00:47.428 INFO:tasks.workunit.client.1.vm09.stdout:1/362: rmdir d8/d10 39
2026-03-09T15:00:47.429 INFO:tasks.workunit.client.1.vm09.stdout:4/403: fdatasync db/d12/f1b 0
2026-03-09T15:00:47.433 INFO:tasks.workunit.client.1.vm09.stdout:8/400: dwrite df/d24/f32 [0,4194304] 0
2026-03-09T15:00:47.435 INFO:tasks.workunit.client.1.vm09.stdout:2/432: dwrite df/f4a [0,4194304] 0
2026-03-09T15:00:47.441 INFO:tasks.workunit.client.1.vm09.stdout:3/420: write d3/d3a/d2b/d39/d6a/f6f [4989738,101094] 0
2026-03-09T15:00:47.442 INFO:tasks.workunit.client.1.vm09.stdout:7/389: dwrite d3/fd [0,4194304] 0
2026-03-09T15:00:47.445 INFO:tasks.workunit.client.1.vm09.stdout:2/433: mknod df/d1f/d47/d5d/c81 0
2026-03-09T15:00:47.447 INFO:tasks.workunit.client.1.vm09.stdout:4/404: symlink db/d19/d23/d71/l80 0
2026-03-09T15:00:47.454 INFO:tasks.workunit.client.1.vm09.stdout:4/405: write db/d19/d23/d71/d5f/f66 [899244,130469] 0
2026-03-09T15:00:47.454 INFO:tasks.workunit.client.1.vm09.stdout:1/363: rename d8/d22/d40 to d8/d22/d72 0
2026-03-09T15:00:47.456 INFO:tasks.workunit.client.1.vm09.stdout:2/434: mknod df/d20/d29/d53/d5f/c82 0
2026-03-09T15:00:47.456 INFO:tasks.workunit.client.1.vm09.stdout:1/364: chown d8/f17 40008441 1
2026-03-09T15:00:47.457 INFO:tasks.workunit.client.1.vm09.stdout:3/421: creat d3/d3a/d2b/f92 x:0 0 0
2026-03-09T15:00:47.468 INFO:tasks.workunit.client.1.vm09.stdout:2/435: mknod df/c83 0
2026-03-09T15:00:47.468 INFO:tasks.workunit.client.1.vm09.stdout:3/422: symlink d3/d60/l93 0
2026-03-09T15:00:47.469 INFO:tasks.workunit.client.1.vm09.stdout:7/390: dread d3/db/d46/f5b [0,4194304] 0
2026-03-09T15:00:47.470 INFO:tasks.workunit.client.1.vm09.stdout:2/436: chown df/d58 5429210 1
2026-03-09T15:00:47.475 INFO:tasks.workunit.client.1.vm09.stdout:4/406: dwrite db/d19/d23/d71/f43 [4194304,4194304] 0
2026-03-09T15:00:47.479 INFO:tasks.workunit.client.1.vm09.stdout:1/365: rename d8/d1b to d8/d10/d73 0
2026-03-09T15:00:47.493 INFO:tasks.workunit.client.1.vm09.stdout:1/366: write d8/d10/d24/d45/f62 [291940,35081] 0
2026-03-09T15:00:47.493 INFO:tasks.workunit.client.1.vm09.stdout:7/391: creat d3/d1d/f72 x:0 0 0
2026-03-09T15:00:47.493 INFO:tasks.workunit.client.1.vm09.stdout:5/395: dread d2/f47 [0,4194304] 0
2026-03-09T15:00:47.493 INFO:tasks.workunit.client.1.vm09.stdout:7/392: symlink d3/db/d15/d5f/d44/l73 0
2026-03-09T15:00:47.493 INFO:tasks.workunit.client.1.vm09.stdout:7/393: readlink d3/db/d46/l67 0
2026-03-09T15:00:47.493 INFO:tasks.workunit.client.1.vm09.stdout:1/367: read d8/d50/d5b/f6f [153880,37126] 0
2026-03-09T15:00:47.493 INFO:tasks.workunit.client.1.vm09.stdout:5/396: chown d2/c39 4 1
2026-03-09T15:00:47.493 INFO:tasks.workunit.client.1.vm09.stdout:1/368: write d8/f42 [656086,55257] 0
2026-03-09T15:00:47.493 INFO:tasks.workunit.client.1.vm09.stdout:1/369: truncate d8/d22/f61 79703 0
2026-03-09T15:00:47.493 INFO:tasks.workunit.client.1.vm09.stdout:1/370: unlink d8/d22/l2b 0
2026-03-09T15:00:47.495 INFO:tasks.workunit.client.1.vm09.stdout:7/394: link d3/db/c54 d3/db/d25/c74 0
2026-03-09T15:00:47.502 INFO:tasks.workunit.client.1.vm09.stdout:1/371: dread d8/d10/f12 [0,4194304] 0
2026-03-09T15:00:47.507 INFO:tasks.workunit.client.1.vm09.stdout:2/437: dread df/d2d/f4f [0,4194304] 0
2026-03-09T15:00:47.508 INFO:tasks.workunit.client.1.vm09.stdout:1/372: symlink d8/d10/d24/d48/l74 0
2026-03-09T15:00:47.530 INFO:tasks.workunit.client.1.vm09.stdout:1/373: sync
2026-03-09T15:00:47.535 INFO:tasks.workunit.client.1.vm09.stdout:1/374: dwrite d8/d10/d24/f2a [0,4194304] 0
2026-03-09T15:00:47.545 INFO:tasks.workunit.client.1.vm09.stdout:1/375: symlink d8/d10/d24/d48/l75 0
2026-03-09T15:00:47.548 INFO:tasks.workunit.client.1.vm09.stdout:2/438: read df/d1f/d47/f60 [3033613,125950] 0
2026-03-09T15:00:47.549 INFO:tasks.workunit.client.1.vm09.stdout:7/395: fsync d3/f26 0
2026-03-09T15:00:47.549 INFO:tasks.workunit.client.1.vm09.stdout:2/439: mkdir df/d1f/d47/d84 0
2026-03-09T15:00:47.550 INFO:tasks.workunit.client.1.vm09.stdout:2/440: chown df/d1f/d6d 13 1
2026-03-09T15:00:47.550 INFO:tasks.workunit.client.1.vm09.stdout:7/396: write d3/d61/f6c [481570,80784] 0
2026-03-09T15:00:47.550 INFO:tasks.workunit.client.1.vm09.stdout:2/441: fdatasync df/d58/d67/f46 0
2026-03-09T15:00:47.551 INFO:tasks.workunit.client.1.vm09.stdout:2/442: write df/d1f/f39 [703167,76873] 0
2026-03-09T15:00:47.557 INFO:tasks.workunit.client.1.vm09.stdout:2/443: dwrite df/d1f/d6d/f7f [0,4194304] 0
2026-03-09T15:00:47.568 INFO:tasks.workunit.client.1.vm09.stdout:7/397: dwrite d3/f5 [0,4194304] 0
2026-03-09T15:00:47.568 INFO:tasks.workunit.client.1.vm09.stdout:2/444: symlink df/d20/d29/d53/l85 0
2026-03-09T15:00:47.568 INFO:tasks.workunit.client.1.vm09.stdout:2/445: stat df/f17 0
2026-03-09T15:00:47.569 INFO:tasks.workunit.client.1.vm09.stdout:1/376: sync
2026-03-09T15:00:47.571 INFO:tasks.workunit.client.1.vm09.stdout:7/398: getdents d3/db/d15 0
2026-03-09T15:00:47.574 INFO:tasks.workunit.client.1.vm09.stdout:7/399: read - d3/d28/f69 zero size
2026-03-09T15:00:47.574 INFO:tasks.workunit.client.1.vm09.stdout:1/377: dwrite d8/d22/f61 [0,4194304] 0
2026-03-09T15:00:47.584 INFO:tasks.workunit.client.1.vm09.stdout:1/378: chown d8/d10/f15 0 1
2026-03-09T15:00:47.585 INFO:tasks.workunit.client.1.vm09.stdout:2/446: rename df/d1f/c2f to df/d6e/c86 0
2026-03-09T15:00:47.585 INFO:tasks.workunit.client.1.vm09.stdout:7/400: mkdir d3/db/d25/d5c/d75 0
2026-03-09T15:00:47.585 INFO:tasks.workunit.client.1.vm09.stdout:2/447: write df/d2d/f41 [1368884,9861] 0
2026-03-09T15:00:47.590 INFO:tasks.workunit.client.1.vm09.stdout:7/401: dwrite d3/d1d/f72 [0,4194304] 0
2026-03-09T15:00:47.600 INFO:tasks.workunit.client.1.vm09.stdout:1/379: link d8/d10/d73/f41 d8/d10/d24/d48/f76 0
2026-03-09T15:00:47.600 INFO:tasks.workunit.client.1.vm09.stdout:2/448: getdents df/d1f/d47/d71 0
2026-03-09T15:00:47.600 INFO:tasks.workunit.client.1.vm09.stdout:2/449: chown df/d20/d29/d53/d5f 2 1
2026-03-09T15:00:47.603 INFO:tasks.workunit.client.1.vm09.stdout:2/450: read df/d58/d67/f4e [1092292,129831] 0
2026-03-09T15:00:47.604 INFO:tasks.workunit.client.1.vm09.stdout:1/380: dread d8/d10/f12 [0,4194304] 0
2026-03-09T15:00:47.609 INFO:tasks.workunit.client.1.vm09.stdout:2/451: sync
2026-03-09T15:00:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:47 vm09.local ceph-mon[59673]: pgmap v150: 65 pgs: 65 active+clean; 927 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 20 MiB/s rd, 111 MiB/s wr, 234 op/s
2026-03-09T15:00:47.632 INFO:tasks.workunit.client.1.vm09.stdout:2/452: dread df/d1f/d47/f60 [0,4194304] 0
2026-03-09T15:00:47.668 INFO:tasks.workunit.client.1.vm09.stdout:9/396: rmdir d1/d58 39
2026-03-09T15:00:47.669 INFO:tasks.workunit.client.1.vm09.stdout:9/397: creat d1/d6e/f88 x:0 0 0
2026-03-09T15:00:47.671 INFO:tasks.workunit.client.1.vm09.stdout:0/453: dwrite da/dc/d22/f3b [0,4194304] 0
2026-03-09T15:00:47.671 INFO:tasks.workunit.client.1.vm09.stdout:9/398: creat d1/d4f/f89 x:0 0 0
2026-03-09T15:00:47.672 INFO:tasks.workunit.client.1.vm09.stdout:0/454: chown da/dc/f90 1112 1
2026-03-09T15:00:47.685 INFO:tasks.workunit.client.1.vm09.stdout:0/455: dwrite f7 [4194304,4194304] 0
2026-03-09T15:00:47.695 INFO:tasks.workunit.client.1.vm09.stdout:0/456: link da/dc/d1c/d3c/d44/f71 da/dc/d1c/d46/d63/f91 0
2026-03-09T15:00:47.698 INFO:tasks.workunit.client.1.vm09.stdout:0/457: mkdir da/dc/d92 0
2026-03-09T15:00:47.703 INFO:tasks.workunit.client.1.vm09.stdout:0/458: link da/d30/f6f da/dc/d1c/d3c/d44/d6c/f93 0
2026-03-09T15:00:47.704 INFO:tasks.workunit.client.1.vm09.stdout:0/459: dread - da/dc/f90 zero size
2026-03-09T15:00:47.710 INFO:tasks.workunit.client.1.vm09.stdout:0/460: link da/c8f da/dc/d22/d64/c94 0
2026-03-09T15:00:47.717 INFO:tasks.workunit.client.1.vm09.stdout:0/461: dwrite da/dc/d22/f7c [0,4194304] 0
2026-03-09T15:00:47.725 INFO:tasks.workunit.client.1.vm09.stdout:0/462: mknod da/dc/c95 0
2026-03-09T15:00:47.726 INFO:tasks.workunit.client.1.vm09.stdout:0/463: write da/dc/d61/f66 [206919,24534] 0
2026-03-09T15:00:47.728 INFO:tasks.workunit.client.1.vm09.stdout:0/464: symlink da/dc/d1c/l96 0
2026-03-09T15:00:47.730 INFO:tasks.workunit.client.1.vm09.stdout:0/465: write da/d30/f6f [367530,84410] 0
2026-03-09T15:00:47.730 INFO:tasks.workunit.client.1.vm09.stdout:0/466: stat f7 0
2026-03-09T15:00:47.733 INFO:tasks.workunit.client.1.vm09.stdout:6/407: dwrite d6/db/f66 [0,4194304] 0
2026-03-09T15:00:47.735 INFO:tasks.workunit.client.1.vm09.stdout:6/408: chown d6/db/d10/d4f 14342177 1
2026-03-09T15:00:47.739 INFO:tasks.workunit.client.1.vm09.stdout:6/409: dwrite d6/d20/d38/d4e/f75 [0,4194304] 0
2026-03-09T15:00:47.746 INFO:tasks.workunit.client.1.vm09.stdout:6/410: creat d6/d20/d2a/d3d/d46/f84 x:0 0 0
2026-03-09T15:00:47.754 INFO:tasks.workunit.client.1.vm09.stdout:6/411: dwrite d6/d20/d24/f49 [4194304,4194304] 0
2026-03-09T15:00:47.764 INFO:tasks.workunit.client.1.vm09.stdout:6/412: rename d6/f25 to d6/d20/d38/d56/d65/d68/d6f/f85 0
2026-03-09T15:00:47.764 INFO:tasks.workunit.client.1.vm09.stdout:6/413: dread - d6/df/d23/f76 zero size
2026-03-09T15:00:47.765 INFO:tasks.workunit.client.1.vm09.stdout:6/414: readlink d6/d20/l3c 0
2026-03-09T15:00:47.767 INFO:tasks.workunit.client.1.vm09.stdout:6/415: truncate d6/d20/d38/d4e/f5a 889667 0
2026-03-09T15:00:47.768 INFO:tasks.workunit.client.1.vm09.stdout:6/416: mkdir d6/d20/d38/d56/d65/d68/d86 0
2026-03-09T15:00:47.769 INFO:tasks.workunit.client.1.vm09.stdout:6/417: readlink d6/db/ld 0
2026-03-09T15:00:47.770 INFO:tasks.workunit.client.1.vm09.stdout:6/418: creat d6/d20/d38/d4e/f87 x:0 0 0
2026-03-09T15:00:47.771 INFO:tasks.workunit.client.1.vm09.stdout:6/419: rmdir d6/d20/d2a/d57 39
2026-03-09T15:00:47.772 INFO:tasks.workunit.client.1.vm09.stdout:6/420: fsync d6/d20/d24/f60 0
2026-03-09T15:00:47.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:47 vm05.local ceph-mon[50611]: pgmap v150: 65 pgs: 65 active+clean; 927 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 20 MiB/s rd, 111 MiB/s wr, 234 op/s
2026-03-09T15:00:47.829 INFO:tasks.workunit.client.1.vm09.stdout:8/401: dwrite df/d5b/d65/f20 [0,4194304] 0
2026-03-09T15:00:47.834 INFO:tasks.workunit.client.1.vm09.stdout:8/402: dwrite df/f34 [0,4194304] 0
2026-03-09T15:00:47.840 INFO:tasks.workunit.client.1.vm09.stdout:8/403: mknod df/d38/c74 0
2026-03-09T15:00:47.840 INFO:tasks.workunit.client.1.vm09.stdout:8/404: readlink df/d38/d64/l47 0
2026-03-09T15:00:47.843 INFO:tasks.workunit.client.1.vm09.stdout:8/405: read df/d5b/f31 [6049414,64825] 0
2026-03-09T15:00:47.846 INFO:tasks.workunit.client.1.vm09.stdout:8/406: rmdir df/d2d 39
2026-03-09T15:00:47.848 INFO:tasks.workunit.client.1.vm09.stdout:8/407: write df/d2d/d46/f6d [2133901,37134] 0
2026-03-09T15:00:47.850 INFO:tasks.workunit.client.1.vm09.stdout:8/408: read df/d24/f28 [518901,120515] 0
2026-03-09T15:00:47.850 INFO:tasks.workunit.client.1.vm09.stdout:8/409: fdatasync df/d24/d37/f59 0
2026-03-09T15:00:47.858 INFO:tasks.workunit.client.1.vm09.stdout:8/410: chown df/d5b 18 1
2026-03-09T15:00:47.858 INFO:tasks.workunit.client.1.vm09.stdout:8/411: rmdir df/d5b/d65 39
2026-03-09T15:00:47.858 INFO:tasks.workunit.client.1.vm09.stdout:8/412: mkdir df/d75 0
2026-03-09T15:00:47.858 INFO:tasks.workunit.client.1.vm09.stdout:8/413: creat df/d38/d64/d5f/f76 x:0 0 0
2026-03-09T15:00:47.858 INFO:tasks.workunit.client.1.vm09.stdout:8/414: symlink df/d24/d37/d60/l77 0
2026-03-09T15:00:47.858 INFO:tasks.workunit.client.1.vm09.stdout:8/415: chown df/d38/d64/d5f 6 1
2026-03-09T15:00:47.858 INFO:tasks.workunit.client.1.vm09.stdout:8/416: stat df/d2d/f57 0
2026-03-09T15:00:47.858 INFO:tasks.workunit.client.1.vm09.stdout:8/417: write df/d5b/d65/d1d/f41 [122696,11829] 0
2026-03-09T15:00:47.863 INFO:tasks.workunit.client.1.vm09.stdout:8/418: dread df/d24/f32 [0,4194304] 0
2026-03-09T15:00:47.866 INFO:tasks.workunit.client.1.vm09.stdout:8/419: creat df/d5c/f78 x:0 0 0
2026-03-09T15:00:47.873 INFO:tasks.workunit.client.1.vm09.stdout:8/420: mkdir df/d2d/d42/d79 0
2026-03-09T15:00:47.873 INFO:tasks.workunit.client.1.vm09.stdout:8/421: getdents df/d2d/d46/d73 0
2026-03-09T15:00:47.873 INFO:tasks.workunit.client.1.vm09.stdout:8/422: dwrite df/d38/f52 [0,4194304] 0
2026-03-09T15:00:47.873 INFO:tasks.workunit.client.1.vm09.stdout:8/423: rename df/d24/f28 to df/d24/f7a 0
2026-03-09T15:00:47.874 INFO:tasks.workunit.client.1.vm09.stdout:8/424: write df/d38/d64/d5f/f6f [959549,111667] 0
2026-03-09T15:00:47.877 INFO:tasks.workunit.client.1.vm09.stdout:8/425: write df/d38/d64/f50 [488690,89411] 0
2026-03-09T15:00:47.902 INFO:tasks.workunit.client.1.vm09.stdout:8/426: fdatasync df/d38/d64/f50 0
2026-03-09T15:00:47.904 INFO:tasks.workunit.client.1.vm09.stdout:8/427: mknod df/c7b 0
2026-03-09T15:00:47.907 INFO:tasks.workunit.client.1.vm09.stdout:8/428: creat df/d2d/d42/f7c x:0 0 0
2026-03-09T15:00:47.908 INFO:tasks.workunit.client.1.vm09.stdout:8/429: readlink df/d24/l3d 0
2026-03-09T15:00:47.910 INFO:tasks.workunit.client.1.vm09.stdout:8/430: mknod df/d5b/c7d 0
2026-03-09T15:00:47.914 INFO:tasks.workunit.client.1.vm09.stdout:8/431: unlink df/d5b/d65/d1d/f2b 0
2026-03-09T15:00:47.923 INFO:tasks.workunit.client.1.vm09.stdout:8/432: symlink df/d5b/d65/l7e 0
2026-03-09T15:00:47.924 INFO:tasks.workunit.client.1.vm09.stdout:3/423: write d3/d3a/d2b/d31/f3f [4703294,70337] 0
2026-03-09T15:00:47.925 INFO:tasks.workunit.client.1.vm09.stdout:8/433: write df/d5b/d65/d1d/f44 [211965,47219] 0
2026-03-09T15:00:47.930 INFO:tasks.workunit.client.1.vm09.stdout:8/434: dread df/d38/d64/d5f/f6f [0,4194304] 0
2026-03-09T15:00:47.930 INFO:tasks.workunit.client.1.vm09.stdout:3/424: rename d3/d60/l93 to d3/d60/l94 0
2026-03-09T15:00:47.932 INFO:tasks.workunit.client.1.vm09.stdout:3/425: dread - d3/d3a/d2b/d31/d4a/f7c zero size
2026-03-09T15:00:47.935 INFO:tasks.workunit.client.1.vm09.stdout:3/426: rmdir d3/d3a/d54 39
2026-03-09T15:00:47.936 INFO:tasks.workunit.client.1.vm09.stdout:8/435: dwrite df/d38/d64/d5f/f62 [0,4194304] 0
2026-03-09T15:00:47.938 INFO:tasks.workunit.client.1.vm09.stdout:4/407: write db/f1c [551553,8097] 0
2026-03-09T15:00:47.940 INFO:tasks.workunit.client.1.vm09.stdout:3/427: read d3/d74/f88 [68593,3744] 0
2026-03-09T15:00:47.941 INFO:tasks.workunit.client.1.vm09.stdout:8/436: creat df/d2d/d46/f7f x:0 0 0
2026-03-09T15:00:47.942 INFO:tasks.workunit.client.1.vm09.stdout:4/408: rename db/d2f to db/d19/d81 0
2026-03-09T15:00:47.943 INFO:tasks.workunit.client.1.vm09.stdout:4/409: fdatasync db/d19/d52/d76/d3b/f48 0
2026-03-09T15:00:47.944 INFO:tasks.workunit.client.1.vm09.stdout:3/428: mknod d3/d3a/d2b/d7b/d90/c95 0
2026-03-09T15:00:47.946 INFO:tasks.workunit.client.1.vm09.stdout:8/437: link df/d38/d64/d5f/f69 df/d38/f80 0
2026-03-09T15:00:47.948 INFO:tasks.workunit.client.1.vm09.stdout:4/410: write db/d19/d52/f6d [5097039,94116] 0
2026-03-09T15:00:47.951 INFO:tasks.workunit.client.1.vm09.stdout:4/411: fsync db/d12/d16/f60 0
2026-03-09T15:00:47.951 INFO:tasks.workunit.client.1.vm09.stdout:8/438: dread df/d38/f53 [0,4194304] 0
2026-03-09T15:00:47.952 INFO:tasks.workunit.client.1.vm09.stdout:4/412: rmdir db/d19 39
2026-03-09T15:00:47.953 INFO:tasks.workunit.client.1.vm09.stdout:3/429: dwrite d3/d3a/d2b/f64 [0,4194304] 0
2026-03-09T15:00:47.961 INFO:tasks.workunit.client.1.vm09.stdout:4/413: truncate db/f14 4754415 0
2026-03-09T15:00:47.966 INFO:tasks.workunit.client.1.vm09.stdout:4/414: dwrite db/d12/f5a [4194304,4194304] 0
2026-03-09T15:00:47.968 INFO:tasks.workunit.client.1.vm09.stdout:4/415: write db/d12/d16/f46 [677441,37433] 0
2026-03-09T15:00:47.975 INFO:tasks.workunit.client.1.vm09.stdout:3/430: stat d3/d3a/d2b/d31/d4a/d62/f1a 0
2026-03-09T15:00:47.976 INFO:tasks.workunit.client.1.vm09.stdout:3/431: readlink d3/d3a/d2b/d36/l38 0
2026-03-09T15:00:47.981 INFO:tasks.workunit.client.1.vm09.stdout:5/397: dwrite d2/f2e [0,4194304] 0
2026-03-09T15:00:47.986 INFO:tasks.workunit.client.1.vm09.stdout:3/432: mknod d3/d3a/c96 0
2026-03-09T15:00:47.986 INFO:tasks.workunit.client.1.vm09.stdout:5/398: creat d2/d37/d3c/d36/d45/d5c/f91 x:0 0 0
2026-03-09T15:00:47.986 INFO:tasks.workunit.client.1.vm09.stdout:3/433: creat d3/d3a/d2b/d53/f97 x:0 0 0
2026-03-09T15:00:47.986 INFO:tasks.workunit.client.1.vm09.stdout:4/416: link db/d19/d81/d5d/c7e db/d19/d23/d71/d5f/c82 0
2026-03-09T15:00:47.987 INFO:tasks.workunit.client.1.vm09.stdout:3/434: write d3/d3a/d2b/d53/f7e [1802903,35200] 0
2026-03-09T15:00:47.987 INFO:tasks.workunit.client.1.vm09.stdout:5/399: creat d2/d37/d3c/d55/f92 x:0 0 0
2026-03-09T15:00:47.987 INFO:tasks.workunit.client.1.vm09.stdout:3/435: read - d3/d5b/d79/f89 zero size
2026-03-09T15:00:47.987 INFO:tasks.workunit.client.1.vm09.stdout:4/417: creat db/d12/d16/f83 x:0 0 0
2026-03-09T15:00:47.989 INFO:tasks.workunit.client.1.vm09.stdout:3/436: stat d3/d3a/d2b/d53/c71 0
2026-03-09T15:00:47.990 INFO:tasks.workunit.client.1.vm09.stdout:5/400: chown d2/d4/l1a 12990445 1
2026-03-09T15:00:47.995 INFO:tasks.workunit.client.1.vm09.stdout:4/418: mkdir db/d19/d23/d44/d84 0
2026-03-09T15:00:47.995 INFO:tasks.workunit.client.1.vm09.stdout:3/437: mkdir d3/d3a/d2b/d53/d98 0
2026-03-09T15:00:47.996 INFO:tasks.workunit.client.1.vm09.stdout:4/419: chown db/d19/d23/d44/d84 1 1
2026-03-09T15:00:47.997 INFO:tasks.workunit.client.1.vm09.stdout:3/438: dread - d3/d3a/d2b/d53/f97 zero size
2026-03-09T15:00:48.000 INFO:tasks.workunit.client.1.vm09.stdout:5/401: dwrite d2/f4f [0,4194304] 0
2026-03-09T15:00:48.001 INFO:tasks.workunit.client.1.vm09.stdout:3/439: symlink d3/d3a/d2b/d36/l99 0
2026-03-09T15:00:48.009 INFO:tasks.workunit.client.1.vm09.stdout:3/440: dwrite d3/d3a/d2b/d7b/f8c [0,4194304] 0
2026-03-09T15:00:48.009 INFO:tasks.workunit.client.1.vm09.stdout:5/402: truncate d2/d37/f43 255111 0
2026-03-09T15:00:48.010 INFO:tasks.workunit.client.1.vm09.stdout:5/403: write d2/f61 [1888293,72351] 0
2026-03-09T15:00:48.015 INFO:tasks.workunit.client.1.vm09.stdout:5/404: truncate d2/d37/d3c/f4b 1584700 0
2026-03-09T15:00:48.028 INFO:tasks.workunit.client.1.vm09.stdout:5/405: dread d2/d37/d3c/d55/f57 [0,4194304] 0
2026-03-09T15:00:48.029 INFO:tasks.workunit.client.1.vm09.stdout:5/406: chown d2/d37/d3c/d36/d45/l64 221985 1
2026-03-09T15:00:48.031 INFO:tasks.workunit.client.1.vm09.stdout:5/407: creat d2/f93 x:0 0 0
2026-03-09T15:00:48.032 INFO:tasks.workunit.client.1.vm09.stdout:5/408: write d2/f93 [769451,19006] 0
2026-03-09T15:00:48.034 INFO:tasks.workunit.client.1.vm09.stdout:5/409: stat d2/d37/d3c/d36/d45/d5c 0
2026-03-09T15:00:48.107 INFO:tasks.workunit.client.1.vm09.stdout:2/453: getdents df/d1f 0
2026-03-09T15:00:48.108 INFO:tasks.workunit.client.1.vm09.stdout:2/454: truncate f5 2796195 0
2026-03-09T15:00:48.109 INFO:tasks.workunit.client.1.vm09.stdout:2/455: chown df/d20/d2e/f59 321679 1
2026-03-09T15:00:48.110 INFO:tasks.workunit.client.1.vm09.stdout:2/456: write df/d20/d29/d53/d5f/f63 [974260,123596] 0
2026-03-09T15:00:48.113 INFO:tasks.workunit.client.1.vm09.stdout:2/457: creat df/d2d/f87 x:0 0 0
2026-03-09T15:00:48.119 INFO:tasks.workunit.client.1.vm09.stdout:2/458: getdents df/d58/d67 0
2026-03-09T15:00:48.131 INFO:tasks.workunit.client.1.vm09.stdout:7/402: truncate d3/db/d25/d5c/f5e 3630179 0
2026-03-09T15:00:48.135 INFO:tasks.workunit.client.1.vm09.stdout:7/403: dwrite d3/f5 [0,4194304] 0
2026-03-09T15:00:48.138 INFO:tasks.workunit.client.1.vm09.stdout:1/381: truncate d8/d22/f61 579643 0
2026-03-09T15:00:48.184 INFO:tasks.workunit.client.1.vm09.stdout:9/399: fsync d1/d4f/f89 0
2026-03-09T15:00:48.187 INFO:tasks.workunit.client.1.vm09.stdout:9/400: dwrite d1/d7/f67 [0,4194304] 0
2026-03-09T15:00:48.197 INFO:tasks.workunit.client.1.vm09.stdout:9/401: rename d1/d7/d1e/d2b/d2e/f3a to d1/d7/d1e/d2b/d2e/f8a 0
2026-03-09T15:00:48.237 INFO:tasks.workunit.client.1.vm09.stdout:0/467: mkdir da/dc/d1c/d3c/d78/d97 0
2026-03-09T15:00:48.244 INFO:tasks.workunit.client.1.vm09.stdout:0/468: creat da/d80/f98 x:0 0 0
2026-03-09T15:00:48.252 INFO:tasks.workunit.client.1.vm09.stdout:0/469: unlink da/dc/d22/f55 0
2026-03-09T15:00:48.257 INFO:tasks.workunit.client.1.vm09.stdout:0/470: rmdir da/dc/d1c/d3c/d78/d97 0
2026-03-09T15:00:48.258 INFO:tasks.workunit.client.1.vm09.stdout:0/471: read da/dc/d22/f3b [1569068,89246] 0
2026-03-09T15:00:48.261 INFO:tasks.workunit.client.1.vm09.stdout:0/472: creat da/d30/d36/f99 x:0 0 0
2026-03-09T15:00:48.261 INFO:tasks.workunit.client.1.vm09.stdout:0/473: chown da/dc/f17 693309277 1
2026-03-09T15:00:48.267 INFO:tasks.workunit.client.1.vm09.stdout:0/474: symlink da/dc/d1c/l9a 0
2026-03-09T15:00:48.270 INFO:tasks.workunit.client.1.vm09.stdout:0/475: symlink da/dc/d1c/d46/d63/l9b 0
2026-03-09T15:00:48.277 INFO:tasks.workunit.client.1.vm09.stdout:8/439: rename df/d24/d37 to df/d2d/d46/d73/d81 0
2026-03-09T15:00:48.279 INFO:tasks.workunit.client.1.vm09.stdout:0/476: dread da/d57/f60 [0,4194304] 0
2026-03-09T15:00:48.279 INFO:tasks.workunit.client.1.vm09.stdout:4/420: rename db/d12/d16/c20 to db/d19/c85 0
2026-03-09T15:00:48.281 INFO:tasks.workunit.client.1.vm09.stdout:4/421: mknod db/d12/c86 0
2026-03-09T15:00:48.282 INFO:tasks.workunit.client.1.vm09.stdout:8/440: getdents df/d2d/d46/d73/d81/d60 0
2026-03-09T15:00:48.282 INFO:tasks.workunit.client.1.vm09.stdout:0/477: mkdir da/dc/d1c/d3c/d78/d7a/d9c 0
2026-03-09T15:00:48.283 INFO:tasks.workunit.client.1.vm09.stdout:4/422: creat db/d19/d23/d71/d5f/f87 x:0 0 0
2026-03-09T15:00:48.285 INFO:tasks.workunit.client.1.vm09.stdout:4/423: creat db/d19/d23/d44/d7c/f88 x:0 0 0
2026-03-09T15:00:48.287 INFO:tasks.workunit.client.1.vm09.stdout:8/441: creat df/d5b/f82 x:0 0 0
2026-03-09T15:00:48.288 INFO:tasks.workunit.client.1.vm09.stdout:8/442: write df/f30 [12991960,77084] 0
2026-03-09T15:00:48.292 INFO:tasks.workunit.client.1.vm09.stdout:8/443: rmdir df/d38 39
2026-03-09T15:00:48.293 INFO:tasks.workunit.client.1.vm09.stdout:8/444: stat df/d5c/f72 0
2026-03-09T15:00:48.296 INFO:tasks.workunit.client.1.vm09.stdout:0/478: dread da/dc/d1c/d3c/d44/f71 [0,4194304] 0
2026-03-09T15:00:48.297 INFO:tasks.workunit.client.1.vm09.stdout:8/445: getdents df/d2d 0
2026-03-09T15:00:48.298 INFO:tasks.workunit.client.1.vm09.stdout:0/479: creat da/dc/d22/f9d x:0 0 0
2026-03-09T15:00:48.299 INFO:tasks.workunit.client.1.vm09.stdout:8/446: creat df/d24/f83 x:0 0 0
2026-03-09T15:00:48.299 INFO:tasks.workunit.client.1.vm09.stdout:0/480: write da/dc/d22/f73 [1420663,60897] 0
2026-03-09T15:00:48.308 INFO:tasks.workunit.client.1.vm09.stdout:8/447: fdatasync df/d38/d64/d5f/f6f 0
2026-03-09T15:00:48.312 INFO:tasks.workunit.client.1.vm09.stdout:8/448: unlink df/d2d/d46/d73/d81/f48 0
2026-03-09T15:00:48.312 INFO:tasks.workunit.client.1.vm09.stdout:4/424: dread db/d19/d81/d5d/f77 [0,4194304] 0
2026-03-09T15:00:48.313 INFO:tasks.workunit.client.1.vm09.stdout:8/449: mknod df/d2d/d46/d73/d81/c84 0
2026-03-09T15:00:48.316 INFO:tasks.workunit.client.1.vm09.stdout:8/450: creat df/d5b/f85 x:0 0 0
2026-03-09T15:00:48.334 INFO:tasks.workunit.client.1.vm09.stdout:8/451: fdatasync df/d24/f83 0
2026-03-09T15:00:48.334 INFO:tasks.workunit.client.1.vm09.stdout:8/452: dread - df/d38/f80 zero size
2026-03-09T15:00:48.334 INFO:tasks.workunit.client.1.vm09.stdout:8/453: truncate df/d38/d64/d5f/f76 283801 0
2026-03-09T15:00:48.334 INFO:tasks.workunit.client.1.vm09.stdout:8/454: creat df/d24/f86 x:0 0 0
2026-03-09T15:00:48.334 INFO:tasks.workunit.client.1.vm09.stdout:8/455: write df/d24/f7a [1159768,111206] 0
2026-03-09T15:00:48.334 INFO:tasks.workunit.client.1.vm09.stdout:8/456: link df/d38/d64/d5f/f76 df/d2d/d46/d73/f87 0
2026-03-09T15:00:48.337 INFO:tasks.workunit.client.1.vm09.stdout:0/481: sync
2026-03-09T15:00:48.339 INFO:tasks.workunit.client.1.vm09.stdout:0/482: mkdir da/dc/d92/d9e 0
2026-03-09T15:00:48.342 INFO:tasks.workunit.client.1.vm09.stdout:0/483: getdents da/d30 0
2026-03-09T15:00:48.343 INFO:tasks.workunit.client.1.vm09.stdout:0/484: chown da/l15 282314 1
2026-03-09T15:00:48.344 INFO:tasks.workunit.client.1.vm09.stdout:0/485: mkdir da/dc/d1c/d46/d5b/d9f 0
2026-03-09T15:00:48.345 INFO:tasks.workunit.client.1.vm09.stdout:0/486: rmdir da/dc/d1c/d46/d63 39
2026-03-09T15:00:48.349 INFO:tasks.workunit.client.1.vm09.stdout:0/487: creat da/dc/d1c/fa0 x:0 0 0
2026-03-09T15:00:48.350 INFO:tasks.workunit.client.1.vm09.stdout:0/488: symlink da/dc/la1 0
2026-03-09T15:00:48.352 INFO:tasks.workunit.client.1.vm09.stdout:0/489: chown da/dc/d1c/d46/d5b/l62 1991021582 1
2026-03-09T15:00:48.353 INFO:tasks.workunit.client.1.vm09.stdout:0/490: chown da/d30/c34 315 1
2026-03-09T15:00:48.357 INFO:tasks.workunit.client.1.vm09.stdout:0/491: unlink da/dc/c35 0
2026-03-09T15:00:48.364 INFO:tasks.workunit.client.1.vm09.stdout:0/492: dwrite da/dc/d1c/d3c/d78/f88 [0,4194304] 0
2026-03-09T15:00:48.374 INFO:tasks.workunit.client.1.vm09.stdout:0/493: creat da/dc/d92/d9e/fa2 x:0 0 0
2026-03-09T15:00:48.376 INFO:tasks.workunit.client.1.vm09.stdout:0/494: dread da/dc/d1c/d3c/d78/f88 [0,4194304] 0
2026-03-09T15:00:48.383 INFO:tasks.workunit.client.1.vm09.stdout:0/495: readlink da/dc/d22/d64/l8a 0
2026-03-09T15:00:48.387 INFO:tasks.workunit.client.1.vm09.stdout:0/496: sync
2026-03-09T15:00:48.393 INFO:tasks.workunit.client.1.vm09.stdout:0/497: dwrite da/dc/f8b [0,4194304] 0
2026-03-09T15:00:48.405 INFO:tasks.workunit.client.1.vm09.stdout:0/498: mknod da/dc/d1c/d46/d63/d86/ca3 0
2026-03-09T15:00:48.422 INFO:tasks.workunit.client.1.vm09.stdout:0/499: sync
2026-03-09T15:00:48.425 INFO:tasks.workunit.client.1.vm09.stdout:0/500: symlink da/dc/d1c/d3c/d78/d7a/d9c/la4 0
2026-03-09T15:00:48.445 INFO:tasks.workunit.client.1.vm09.stdout:0/501: dread da/dc/d1c/d3c/f81 [0,4194304] 0
2026-03-09T15:00:48.445 INFO:tasks.workunit.client.1.vm09.stdout:3/441: write d3/ff [5786209,65711] 0
2026-03-09T15:00:48.446 INFO:tasks.workunit.client.1.vm09.stdout:0/502: read da/dc/d1c/d3c/d44/d6c/f93 [105875,70534] 0
2026-03-09T15:00:48.449 INFO:tasks.workunit.client.1.vm09.stdout:3/442: rename d3/d3a/d2b/d53 to d3/d9a 0
2026-03-09T15:00:48.452 INFO:tasks.workunit.client.1.vm09.stdout:5/410: write d2/d37/d53/f81 [738144,92465] 0
2026-03-09T15:00:48.455 INFO:tasks.workunit.client.1.vm09.stdout:0/503: truncate da/d30/f3d 3377753 0
2026-03-09T15:00:48.457 INFO:tasks.workunit.client.1.vm09.stdout:0/504: readlink da/dc/d1c/l56 0
2026-03-09T15:00:48.457 INFO:tasks.workunit.client.1.vm09.stdout:7/404: truncate d3/d1d/f33 2775649 0
2026-03-09T15:00:48.457 INFO:tasks.workunit.client.1.vm09.stdout:3/443: link d3/d3a/d2b/d36/f44 d3/d74/f9b 0
2026-03-09T15:00:48.458 INFO:tasks.workunit.client.1.vm09.stdout:3/444: read - d3/d3a/d2b/d31/d4a/d62/f16 zero size
2026-03-09T15:00:48.460 INFO:tasks.workunit.client.1.vm09.stdout:0/505: mknod da/d30/d36/ca5 0
2026-03-09T15:00:48.460 INFO:tasks.workunit.client.1.vm09.stdout:7/405: creat d3/d1d/d65/f76 x:0 0 0
2026-03-09T15:00:48.461 INFO:tasks.workunit.client.1.vm09.stdout:0/506: fdatasync da/dc/d1c/fa0 0
2026-03-09T15:00:48.461 INFO:tasks.workunit.client.1.vm09.stdout:5/411: link d2/d37/d3c/d36/d4c/l63 d2/d37/d3c/d36/d45/l94 0
2026-03-09T15:00:48.466 INFO:tasks.workunit.client.1.vm09.stdout:5/412: truncate d2/d37/f6c 955743 0
2026-03-09T15:00:48.468 INFO:tasks.workunit.client.1.vm09.stdout:3/445: dwrite d3/d3a/d2b/d39/f81 [0,4194304] 0
2026-03-09T15:00:48.468 INFO:tasks.workunit.client.1.vm09.stdout:0/507: symlink da/dc/d1c/d3c/d78/d7a/d9c/la6 0
2026-03-09T15:00:48.474 INFO:tasks.workunit.client.1.vm09.stdout:2/459: truncate df/d1f/f7e 3426740 0
2026-03-09T15:00:48.476 INFO:tasks.workunit.client.1.vm09.stdout:0/508: fdatasync da/dc/d22/d76/f8e 0
2026-03-09T15:00:48.476 INFO:tasks.workunit.client.1.vm09.stdout:2/460: truncate df/d2d/f41 2701155 0
2026-03-09T15:00:48.477 INFO:tasks.workunit.client.1.vm09.stdout:3/446: fdatasync d3/d74/f9b 0
2026-03-09T15:00:48.477 INFO:tasks.workunit.client.1.vm09.stdout:0/509: chown da/dc/d1c/l5d 1949173 1
2026-03-09T15:00:48.484 INFO:tasks.workunit.client.1.vm09.stdout:3/447: link d3/d3a/d2b/d31/d4a/l69 d3/d3a/d2b/d39/l9c 0
2026-03-09T15:00:48.488 INFO:tasks.workunit.client.1.vm09.stdout:5/413: dread d2/f61 [0,4194304] 0
2026-03-09T15:00:48.492 INFO:tasks.workunit.client.1.vm09.stdout:5/414: dwrite d2/f56 [0,4194304] 0
2026-03-09T15:00:48.498 INFO:tasks.workunit.client.1.vm09.stdout:3/448: dread d3/d3a/d2b/d31/d4a/d62/f26 [0,4194304] 0
2026-03-09T15:00:48.499 INFO:tasks.workunit.client.1.vm09.stdout:1/382: rmdir d8/d22 39
2026-03-09T15:00:48.499 INFO:tasks.workunit.client.1.vm09.stdout:1/383: chown d8/d10/d24 1941 1
2026-03-09T15:00:48.501 INFO:tasks.workunit.client.1.vm09.stdout:3/449: read - d3/d3a/f6b zero size
2026-03-09T15:00:48.502 INFO:tasks.workunit.client.1.vm09.stdout:5/415: dwrite d2/d37/d3c/d36/d45/f8c [0,4194304] 0
2026-03-09T15:00:48.507 INFO:tasks.workunit.client.1.vm09.stdout:5/416: chown d2/d37/d3c/d36/d45/f8c 1549 1
2026-03-09T15:00:48.508 INFO:tasks.workunit.client.1.vm09.stdout:3/450: dwrite d3/f29 [4194304,4194304] 0
2026-03-09T15:00:48.509 INFO:tasks.workunit.client.1.vm09.stdout:1/384: unlink d8/d10/f15 0
2026-03-09T15:00:48.509 INFO:tasks.workunit.client.1.vm09.stdout:7/406: dread d3/f3f [0,4194304] 0
2026-03-09T15:00:48.510 INFO:tasks.workunit.client.1.vm09.stdout:3/451: truncate d3/d60/f6e 332074 0
2026-03-09T15:00:48.516 INFO:tasks.workunit.client.1.vm09.stdout:5/417: chown d2/d37/f6d 1186152 1
2026-03-09T15:00:48.522 INFO:tasks.workunit.client.1.vm09.stdout:3/452: rename d3/d9a/d98 to d3/d5b/d79/d9d 0
2026-03-09T15:00:48.522 INFO:tasks.workunit.client.1.vm09.stdout:1/385: dwrite d8/d10/f13 [0,4194304] 0
2026-03-09T15:00:48.525 INFO:tasks.workunit.client.1.vm09.stdout:3/453: write d3/d3a/d2b/d39/f3c [2517290,78361] 0
2026-03-09T15:00:48.527 INFO:tasks.workunit.client.1.vm09.stdout:5/418: dwrite d2/f34 [0,4194304] 0
2026-03-09T15:00:48.529 INFO:tasks.workunit.client.1.vm09.stdout:7/407: truncate d3/d1d/d2d/f57 445318 0
2026-03-09T15:00:48.535 INFO:tasks.workunit.client.1.vm09.stdout:1/386: creat d8/d22/d72/f77 x:0 0 0
2026-03-09T15:00:48.535 INFO:tasks.workunit.client.1.vm09.stdout:3/454: unlink d3/d3a/d2b/c85 0
2026-03-09T15:00:48.540 INFO:tasks.workunit.client.1.vm09.stdout:1/387: mkdir d8/d10/d73/d5d/d78 0
2026-03-09T15:00:48.545 INFO:tasks.workunit.client.1.vm09.stdout:3/455: dwrite d3/d9a/f97 [0,4194304] 0
2026-03-09T15:00:48.545 INFO:tasks.workunit.client.1.vm09.stdout:1/388: mknod d8/d50/c79 0
2026-03-09T15:00:48.546 INFO:tasks.workunit.client.1.vm09.stdout:1/389: dread - d8/d10/f69 zero size
2026-03-09T15:00:48.551 INFO:tasks.workunit.client.1.vm09.stdout:1/390: read d8/f3d [134179,56000] 0
2026-03-09T15:00:48.552 INFO:tasks.workunit.client.1.vm09.stdout:1/391: chown d8/d10/d73/c34 9391 1
2026-03-09T15:00:48.552 INFO:tasks.workunit.client.1.vm09.stdout:3/456: dread d3/d3a/d2b/d7b/f8d [0,4194304] 0
2026-03-09T15:00:48.555 INFO:tasks.workunit.client.1.vm09.stdout:1/392: dread d8/d10/f44 [0,4194304] 0
2026-03-09T15:00:48.565 INFO:tasks.workunit.client.1.vm09.stdout:7/408: link d3/l17 d3/db/d46/l77 0
2026-03-09T15:00:48.565 INFO:tasks.workunit.client.1.vm09.stdout:3/457: mkdir d3/d3a/d2b/d31/d9e 0
2026-03-09T15:00:48.569 INFO:tasks.workunit.client.1.vm09.stdout:3/458: truncate d3/d3a/d2b/d31/d4a/d62/f16 527050 0
2026-03-09T15:00:48.571 INFO:tasks.workunit.client.1.vm09.stdout:1/393: dread d8/d10/d73/f37 [0,4194304] 0
2026-03-09T15:00:48.572 INFO:tasks.workunit.client.1.vm09.stdout:3/459: creat d3/d5b/d79/d9d/f9f x:0 0 0
2026-03-09T15:00:48.576 INFO:tasks.workunit.client.1.vm09.stdout:3/460: mkdir d3/d3a/d2b/d39/d48/da0 0
2026-03-09T15:00:48.576 INFO:tasks.workunit.client.1.vm09.stdout:3/461: readlink d3/l30 0
2026-03-09T15:00:48.577 INFO:tasks.workunit.client.1.vm09.stdout:1/394: truncate d8/d10/f29 3236216 0
2026-03-09T15:00:48.578 INFO:tasks.workunit.client.1.vm09.stdout:3/462: creat d3/d3a/d2b/d31/fa1 x:0 0 0
2026-03-09T15:00:48.579 INFO:tasks.workunit.client.1.vm09.stdout:1/395: mknod d8/d10/d24/d48/c7a 0
2026-03-09T15:00:48.580 INFO:tasks.workunit.client.1.vm09.stdout:3/463: mknod d3/d3a/d2b/d39/d48/da0/ca2 0
2026-03-09T15:00:48.582 INFO:tasks.workunit.client.1.vm09.stdout:3/464: creat d3/d3a/d2b/d7b/d90/fa3 x:0 0 0
2026-03-09T15:00:48.624 INFO:tasks.workunit.client.1.vm09.stdout:9/402: truncate d1/d7/f83 600578 0
2026-03-09T15:00:48.624 INFO:tasks.workunit.client.1.vm09.stdout:9/403: chown d1/f28 58052 1
2026-03-09T15:00:48.626 INFO:tasks.workunit.client.1.vm09.stdout:9/404: creat d1/d4f/d52/f8b x:0 0 0
2026-03-09T15:00:48.627 INFO:tasks.workunit.client.1.vm09.stdout:9/405: chown d1/d7/d1e/f20 32790 1
2026-03-09T15:00:48.630 INFO:tasks.workunit.client.1.vm09.stdout:9/406: stat d1/d7/d1e/c3b 0
2026-03-09T15:00:48.631 INFO:tasks.workunit.client.1.vm09.stdout:9/407: read d1/d7/d1e/d2b/f42 [3284981,39094] 0
2026-03-09T15:00:48.641 INFO:tasks.workunit.client.1.vm09.stdout:6/421: write
d6/f7f [255773,61597] 0 2026-03-09T15:00:48.641 INFO:tasks.workunit.client.1.vm09.stdout:9/408: dread d1/d7/d1e/f34 [0,4194304] 0 2026-03-09T15:00:48.642 INFO:tasks.workunit.client.1.vm09.stdout:9/409: chown d1/d4f/f89 0 1 2026-03-09T15:00:48.650 INFO:tasks.workunit.client.1.vm09.stdout:0/510: rmdir da/dc/d22 39 2026-03-09T15:00:48.650 INFO:tasks.workunit.client.1.vm09.stdout:0/511: stat da/d30/c34 0 2026-03-09T15:00:48.651 INFO:tasks.workunit.client.1.vm09.stdout:6/422: mkdir d6/d20/d24/d7e/d88 0 2026-03-09T15:00:48.655 INFO:tasks.workunit.client.1.vm09.stdout:0/512: readlink da/dc/d22/d64/l8a 0 2026-03-09T15:00:48.656 INFO:tasks.workunit.client.1.vm09.stdout:4/425: mkdir db/d12/d16/d89 0 2026-03-09T15:00:48.656 INFO:tasks.workunit.client.1.vm09.stdout:0/513: truncate da/dc/d1c/fa0 8003 0 2026-03-09T15:00:48.657 INFO:tasks.workunit.client.1.vm09.stdout:9/410: link d1/l26 d1/d7/d1e/d2b/d40/l8c 0 2026-03-09T15:00:48.657 INFO:tasks.workunit.client.1.vm09.stdout:0/514: readlink da/dc/d22/l7e 0 2026-03-09T15:00:48.676 INFO:tasks.workunit.client.1.vm09.stdout:4/426: dread db/d19/d23/d71/d5f/f66 [0,4194304] 0 2026-03-09T15:00:48.678 INFO:tasks.workunit.client.1.vm09.stdout:4/427: creat db/d19/d81/d5d/f8a x:0 0 0 2026-03-09T15:00:48.681 INFO:tasks.workunit.client.1.vm09.stdout:4/428: dwrite db/d12/f37 [0,4194304] 0 2026-03-09T15:00:48.687 INFO:tasks.workunit.client.1.vm09.stdout:4/429: symlink db/d19/d52/d76/d3b/l8b 0 2026-03-09T15:00:48.697 INFO:tasks.workunit.client.1.vm09.stdout:8/457: truncate df/d5b/d65/d1d/f44 3694411 0 2026-03-09T15:00:48.697 INFO:tasks.workunit.client.1.vm09.stdout:8/458: readlink df/d2d/d46/d73/d81/d60/l77 0 2026-03-09T15:00:48.699 INFO:tasks.workunit.client.1.vm09.stdout:4/430: getdents db/d12/d16/d5b/d78/d7f 0 2026-03-09T15:00:48.707 INFO:tasks.workunit.client.1.vm09.stdout:4/431: chown db/d19/d81/d5d 95218665 1 2026-03-09T15:00:48.707 INFO:tasks.workunit.client.1.vm09.stdout:4/432: fdatasync db/d12/d16/f36 0 2026-03-09T15:00:48.707 
INFO:tasks.workunit.client.1.vm09.stdout:4/433: rename db/d19/l61 to db/d19/d23/d44/l8c 0 2026-03-09T15:00:48.712 INFO:tasks.workunit.client.1.vm09.stdout:4/434: creat db/d19/d23/d71/d5f/f8d x:0 0 0 2026-03-09T15:00:48.721 INFO:tasks.workunit.client.1.vm09.stdout:4/435: creat db/d19/f8e x:0 0 0 2026-03-09T15:00:48.730 INFO:tasks.workunit.client.1.vm09.stdout:4/436: symlink db/d12/d16/d5b/d78/d7f/l8f 0 2026-03-09T15:00:48.731 INFO:tasks.workunit.client.1.vm09.stdout:4/437: stat db/d19/d23/d71/d5f 0 2026-03-09T15:00:48.732 INFO:tasks.workunit.client.1.vm09.stdout:4/438: truncate db/d19/d23/d44/d7c/f88 724842 0 2026-03-09T15:00:48.734 INFO:tasks.workunit.client.1.vm09.stdout:4/439: mknod db/d19/d23/d71/d5f/c90 0 2026-03-09T15:00:48.739 INFO:tasks.workunit.client.1.vm09.stdout:4/440: creat db/d19/d23/f91 x:0 0 0 2026-03-09T15:00:48.741 INFO:tasks.workunit.client.1.vm09.stdout:4/441: write db/d19/d23/d71/d5f/f66 [665944,9165] 0 2026-03-09T15:00:48.747 INFO:tasks.workunit.client.1.vm09.stdout:4/442: rename db/l40 to db/d19/d23/d44/d7c/l92 0 2026-03-09T15:00:48.750 INFO:tasks.workunit.client.1.vm09.stdout:4/443: mknod db/d19/d81/d5d/c93 0 2026-03-09T15:00:48.754 INFO:tasks.workunit.client.1.vm09.stdout:4/444: unlink db/d19/d81/l39 0 2026-03-09T15:00:48.754 INFO:tasks.workunit.client.1.vm09.stdout:4/445: stat db/d12/d16/d5b 0 2026-03-09T15:00:48.755 INFO:tasks.workunit.client.1.vm09.stdout:4/446: chown db/f14 154233 1 2026-03-09T15:00:48.757 INFO:tasks.workunit.client.1.vm09.stdout:4/447: mknod db/d19/d52/c94 0 2026-03-09T15:00:48.758 INFO:tasks.workunit.client.1.vm09.stdout:4/448: write db/d19/d23/d71/d5f/f66 [4957275,70558] 0 2026-03-09T15:00:48.759 INFO:tasks.workunit.client.1.vm09.stdout:4/449: stat db/d19/d23/d44/d7c/f88 0 2026-03-09T15:00:48.768 INFO:tasks.workunit.client.1.vm09.stdout:4/450: creat db/d19/d23/d44/f95 x:0 0 0 2026-03-09T15:00:48.778 INFO:tasks.workunit.client.1.vm09.stdout:4/451: dread db/d19/d52/f6d [0,4194304] 0 2026-03-09T15:00:48.803 
INFO:tasks.workunit.client.1.vm09.stdout:4/452: sync 2026-03-09T15:00:48.808 INFO:tasks.workunit.client.1.vm09.stdout:3/465: truncate d3/d3a/d2b/d36/f44 1559558 0 2026-03-09T15:00:48.814 INFO:tasks.workunit.client.1.vm09.stdout:4/453: link db/d19/d23/d44/d7c/f88 db/d19/f96 0 2026-03-09T15:00:48.814 INFO:tasks.workunit.client.1.vm09.stdout:4/454: stat db/d19/d23/d44/l4a 0 2026-03-09T15:00:48.822 INFO:tasks.workunit.client.1.vm09.stdout:3/466: rename d3/c27 to d3/d9a/d80/ca4 0 2026-03-09T15:00:48.826 INFO:tasks.workunit.client.1.vm09.stdout:4/455: rename db/d12/d16/d89 to db/d19/d23/d44/d7c/d7d/d97 0 2026-03-09T15:00:48.827 INFO:tasks.workunit.client.1.vm09.stdout:2/461: truncate df/d1f/d6d/f7f 2495823 0 2026-03-09T15:00:48.827 INFO:tasks.workunit.client.1.vm09.stdout:3/467: fsync d3/d3a/d2b/d31/f33 0 2026-03-09T15:00:48.828 INFO:tasks.workunit.client.1.vm09.stdout:2/462: chown df/d2d/f41 814179886 1 2026-03-09T15:00:48.833 INFO:tasks.workunit.client.1.vm09.stdout:4/456: readlink db/d19/d81/l3c 0 2026-03-09T15:00:48.838 INFO:tasks.workunit.client.1.vm09.stdout:3/468: stat d3/d3a/d2b/d39/d48/c61 0 2026-03-09T15:00:48.839 INFO:tasks.workunit.client.1.vm09.stdout:2/463: rename df/d1f/d47/f56 to df/d58/d74/f88 0 2026-03-09T15:00:48.841 INFO:tasks.workunit.client.1.vm09.stdout:4/457: symlink db/d19/d81/d5d/l98 0 2026-03-09T15:00:48.843 INFO:tasks.workunit.client.1.vm09.stdout:2/464: dwrite df/f17 [0,4194304] 0 2026-03-09T15:00:48.845 INFO:tasks.workunit.client.1.vm09.stdout:3/469: symlink d3/d60/la5 0 2026-03-09T15:00:48.848 INFO:tasks.workunit.client.1.vm09.stdout:4/458: dwrite db/d19/d23/d71/d5f/f87 [0,4194304] 0 2026-03-09T15:00:48.855 INFO:tasks.workunit.client.1.vm09.stdout:3/470: read d3/d3a/d2b/d31/f3f [1735877,128558] 0 2026-03-09T15:00:48.862 INFO:tasks.workunit.client.1.vm09.stdout:2/465: creat df/d1f/d47/f89 x:0 0 0 2026-03-09T15:00:48.864 INFO:tasks.workunit.client.1.vm09.stdout:4/459: rename db/d19/d23/d71/d5f/f8d to db/d19/d23/d71/d53/f99 0 
2026-03-09T15:00:48.872 INFO:tasks.workunit.client.1.vm09.stdout:4/460: symlink db/d12/l9a 0 2026-03-09T15:00:48.879 INFO:tasks.workunit.client.1.vm09.stdout:2/466: getdents df/d20 0 2026-03-09T15:00:48.879 INFO:tasks.workunit.client.1.vm09.stdout:4/461: write db/d12/f3d [2281133,14688] 0 2026-03-09T15:00:48.880 INFO:tasks.workunit.client.1.vm09.stdout:2/467: dwrite f3 [0,4194304] 0 2026-03-09T15:00:48.881 INFO:tasks.workunit.client.1.vm09.stdout:5/419: dwrite d2/d4/f1d [4194304,4194304] 0 2026-03-09T15:00:48.902 INFO:tasks.workunit.client.1.vm09.stdout:5/420: dread d2/d37/d3c/f4e [4194304,4194304] 0 2026-03-09T15:00:48.910 INFO:tasks.workunit.client.1.vm09.stdout:5/421: mkdir d2/d37/d67/d95 0 2026-03-09T15:00:48.911 INFO:tasks.workunit.client.1.vm09.stdout:5/422: chown d2/d37/d3c/d36/d45/l8e 4675754 1 2026-03-09T15:00:48.916 INFO:tasks.workunit.client.1.vm09.stdout:5/423: rename d2/d4 to d2/d37/d3c/d36/d4c/d51/d96 0 2026-03-09T15:00:48.916 INFO:tasks.workunit.client.1.vm09.stdout:5/424: truncate d2/d37/f6c 1661159 0 2026-03-09T15:00:48.925 INFO:tasks.workunit.client.1.vm09.stdout:6/423: truncate d6/d20/d44/f4a 4172835 0 2026-03-09T15:00:48.926 INFO:tasks.workunit.client.1.vm09.stdout:5/425: sync 2026-03-09T15:00:48.927 INFO:tasks.workunit.client.1.vm09.stdout:5/426: chown d2/d37/d3c/d36/d4c/d51/d96/f16 9682 1 2026-03-09T15:00:48.927 INFO:tasks.workunit.client.1.vm09.stdout:6/424: mkdir d6/df/d23/d89 0 2026-03-09T15:00:48.929 INFO:tasks.workunit.client.1.vm09.stdout:0/515: write da/dc/d22/f3b [4025926,6256] 0 2026-03-09T15:00:48.931 INFO:tasks.workunit.client.1.vm09.stdout:0/516: write da/dc/d22/d76/f83 [588825,116883] 0 2026-03-09T15:00:48.932 INFO:tasks.workunit.client.1.vm09.stdout:8/459: truncate df/f1a 4585327 0 2026-03-09T15:00:48.934 INFO:tasks.workunit.client.1.vm09.stdout:8/460: chown df/d38 850343247 1 2026-03-09T15:00:48.936 INFO:tasks.workunit.client.1.vm09.stdout:5/427: dwrite d2/d37/d53/d86/d88/f8d [0,4194304] 0 2026-03-09T15:00:48.936 
INFO:tasks.workunit.client.1.vm09.stdout:6/425: link d6/f39 d6/d20/d38/d4e/d55/f8a 0 2026-03-09T15:00:48.937 INFO:tasks.workunit.client.1.vm09.stdout:9/411: dwrite d1/d7/d1e/d2b/d2e/f12 [0,4194304] 0 2026-03-09T15:00:48.937 INFO:tasks.workunit.client.1.vm09.stdout:0/517: creat da/dc/d1c/d46/d63/d86/fa7 x:0 0 0 2026-03-09T15:00:48.957 INFO:tasks.workunit.client.1.vm09.stdout:0/518: dread da/dc/d1c/d3c/f4f [0,4194304] 0 2026-03-09T15:00:48.958 INFO:tasks.workunit.client.1.vm09.stdout:0/519: read da/dc/d61/f66 [374184,3148] 0 2026-03-09T15:00:48.963 INFO:tasks.workunit.client.1.vm09.stdout:0/520: dwrite da/dc/d10/f4a [4194304,4194304] 0 2026-03-09T15:00:48.968 INFO:tasks.workunit.client.1.vm09.stdout:0/521: readlink da/dc/d1c/d46/d63/l75 0 2026-03-09T15:00:48.976 INFO:tasks.workunit.client.1.vm09.stdout:5/428: creat d2/d37/d3c/d36/f97 x:0 0 0 2026-03-09T15:00:48.979 INFO:tasks.workunit.client.1.vm09.stdout:0/522: chown da/dc/d22/d64/c94 14 1 2026-03-09T15:00:48.979 INFO:tasks.workunit.client.1.vm09.stdout:9/412: dread d1/d58/f72 [0,4194304] 0 2026-03-09T15:00:48.980 INFO:tasks.workunit.client.1.vm09.stdout:0/523: write da/dc/f28 [650593,4643] 0 2026-03-09T15:00:48.980 INFO:tasks.workunit.client.1.vm09.stdout:9/413: fsync d1/f1f 0 2026-03-09T15:00:48.984 INFO:tasks.workunit.client.1.vm09.stdout:6/426: getdents d6/d20/d38/d4e 0 2026-03-09T15:00:48.985 INFO:tasks.workunit.client.1.vm09.stdout:7/409: dwrite d3/d1d/f33 [0,4194304] 0 2026-03-09T15:00:48.986 INFO:tasks.workunit.client.1.vm09.stdout:7/410: chown d3/f5 4011 1 2026-03-09T15:00:48.990 INFO:tasks.workunit.client.1.vm09.stdout:0/524: creat da/dc/d1c/d3c/d44/d6c/d7b/fa8 x:0 0 0 2026-03-09T15:00:48.993 INFO:tasks.workunit.client.1.vm09.stdout:0/525: fdatasync f7 0 2026-03-09T15:00:48.993 INFO:tasks.workunit.client.1.vm09.stdout:7/411: dwrite d3/d1d/f72 [0,4194304] 0 2026-03-09T15:00:48.996 INFO:tasks.workunit.client.1.vm09.stdout:0/526: dread da/dc/d10/f2d [0,4194304] 0 2026-03-09T15:00:48.996 
INFO:tasks.workunit.client.1.vm09.stdout:6/427: rename d6/d20/d2a/d57 to d6/db/d8b 0 2026-03-09T15:00:49.004 INFO:tasks.workunit.client.1.vm09.stdout:6/428: creat d6/d20/d38/d56/f8c x:0 0 0 2026-03-09T15:00:49.013 INFO:tasks.workunit.client.1.vm09.stdout:7/412: mknod d3/d1d/c78 0 2026-03-09T15:00:49.013 INFO:tasks.workunit.client.1.vm09.stdout:7/413: dread - d3/db/d46/f66 zero size 2026-03-09T15:00:49.013 INFO:tasks.workunit.client.1.vm09.stdout:6/429: dwrite d6/d20/d38/d4e/d55/f8a [0,4194304] 0 2026-03-09T15:00:49.021 INFO:tasks.workunit.client.1.vm09.stdout:6/430: link d6/d20/d38/c72 d6/df/d23/d89/c8d 0 2026-03-09T15:00:49.024 INFO:tasks.workunit.client.1.vm09.stdout:6/431: creat d6/df/d23/d89/f8e x:0 0 0 2026-03-09T15:00:49.026 INFO:tasks.workunit.client.1.vm09.stdout:6/432: dread d6/db/f42 [0,4194304] 0 2026-03-09T15:00:49.029 INFO:tasks.workunit.client.1.vm09.stdout:6/433: dwrite d6/d20/d24/f60 [0,4194304] 0 2026-03-09T15:00:49.037 INFO:tasks.workunit.client.1.vm09.stdout:6/434: mkdir d6/d20/d44/d8f 0 2026-03-09T15:00:49.039 INFO:tasks.workunit.client.1.vm09.stdout:6/435: link d6/d20/f27 d6/d20/d38/d4e/d55/f90 0 2026-03-09T15:00:49.051 INFO:tasks.workunit.client.1.vm09.stdout:6/436: write d6/df/d23/f78 [104934,47919] 0 2026-03-09T15:00:49.051 INFO:tasks.workunit.client.1.vm09.stdout:6/437: dwrite d6/db/d8b/f73 [0,4194304] 0 2026-03-09T15:00:49.051 INFO:tasks.workunit.client.1.vm09.stdout:6/438: dwrite d6/d20/f6e [0,4194304] 0 2026-03-09T15:00:49.067 INFO:tasks.workunit.client.1.vm09.stdout:6/439: mkdir d6/d20/d2a/d3b/d91 0 2026-03-09T15:00:49.068 INFO:tasks.workunit.client.1.vm09.stdout:6/440: fdatasync d6/f39 0 2026-03-09T15:00:49.071 INFO:tasks.workunit.client.1.vm09.stdout:6/441: readlink d6/d20/d24/l33 0 2026-03-09T15:00:49.073 INFO:tasks.workunit.client.1.vm09.stdout:6/442: creat d6/d20/d38/d56/d65/d68/d86/f92 x:0 0 0 2026-03-09T15:00:49.077 INFO:tasks.workunit.client.1.vm09.stdout:6/443: dwrite d6/db/f42 [4194304,4194304] 0 2026-03-09T15:00:49.103 
INFO:tasks.workunit.client.1.vm09.stdout:6/444: dread d6/df/d23/f78 [0,4194304] 0 2026-03-09T15:00:49.108 INFO:tasks.workunit.client.1.vm09.stdout:6/445: dwrite d6/d20/d38/d4e/d55/f8a [0,4194304] 0 2026-03-09T15:00:49.110 INFO:tasks.workunit.client.1.vm09.stdout:2/468: rmdir df/d1f/d6d 39 2026-03-09T15:00:49.111 INFO:tasks.workunit.client.1.vm09.stdout:2/469: chown df/d2d/f87 2143416534 1 2026-03-09T15:00:49.125 INFO:tasks.workunit.client.1.vm09.stdout:6/446: rename d6/db/d10/l14 to d6/d20/d2a/d3b/l93 0 2026-03-09T15:00:49.134 INFO:tasks.workunit.client.1.vm09.stdout:2/470: dread df/d20/f24 [0,4194304] 0 2026-03-09T15:00:49.150 INFO:tasks.workunit.client.1.vm09.stdout:4/462: rmdir db/d19 39 2026-03-09T15:00:49.151 INFO:tasks.workunit.client.1.vm09.stdout:3/471: truncate d3/d3a/d2b/f64 763612 0 2026-03-09T15:00:49.152 INFO:tasks.workunit.client.1.vm09.stdout:4/463: fsync db/d19/d23/d71/f43 0 2026-03-09T15:00:49.153 INFO:tasks.workunit.client.1.vm09.stdout:4/464: write db/d19/d23/d71/d5f/f87 [4002393,22148] 0 2026-03-09T15:00:49.165 INFO:tasks.workunit.client.1.vm09.stdout:1/396: truncate d8/d10/f29 1287312 0 2026-03-09T15:00:49.170 INFO:tasks.workunit.client.1.vm09.stdout:3/472: rename d3/d3a/l21 to d3/d3a/d2b/la6 0 2026-03-09T15:00:49.171 INFO:tasks.workunit.client.1.vm09.stdout:3/473: chown d3/d3a/d2b/f92 8061028 1 2026-03-09T15:00:49.172 INFO:tasks.workunit.client.1.vm09.stdout:3/474: write d3/d3a/d2b/d7b/f8c [4922003,75197] 0 2026-03-09T15:00:49.173 INFO:tasks.workunit.client.1.vm09.stdout:3/475: dread - d3/d3a/d2b/d36/f8a zero size 2026-03-09T15:00:49.175 INFO:tasks.workunit.client.1.vm09.stdout:1/397: mknod d8/c7b 0 2026-03-09T15:00:49.176 INFO:tasks.workunit.client.1.vm09.stdout:1/398: write d8/d10/d24/d45/f6c [175893,18877] 0 2026-03-09T15:00:49.186 INFO:tasks.workunit.client.1.vm09.stdout:8/461: dread df/f1a [0,4194304] 0 2026-03-09T15:00:49.186 INFO:tasks.workunit.client.1.vm09.stdout:3/476: creat d3/d3a/d2b/d7b/d90/fa7 x:0 0 0 2026-03-09T15:00:49.187 
INFO:tasks.workunit.client.1.vm09.stdout:4/465: link db/d19/d23/l59 db/d12/d16/d5b/l9b 0 2026-03-09T15:00:49.189 INFO:tasks.workunit.client.1.vm09.stdout:4/466: dread db/d12/d16/f2a [0,4194304] 0 2026-03-09T15:00:49.189 INFO:tasks.workunit.client.1.vm09.stdout:8/462: stat df/d5b/l4c 0 2026-03-09T15:00:49.189 INFO:tasks.workunit.client.1.vm09.stdout:1/399: creat d8/d10/d73/d5d/d78/f7c x:0 0 0 2026-03-09T15:00:49.190 INFO:tasks.workunit.client.1.vm09.stdout:8/463: write df/d24/f86 [918005,85148] 0 2026-03-09T15:00:49.190 INFO:tasks.workunit.client.1.vm09.stdout:4/467: creat db/d12/d16/f9c x:0 0 0 2026-03-09T15:00:49.194 INFO:tasks.workunit.client.1.vm09.stdout:1/400: dread - d8/d10/d73/f54 zero size 2026-03-09T15:00:49.196 INFO:tasks.workunit.client.1.vm09.stdout:8/464: symlink df/d2d/d46/d73/d81/l88 0 2026-03-09T15:00:49.197 INFO:tasks.workunit.client.1.vm09.stdout:4/468: unlink db/d19/d52/d76/c6f 0 2026-03-09T15:00:49.198 INFO:tasks.workunit.client.1.vm09.stdout:3/477: creat d3/d3a/d2b/d31/d4a/fa8 x:0 0 0 2026-03-09T15:00:49.199 INFO:tasks.workunit.client.1.vm09.stdout:8/465: truncate df/d38/f53 260084 0 2026-03-09T15:00:49.204 INFO:tasks.workunit.client.1.vm09.stdout:4/469: dwrite db/d19/d52/d76/d3b/f48 [0,4194304] 0 2026-03-09T15:00:49.207 INFO:tasks.workunit.client.1.vm09.stdout:8/466: fdatasync df/d38/f39 0 2026-03-09T15:00:49.207 INFO:tasks.workunit.client.1.vm09.stdout:5/429: write d2/d37/d3c/d36/d4c/d51/f5f [638298,12136] 0 2026-03-09T15:00:49.207 INFO:tasks.workunit.client.1.vm09.stdout:8/467: chown df/d2d/d46/d33/c55 30457 1 2026-03-09T15:00:49.214 INFO:tasks.workunit.client.1.vm09.stdout:5/430: dwrite d2/f34 [0,4194304] 0 2026-03-09T15:00:49.221 INFO:tasks.workunit.client.1.vm09.stdout:4/470: creat db/d12/d16/d5b/d78/d7f/f9d x:0 0 0 2026-03-09T15:00:49.227 INFO:tasks.workunit.client.1.vm09.stdout:3/478: creat d3/d3a/d2b/d31/d4a/fa9 x:0 0 0 2026-03-09T15:00:49.228 INFO:tasks.workunit.client.1.vm09.stdout:3/479: symlink d3/d3a/d2b/d7b/laa 0 
2026-03-09T15:00:49.233 INFO:tasks.workunit.client.1.vm09.stdout:4/471: mkdir db/d12/d9e 0 2026-03-09T15:00:49.234 INFO:tasks.workunit.client.1.vm09.stdout:3/480: mknod d3/d60/cab 0 2026-03-09T15:00:49.235 INFO:tasks.workunit.client.1.vm09.stdout:3/481: dread - d3/d3a/d2b/d39/f84 zero size 2026-03-09T15:00:49.236 INFO:tasks.workunit.client.1.vm09.stdout:3/482: write d3/d3a/d2b/d36/f8a [970606,105868] 0 2026-03-09T15:00:49.236 INFO:tasks.workunit.client.1.vm09.stdout:3/483: readlink d3/l30 0 2026-03-09T15:00:49.237 INFO:tasks.workunit.client.1.vm09.stdout:5/431: creat d2/d37/d3c/d36/f98 x:0 0 0 2026-03-09T15:00:49.237 INFO:tasks.workunit.client.1.vm09.stdout:3/484: readlink d3/l23 0 2026-03-09T15:00:49.237 INFO:tasks.workunit.client.1.vm09.stdout:8/468: dread df/f30 [8388608,4194304] 0 2026-03-09T15:00:49.238 INFO:tasks.workunit.client.1.vm09.stdout:5/432: fdatasync d2/d37/d3c/d36/d4c/d51/d96/f23 0 2026-03-09T15:00:49.239 INFO:tasks.workunit.client.1.vm09.stdout:4/472: unlink db/d19/d23/d71/d5f/f70 0 2026-03-09T15:00:49.240 INFO:tasks.workunit.client.1.vm09.stdout:4/473: write db/d19/d52/d76/d3b/f48 [3839698,61702] 0 2026-03-09T15:00:49.241 INFO:tasks.workunit.client.1.vm09.stdout:4/474: write db/f29 [4546137,82833] 0 2026-03-09T15:00:49.249 INFO:tasks.workunit.client.1.vm09.stdout:3/485: dread d3/d3a/d2b/d31/d4a/d62/f8 [0,4194304] 0 2026-03-09T15:00:49.250 INFO:tasks.workunit.client.1.vm09.stdout:9/414: truncate d1/f28 1224311 0 2026-03-09T15:00:49.253 INFO:tasks.workunit.client.1.vm09.stdout:5/433: creat d2/d37/d67/d95/f99 x:0 0 0 2026-03-09T15:00:49.256 INFO:tasks.workunit.client.1.vm09.stdout:4/475: dwrite db/d12/f6b [0,4194304] 0 2026-03-09T15:00:49.264 INFO:tasks.workunit.client.1.vm09.stdout:0/527: write da/dc/d1c/d46/d63/f91 [4653027,125695] 0 2026-03-09T15:00:49.264 INFO:tasks.workunit.client.1.vm09.stdout:3/486: chown d3/l1f 5438 1 2026-03-09T15:00:49.265 INFO:tasks.workunit.client.1.vm09.stdout:3/487: chown d3/d3a/d2b/d31/f45 1 1 2026-03-09T15:00:49.267 
INFO:tasks.workunit.client.1.vm09.stdout:5/434: unlink d2/d37/d3c/d36/c83 0 2026-03-09T15:00:49.268 INFO:tasks.workunit.client.1.vm09.stdout:8/469: rename df/d38/d64/d5f/f76 to df/f89 0 2026-03-09T15:00:49.271 INFO:tasks.workunit.client.1.vm09.stdout:0/528: unlink da/dc/la1 0 2026-03-09T15:00:49.272 INFO:tasks.workunit.client.1.vm09.stdout:0/529: stat da/dc/d10/f4a 0 2026-03-09T15:00:49.274 INFO:tasks.workunit.client.1.vm09.stdout:9/415: mkdir d1/d7/d1e/d2b/d8d 0 2026-03-09T15:00:49.275 INFO:tasks.workunit.client.1.vm09.stdout:3/488: mkdir d3/d3a/d2b/d36/dac 0 2026-03-09T15:00:49.276 INFO:tasks.workunit.client.1.vm09.stdout:8/470: unlink df/d5b/d65/l1e 0 2026-03-09T15:00:49.278 INFO:tasks.workunit.client.1.vm09.stdout:9/416: dwrite d1/d7/d1e/d2b/f5f [0,4194304] 0 2026-03-09T15:00:49.284 INFO:tasks.workunit.client.1.vm09.stdout:9/417: creat d1/d7/d1e/d2b/d2e/f8e x:0 0 0 2026-03-09T15:00:49.289 INFO:tasks.workunit.client.1.vm09.stdout:0/530: creat da/fa9 x:0 0 0 2026-03-09T15:00:49.290 INFO:tasks.workunit.client.1.vm09.stdout:0/531: fdatasync da/dc/f8b 0 2026-03-09T15:00:49.290 INFO:tasks.workunit.client.1.vm09.stdout:9/418: mkdir d1/d4f/d8f 0 2026-03-09T15:00:49.291 INFO:tasks.workunit.client.1.vm09.stdout:0/532: write da/fb [907888,89868] 0 2026-03-09T15:00:49.294 INFO:tasks.workunit.client.1.vm09.stdout:9/419: rename d1/d7/d1e/d2b/d40/l8c to d1/d6e/l90 0 2026-03-09T15:00:49.299 INFO:tasks.workunit.client.1.vm09.stdout:0/533: dwrite da/dc/d92/d9e/fa2 [0,4194304] 0 2026-03-09T15:00:49.299 INFO:tasks.workunit.client.1.vm09.stdout:9/420: mkdir d1/d4f/d8f/d91 0 2026-03-09T15:00:49.304 INFO:tasks.workunit.client.1.vm09.stdout:0/534: creat da/dc/d1c/d46/d63/faa x:0 0 0 2026-03-09T15:00:49.305 INFO:tasks.workunit.client.1.vm09.stdout:9/421: truncate d1/d7/d1e/d2b/d40/f4d 972818 0 2026-03-09T15:00:49.306 INFO:tasks.workunit.client.1.vm09.stdout:0/535: truncate da/dc/d61/f66 159157 0 2026-03-09T15:00:49.308 INFO:tasks.workunit.client.1.vm09.stdout:5/435: sync 
2026-03-09T15:00:49.308 INFO:tasks.workunit.client.1.vm09.stdout:0/536: mknod da/dc/d1c/d46/d5b/d9f/cab 0 2026-03-09T15:00:49.309 INFO:tasks.workunit.client.1.vm09.stdout:9/422: dwrite d1/d4f/f89 [0,4194304] 0 2026-03-09T15:00:49.318 INFO:tasks.workunit.client.1.vm09.stdout:5/436: rmdir d2/d37/d3c/d36/d45/d5c 39 2026-03-09T15:00:49.320 INFO:tasks.workunit.client.1.vm09.stdout:0/537: read da/f4c [1069599,9183] 0 2026-03-09T15:00:49.320 INFO:tasks.workunit.client.1.vm09.stdout:5/437: mknod d2/d37/d3c/c9a 0 2026-03-09T15:00:49.321 INFO:tasks.workunit.client.1.vm09.stdout:9/423: unlink d1/d7/cf 0 2026-03-09T15:00:49.321 INFO:tasks.workunit.client.1.vm09.stdout:5/438: chown d2/d37/d3c/c2a 10643120 1 2026-03-09T15:00:49.321 INFO:tasks.workunit.client.1.vm09.stdout:0/538: chown da/d30/c87 489 1 2026-03-09T15:00:49.322 INFO:tasks.workunit.client.1.vm09.stdout:0/539: read - da/fa9 zero size 2026-03-09T15:00:49.322 INFO:tasks.workunit.client.1.vm09.stdout:5/439: write d2/d37/d53/f79 [199075,125311] 0 2026-03-09T15:00:49.326 INFO:tasks.workunit.client.1.vm09.stdout:5/440: mknod d2/d37/d3c/d36/d4c/d89/c9b 0 2026-03-09T15:00:49.327 INFO:tasks.workunit.client.1.vm09.stdout:5/441: fsync d2/f22 0 2026-03-09T15:00:49.328 INFO:tasks.workunit.client.1.vm09.stdout:0/540: link da/dc/d22/c5a da/dc/d8c/cac 0 2026-03-09T15:00:49.328 INFO:tasks.workunit.client.1.vm09.stdout:0/541: dread - da/dc/d1c/d3c/d44/f89 zero size 2026-03-09T15:00:49.329 INFO:tasks.workunit.client.1.vm09.stdout:9/424: link d1/d7/d1e/d2b/d2e/d56/c86 d1/d7/d1e/c92 0 2026-03-09T15:00:49.333 INFO:tasks.workunit.client.1.vm09.stdout:5/442: rename d2/d37/d3c/d36/d4c/d51/d96/fd to d2/d37/d3c/d36/d45/d5c/f9c 0 2026-03-09T15:00:49.333 INFO:tasks.workunit.client.1.vm09.stdout:0/542: truncate da/dc/d1c/d3c/d44/d6c/f93 994554 0 2026-03-09T15:00:49.336 INFO:tasks.workunit.client.1.vm09.stdout:9/425: dwrite d1/d7/f67 [4194304,4194304] 0 2026-03-09T15:00:49.337 INFO:tasks.workunit.client.1.vm09.stdout:0/543: rename da/dc/d1c to 
da/dc/d1c/d3c/d44/dad 22 2026-03-09T15:00:49.337 INFO:tasks.workunit.client.1.vm09.stdout:9/426: read - d1/d58/f75 zero size 2026-03-09T15:00:49.338 INFO:tasks.workunit.client.1.vm09.stdout:9/427: write d1/f4 [55276,65091] 0 2026-03-09T15:00:49.341 INFO:tasks.workunit.client.1.vm09.stdout:5/443: mknod d2/d37/d3c/d36/d4c/d51/d96/c9d 0 2026-03-09T15:00:49.342 INFO:tasks.workunit.client.1.vm09.stdout:9/428: mknod d1/d58/c93 0 2026-03-09T15:00:49.350 INFO:tasks.workunit.client.1.vm09.stdout:7/414: write d3/db/d25/f22 [600818,18160] 0 2026-03-09T15:00:49.354 INFO:tasks.workunit.client.1.vm09.stdout:0/544: getdents da/dc/d1c 0 2026-03-09T15:00:49.358 INFO:tasks.workunit.client.1.vm09.stdout:5/444: dread d2/f22 [0,4194304] 0 2026-03-09T15:00:49.359 INFO:tasks.workunit.client.1.vm09.stdout:7/415: truncate d3/d1d/f30 511632 0 2026-03-09T15:00:49.365 INFO:tasks.workunit.client.1.vm09.stdout:0/545: dwrite da/dc/d10/f4a [0,4194304] 0 2026-03-09T15:00:49.365 INFO:tasks.workunit.client.1.vm09.stdout:0/546: dread - da/dc/d1c/d46/d63/faa zero size 2026-03-09T15:00:49.369 INFO:tasks.workunit.client.1.vm09.stdout:5/445: sync 2026-03-09T15:00:49.372 INFO:tasks.workunit.client.1.vm09.stdout:5/446: dread d2/f3d [0,4194304] 0 2026-03-09T15:00:49.374 INFO:tasks.workunit.client.1.vm09.stdout:0/547: creat da/dc/fae x:0 0 0 2026-03-09T15:00:49.375 INFO:tasks.workunit.client.1.vm09.stdout:5/447: creat d2/d37/d67/f9e x:0 0 0 2026-03-09T15:00:49.375 INFO:tasks.workunit.client.1.vm09.stdout:7/416: dwrite d3/db/d15/d5f/d44/f62 [8388608,4194304] 0 2026-03-09T15:00:49.388 INFO:tasks.workunit.client.1.vm09.stdout:5/448: symlink d2/d37/d3c/d36/d45/l9f 0 2026-03-09T15:00:49.390 INFO:tasks.workunit.client.1.vm09.stdout:7/417: link d3/f9 d3/d1d/f79 0 2026-03-09T15:00:49.398 INFO:tasks.workunit.client.1.vm09.stdout:7/418: chown d3/db/d15/l48 4 1 2026-03-09T15:00:49.398 INFO:tasks.workunit.client.1.vm09.stdout:0/548: link da/dc/c82 da/dc/caf 0 2026-03-09T15:00:49.398 
INFO:tasks.workunit.client.1.vm09.stdout:0/549: write da/dc/f28 [254623,14978] 0 2026-03-09T15:00:49.398 INFO:tasks.workunit.client.1.vm09.stdout:0/550: fdatasync da/fb 0 2026-03-09T15:00:49.398 INFO:tasks.workunit.client.1.vm09.stdout:0/551: mknod da/d30/cb0 0 2026-03-09T15:00:49.400 INFO:tasks.workunit.client.1.vm09.stdout:7/419: link d3/fd d3/f7a 0 2026-03-09T15:00:49.400 INFO:tasks.workunit.client.1.vm09.stdout:5/449: getdents d2/d37/d53 0 2026-03-09T15:00:49.401 INFO:tasks.workunit.client.1.vm09.stdout:5/450: write d2/f3d [1129855,34064] 0 2026-03-09T15:00:49.404 INFO:tasks.workunit.client.1.vm09.stdout:0/552: dread da/dc/f8b [0,4194304] 0 2026-03-09T15:00:49.407 INFO:tasks.workunit.client.1.vm09.stdout:7/420: sync 2026-03-09T15:00:49.407 INFO:tasks.workunit.client.1.vm09.stdout:0/553: creat da/dc/d1c/d3c/d78/fb1 x:0 0 0 2026-03-09T15:00:49.409 INFO:tasks.workunit.client.1.vm09.stdout:5/451: creat d2/d37/d3c/d36/d45/fa0 x:0 0 0 2026-03-09T15:00:49.412 INFO:tasks.workunit.client.1.vm09.stdout:0/554: creat da/dc/d1c/d3c/d78/d7a/fb2 x:0 0 0 2026-03-09T15:00:49.420 INFO:tasks.workunit.client.1.vm09.stdout:5/452: dwrite d2/d37/d53/f79 [0,4194304] 0 2026-03-09T15:00:49.426 INFO:tasks.workunit.client.1.vm09.stdout:2/471: write df/f13 [5214112,29190] 0 2026-03-09T15:00:49.426 INFO:tasks.workunit.client.1.vm09.stdout:6/447: write d6/d20/d38/d4e/d55/f5c [384134,64007] 0 2026-03-09T15:00:49.426 INFO:tasks.workunit.client.1.vm09.stdout:0/555: mknod da/dc/d1c/d46/cb3 0 2026-03-09T15:00:49.427 INFO:tasks.workunit.client.1.vm09.stdout:2/472: chown df/d58/d67 1317 1 2026-03-09T15:00:49.436 INFO:tasks.workunit.client.1.vm09.stdout:0/556: symlink da/dc/d1c/d3c/d44/lb4 0 2026-03-09T15:00:49.439 INFO:tasks.workunit.client.1.vm09.stdout:1/401: write d8/d10/f29 [2319506,106878] 0 2026-03-09T15:00:49.444 INFO:tasks.workunit.client.1.vm09.stdout:5/453: dread d2/f38 [0,4194304] 0 2026-03-09T15:00:49.449 INFO:tasks.workunit.client.1.vm09.stdout:5/454: chown 
d2/d37/d3c/d36/d4c/d51/d96/c11 87 1 2026-03-09T15:00:49.480 INFO:tasks.workunit.client.1.vm09.stdout:0/557: mknod da/dc/d84/cb5 0 2026-03-09T15:00:49.490 INFO:tasks.workunit.client.1.vm09.stdout:1/402: link d8/l9 d8/d10/d24/d45/d5f/l7d 0 2026-03-09T15:00:49.501 INFO:tasks.workunit.client.1.vm09.stdout:1/403: creat d8/f7e x:0 0 0 2026-03-09T15:00:49.501 INFO:tasks.workunit.client.1.vm09.stdout:4/476: dwrite db/d19/d52/f6d [4194304,4194304] 0 2026-03-09T15:00:49.512 INFO:tasks.workunit.client.1.vm09.stdout:9/429: chown d1/d6e/l90 7 1 2026-03-09T15:00:49.513 INFO:tasks.workunit.client.1.vm09.stdout:3/489: dwrite d3/d5b/f8b [0,4194304] 0 2026-03-09T15:00:49.514 INFO:tasks.workunit.client.1.vm09.stdout:8/471: dwrite df/d5b/f31 [4194304,4194304] 0 2026-03-09T15:00:49.518 INFO:tasks.workunit.client.1.vm09.stdout:9/430: chown d1/d7/d1e/f20 2 1 2026-03-09T15:00:49.519 INFO:tasks.workunit.client.1.vm09.stdout:1/404: dwrite d8/f6b [0,4194304] 0 2026-03-09T15:00:49.523 INFO:tasks.workunit.client.1.vm09.stdout:3/490: truncate d3/d5b/f6d 1151932 0 2026-03-09T15:00:49.526 INFO:tasks.workunit.client.1.vm09.stdout:8/472: read df/d24/f7a [813385,50043] 0 2026-03-09T15:00:49.526 INFO:tasks.workunit.client.1.vm09.stdout:8/473: write df/d2d/f57 [4092348,75630] 0 2026-03-09T15:00:49.538 INFO:tasks.workunit.client.1.vm09.stdout:9/431: dread d1/d7/d1e/d2b/f30 [0,4194304] 0 2026-03-09T15:00:49.540 INFO:tasks.workunit.client.1.vm09.stdout:1/405: rename d8/d10/d24/d45/f62 to d8/d10/d24/d48/f7f 0 2026-03-09T15:00:49.546 INFO:tasks.workunit.client.1.vm09.stdout:9/432: creat d1/d4f/d52/f94 x:0 0 0 2026-03-09T15:00:49.547 INFO:tasks.workunit.client.1.vm09.stdout:3/491: rename d3/d3a/d2b/d31/d4a/d62/c14 to d3/d9a/cad 0 2026-03-09T15:00:49.555 INFO:tasks.workunit.client.1.vm09.stdout:1/406: dread d8/fa [0,4194304] 0 2026-03-09T15:00:49.556 INFO:tasks.workunit.client.1.vm09.stdout:4/477: dread db/d12/d16/f26 [0,4194304] 0 2026-03-09T15:00:49.556 INFO:tasks.workunit.client.1.vm09.stdout:8/474: 
unlink df/d5b/d65/l2e 0 2026-03-09T15:00:49.556 INFO:tasks.workunit.client.1.vm09.stdout:4/478: dread db/f1c [0,4194304] 0 2026-03-09T15:00:49.556 INFO:tasks.workunit.client.1.vm09.stdout:9/433: chown d1/d7/d1e/d2b/d40/f4d 47752633 1 2026-03-09T15:00:49.556 INFO:tasks.workunit.client.1.vm09.stdout:9/434: dread - d1/d7/d1e/d2b/d2e/f8e zero size 2026-03-09T15:00:49.556 INFO:tasks.workunit.client.1.vm09.stdout:3/492: write d3/d3a/d2b/d31/d4a/d62/f8 [260256,103268] 0 2026-03-09T15:00:49.557 INFO:tasks.workunit.client.1.vm09.stdout:9/435: truncate d1/d4f/d52/f8b 81055 0 2026-03-09T15:00:49.564 INFO:tasks.workunit.client.1.vm09.stdout:3/493: mknod d3/d3a/d2b/d39/d48/cae 0 2026-03-09T15:00:49.579 INFO:tasks.workunit.client.1.vm09.stdout:3/494: unlink d3/d60/cab 0 2026-03-09T15:00:49.579 INFO:tasks.workunit.client.1.vm09.stdout:7/421: dwrite d3/d28/f35 [0,4194304] 0 2026-03-09T15:00:49.581 INFO:tasks.workunit.client.1.vm09.stdout:3/495: chown d3/d5b/d79/l87 12 1 2026-03-09T15:00:49.581 INFO:tasks.workunit.client.1.vm09.stdout:3/496: readlink d3/l1f 0 2026-03-09T15:00:49.589 INFO:tasks.workunit.client.1.vm09.stdout:1/407: getdents d8/d50 0 2026-03-09T15:00:49.589 INFO:tasks.workunit.client.1.vm09.stdout:1/408: chown d8/ff 73892843 1 2026-03-09T15:00:49.590 INFO:tasks.workunit.client.1.vm09.stdout:3/497: dread d3/d3a/d2b/f72 [0,4194304] 0 2026-03-09T15:00:49.590 INFO:tasks.workunit.client.1.vm09.stdout:1/409: fdatasync d8/d22/f4c 0 2026-03-09T15:00:49.592 INFO:tasks.workunit.client.1.vm09.stdout:6/448: truncate d6/db/d10/f1c 3372395 0 2026-03-09T15:00:49.594 INFO:tasks.workunit.client.1.vm09.stdout:7/422: creat d3/db/d15/d5f/d6e/f7b x:0 0 0 2026-03-09T15:00:49.595 INFO:tasks.workunit.client.1.vm09.stdout:3/498: dwrite d3/f3b [0,4194304] 0 2026-03-09T15:00:49.599 INFO:tasks.workunit.client.1.vm09.stdout:6/449: symlink d6/d20/d24/d7e/d88/l94 0 2026-03-09T15:00:49.600 INFO:tasks.workunit.client.1.vm09.stdout:1/410: link d8/d10/c4b d8/d10/d73/c80 0 2026-03-09T15:00:49.609 
INFO:tasks.workunit.client.1.vm09.stdout:6/450: dwrite d6/d20/d38/d56/d65/f7b [0,4194304] 0 2026-03-09T15:00:49.609 INFO:tasks.workunit.client.1.vm09.stdout:6/451: write d6/d20/d2a/f61 [1046429,110187] 0 2026-03-09T15:00:49.614 INFO:tasks.workunit.client.1.vm09.stdout:6/452: getdents d6/df/d23 0 2026-03-09T15:00:49.629 INFO:tasks.workunit.client.1.vm09.stdout:3/499: dread d3/d3a/d2b/d39/f81 [0,4194304] 0 2026-03-09T15:00:49.630 INFO:tasks.workunit.client.1.vm09.stdout:3/500: chown d3/d3a/d2b/d31/d4a/d62/c35 16753968 1 2026-03-09T15:00:49.632 INFO:tasks.workunit.client.1.vm09.stdout:3/501: creat d3/d5b/d79/d9d/faf x:0 0 0 2026-03-09T15:00:49.632 INFO:tasks.workunit.client.1.vm09.stdout:3/502: readlink d3/d3a/d2b/d39/d48/l4b 0 2026-03-09T15:00:49.637 INFO:tasks.workunit.client.1.vm09.stdout:6/453: dread d6/d20/d38/d56/d65/d68/d6f/f85 [0,4194304] 0 2026-03-09T15:00:49.641 INFO:tasks.workunit.client.1.vm09.stdout:3/503: dwrite d3/d3a/d2b/d39/f7d [0,4194304] 0 2026-03-09T15:00:49.647 INFO:tasks.workunit.client.1.vm09.stdout:6/454: dread d6/f39 [0,4194304] 0 2026-03-09T15:00:49.648 INFO:tasks.workunit.client.1.vm09.stdout:3/504: dread - d3/d3a/d2b/d31/d4a/fa8 zero size 2026-03-09T15:00:49.649 INFO:tasks.workunit.client.1.vm09.stdout:6/455: write d6/d20/d38/d4e/d55/f5c [1123358,13956] 0 2026-03-09T15:00:49.652 INFO:tasks.workunit.client.1.vm09.stdout:6/456: write d6/f39 [1881108,75593] 0 2026-03-09T15:00:49.653 INFO:tasks.workunit.client.1.vm09.stdout:3/505: mkdir d3/d3a/d2b/d7b/db0 0 2026-03-09T15:00:49.656 INFO:tasks.workunit.client.1.vm09.stdout:3/506: symlink d3/d9a/d80/lb1 0 2026-03-09T15:00:49.667 INFO:tasks.workunit.client.1.vm09.stdout:6/457: dread d6/f17 [0,4194304] 0 2026-03-09T15:00:49.668 INFO:tasks.workunit.client.1.vm09.stdout:6/458: fdatasync d6/d20/d38/d56/d65/f7b 0 2026-03-09T15:00:49.670 INFO:tasks.workunit.client.1.vm09.stdout:6/459: getdents d6/df/d23/d89 0 2026-03-09T15:00:49.671 INFO:tasks.workunit.client.1.vm09.stdout:6/460: rmdir d6/db/d10 39 
2026-03-09T15:00:49.672 INFO:tasks.workunit.client.1.vm09.stdout:6/461: write d6/d20/d2a/f5e [79806,118692] 0 2026-03-09T15:00:49.675 INFO:tasks.workunit.client.1.vm09.stdout:6/462: symlink d6/d20/d38/d56/d65/d68/d6f/l95 0 2026-03-09T15:00:49.681 INFO:tasks.workunit.client.1.vm09.stdout:6/463: dwrite d6/d20/d38/d4e/d55/f77 [0,4194304] 0 2026-03-09T15:00:49.686 INFO:tasks.workunit.client.1.vm09.stdout:3/507: sync 2026-03-09T15:00:49.690 INFO:tasks.workunit.client.1.vm09.stdout:3/508: mknod d3/d5b/d79/cb2 0 2026-03-09T15:00:49.690 INFO:tasks.workunit.client.1.vm09.stdout:3/509: dread - d3/d3a/f6b zero size 2026-03-09T15:00:49.693 INFO:tasks.workunit.client.1.vm09.stdout:3/510: rename d3/d3a/d2b/d31/d4a/d62/f1a to d3/d5b/d79/d9d/fb3 0 2026-03-09T15:00:49.786 INFO:tasks.workunit.client.1.vm09.stdout:2/473: write df/d1f/d6d/f7f [1894451,86981] 0 2026-03-09T15:00:49.793 INFO:tasks.workunit.client.1.vm09.stdout:2/474: chown df/d1f/d47 117971 1 2026-03-09T15:00:49.797 INFO:tasks.workunit.client.1.vm09.stdout:2/475: symlink df/d1f/d47/d5d/l8a 0 2026-03-09T15:00:49.798 INFO:tasks.workunit.client.1.vm09.stdout:2/476: write df/d20/d2e/f59 [926792,6253] 0 2026-03-09T15:00:49.799 INFO:tasks.workunit.client.1.vm09.stdout:2/477: truncate df/f13 5610982 0 2026-03-09T15:00:49.808 INFO:tasks.workunit.client.1.vm09.stdout:2/478: dread df/f42 [0,4194304] 0 2026-03-09T15:00:49.809 INFO:tasks.workunit.client.1.vm09.stdout:0/558: truncate da/dc/d1c/d3c/f81 678515 0 2026-03-09T15:00:49.810 INFO:tasks.workunit.client.1.vm09.stdout:0/559: write da/dc/d22/f53 [4061578,24721] 0 2026-03-09T15:00:49.811 INFO:tasks.workunit.client.1.vm09.stdout:0/560: dread - da/fa9 zero size 2026-03-09T15:00:49.812 INFO:tasks.workunit.client.1.vm09.stdout:0/561: write da/dc/d1c/d46/d63/f91 [2018719,84266] 0 2026-03-09T15:00:49.820 INFO:tasks.workunit.client.1.vm09.stdout:0/562: mknod da/dc/d1c/d3c/d44/d6c/cb6 0 2026-03-09T15:00:49.828 INFO:tasks.workunit.client.1.vm09.stdout:0/563: mknod da/dc/d1c/d46/d5b/cb7 0 
2026-03-09T15:00:49.830 INFO:tasks.workunit.client.1.vm09.stdout:0/564: chown da/dc/d1c/d3c/d44/d6c/f93 564628 1 2026-03-09T15:00:49.834 INFO:tasks.workunit.client.1.vm09.stdout:0/565: dwrite da/dc/d1c/d3c/d44/d6c/d7b/fa8 [0,4194304] 0 2026-03-09T15:00:49.884 INFO:tasks.workunit.client.1.vm09.stdout:1/411: getdents d8/d10/d24/d45 0 2026-03-09T15:00:49.917 INFO:tasks.workunit.client.1.vm09.stdout:9/436: write d1/d7/d1e/d2b/d40/f4d [1714113,51980] 0 2026-03-09T15:00:49.924 INFO:tasks.workunit.client.1.vm09.stdout:9/437: creat d1/d7/d1e/d2b/d2e/f95 x:0 0 0 2026-03-09T15:00:49.926 INFO:tasks.workunit.client.1.vm09.stdout:5/455: creat d2/d37/d3c/d36/d45/fa1 x:0 0 0 2026-03-09T15:00:49.927 INFO:tasks.workunit.client.1.vm09.stdout:9/438: mknod d1/d7/d1e/d2b/d2e/d56/d6d/c96 0 2026-03-09T15:00:49.929 INFO:tasks.workunit.client.1.vm09.stdout:5/456: symlink d2/d37/d3c/d36/d45/d5c/la2 0 2026-03-09T15:00:49.930 INFO:tasks.workunit.client.1.vm09.stdout:5/457: dread - d2/d37/d3c/d36/d45/fa1 zero size 2026-03-09T15:00:49.930 INFO:tasks.workunit.client.1.vm09.stdout:9/439: stat d1/d58/c71 0 2026-03-09T15:00:49.934 INFO:tasks.workunit.client.1.vm09.stdout:9/440: rename d1/d7/d1e/d2b/f42 to d1/d7/d1e/d2b/d8d/f97 0 2026-03-09T15:00:49.935 INFO:tasks.workunit.client.1.vm09.stdout:9/441: write d1/d6e/f88 [917898,119410] 0 2026-03-09T15:00:49.936 INFO:tasks.workunit.client.1.vm09.stdout:8/475: dwrite df/d2d/d46/d73/f87 [0,4194304] 0 2026-03-09T15:00:49.938 INFO:tasks.workunit.client.1.vm09.stdout:9/442: read d1/f24 [283071,9580] 0 2026-03-09T15:00:49.947 INFO:tasks.workunit.client.1.vm09.stdout:9/443: read d1/d7/d1e/f2a [790782,31708] 0 2026-03-09T15:00:49.949 INFO:tasks.workunit.client.1.vm09.stdout:4/479: truncate db/f21 4034336 0 2026-03-09T15:00:49.949 INFO:tasks.workunit.client.1.vm09.stdout:4/480: stat db/d12/d16/f36 0 2026-03-09T15:00:49.952 INFO:tasks.workunit.client.1.vm09.stdout:9/444: write d1/d7/d1e/d2b/d2e/f8a [6123095,108068] 0 2026-03-09T15:00:49.952 
INFO:tasks.workunit.client.1.vm09.stdout:9/445: chown d1/d7/d1e/c92 7 1 2026-03-09T15:00:49.954 INFO:tasks.workunit.client.1.vm09.stdout:4/481: write db/d19/d23/d71/f4e [2073592,85752] 0 2026-03-09T15:00:49.954 INFO:tasks.workunit.client.1.vm09.stdout:4/482: write db/f29 [3384082,16055] 0 2026-03-09T15:00:49.956 INFO:tasks.workunit.client.1.vm09.stdout:9/446: dread d1/d7/f67 [4194304,4194304] 0 2026-03-09T15:00:49.959 INFO:tasks.workunit.client.1.vm09.stdout:9/447: dwrite d1/d4f/d52/f8b [0,4194304] 0 2026-03-09T15:00:49.982 INFO:tasks.workunit.client.1.vm09.stdout:8/476: sync 2026-03-09T15:00:49.989 INFO:tasks.workunit.client.1.vm09.stdout:8/477: rmdir df/d75 0 2026-03-09T15:00:49.993 INFO:tasks.workunit.client.1.vm09.stdout:8/478: link df/d2d/d46/f6d df/d38/f8a 0 2026-03-09T15:00:49.999 INFO:tasks.workunit.client.1.vm09.stdout:8/479: dwrite df/d2d/d46/f6d [0,4194304] 0 2026-03-09T15:00:50.010 INFO:tasks.workunit.client.1.vm09.stdout:8/480: creat df/d5c/f8b x:0 0 0 2026-03-09T15:00:50.014 INFO:tasks.workunit.client.1.vm09.stdout:8/481: dwrite df/d5b/f82 [0,4194304] 0 2026-03-09T15:00:50.017 INFO:tasks.workunit.client.1.vm09.stdout:8/482: write df/d5c/f72 [741207,65980] 0 2026-03-09T15:00:50.040 INFO:tasks.workunit.client.1.vm09.stdout:4/483: dread db/d19/d23/d71/f4e [0,4194304] 0 2026-03-09T15:00:50.041 INFO:tasks.workunit.client.1.vm09.stdout:8/483: sync 2026-03-09T15:00:50.043 INFO:tasks.workunit.client.1.vm09.stdout:4/484: unlink db/d19/d23/d71/c6e 0 2026-03-09T15:00:50.045 INFO:tasks.workunit.client.1.vm09.stdout:4/485: mknod db/c9f 0 2026-03-09T15:00:50.048 INFO:tasks.workunit.client.1.vm09.stdout:4/486: creat db/d19/d23/d71/d53/fa0 x:0 0 0 2026-03-09T15:00:50.050 INFO:tasks.workunit.client.1.vm09.stdout:4/487: getdents db/d19/d23/d44/d7c/d7d/d97 0 2026-03-09T15:00:50.051 INFO:tasks.workunit.client.1.vm09.stdout:4/488: write db/d19/d23/d71/f4e [1891219,17385] 0 2026-03-09T15:00:50.094 INFO:tasks.workunit.client.1.vm09.stdout:7/423: dwrite d3/d1d/d65/f6f 
[0,4194304] 0 2026-03-09T15:00:50.139 INFO:tasks.workunit.client.1.vm09.stdout:6/464: symlink d6/d20/d38/d4e/l96 0 2026-03-09T15:00:50.143 INFO:tasks.workunit.client.1.vm09.stdout:6/465: dwrite d6/db/d8b/f73 [0,4194304] 0 2026-03-09T15:00:50.155 INFO:tasks.workunit.client.1.vm09.stdout:3/511: write d3/d3a/d2b/d31/d4a/d62/f78 [875746,125416] 0 2026-03-09T15:00:50.169 INFO:tasks.workunit.client.1.vm09.stdout:1/412: rmdir d8/d10/d24/d48 39 2026-03-09T15:00:50.170 INFO:tasks.workunit.client.1.vm09.stdout:1/413: symlink d8/d10/d24/d45/l81 0 2026-03-09T15:00:50.171 INFO:tasks.workunit.client.1.vm09.stdout:1/414: creat d8/d22/d72/d64/f82 x:0 0 0 2026-03-09T15:00:50.173 INFO:tasks.workunit.client.1.vm09.stdout:1/415: mkdir d8/d10/d24/d83 0 2026-03-09T15:00:50.174 INFO:tasks.workunit.client.1.vm09.stdout:1/416: fdatasync d8/d10/d24/d48/f76 0 2026-03-09T15:00:50.175 INFO:tasks.workunit.client.1.vm09.stdout:1/417: rmdir d8/d10/d24/d45 39 2026-03-09T15:00:50.175 INFO:tasks.workunit.client.1.vm09.stdout:1/418: chown d8/d50 585243 1 2026-03-09T15:00:50.181 INFO:tasks.workunit.client.1.vm09.stdout:3/512: sync 2026-03-09T15:00:50.184 INFO:tasks.workunit.client.1.vm09.stdout:2/479: mknod df/d1f/c8b 0 2026-03-09T15:00:50.187 INFO:tasks.workunit.client.1.vm09.stdout:2/480: dwrite df/d2d/f41 [0,4194304] 0 2026-03-09T15:00:50.191 INFO:tasks.workunit.client.1.vm09.stdout:2/481: dwrite f4 [4194304,4194304] 0 2026-03-09T15:00:50.192 INFO:tasks.workunit.client.1.vm09.stdout:2/482: fsync df/d1f/d47/f89 0 2026-03-09T15:00:50.192 INFO:tasks.workunit.client.1.vm09.stdout:2/483: write df/d2d/f87 [74939,112584] 0 2026-03-09T15:00:50.198 INFO:tasks.workunit.client.1.vm09.stdout:2/484: dwrite df/f4a [0,4194304] 0 2026-03-09T15:00:50.202 INFO:tasks.workunit.client.1.vm09.stdout:2/485: write df/d58/d67/f46 [568462,23369] 0 2026-03-09T15:00:50.202 INFO:tasks.workunit.client.1.vm09.stdout:2/486: fsync fb 0 2026-03-09T15:00:50.205 INFO:tasks.workunit.client.1.vm09.stdout:2/487: rmdir df/d1f/d47 39 
2026-03-09T15:00:50.205 INFO:tasks.workunit.client.1.vm09.stdout:2/488: write df/d2d/f3c [564446,60880] 0 2026-03-09T15:00:50.206 INFO:tasks.workunit.client.1.vm09.stdout:2/489: creat df/d2d/f8c x:0 0 0 2026-03-09T15:00:50.207 INFO:tasks.workunit.client.1.vm09.stdout:2/490: chown df/d2d/f4f 28 1 2026-03-09T15:00:50.209 INFO:tasks.workunit.client.1.vm09.stdout:2/491: rename c6 to df/d58/d67/c8d 0 2026-03-09T15:00:50.211 INFO:tasks.workunit.client.1.vm09.stdout:2/492: rename df/d1f/l3a to df/d1f/l8e 0 2026-03-09T15:00:50.213 INFO:tasks.workunit.client.1.vm09.stdout:2/493: stat df/d1f/d47/d84 0 2026-03-09T15:00:50.213 INFO:tasks.workunit.client.1.vm09.stdout:2/494: fdatasync df/d1f/d6d/f7f 0 2026-03-09T15:00:50.217 INFO:tasks.workunit.client.1.vm09.stdout:2/495: dread df/d20/f49 [0,4194304] 0 2026-03-09T15:00:50.224 INFO:tasks.workunit.client.1.vm09.stdout:2/496: dread df/f14 [0,4194304] 0 2026-03-09T15:00:50.224 INFO:tasks.workunit.client.1.vm09.stdout:2/497: fsync df/d1f/d47/d5d/f6c 0 2026-03-09T15:00:50.228 INFO:tasks.workunit.client.1.vm09.stdout:2/498: rename df/d20/d29/d53 to df/d1f/d6d/d8f 0 2026-03-09T15:00:50.229 INFO:tasks.workunit.client.1.vm09.stdout:2/499: mkdir df/d1f/d47/d5d/d90 0 2026-03-09T15:00:50.281 INFO:tasks.workunit.client.1.vm09.stdout:0/566: dwrite da/dc/d1c/d46/d63/f7f [0,4194304] 0 2026-03-09T15:00:50.284 INFO:tasks.workunit.client.1.vm09.stdout:0/567: fdatasync da/dc/d1c/d3c/d44/f89 0 2026-03-09T15:00:50.286 INFO:tasks.workunit.client.1.vm09.stdout:0/568: chown da/d30/d36 28898 1 2026-03-09T15:00:50.287 INFO:tasks.workunit.client.1.vm09.stdout:0/569: chown da/dc/d1c/d3c/d44/c72 41448 1 2026-03-09T15:00:50.289 INFO:tasks.workunit.client.1.vm09.stdout:0/570: mkdir da/dc/d84/db8 0 2026-03-09T15:00:50.290 INFO:tasks.workunit.client.1.vm09.stdout:0/571: creat da/dc/d1c/d3c/d44/d6c/d7b/fb9 x:0 0 0 2026-03-09T15:00:50.291 INFO:tasks.workunit.client.1.vm09.stdout:0/572: rmdir da/d30/d36 39 2026-03-09T15:00:50.292 
INFO:tasks.workunit.client.1.vm09.stdout:0/573: chown da/dc/d1c/d46/d5b/l62 500 1 2026-03-09T15:00:50.317 INFO:tasks.workunit.client.1.vm09.stdout:5/458: write d2/f15 [4160533,67756] 0 2026-03-09T15:00:50.326 INFO:tasks.workunit.client.1.vm09.stdout:5/459: dread d2/d37/d3c/d36/d45/f66 [0,4194304] 0 2026-03-09T15:00:50.327 INFO:tasks.workunit.client.1.vm09.stdout:5/460: write d2/f2e [479660,27377] 0 2026-03-09T15:00:50.332 INFO:tasks.workunit.client.1.vm09.stdout:5/461: dwrite d2/d37/d3c/d36/d4c/d51/d96/f73 [0,4194304] 0 2026-03-09T15:00:50.341 INFO:tasks.workunit.client.1.vm09.stdout:5/462: symlink d2/d37/d3c/d36/d4c/la3 0 2026-03-09T15:00:50.341 INFO:tasks.workunit.client.1.vm09.stdout:5/463: stat d2/f29 0 2026-03-09T15:00:50.343 INFO:tasks.workunit.client.1.vm09.stdout:9/448: dwrite d1/d7/d1e/d2b/f5f [4194304,4194304] 0 2026-03-09T15:00:50.345 INFO:tasks.workunit.client.1.vm09.stdout:9/449: chown d1/l26 26615927 1 2026-03-09T15:00:50.348 INFO:tasks.workunit.client.1.vm09.stdout:5/464: dread d2/f93 [0,4194304] 0 2026-03-09T15:00:50.349 INFO:tasks.workunit.client.1.vm09.stdout:5/465: chown d2/d37/d3c/d36/d4c/d51/d96/c9d 14475556 1 2026-03-09T15:00:50.353 INFO:tasks.workunit.client.1.vm09.stdout:9/450: mknod d1/d4f/d8f/d91/c98 0 2026-03-09T15:00:50.354 INFO:tasks.workunit.client.1.vm09.stdout:9/451: stat d1/d58/l6f 0 2026-03-09T15:00:50.355 INFO:tasks.workunit.client.1.vm09.stdout:5/466: creat d2/d37/d3c/d36/d4c/d51/d96/fa4 x:0 0 0 2026-03-09T15:00:50.358 INFO:tasks.workunit.client.1.vm09.stdout:9/452: creat d1/d58/f99 x:0 0 0 2026-03-09T15:00:50.359 INFO:tasks.workunit.client.1.vm09.stdout:5/467: creat d2/d37/fa5 x:0 0 0 2026-03-09T15:00:50.359 INFO:tasks.workunit.client.1.vm09.stdout:9/453: fdatasync d1/d7/d1e/d2b/d8d/f97 0 2026-03-09T15:00:50.362 INFO:tasks.workunit.client.1.vm09.stdout:5/468: creat d2/d37/d3c/d36/d4c/fa6 x:0 0 0 2026-03-09T15:00:50.384 INFO:tasks.workunit.client.1.vm09.stdout:5/469: sync 2026-03-09T15:00:50.390 
INFO:tasks.workunit.client.1.vm09.stdout:8/484: write df/d5b/d65/d1d/f68 [3381183,110591] 0 2026-03-09T15:00:50.391 INFO:tasks.workunit.client.1.vm09.stdout:8/485: stat df/d2d/d46/d73/d81/f59 0 2026-03-09T15:00:50.395 INFO:tasks.workunit.client.1.vm09.stdout:0/574: mkdir da/dc/d1c/d3c/d44/dba 0 2026-03-09T15:00:50.396 INFO:tasks.workunit.client.1.vm09.stdout:4/489: write db/f1c [316126,84278] 0 2026-03-09T15:00:50.398 INFO:tasks.workunit.client.1.vm09.stdout:5/470: getdents d2/d37/d3c/d36/d45 0 2026-03-09T15:00:50.400 INFO:tasks.workunit.client.1.vm09.stdout:8/486: dwrite df/d24/f7a [0,4194304] 0 2026-03-09T15:00:50.402 INFO:tasks.workunit.client.1.vm09.stdout:4/490: mkdir db/d12/da1 0 2026-03-09T15:00:50.403 INFO:tasks.workunit.client.1.vm09.stdout:8/487: sync 2026-03-09T15:00:50.403 INFO:tasks.workunit.client.1.vm09.stdout:0/575: truncate da/fb 2035039 0 2026-03-09T15:00:50.405 INFO:tasks.workunit.client.1.vm09.stdout:0/576: write da/dc/d1c/fa0 [23144,50054] 0 2026-03-09T15:00:50.405 INFO:tasks.workunit.client.1.vm09.stdout:4/491: read db/d19/d52/d76/d3b/f48 [763805,55822] 0 2026-03-09T15:00:50.407 INFO:tasks.workunit.client.1.vm09.stdout:8/488: creat df/d2d/d42/f8c x:0 0 0 2026-03-09T15:00:50.411 INFO:tasks.workunit.client.1.vm09.stdout:5/471: rename d2/d37/d3c/d36/d4c/d51/d96/f1d to d2/d37/d67/fa7 0 2026-03-09T15:00:50.419 INFO:tasks.workunit.client.1.vm09.stdout:8/489: dwrite df/d5b/d65/d1d/f6e [4194304,4194304] 0 2026-03-09T15:00:50.424 INFO:tasks.workunit.client.1.vm09.stdout:8/490: dread df/d5b/d65/d1d/f6e [4194304,4194304] 0 2026-03-09T15:00:50.425 INFO:tasks.workunit.client.1.vm09.stdout:0/577: mkdir da/dc/d1c/d3c/d78/d7a/dbb 0 2026-03-09T15:00:50.425 INFO:tasks.workunit.client.1.vm09.stdout:8/491: truncate df/d2d/d42/f7c 452328 0 2026-03-09T15:00:50.430 INFO:tasks.workunit.client.1.vm09.stdout:0/578: creat da/dc/d22/d64/fbc x:0 0 0 2026-03-09T15:00:50.431 INFO:tasks.workunit.client.1.vm09.stdout:8/492: symlink df/d5b/d65/d1d/l8d 0 2026-03-09T15:00:50.434 
INFO:tasks.workunit.client.1.vm09.stdout:0/579: symlink da/dc/d1c/lbd 0 2026-03-09T15:00:50.442 INFO:tasks.workunit.client.1.vm09.stdout:8/493: fdatasync fe 0 2026-03-09T15:00:50.443 INFO:tasks.workunit.client.1.vm09.stdout:0/580: chown da/dc/d22/l7e 10 1 2026-03-09T15:00:50.443 INFO:tasks.workunit.client.1.vm09.stdout:5/472: dread d2/d37/d3c/d36/d4c/d51/d96/f1f [0,4194304] 0 2026-03-09T15:00:50.443 INFO:tasks.workunit.client.1.vm09.stdout:4/492: dread db/d12/f3d [0,4194304] 0 2026-03-09T15:00:50.443 INFO:tasks.workunit.client.1.vm09.stdout:8/494: dread df/f23 [0,4194304] 0 2026-03-09T15:00:50.448 INFO:tasks.workunit.client.1.vm09.stdout:4/493: dread db/d19/d52/d76/d3b/f48 [0,4194304] 0 2026-03-09T15:00:50.450 INFO:tasks.workunit.client.1.vm09.stdout:4/494: chown db/d19/d23/d71/d5f/f87 2 1 2026-03-09T15:00:50.451 INFO:tasks.workunit.client.1.vm09.stdout:4/495: chown db/d19/d81/d5d/c93 931476366 1 2026-03-09T15:00:50.452 INFO:tasks.workunit.client.1.vm09.stdout:4/496: write db/d19/d23/d71/f6c [918335,114873] 0 2026-03-09T15:00:50.456 INFO:tasks.workunit.client.1.vm09.stdout:8/495: creat df/d2d/d46/d33/f8e x:0 0 0 2026-03-09T15:00:50.456 INFO:tasks.workunit.client.1.vm09.stdout:4/497: dread db/d12/d16/f63 [0,4194304] 0 2026-03-09T15:00:50.459 INFO:tasks.workunit.client.1.vm09.stdout:5/473: link d2/d37/d3c/d36/d4c/d51/d96/l13 d2/d37/d67/la8 0 2026-03-09T15:00:50.459 INFO:tasks.workunit.client.1.vm09.stdout:8/496: dread df/d2d/d46/d73/d81/f59 [0,4194304] 0 2026-03-09T15:00:50.461 INFO:tasks.workunit.client.1.vm09.stdout:7/424: dwrite d3/f9 [0,4194304] 0 2026-03-09T15:00:50.462 INFO:tasks.workunit.client.1.vm09.stdout:8/497: chown df/d2d/d46 62 1 2026-03-09T15:00:50.469 INFO:tasks.workunit.client.1.vm09.stdout:8/498: dread df/d5b/d65/d1d/f41 [0,4194304] 0 2026-03-09T15:00:50.471 INFO:tasks.workunit.client.1.vm09.stdout:5/474: mkdir d2/da9 0 2026-03-09T15:00:50.473 INFO:tasks.workunit.client.1.vm09.stdout:4/498: read db/d12/f1b [577261,96353] 0 2026-03-09T15:00:50.475 
INFO:tasks.workunit.client.1.vm09.stdout:5/475: mkdir d2/daa 0 2026-03-09T15:00:50.481 INFO:tasks.workunit.client.1.vm09.stdout:5/476: dread d2/d37/f6c [0,4194304] 0 2026-03-09T15:00:50.481 INFO:tasks.workunit.client.1.vm09.stdout:4/499: rename db/d19/d23/d71/l5e to db/d12/d16/la2 0 2026-03-09T15:00:50.481 INFO:tasks.workunit.client.1.vm09.stdout:4/500: readlink db/d19/d23/d44/l4a 0 2026-03-09T15:00:50.481 INFO:tasks.workunit.client.1.vm09.stdout:5/477: fsync d2/d37/d3c/d36/d45/f6e 0 2026-03-09T15:00:50.483 INFO:tasks.workunit.client.1.vm09.stdout:5/478: creat d2/d37/d3c/d36/d45/fab x:0 0 0 2026-03-09T15:00:50.483 INFO:tasks.workunit.client.1.vm09.stdout:4/501: dread db/d19/d52/f6d [4194304,4194304] 0 2026-03-09T15:00:50.484 INFO:tasks.workunit.client.1.vm09.stdout:5/479: fdatasync d2/d37/d3c/d36/d4c/f82 0 2026-03-09T15:00:50.484 INFO:tasks.workunit.client.1.vm09.stdout:4/502: read - db/d19/d52/f6a zero size 2026-03-09T15:00:50.486 INFO:tasks.workunit.client.1.vm09.stdout:5/480: creat d2/d37/d3c/fac x:0 0 0 2026-03-09T15:00:50.486 INFO:tasks.workunit.client.1.vm09.stdout:8/499: sync 2026-03-09T15:00:50.487 INFO:tasks.workunit.client.1.vm09.stdout:5/481: write d2/d37/d67/f9e [983170,104947] 0 2026-03-09T15:00:50.488 INFO:tasks.workunit.client.1.vm09.stdout:7/425: fdatasync d3/d1d/f79 0 2026-03-09T15:00:50.490 INFO:tasks.workunit.client.1.vm09.stdout:6/466: write d6/d20/d44/f4a [3515367,70260] 0 2026-03-09T15:00:50.493 INFO:tasks.workunit.client.1.vm09.stdout:6/467: chown d6/db/f66 2073227 1 2026-03-09T15:00:50.496 INFO:tasks.workunit.client.1.vm09.stdout:1/419: truncate d8/d10/f12 1914741 0 2026-03-09T15:00:50.496 INFO:tasks.workunit.client.1.vm09.stdout:7/426: fdatasync d3/db/d46/f66 0 2026-03-09T15:00:50.497 INFO:tasks.workunit.client.1.vm09.stdout:7/427: chown d3/f5 9590 1 2026-03-09T15:00:50.499 INFO:tasks.workunit.client.1.vm09.stdout:5/482: mkdir d2/d37/d53/d86/dad 0 2026-03-09T15:00:50.501 INFO:tasks.workunit.client.1.vm09.stdout:8/500: symlink df/d2d/d4f/l8f 
0 2026-03-09T15:00:50.502 INFO:tasks.workunit.client.1.vm09.stdout:5/483: sync 2026-03-09T15:00:50.504 INFO:tasks.workunit.client.1.vm09.stdout:7/428: rmdir d3/db/d15/d5f/d6e 39 2026-03-09T15:00:50.506 INFO:tasks.workunit.client.1.vm09.stdout:1/420: link d8/d10/d73/d5d/d78/f7c d8/d22/d72/f84 0 2026-03-09T15:00:50.506 INFO:tasks.workunit.client.1.vm09.stdout:3/513: dwrite d3/d3a/d2b/f72 [0,4194304] 0 2026-03-09T15:00:50.508 INFO:tasks.workunit.client.1.vm09.stdout:3/514: write d3/d5b/d79/f89 [811912,36787] 0 2026-03-09T15:00:50.511 INFO:tasks.workunit.client.1.vm09.stdout:8/501: mkdir df/d2d/d90 0 2026-03-09T15:00:50.516 INFO:tasks.workunit.client.1.vm09.stdout:3/515: chown d3/d3a/d2b/d39/f3c 10445818 1 2026-03-09T15:00:50.516 INFO:tasks.workunit.client.1.vm09.stdout:1/421: truncate d8/d10/d73/f54 467183 0 2026-03-09T15:00:50.518 INFO:tasks.workunit.client.1.vm09.stdout:3/516: readlink d3/d3a/d2b/d31/d4a/l5e 0 2026-03-09T15:00:50.518 INFO:tasks.workunit.client.1.vm09.stdout:7/429: write d3/db/d15/d5f/f36 [3446069,2577] 0 2026-03-09T15:00:50.519 INFO:tasks.workunit.client.1.vm09.stdout:7/430: fdatasync d3/f16 0 2026-03-09T15:00:50.520 INFO:tasks.workunit.client.1.vm09.stdout:1/422: sync 2026-03-09T15:00:50.524 INFO:tasks.workunit.client.1.vm09.stdout:1/423: creat d8/d22/d56/f85 x:0 0 0 2026-03-09T15:00:50.526 INFO:tasks.workunit.client.1.vm09.stdout:7/431: creat d3/db/d25/d5c/f7c x:0 0 0 2026-03-09T15:00:50.526 INFO:tasks.workunit.client.1.vm09.stdout:3/517: dread d3/d74/f9b [0,4194304] 0 2026-03-09T15:00:50.527 INFO:tasks.workunit.client.1.vm09.stdout:5/484: getdents d2/d37/d3c/d36 0 2026-03-09T15:00:50.527 INFO:tasks.workunit.client.1.vm09.stdout:3/518: dread - d3/d3a/d2b/d31/d4a/f7c zero size 2026-03-09T15:00:50.528 INFO:tasks.workunit.client.1.vm09.stdout:1/424: readlink d8/d10/d24/d45/l81 0 2026-03-09T15:00:50.529 INFO:tasks.workunit.client.1.vm09.stdout:7/432: mkdir d3/db/d25/d7d 0 2026-03-09T15:00:50.529 INFO:tasks.workunit.client.1.vm09.stdout:7/433: readlink 
d3/db/d46/l67 0 2026-03-09T15:00:50.531 INFO:tasks.workunit.client.1.vm09.stdout:7/434: write d3/d28/f69 [708283,28855] 0 2026-03-09T15:00:50.537 INFO:tasks.workunit.client.1.vm09.stdout:3/519: rename d3/d3a/d2b/d31/d4a/f63 to d3/d74/fb4 0 2026-03-09T15:00:50.538 INFO:tasks.workunit.client.1.vm09.stdout:1/425: dwrite d8/d10/d24/d48/f7f [0,4194304] 0 2026-03-09T15:00:50.543 INFO:tasks.workunit.client.1.vm09.stdout:7/435: unlink d3/db/d25/f22 0 2026-03-09T15:00:50.553 INFO:tasks.workunit.client.1.vm09.stdout:1/426: mknod d8/d22/d72/d64/c86 0 2026-03-09T15:00:50.553 INFO:tasks.workunit.client.1.vm09.stdout:3/520: creat d3/d3a/d2b/d7b/fb5 x:0 0 0 2026-03-09T15:00:50.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:50 vm05.local ceph-mon[50611]: pgmap v151: 65 pgs: 65 active+clean; 997 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 24 MiB/s rd, 116 MiB/s wr, 262 op/s 2026-03-09T15:00:50.554 INFO:tasks.workunit.client.1.vm09.stdout:7/436: unlink d3/d1d/l21 0 2026-03-09T15:00:50.554 INFO:tasks.workunit.client.1.vm09.stdout:7/437: stat d3/d1d/d2d 0 2026-03-09T15:00:50.562 INFO:tasks.workunit.client.1.vm09.stdout:7/438: getdents d3/d1d/d65 0 2026-03-09T15:00:50.564 INFO:tasks.workunit.client.1.vm09.stdout:1/427: sync 2026-03-09T15:00:50.565 INFO:tasks.workunit.client.1.vm09.stdout:1/428: write d8/d10/d73/f21 [2811981,56854] 0 2026-03-09T15:00:50.565 INFO:tasks.workunit.client.1.vm09.stdout:1/429: fdatasync d8/d22/f6e 0 2026-03-09T15:00:50.567 INFO:tasks.workunit.client.1.vm09.stdout:7/439: creat d3/db/d25/d5c/d75/f7e x:0 0 0 2026-03-09T15:00:50.569 INFO:tasks.workunit.client.1.vm09.stdout:1/430: rmdir d8/d10/d73/d5d 39 2026-03-09T15:00:50.573 INFO:tasks.workunit.client.1.vm09.stdout:2/500: dwrite f5 [0,4194304] 0 2026-03-09T15:00:50.574 INFO:tasks.workunit.client.1.vm09.stdout:1/431: rename d8/d22/l27 to d8/d10/d73/d5d/l87 0 2026-03-09T15:00:50.574 INFO:tasks.workunit.client.1.vm09.stdout:7/440: creat d3/d28/f7f x:0 0 0 2026-03-09T15:00:50.581 
INFO:tasks.workunit.client.1.vm09.stdout:7/441: dwrite d3/d3d/f51 [0,4194304] 0 2026-03-09T15:00:50.586 INFO:tasks.workunit.client.1.vm09.stdout:7/442: creat d3/db/d15/f80 x:0 0 0 2026-03-09T15:00:50.587 INFO:tasks.workunit.client.1.vm09.stdout:2/501: mknod df/d1f/c91 0 2026-03-09T15:00:50.596 INFO:tasks.workunit.client.1.vm09.stdout:2/502: symlink df/d1f/d47/d84/l92 0 2026-03-09T15:00:50.596 INFO:tasks.workunit.client.1.vm09.stdout:1/432: dread d8/d10/d24/d45/f6c [0,4194304] 0 2026-03-09T15:00:50.601 INFO:tasks.workunit.client.1.vm09.stdout:1/433: dwrite d8/f6b [0,4194304] 0 2026-03-09T15:00:50.606 INFO:tasks.workunit.client.1.vm09.stdout:1/434: rename d8/d22/l35 to d8/d10/d73/d5d/d68/l88 0 2026-03-09T15:00:50.608 INFO:tasks.workunit.client.1.vm09.stdout:2/503: mkdir df/d93 0 2026-03-09T15:00:50.610 INFO:tasks.workunit.client.1.vm09.stdout:1/435: rmdir d8/d10/d73/d5d 39 2026-03-09T15:00:50.615 INFO:tasks.workunit.client.1.vm09.stdout:2/504: write df/d20/d2e/f48 [1573911,93040] 0 2026-03-09T15:00:50.615 INFO:tasks.workunit.client.1.vm09.stdout:2/505: unlink df/d20/f3f 0 2026-03-09T15:00:50.615 INFO:tasks.workunit.client.1.vm09.stdout:2/506: creat df/d58/d74/f94 x:0 0 0 2026-03-09T15:00:50.616 INFO:tasks.workunit.client.1.vm09.stdout:1/436: read d8/d50/d5b/f6f [213043,16817] 0 2026-03-09T15:00:50.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:50 vm09.local ceph-mon[59673]: pgmap v151: 65 pgs: 65 active+clean; 997 MiB data, 3.3 GiB used, 117 GiB / 120 GiB avail; 24 MiB/s rd, 116 MiB/s wr, 262 op/s 2026-03-09T15:00:50.618 INFO:tasks.workunit.client.1.vm09.stdout:1/437: rename d8/d22/d72/f84 to d8/d50/d5b/f89 0 2026-03-09T15:00:50.659 INFO:tasks.workunit.client.1.vm09.stdout:9/454: truncate d1/f29 33355 0 2026-03-09T15:00:50.660 INFO:tasks.workunit.client.1.vm09.stdout:9/455: chown d1/d7/d1e/d2b/d2e/f12 14040741 1 2026-03-09T15:00:50.660 INFO:tasks.workunit.client.1.vm09.stdout:9/456: write d1/d58/f75 [1024845,13151] 0 2026-03-09T15:00:50.661 
INFO:tasks.workunit.client.1.vm09.stdout:9/457: chown d1/d7/d1e 0 1 2026-03-09T15:00:50.663 INFO:tasks.workunit.client.1.vm09.stdout:9/458: symlink d1/d4f/d8f/d91/l9a 0 2026-03-09T15:00:50.665 INFO:tasks.workunit.client.1.vm09.stdout:9/459: rename d1/d7/d1e/d2b/d2e/f19 to d1/d6e/f9b 0 2026-03-09T15:00:50.666 INFO:tasks.workunit.client.1.vm09.stdout:9/460: write d1/d58/f99 [513652,85390] 0 2026-03-09T15:00:50.674 INFO:tasks.workunit.client.1.vm09.stdout:9/461: getdents d1/d4f 0 2026-03-09T15:00:50.680 INFO:tasks.workunit.client.1.vm09.stdout:9/462: symlink d1/d7/d1e/d2b/l9c 0 2026-03-09T15:00:50.684 INFO:tasks.workunit.client.1.vm09.stdout:9/463: creat d1/d7/d1e/d2b/d8d/f9d x:0 0 0 2026-03-09T15:00:50.684 INFO:tasks.workunit.client.1.vm09.stdout:9/464: readlink d1/d6e/l90 0 2026-03-09T15:00:50.695 INFO:tasks.workunit.client.1.vm09.stdout:9/465: rename d1/d7/d1e/d2b/d8d/f97 to d1/d7/d1e/f9e 0 2026-03-09T15:00:50.701 INFO:tasks.workunit.client.1.vm09.stdout:9/466: mkdir d1/d7/d9f 0 2026-03-09T15:00:50.702 INFO:tasks.workunit.client.1.vm09.stdout:9/467: write d1/d7/f3e [2826286,111823] 0 2026-03-09T15:00:50.705 INFO:tasks.workunit.client.1.vm09.stdout:0/581: rename da/dc/d22/d64/c94 to da/dc/d1c/d3c/d44/cbe 0 2026-03-09T15:00:50.705 INFO:tasks.workunit.client.1.vm09.stdout:0/582: chown da/d30/c87 0 1 2026-03-09T15:00:50.706 INFO:tasks.workunit.client.1.vm09.stdout:0/583: chown da/dc/d84/db8 29846892 1 2026-03-09T15:00:50.714 INFO:tasks.workunit.client.1.vm09.stdout:9/468: sync 2026-03-09T15:00:50.720 INFO:tasks.workunit.client.1.vm09.stdout:0/584: link da/dc/d1c/l9a da/dc/d1c/d46/d5b/lbf 0 2026-03-09T15:00:50.729 INFO:tasks.workunit.client.1.vm09.stdout:0/585: mkdir da/dc/dc0 0 2026-03-09T15:00:50.732 INFO:tasks.workunit.client.1.vm09.stdout:0/586: mknod da/dc/d22/d64/cc1 0 2026-03-09T15:00:50.737 INFO:tasks.workunit.client.1.vm09.stdout:0/587: link da/dc/d1c/d3c/d78/d7a/d9c/la4 da/dc/d1c/d46/d5b/d9f/lc2 0 2026-03-09T15:00:50.739 
INFO:tasks.workunit.client.1.vm09.stdout:0/588: mknod da/dc/d22/d64/cc3 0 2026-03-09T15:00:50.743 INFO:tasks.workunit.client.1.vm09.stdout:0/589: dwrite da/dc/d22/d64/fbc [0,4194304] 0 2026-03-09T15:00:50.744 INFO:tasks.workunit.client.1.vm09.stdout:0/590: write da/dc/d92/d9e/fa2 [142846,100090] 0 2026-03-09T15:00:50.747 INFO:tasks.workunit.client.1.vm09.stdout:4/503: write db/d19/d52/d76/f65 [1467125,53507] 0 2026-03-09T15:00:50.747 INFO:tasks.workunit.client.1.vm09.stdout:0/591: symlink da/dc/d1c/d3c/d44/d6c/lc4 0 2026-03-09T15:00:50.748 INFO:tasks.workunit.client.1.vm09.stdout:0/592: write da/dc/f8b [136788,8897] 0 2026-03-09T15:00:50.755 INFO:tasks.workunit.client.1.vm09.stdout:4/504: mkdir db/d19/d23/d44/d7c/d7d/d97/da3 0 2026-03-09T15:00:50.755 INFO:tasks.workunit.client.1.vm09.stdout:0/593: chown da/dc/c82 10 1 2026-03-09T15:00:50.759 INFO:tasks.workunit.client.1.vm09.stdout:1/438: dread d8/d10/f12 [0,4194304] 0 2026-03-09T15:00:50.760 INFO:tasks.workunit.client.1.vm09.stdout:6/468: truncate d6/d20/d2a/f5e 3365445 0 2026-03-09T15:00:50.766 INFO:tasks.workunit.client.1.vm09.stdout:0/594: dwrite da/fa9 [0,4194304] 0 2026-03-09T15:00:50.778 INFO:tasks.workunit.client.1.vm09.stdout:8/502: dwrite df/f51 [0,4194304] 0 2026-03-09T15:00:50.779 INFO:tasks.workunit.client.1.vm09.stdout:6/469: symlink d6/d20/d2a/d3b/l97 0 2026-03-09T15:00:50.781 INFO:tasks.workunit.client.1.vm09.stdout:1/439: dread d8/d10/f2f [0,4194304] 0 2026-03-09T15:00:50.783 INFO:tasks.workunit.client.1.vm09.stdout:5/485: write d2/d37/f6d [609494,10827] 0 2026-03-09T15:00:50.790 INFO:tasks.workunit.client.1.vm09.stdout:3/521: dwrite d3/d3a/d2b/d31/f45 [0,4194304] 0 2026-03-09T15:00:50.790 INFO:tasks.workunit.client.1.vm09.stdout:0/595: symlink da/dc/lc5 0 2026-03-09T15:00:50.792 INFO:tasks.workunit.client.1.vm09.stdout:3/522: readlink d3/d3a/d2b/d39/l50 0 2026-03-09T15:00:50.792 INFO:tasks.workunit.client.1.vm09.stdout:5/486: dread d2/f5e [0,4194304] 0 2026-03-09T15:00:50.796 
INFO:tasks.workunit.client.1.vm09.stdout:8/503: symlink df/d2d/d46/l91 0 2026-03-09T15:00:50.801 INFO:tasks.workunit.client.1.vm09.stdout:4/505: dread db/f21 [0,4194304] 0 2026-03-09T15:00:50.806 INFO:tasks.workunit.client.1.vm09.stdout:0/596: dwrite da/dc/d1c/d3c/d44/f71 [0,4194304] 0 2026-03-09T15:00:50.812 INFO:tasks.workunit.client.1.vm09.stdout:0/597: chown da/dc/d1c/d3c/d44/f89 285640535 1 2026-03-09T15:00:50.812 INFO:tasks.workunit.client.1.vm09.stdout:0/598: stat da/dc/d1c/d3c/d44/f71 0 2026-03-09T15:00:50.821 INFO:tasks.workunit.client.1.vm09.stdout:8/504: unlink df/d38/f53 0 2026-03-09T15:00:50.823 INFO:tasks.workunit.client.1.vm09.stdout:5/487: dwrite d2/d37/d3c/d36/d45/d5c/f91 [0,4194304] 0 2026-03-09T15:00:50.831 INFO:tasks.workunit.client.1.vm09.stdout:5/488: write d2/d37/d3c/f3a [4387178,67281] 0 2026-03-09T15:00:50.831 INFO:tasks.workunit.client.1.vm09.stdout:4/506: mknod db/d19/d23/d44/ca4 0 2026-03-09T15:00:50.841 INFO:tasks.workunit.client.1.vm09.stdout:0/599: dread da/dc/d1c/fa0 [0,4194304] 0 2026-03-09T15:00:50.841 INFO:tasks.workunit.client.1.vm09.stdout:4/507: truncate db/d12/d16/f83 263535 0 2026-03-09T15:00:50.846 INFO:tasks.workunit.client.1.vm09.stdout:4/508: chown db/d19 106261652 1 2026-03-09T15:00:50.848 INFO:tasks.workunit.client.1.vm09.stdout:7/443: dwrite d3/db/d25/d5c/f5e [4194304,4194304] 0 2026-03-09T15:00:50.852 INFO:tasks.workunit.client.1.vm09.stdout:5/489: mkdir d2/d37/d3c/d36/d45/dae 0 2026-03-09T15:00:50.854 INFO:tasks.workunit.client.1.vm09.stdout:5/490: write d2/d37/d67/d95/f99 [978009,87356] 0 2026-03-09T15:00:50.856 INFO:tasks.workunit.client.1.vm09.stdout:4/509: dread db/d19/d23/d71/f6c [0,4194304] 0 2026-03-09T15:00:50.876 INFO:tasks.workunit.client.1.vm09.stdout:2/507: dread - df/d20/d2e/f54 zero size 2026-03-09T15:00:50.876 INFO:tasks.workunit.client.1.vm09.stdout:5/491: truncate d2/f29 4316083 0 2026-03-09T15:00:50.879 INFO:tasks.workunit.client.1.vm09.stdout:4/510: mkdir db/d12/d16/d5b/da5 0 
2026-03-09T15:00:50.880 INFO:tasks.workunit.client.1.vm09.stdout:4/511: read - db/d19/d23/d71/d53/f99 zero size
2026-03-09T15:00:50.881 INFO:tasks.workunit.client.1.vm09.stdout:4/512: write db/d19/f8e [561313,69148] 0
2026-03-09T15:00:50.882 INFO:tasks.workunit.client.1.vm09.stdout:4/513: stat db/d12/d16/d5b 0
2026-03-09T15:00:50.883 INFO:tasks.workunit.client.1.vm09.stdout:2/508: dread df/d58/d67/f4e [0,4194304] 0
2026-03-09T15:00:50.884 INFO:tasks.workunit.client.1.vm09.stdout:0/600: truncate da/dc/d61/f66 958349 0
2026-03-09T15:00:50.891 INFO:tasks.workunit.client.1.vm09.stdout:4/514: unlink db/d12/d16/l64 0
2026-03-09T15:00:50.892 INFO:tasks.workunit.client.1.vm09.stdout:4/515: chown db/d19/d23/d44/d84 292 1
2026-03-09T15:00:50.892 INFO:tasks.workunit.client.1.vm09.stdout:6/470: dread d6/d20/d24/f60 [0,4194304] 0
2026-03-09T15:00:50.894 INFO:tasks.workunit.client.1.vm09.stdout:2/509: creat df/d58/d74/f95 x:0 0 0
2026-03-09T15:00:50.895 INFO:tasks.workunit.client.1.vm09.stdout:5/492: symlink d2/daa/laf 0
2026-03-09T15:00:50.897 INFO:tasks.workunit.client.1.vm09.stdout:0/601: dread da/dc/d1c/d3c/f81 [0,4194304] 0
2026-03-09T15:00:50.897 INFO:tasks.workunit.client.1.vm09.stdout:5/493: readlink d2/d37/d3c/d36/d4c/la3 0
2026-03-09T15:00:50.904 INFO:tasks.workunit.client.1.vm09.stdout:2/510: creat df/d1f/d47/d5d/f96 x:0 0 0
2026-03-09T15:00:50.914 INFO:tasks.workunit.client.1.vm09.stdout:4/516: creat db/d12/da1/fa6 x:0 0 0
2026-03-09T15:00:50.918 INFO:tasks.workunit.client.1.vm09.stdout:9/469: dwrite d1/d7/d1e/f34 [0,4194304] 0
2026-03-09T15:00:50.922 INFO:tasks.workunit.client.1.vm09.stdout:6/471: rename d6/d20/d38/d56/d65/d68/f69 to d6/d20/d2a/f98 0
2026-03-09T15:00:50.924 INFO:tasks.workunit.client.1.vm09.stdout:6/472: write d6/db/f66 [912052,31422] 0
2026-03-09T15:00:50.924 INFO:tasks.workunit.client.1.vm09.stdout:1/440: truncate d8/d10/f12 2303908 0
2026-03-09T15:00:50.926 INFO:tasks.workunit.client.1.vm09.stdout:2/511: mknod df/d93/c97 0
2026-03-09T15:00:50.928 INFO:tasks.workunit.client.1.vm09.stdout:9/470: dwrite d1/d4f/d52/f8b [0,4194304] 0
2026-03-09T15:00:50.931 INFO:tasks.workunit.client.1.vm09.stdout:2/512: read f4 [3225486,78983] 0
2026-03-09T15:00:50.938 INFO:tasks.workunit.client.1.vm09.stdout:3/523: dwrite d3/d3a/f1d [0,4194304] 0
2026-03-09T15:00:50.939 INFO:tasks.workunit.client.1.vm09.stdout:3/524: write d3/d5b/d79/d9d/faf [22345,96366] 0
2026-03-09T15:00:50.941 INFO:tasks.workunit.client.1.vm09.stdout:6/473: sync
2026-03-09T15:00:50.942 INFO:tasks.workunit.client.1.vm09.stdout:6/474: chown d6/d20/d38/d4e/f87 724145 1
2026-03-09T15:00:50.946 INFO:tasks.workunit.client.1.vm09.stdout:1/441: mknod d8/d22/d72/c8a 0
2026-03-09T15:00:50.948 INFO:tasks.workunit.client.1.vm09.stdout:5/494: truncate d2/d37/d3c/d36/d45/f6e 2889082 0
2026-03-09T15:00:50.952 INFO:tasks.workunit.client.1.vm09.stdout:8/505: dwrite df/d5b/d65/d1d/f41 [0,4194304] 0
2026-03-09T15:00:50.961 INFO:tasks.workunit.client.1.vm09.stdout:2/513: rmdir df/d1f/d47 39
2026-03-09T15:00:50.968 INFO:tasks.workunit.client.1.vm09.stdout:6/475: truncate d6/d20/d2a/f37 1133434 0
2026-03-09T15:00:50.972 INFO:tasks.workunit.client.1.vm09.stdout:5/495: readlink d2/d37/l78 0
2026-03-09T15:00:50.974 INFO:tasks.workunit.client.1.vm09.stdout:7/444: rename d3/db/d15/d5f/d44/f62 to d3/d1d/d2d/f81 0
2026-03-09T15:00:50.976 INFO:tasks.workunit.client.1.vm09.stdout:2/514: write df/d20/d29/f31 [1442396,47001] 0
2026-03-09T15:00:50.976 INFO:tasks.workunit.client.1.vm09.stdout:6/476: unlink d6/d20/c26 0
2026-03-09T15:00:50.977 INFO:tasks.workunit.client.1.vm09.stdout:1/442: mkdir d8/d10/d73/d5d/d78/d8b 0
2026-03-09T15:00:50.978 INFO:tasks.workunit.client.1.vm09.stdout:8/506: rename df/d5b/d65/f20 to df/d2d/d46/f92 0
2026-03-09T15:00:50.981 INFO:tasks.workunit.client.1.vm09.stdout:7/445: creat d3/db/d15/d5f/d44/f82 x:0 0 0
2026-03-09T15:00:50.981 INFO:tasks.workunit.client.1.vm09.stdout:8/507: truncate df/d2d/d46/d33/f8e 781670 0
2026-03-09T15:00:50.982 INFO:tasks.workunit.client.1.vm09.stdout:8/508: dread - df/d38/d64/d5f/f69 zero size
2026-03-09T15:00:50.983 INFO:tasks.workunit.client.1.vm09.stdout:0/602: write da/dc/d1c/d3c/f81 [412731,107742] 0
2026-03-09T15:00:50.987 INFO:tasks.workunit.client.1.vm09.stdout:5/496: rename d2/d37/d53/d86/d88/f8d to d2/d37/d3c/d36/d4c/d51/fb0 0
2026-03-09T15:00:50.987 INFO:tasks.workunit.client.1.vm09.stdout:4/517: write db/d12/d16/f36 [412604,12274] 0
2026-03-09T15:00:50.989 INFO:tasks.workunit.client.1.vm09.stdout:3/525: getdents d3/d3a/d2b/d31/d4a 0
2026-03-09T15:00:50.990 INFO:tasks.workunit.client.1.vm09.stdout:1/443: unlink d8/d10/d73/d5d/l87 0
2026-03-09T15:00:50.990 INFO:tasks.workunit.client.1.vm09.stdout:9/471: dread d1/f29 [0,4194304] 0
2026-03-09T15:00:50.993 INFO:tasks.workunit.client.1.vm09.stdout:2/515: rename df/d58/d74/l78 to df/d1f/d47/d84/l98 0
2026-03-09T15:00:50.994 INFO:tasks.workunit.client.1.vm09.stdout:7/446: mkdir d3/db/d15/d5f/d6e/d83 0
2026-03-09T15:00:50.997 INFO:tasks.workunit.client.1.vm09.stdout:1/444: mknod d8/d50/c8c 0
2026-03-09T15:00:50.997 INFO:tasks.workunit.client.1.vm09.stdout:3/526: mkdir d3/d3a/d2b/d7b/db6 0
2026-03-09T15:00:50.999 INFO:tasks.workunit.client.1.vm09.stdout:5/497: dwrite d2/d37/fa5 [0,4194304] 0
2026-03-09T15:00:51.002 INFO:tasks.workunit.client.1.vm09.stdout:6/477: getdents d6/d20/d38/d56/d65/d68/d86 0
2026-03-09T15:00:51.003 INFO:tasks.workunit.client.1.vm09.stdout:1/445: mkdir d8/d10/d24/d45/d5f/d8d 0
2026-03-09T15:00:51.004 INFO:tasks.workunit.client.1.vm09.stdout:7/447: creat d3/d1d/d2d/f84 x:0 0 0
2026-03-09T15:00:51.004 INFO:tasks.workunit.client.1.vm09.stdout:2/516: creat df/d1f/d6d/d8f/f99 x:0 0 0
2026-03-09T15:00:51.005 INFO:tasks.workunit.client.1.vm09.stdout:6/478: creat d6/d20/d38/d56/d65/d68/f99 x:0 0 0
2026-03-09T15:00:51.006 INFO:tasks.workunit.client.1.vm09.stdout:3/527: rename d3/d3a/d54/f59 to d3/d3a/d2b/d31/d9e/fb7 0
2026-03-09T15:00:51.007 INFO:tasks.workunit.client.1.vm09.stdout:4/518: getdents db/d19 0
2026-03-09T15:00:51.007 INFO:tasks.workunit.client.1.vm09.stdout:5/498: mkdir d2/db1 0
2026-03-09T15:00:51.009 INFO:tasks.workunit.client.1.vm09.stdout:7/448: rmdir d3/d1d 39
2026-03-09T15:00:51.009 INFO:tasks.workunit.client.1.vm09.stdout:6/479: read d6/d20/d2a/f61 [686514,106734] 0
2026-03-09T15:00:51.011 INFO:tasks.workunit.client.1.vm09.stdout:6/480: fdatasync d6/d20/d38/d4e/f87 0
2026-03-09T15:00:51.012 INFO:tasks.workunit.client.1.vm09.stdout:4/519: symlink db/d19/d81/la7 0
2026-03-09T15:00:51.012 INFO:tasks.workunit.client.1.vm09.stdout:2/517: mknod df/d58/c9a 0
2026-03-09T15:00:51.013 INFO:tasks.workunit.client.1.vm09.stdout:5/499: mkdir d2/db1/db2 0
2026-03-09T15:00:51.014 INFO:tasks.workunit.client.1.vm09.stdout:6/481: write d6/d20/d24/f49 [432182,62651] 0
2026-03-09T15:00:51.015 INFO:tasks.workunit.client.1.vm09.stdout:7/449: mknod d3/db/d15/d5f/d44/c85 0
2026-03-09T15:00:51.016 INFO:tasks.workunit.client.1.vm09.stdout:1/446: getdents d8/d10/d73 0
2026-03-09T15:00:51.017 INFO:tasks.workunit.client.1.vm09.stdout:2/518: fdatasync df/f5b 0
2026-03-09T15:00:51.021 INFO:tasks.workunit.client.1.vm09.stdout:5/500: write d2/d37/d3c/d36/d45/d5c/f9c [4231276,87581] 0
2026-03-09T15:00:51.024 INFO:tasks.workunit.client.1.vm09.stdout:1/447: symlink d8/d10/d24/d48/l8e 0
2026-03-09T15:00:51.030 INFO:tasks.workunit.client.1.vm09.stdout:5/501: creat d2/d37/d3c/d36/fb3 x:0 0 0
2026-03-09T15:00:51.031 INFO:tasks.workunit.client.1.vm09.stdout:7/450: dwrite d3/d61/f6c [0,4194304] 0
2026-03-09T15:00:51.031 INFO:tasks.workunit.client.1.vm09.stdout:6/482: dwrite d6/df/f16 [0,4194304] 0
2026-03-09T15:00:51.054 INFO:tasks.workunit.client.1.vm09.stdout:6/483: symlink d6/d20/d38/d56/d65/l9a 0
2026-03-09T15:00:51.056 INFO:tasks.workunit.client.1.vm09.stdout:1/448: rename d8/f3d to d8/d10/d24/f8f 0
2026-03-09T15:00:51.057 INFO:tasks.workunit.client.1.vm09.stdout:5/502: mknod d2/d37/d53/d86/dad/cb4 0
2026-03-09T15:00:51.057 INFO:tasks.workunit.client.1.vm09.stdout:5/503: fdatasync d2/d37/d3c/d36/d4c/f82 0
2026-03-09T15:00:51.058 INFO:tasks.workunit.client.1.vm09.stdout:7/451: truncate d3/f7a 3536993 0
2026-03-09T15:00:51.059 INFO:tasks.workunit.client.1.vm09.stdout:6/484: fdatasync d6/db/f1f 0
2026-03-09T15:00:51.063 INFO:tasks.workunit.client.1.vm09.stdout:7/452: write d3/d1d/f72 [4127207,104146] 0
2026-03-09T15:00:51.065 INFO:tasks.workunit.client.1.vm09.stdout:7/453: write d3/db/d25/d5c/d75/f7e [86982,128085] 0
2026-03-09T15:00:51.066 INFO:tasks.workunit.client.1.vm09.stdout:5/504: dread d2/d37/fa5 [0,4194304] 0
2026-03-09T15:00:51.067 INFO:tasks.workunit.client.1.vm09.stdout:7/454: write d3/db/d46/f66 [766900,130421] 0
2026-03-09T15:00:51.067 INFO:tasks.workunit.client.1.vm09.stdout:6/485: read d6/db/f1f [6502422,55768] 0
2026-03-09T15:00:51.068 INFO:tasks.workunit.client.1.vm09.stdout:7/455: chown d3/db/d25/l49 108676 1
2026-03-09T15:00:51.069 INFO:tasks.workunit.client.1.vm09.stdout:2/519: dread df/f13 [0,4194304] 0
2026-03-09T15:00:51.072 INFO:tasks.workunit.client.1.vm09.stdout:5/505: mkdir d2/d37/d67/d95/db5 0
2026-03-09T15:00:51.075 INFO:tasks.workunit.client.1.vm09.stdout:8/509: dread f8 [4194304,4194304] 0
2026-03-09T15:00:51.084 INFO:tasks.workunit.client.1.vm09.stdout:5/506: unlink d2/d37/d3c/f3a 0
2026-03-09T15:00:51.085 INFO:tasks.workunit.client.1.vm09.stdout:2/520: unlink df/d58/d67/l7a 0
2026-03-09T15:00:51.086 INFO:tasks.workunit.client.1.vm09.stdout:8/510: dwrite df/f30 [0,4194304] 0
2026-03-09T15:00:51.086 INFO:tasks.workunit.client.1.vm09.stdout:6/486: rename d6/db/d10/l4d to d6/d20/d44/l9b 0
2026-03-09T15:00:51.090 INFO:tasks.workunit.client.1.vm09.stdout:8/511: read df/d38/f58 [923189,2272] 0
2026-03-09T15:00:51.097 INFO:tasks.workunit.client.1.vm09.stdout:0/603: dread da/dc/d10/f29 [4194304,4194304] 0
2026-03-09T15:00:51.102 INFO:tasks.workunit.client.1.vm09.stdout:8/512: dwrite df/d5c/f8b [0,4194304] 0
2026-03-09T15:00:51.102 INFO:tasks.workunit.client.1.vm09.stdout:2/521: creat df/f9b x:0 0 0
2026-03-09T15:00:51.104 INFO:tasks.workunit.client.1.vm09.stdout:0/604: symlink da/dc/d1c/d46/lc6 0
2026-03-09T15:00:51.106 INFO:tasks.workunit.client.1.vm09.stdout:0/605: fdatasync da/dc/d1c/d46/d63/faa 0
2026-03-09T15:00:51.107 INFO:tasks.workunit.client.1.vm09.stdout:7/456: dread d3/d1d/f30 [0,4194304] 0
2026-03-09T15:00:51.112 INFO:tasks.workunit.client.1.vm09.stdout:9/472: write d1/d7/d1e/f5a [573171,49666] 0
2026-03-09T15:00:51.114 INFO:tasks.workunit.client.1.vm09.stdout:2/522: unlink df/d20/d2e/f64 0
2026-03-09T15:00:51.116 INFO:tasks.workunit.client.1.vm09.stdout:0/606: unlink da/dc/d84/cb5 0
2026-03-09T15:00:51.117 INFO:tasks.workunit.client.1.vm09.stdout:9/473: dwrite d1/d6e/f74 [0,4194304] 0
2026-03-09T15:00:51.119 INFO:tasks.workunit.client.1.vm09.stdout:7/457: creat d3/d61/f86 x:0 0 0
2026-03-09T15:00:51.120 INFO:tasks.workunit.client.1.vm09.stdout:0/607: fsync da/dc/d1c/d3c/d78/d7a/fb2 0
2026-03-09T15:00:51.121 INFO:tasks.workunit.client.1.vm09.stdout:9/474: write d1/d6e/f88 [2073926,47218] 0
2026-03-09T15:00:51.127 INFO:tasks.workunit.client.1.vm09.stdout:8/513: symlink df/d5b/d65/l93 0
2026-03-09T15:00:51.130 INFO:tasks.workunit.client.1.vm09.stdout:3/528: write d3/f9 [271491,88246] 0
2026-03-09T15:00:51.130 INFO:tasks.workunit.client.1.vm09.stdout:0/608: dwrite da/dc/d1c/d3c/d44/f89 [0,4194304] 0
2026-03-09T15:00:51.132 INFO:tasks.workunit.client.1.vm09.stdout:3/529: dread - d3/d3a/f6b zero size
2026-03-09T15:00:51.136 INFO:tasks.workunit.client.1.vm09.stdout:8/514: dread df/d5c/f72 [0,4194304] 0
2026-03-09T15:00:51.140 INFO:tasks.workunit.client.1.vm09.stdout:7/458: getdents d3/db/d25/d7d 0
2026-03-09T15:00:51.142 INFO:tasks.workunit.client.1.vm09.stdout:9/475: symlink d1/la0 0
2026-03-09T15:00:51.144 INFO:tasks.workunit.client.1.vm09.stdout:2/523: creat df/d1f/f9c x:0 0 0
2026-03-09T15:00:51.146 INFO:tasks.workunit.client.1.vm09.stdout:3/530: creat d3/d3a/fb8 x:0 0 0
2026-03-09T15:00:51.147 INFO:tasks.workunit.client.1.vm09.stdout:0/609: dwrite f7 [8388608,4194304] 0
2026-03-09T15:00:51.151 INFO:tasks.workunit.client.1.vm09.stdout:8/515: dread df/f51 [0,4194304] 0
2026-03-09T15:00:51.153 INFO:tasks.workunit.client.1.vm09.stdout:2/524: creat df/d20/f9d x:0 0 0
2026-03-09T15:00:51.157 INFO:tasks.workunit.client.1.vm09.stdout:7/459: dwrite d3/d1d/d2d/f84 [0,4194304] 0
2026-03-09T15:00:51.168 INFO:tasks.workunit.client.1.vm09.stdout:8/516: dwrite df/d5c/f78 [0,4194304] 0
2026-03-09T15:00:51.172 INFO:tasks.workunit.client.1.vm09.stdout:3/531: link d3/d3a/c22 d3/d3a/cb9 0
2026-03-09T15:00:51.175 INFO:tasks.workunit.client.1.vm09.stdout:8/517: chown df/d2d/d46/d73/d81/c84 12379 1
2026-03-09T15:00:51.176 INFO:tasks.workunit.client.1.vm09.stdout:3/532: truncate d3/d3a/f6b 878392 0
2026-03-09T15:00:51.176 INFO:tasks.workunit.client.1.vm09.stdout:0/610: dread da/dc/f8b [0,4194304] 0
2026-03-09T15:00:51.179 INFO:tasks.workunit.client.1.vm09.stdout:3/533: mknod d3/d5b/d79/cba 0
2026-03-09T15:00:51.181 INFO:tasks.workunit.client.1.vm09.stdout:0/611: mknod da/dc/d22/d64/cc7 0
2026-03-09T15:00:51.181 INFO:tasks.workunit.client.1.vm09.stdout:7/460: mknod d3/d1d/d65/c87 0
2026-03-09T15:00:51.186 INFO:tasks.workunit.client.1.vm09.stdout:2/525: dwrite df/d20/d29/f51 [0,4194304] 0
2026-03-09T15:00:51.186 INFO:tasks.workunit.client.1.vm09.stdout:3/534: write d3/f77 [1193850,45557] 0
2026-03-09T15:00:51.206 INFO:tasks.workunit.client.1.vm09.stdout:8/518: getdents df/d2d/d42 0
2026-03-09T15:00:51.207 INFO:tasks.workunit.client.1.vm09.stdout:3/535: dread d3/d3a/d2b/d39/f81 [0,4194304] 0
2026-03-09T15:00:51.210 INFO:tasks.workunit.client.1.vm09.stdout:3/536: mknod d3/d5b/d79/d9d/cbb 0
2026-03-09T15:00:51.211 INFO:tasks.workunit.client.1.vm09.stdout:3/537: dread - d3/d3a/d2b/f66 zero size
2026-03-09T15:00:51.212 INFO:tasks.workunit.client.1.vm09.stdout:3/538: chown d3/d3a/d2b/d31/f40 1 1
2026-03-09T15:00:51.212 INFO:tasks.workunit.client.1.vm09.stdout:3/539: dread - d3/d5b/d79/f83 zero size
2026-03-09T15:00:51.213 INFO:tasks.workunit.client.1.vm09.stdout:3/540: write d3/d3a/d2b/d39/f84 [631096,51299] 0
2026-03-09T15:00:51.216 INFO:tasks.workunit.client.1.vm09.stdout:3/541: chown d3/d3a/d2b/d39/f84 334004 1
2026-03-09T15:00:51.218 INFO:tasks.workunit.client.1.vm09.stdout:2/526: dread df/f17 [0,4194304] 0
2026-03-09T15:00:51.219 INFO:tasks.workunit.client.1.vm09.stdout:3/542: symlink d3/d3a/d2b/d39/d48/lbc 0
2026-03-09T15:00:51.220 INFO:tasks.workunit.client.1.vm09.stdout:2/527: mkdir df/d1f/d6d/d8f/d5f/d9e 0
2026-03-09T15:00:51.221 INFO:tasks.workunit.client.1.vm09.stdout:3/543: dread - d3/d3a/d2b/d31/fa1 zero size
2026-03-09T15:00:51.222 INFO:tasks.workunit.client.1.vm09.stdout:2/528: write df/f9b [612672,54273] 0
2026-03-09T15:00:51.224 INFO:tasks.workunit.client.1.vm09.stdout:2/529: unlink f4 0
2026-03-09T15:00:51.224 INFO:tasks.workunit.client.1.vm09.stdout:2/530: stat df/d1f/d47/d5d/d90 0
2026-03-09T15:00:51.226 INFO:tasks.workunit.client.1.vm09.stdout:2/531: symlink df/d6e/l9f 0
2026-03-09T15:00:51.234 INFO:tasks.workunit.client.1.vm09.stdout:0/612: sync
2026-03-09T15:00:51.235 INFO:tasks.workunit.client.1.vm09.stdout:8/519: sync
2026-03-09T15:00:51.235 INFO:tasks.workunit.client.1.vm09.stdout:2/532: dread df/f42 [0,4194304] 0
2026-03-09T15:00:51.237 INFO:tasks.workunit.client.1.vm09.stdout:8/520: write df/d2d/f57 [689753,75568] 0
2026-03-09T15:00:51.237 INFO:tasks.workunit.client.1.vm09.stdout:2/533: chown df/f23 8994 1
2026-03-09T15:00:51.237 INFO:tasks.workunit.client.1.vm09.stdout:0/613: mkdir da/dc/d1c/d3c/d78/d7a/dbb/dc8 0
2026-03-09T15:00:51.239 INFO:tasks.workunit.client.1.vm09.stdout:8/521: creat df/d2d/d46/f94 x:0 0 0
2026-03-09T15:00:51.239 INFO:tasks.workunit.client.1.vm09.stdout:0/614: write da/dc/d1c/d46/d5b/f6a [5115580,60698] 0
2026-03-09T15:00:51.240 INFO:tasks.workunit.client.1.vm09.stdout:2/534: rmdir df/d20/d29 39
2026-03-09T15:00:51.240 INFO:tasks.workunit.client.1.vm09.stdout:8/522: write df/d38/d64/f50 [183866,97078] 0
2026-03-09T15:00:51.243 INFO:tasks.workunit.client.1.vm09.stdout:4/520: truncate db/d12/d16/f36 1906836 0
2026-03-09T15:00:51.249 INFO:tasks.workunit.client.1.vm09.stdout:4/521: readlink db/d19/d23/l72 0
2026-03-09T15:00:51.249 INFO:tasks.workunit.client.1.vm09.stdout:0/615: symlink da/d57/lc9 0
2026-03-09T15:00:51.249 INFO:tasks.workunit.client.1.vm09.stdout:8/523: mkdir df/d24/d95 0
2026-03-09T15:00:51.249 INFO:tasks.workunit.client.1.vm09.stdout:2/535: mkdir df/da0 0
2026-03-09T15:00:51.249 INFO:tasks.workunit.client.1.vm09.stdout:4/522: mkdir db/d19/d23/d44/d7c/d7d/d97/da8 0
2026-03-09T15:00:51.249 INFO:tasks.workunit.client.1.vm09.stdout:8/524: chown df/d2d/d46/d33/f8e 189 1
2026-03-09T15:00:51.252 INFO:tasks.workunit.client.1.vm09.stdout:8/525: chown df/d2d/d46/d73/d81/l88 3136 1
2026-03-09T15:00:51.253 INFO:tasks.workunit.client.1.vm09.stdout:0/616: creat da/dc/d1c/d3c/d44/fca x:0 0 0
2026-03-09T15:00:51.253 INFO:tasks.workunit.client.1.vm09.stdout:2/536: rename df/d1f/d6d/d8f/d5f/d9e to df/d1f/d47/d84/da1 0
2026-03-09T15:00:51.254 INFO:tasks.workunit.client.1.vm09.stdout:0/617: write da/dc/d22/d76/f8e [5263421,101183] 0
2026-03-09T15:00:51.260 INFO:tasks.workunit.client.1.vm09.stdout:0/618: write da/dc/d10/f29 [3326270,19300] 0
2026-03-09T15:00:51.263 INFO:tasks.workunit.client.1.vm09.stdout:2/537: dread df/f5b [0,4194304] 0
2026-03-09T15:00:51.263 INFO:tasks.workunit.client.1.vm09.stdout:0/619: chown da/dc/d1c/d3c/d44/f89 7 1
2026-03-09T15:00:51.265 INFO:tasks.workunit.client.1.vm09.stdout:2/538: write df/d1f/d47/d5d/f96 [491837,33753] 0
2026-03-09T15:00:51.267 INFO:tasks.workunit.client.1.vm09.stdout:2/539: chown le 143 1
2026-03-09T15:00:51.269 INFO:tasks.workunit.client.1.vm09.stdout:2/540: truncate df/d2d/f8c 487304 0
2026-03-09T15:00:51.270 INFO:tasks.workunit.client.1.vm09.stdout:3/544: fdatasync d3/d3a/d2b/d36/f44 0
2026-03-09T15:00:51.277 INFO:tasks.workunit.client.1.vm09.stdout:8/526: dread df/f26 [0,4194304] 0
2026-03-09T15:00:51.278 INFO:tasks.workunit.client.1.vm09.stdout:3/545: dwrite d3/d3a/d2b/d36/f44 [0,4194304] 0
2026-03-09T15:00:51.278 INFO:tasks.workunit.client.1.vm09.stdout:2/541: symlink df/d1f/d6d/d8f/la2 0
2026-03-09T15:00:51.278 INFO:tasks.workunit.client.1.vm09.stdout:2/542: chown df/d58/d67/f46 140188 1
2026-03-09T15:00:51.285 INFO:tasks.workunit.client.1.vm09.stdout:2/543: chown df/d1f/d6d/d8f/l85 7272506 1
2026-03-09T15:00:51.285 INFO:tasks.workunit.client.1.vm09.stdout:8/527: truncate df/d5b/d65/d1d/f44 2786750 0
2026-03-09T15:00:51.287 INFO:tasks.workunit.client.1.vm09.stdout:8/528: write df/d5c/f8b [1872572,19014] 0
2026-03-09T15:00:51.289 INFO:tasks.workunit.client.1.vm09.stdout:2/544: dread df/f23 [0,4194304] 0
2026-03-09T15:00:51.293 INFO:tasks.workunit.client.1.vm09.stdout:3/546: dwrite d3/d3a/d2b/d31/d4a/fa9 [0,4194304] 0
2026-03-09T15:00:51.296 INFO:tasks.workunit.client.1.vm09.stdout:2/545: mkdir df/d93/da3 0
2026-03-09T15:00:51.298 INFO:tasks.workunit.client.1.vm09.stdout:8/529: truncate df/d38/d64/d5f/f6f 577184 0
2026-03-09T15:00:51.300 INFO:tasks.workunit.client.1.vm09.stdout:2/546: symlink df/d1f/d6d/d8f/la4 0
2026-03-09T15:00:51.302 INFO:tasks.workunit.client.1.vm09.stdout:3/547: dread d3/d5b/f6d [0,4194304] 0
2026-03-09T15:00:51.303 INFO:tasks.workunit.client.1.vm09.stdout:3/548: stat d3/d9a/f97 0
2026-03-09T15:00:51.307 INFO:tasks.workunit.client.1.vm09.stdout:3/549: write d3/d5b/d79/d9d/fb3 [4236901,120223] 0
2026-03-09T15:00:51.310 INFO:tasks.workunit.client.1.vm09.stdout:3/550: creat d3/d3a/d54/fbd x:0 0 0
2026-03-09T15:00:51.326 INFO:tasks.workunit.client.1.vm09.stdout:7/461: read d3/f7a [283887,66921] 0
2026-03-09T15:00:51.327 INFO:tasks.workunit.client.1.vm09.stdout:1/449: write d8/f57 [2818835,91771] 0
2026-03-09T15:00:51.328 INFO:tasks.workunit.client.1.vm09.stdout:1/450: chown d8/d10/d73/d5d/d78/d8b 24144397 1
2026-03-09T15:00:51.328 INFO:tasks.workunit.client.1.vm09.stdout:7/462: creat d3/db/d25/d5c/f88 x:0 0 0
2026-03-09T15:00:51.331 INFO:tasks.workunit.client.1.vm09.stdout:1/451: mkdir d8/d90 0
2026-03-09T15:00:51.332 INFO:tasks.workunit.client.1.vm09.stdout:1/452: write d8/f7e [584070,66649] 0
2026-03-09T15:00:51.334 INFO:tasks.workunit.client.1.vm09.stdout:8/530: fsync df/d5c/f8b 0
2026-03-09T15:00:51.335 INFO:tasks.workunit.client.1.vm09.stdout:7/463: creat d3/db/d15/d5f/f89 x:0 0 0
2026-03-09T15:00:51.336 INFO:tasks.workunit.client.1.vm09.stdout:8/531: fdatasync df/d5b/f31 0
2026-03-09T15:00:51.341 INFO:tasks.workunit.client.1.vm09.stdout:1/453: dwrite d8/ff [4194304,4194304] 0
2026-03-09T15:00:51.341 INFO:tasks.workunit.client.1.vm09.stdout:5/507: dread - d2/d37/f75 zero size
2026-03-09T15:00:51.341 INFO:tasks.workunit.client.1.vm09.stdout:1/454: stat d8/f7e 0
2026-03-09T15:00:51.343 INFO:tasks.workunit.client.1.vm09.stdout:5/508: dread - d2/d37/d3c/fac zero size
2026-03-09T15:00:51.349 INFO:tasks.workunit.client.1.vm09.stdout:1/455: mknod d8/d10/d73/c91 0
2026-03-09T15:00:51.349 INFO:tasks.workunit.client.1.vm09.stdout:8/532: dwrite df/f51 [0,4194304] 0
2026-03-09T15:00:51.351 INFO:tasks.workunit.client.1.vm09.stdout:5/509: unlink d2/d37/d3c/d36/d4c/d51/d96/c4d 0
2026-03-09T15:00:51.353 INFO:tasks.workunit.client.1.vm09.stdout:5/510: stat d2/d37/d3c/l33 0
2026-03-09T15:00:51.353 INFO:tasks.workunit.client.1.vm09.stdout:8/533: creat df/d2d/d42/f96 x:0 0 0
2026-03-09T15:00:51.356 INFO:tasks.workunit.client.1.vm09.stdout:5/511: creat d2/d37/d67/d95/db5/fb6 x:0 0 0
2026-03-09T15:00:51.358 INFO:tasks.workunit.client.1.vm09.stdout:5/512: mknod d2/d37/d67/d95/cb7 0
2026-03-09T15:00:51.362 INFO:tasks.workunit.client.1.vm09.stdout:5/513: mkdir d2/d37/d67/d95/db8 0
2026-03-09T15:00:51.363 INFO:tasks.workunit.client.1.vm09.stdout:8/534: dwrite df/f30 [12582912,4194304] 0
2026-03-09T15:00:51.368 INFO:tasks.workunit.client.1.vm09.stdout:8/535: symlink df/d38/d64/d5f/l97 0
2026-03-09T15:00:51.372 INFO:tasks.workunit.client.1.vm09.stdout:9/476: write d1/f24 [668678,120331] 0
2026-03-09T15:00:51.378 INFO:tasks.workunit.client.1.vm09.stdout:5/514: dread d2/d37/d53/f79 [0,4194304] 0
2026-03-09T15:00:51.384 INFO:tasks.workunit.client.1.vm09.stdout:5/515: dread - d2/d37/d3c/d36/d4c/f82 zero size
2026-03-09T15:00:51.384 INFO:tasks.workunit.client.1.vm09.stdout:8/536: mkdir df/d5b/d98 0
2026-03-09T15:00:51.390 INFO:tasks.workunit.client.1.vm09.stdout:4/523: truncate db/d19/d23/d71/f4e 2661139 0
2026-03-09T15:00:51.390 INFO:tasks.workunit.client.1.vm09.stdout:5/516: creat d2/da9/fb9 x:0 0 0
2026-03-09T15:00:51.391 INFO:tasks.workunit.client.1.vm09.stdout:9/477: rmdir d1/d4f/d52 39
2026-03-09T15:00:51.392 INFO:tasks.workunit.client.1.vm09.stdout:5/517: write d2/d37/d3c/d36/d45/d5c/f90 [1804923,99055] 0
2026-03-09T15:00:51.401 INFO:tasks.workunit.client.1.vm09.stdout:8/537: dwrite df/d2d/d46/d73/d81/f59 [0,4194304] 0
2026-03-09T15:00:51.406 INFO:tasks.workunit.client.1.vm09.stdout:8/538: mkdir df/d24/d99 0
2026-03-09T15:00:51.406 INFO:tasks.workunit.client.1.vm09.stdout:4/524: dwrite db/f14 [0,4194304] 0
2026-03-09T15:00:51.411 INFO:tasks.workunit.client.1.vm09.stdout:8/539: mkdir df/d2d/d42/d79/d9a 0
2026-03-09T15:00:51.414 INFO:tasks.workunit.client.1.vm09.stdout:8/540: mkdir df/d2d/d42/d79/d9b 0
2026-03-09T15:00:51.416 INFO:tasks.workunit.client.1.vm09.stdout:8/541: write df/d24/f83 [725932,18005] 0
2026-03-09T15:00:51.425 INFO:tasks.workunit.client.1.vm09.stdout:8/542: fdatasync df/d38/f58 0
2026-03-09T15:00:51.431 INFO:tasks.workunit.client.1.vm09.stdout:1/456: dread d8/f57 [0,4194304] 0
2026-03-09T15:00:51.434 INFO:tasks.workunit.client.1.vm09.stdout:1/457: creat d8/d10/d24/d45/f92 x:0 0 0
2026-03-09T15:00:51.436 INFO:tasks.workunit.client.1.vm09.stdout:1/458: fsync d8/d10/f44 0
2026-03-09T15:00:51.440 INFO:tasks.workunit.client.1.vm09.stdout:4/525: sync
2026-03-09T15:00:51.442 INFO:tasks.workunit.client.1.vm09.stdout:1/459: rename d8/d10/c63 to d8/d22/d72/d64/c93 0
2026-03-09T15:00:51.447 INFO:tasks.workunit.client.1.vm09.stdout:2/547: dwrite df/d1f/f7e [4194304,4194304] 0
2026-03-09T15:00:51.450 INFO:tasks.workunit.client.1.vm09.stdout:4/526: dread db/d12/d16/f54 [0,4194304] 0
2026-03-09T15:00:51.451 INFO:tasks.workunit.client.1.vm09.stdout:2/548: sync
2026-03-09T15:00:51.452 INFO:tasks.workunit.client.1.vm09.stdout:2/549: chown df/d1f/c8b 1897812480 1
2026-03-09T15:00:51.458 INFO:tasks.workunit.client.1.vm09.stdout:4/527: stat db/d12/d16/la2 0
2026-03-09T15:00:51.459 INFO:tasks.workunit.client.1.vm09.stdout:2/550: symlink df/d1f/d6d/d8f/d5f/la5 0
2026-03-09T15:00:51.459 INFO:tasks.workunit.client.1.vm09.stdout:4/528: write db/d12/f1b [1010100,90589] 0
2026-03-09T15:00:51.464 INFO:tasks.workunit.client.1.vm09.stdout:3/551: write d3/d5b/f6d [304417,128987] 0
2026-03-09T15:00:51.465 INFO:tasks.workunit.client.1.vm09.stdout:2/551: sync
2026-03-09T15:00:51.466 INFO:tasks.workunit.client.1.vm09.stdout:2/552: write df/d58/f65 [4851792,7043] 0
2026-03-09T15:00:51.467 INFO:tasks.workunit.client.1.vm09.stdout:4/529: dread db/f1c [0,4194304] 0
2026-03-09T15:00:51.474 INFO:tasks.workunit.client.1.vm09.stdout:3/552: dwrite d3/d5b/d79/f83 [0,4194304] 0
2026-03-09T15:00:51.475 INFO:tasks.workunit.client.1.vm09.stdout:4/530: creat db/d19/d23/d71/d53/fa9 x:0 0 0
2026-03-09T15:00:51.482 INFO:tasks.workunit.client.1.vm09.stdout:2/553: dread df/d20/f49 [0,4194304] 0
2026-03-09T15:00:51.484 INFO:tasks.workunit.client.1.vm09.stdout:3/553: dwrite d3/d9a/f7e [0,4194304] 0
2026-03-09T15:00:51.494 INFO:tasks.workunit.client.1.vm09.stdout:7/464: truncate d3/d1d/d2d/f84 2516147 0
2026-03-09T15:00:51.497 INFO:tasks.workunit.client.1.vm09.stdout:0/620: dwrite da/f12 [0,4194304] 0
2026-03-09T15:00:51.508 INFO:tasks.workunit.client.1.vm09.stdout:0/621: dread da/dc/d10/f29 [4194304,4194304] 0
2026-03-09T15:00:51.509 INFO:tasks.workunit.client.1.vm09.stdout:7/465: read d3/db/f42 [2386435,66282] 0
2026-03-09T15:00:51.511 INFO:tasks.workunit.client.1.vm09.stdout:4/531: dread db/d19/d23/d71/d5f/f87 [0,4194304] 0
2026-03-09T15:00:51.517 INFO:tasks.workunit.client.1.vm09.stdout:7/466: creat d3/db/d25/d5c/f8a x:0 0 0
2026-03-09T15:00:51.517 INFO:tasks.workunit.client.1.vm09.stdout:0/622: mkdir da/dc/dcb 0
2026-03-09T15:00:51.520 INFO:tasks.workunit.client.1.vm09.stdout:4/532: symlink db/d19/d52/laa 0
2026-03-09T15:00:51.527 INFO:tasks.workunit.client.1.vm09.stdout:6/487: dwrite d6/d20/d2a/f37 [0,4194304] 0
2026-03-09T15:00:51.531 INFO:tasks.workunit.client.1.vm09.stdout:0/623: creat da/dc/d22/d64/fcc x:0 0 0
2026-03-09T15:00:51.534 INFO:tasks.workunit.client.1.vm09.stdout:0/624: write da/dc/d1c/d3c/d44/f89 [3189307,5905] 0
2026-03-09T15:00:51.537 INFO:tasks.workunit.client.1.vm09.stdout:1/460: dwrite d8/d10/d73/f21 [0,4194304] 0
2026-03-09T15:00:51.544 INFO:tasks.workunit.client.1.vm09.stdout:5/518: truncate d2/d37/d3c/d55/f7b 3195531 0
2026-03-09T15:00:51.545 INFO:tasks.workunit.client.1.vm09.stdout:6/488: chown d6/l9 30243 1
2026-03-09T15:00:51.545 INFO:tasks.workunit.client.1.vm09.stdout:9/478: truncate d1/d4f/d52/f53 1598959 0
2026-03-09T15:00:51.545 INFO:tasks.workunit.client.1.vm09.stdout:4/533: read db/d12/f5a [33553,42896] 0
2026-03-09T15:00:51.551 INFO:tasks.workunit.client.1.vm09.stdout:0/625: rename da/dc/d1c/d3c/d44/d6c to da/dc/d1c/d46/d63/d86/dcd 0
2026-03-09T15:00:51.551 INFO:tasks.workunit.client.1.vm09.stdout:0/626: readlink da/dc/d10/l13 0
2026-03-09T15:00:51.559 INFO:tasks.workunit.client.1.vm09.stdout:4/534: creat db/d12/d16/fab x:0 0 0
2026-03-09T15:00:51.559 INFO:tasks.workunit.client.1.vm09.stdout:6/489: truncate d6/d20/d24/f67 802436 0
2026-03-09T15:00:51.560 INFO:tasks.workunit.client.1.vm09.stdout:0/627: creat da/dc/d1c/d3c/d78/d7a/d9c/fce x:0 0 0
2026-03-09T15:00:51.560 INFO:tasks.workunit.client.1.vm09.stdout:8/543: dwrite df/d38/d64/d5f/f6f [0,4194304] 0
2026-03-09T15:00:51.567 INFO:tasks.workunit.client.1.vm09.stdout:0/628: write da/dc/d22/d76/f8e [3069694,52262] 0
2026-03-09T15:00:51.567 INFO:tasks.workunit.client.1.vm09.stdout:8/544: write df/d2d/d46/d73/d81/f63 [3038610,90349] 0
2026-03-09T15:00:51.571 INFO:tasks.workunit.client.1.vm09.stdout:8/545: write df/f51 [143459,49474] 0
2026-03-09T15:00:51.573 INFO:tasks.workunit.client.1.vm09.stdout:7/467: dread d3/d3d/f5a [0,4194304] 0
2026-03-09T15:00:51.574 INFO:tasks.workunit.client.1.vm09.stdout:5/519: symlink d2/d37/lba 0
2026-03-09T15:00:51.575 INFO:tasks.workunit.client.1.vm09.stdout:5/520: fdatasync d2/d37/d3c/fac 0
2026-03-09T15:00:51.575 INFO:tasks.workunit.client.1.vm09.stdout:4/535: sync
2026-03-09T15:00:51.581 INFO:tasks.workunit.client.1.vm09.stdout:6/490: rename d6/d20/d38/d4e/d55/f90 to d6/d20/d24/d7e/f9c 0
2026-03-09T15:00:51.585 INFO:tasks.workunit.client.1.vm09.stdout:0/629: truncate da/f4c 3829490 0
2026-03-09T15:00:51.585 INFO:tasks.workunit.client.1.vm09.stdout:8/546: symlink df/d2d/d46/d73/l9c 0
2026-03-09T15:00:51.586 INFO:tasks.workunit.client.1.vm09.stdout:7/468: symlink d3/db/d46/l8b 0
2026-03-09T15:00:51.586 INFO:tasks.workunit.client.1.vm09.stdout:5/521: symlink d2/da9/lbb 0
2026-03-09T15:00:51.587 INFO:tasks.workunit.client.1.vm09.stdout:4/536: creat db/d12/d16/d5b/fac x:0 0 0
2026-03-09T15:00:51.587 INFO:tasks.workunit.client.1.vm09.stdout:8/547: write df/d5b/f82 [3534162,40118] 0
2026-03-09T15:00:51.587 INFO:tasks.workunit.client.1.vm09.stdout:4/537: read - db/d19/d23/d71/d53/fa9 zero size
2026-03-09T15:00:51.588 INFO:tasks.workunit.client.1.vm09.stdout:4/538: write f3 [2117937,109188] 0
2026-03-09T15:00:51.604 INFO:tasks.workunit.client.1.vm09.stdout:0/630: creat da/dc/d22/d76/fcf x:0 0 0
2026-03-09T15:00:51.608 INFO:tasks.workunit.client.1.vm09.stdout:7/469: dread d3/db/f42 [0,4194304] 0
2026-03-09T15:00:51.608 INFO:tasks.workunit.client.1.vm09.stdout:2/554: write df/f13 [1357380,106965] 0
2026-03-09T15:00:51.609 INFO:tasks.workunit.client.1.vm09.stdout:5/522: symlink d2/d37/d53/lbc 0
2026-03-09T15:00:51.612 INFO:tasks.workunit.client.1.vm09.stdout:2/555: sync
2026-03-09T15:00:51.615 INFO:tasks.workunit.client.1.vm09.stdout:3/554: write d3/d3a/d2b/f64 [1492777,129164] 0
2026-03-09T15:00:51.615 INFO:tasks.workunit.client.1.vm09.stdout:3/555: readlink d3/d5b/d79/l87 0
2026-03-09T15:00:51.623 INFO:tasks.workunit.client.1.vm09.stdout:3/556: dread d3/d3a/d2b/d31/d4a/d62/f26 [0,4194304] 0
2026-03-09T15:00:51.624 INFO:tasks.workunit.client.1.vm09.stdout:3/557: chown d3/d3a/c96 14305181 1
2026-03-09T15:00:51.630 INFO:tasks.workunit.client.1.vm09.stdout:0/631: mkdir da/dc/d10/dd0 0
2026-03-09T15:00:51.638 INFO:tasks.workunit.client.1.vm09.stdout:3/558: dread d3/d3a/d2b/d31/d9e/fb7 [0,4194304] 0
2026-03-09T15:00:51.642 INFO:tasks.workunit.client.1.vm09.stdout:5/523: dwrite d2/d37/d3c/d36/d45/fa1 [0,4194304] 0
2026-03-09T15:00:51.647 INFO:tasks.workunit.client.1.vm09.stdout:3/559: dwrite d3/f3e [0,4194304] 0
2026-03-09T15:00:51.649 INFO:tasks.workunit.client.1.vm09.stdout:6/491: link d6/d20/d2a/d3d/f79 d6/d20/d44/d8f/f9d 0
2026-03-09T15:00:51.649 INFO:tasks.workunit.client.1.vm09.stdout:4/539: creat db/d19/d23/d44/d7c/d7d/d97/da3/fad x:0 0 0
2026-03-09T15:00:51.665 INFO:tasks.workunit.client.1.vm09.stdout:9/479: unlink d1/d4f/d52/f53 0
2026-03-09T15:00:51.665 INFO:tasks.workunit.client.1.vm09.stdout:0/632: rename da/dc/d1c/d46/d63/d86/ca3 to da/dc/d1c/d3c/d44/dba/cd1 0
2026-03-09T15:00:51.665 INFO:tasks.workunit.client.1.vm09.stdout:0/633: write da/dc/d10/f2d [3025156,68981] 0
2026-03-09T15:00:51.666 INFO:tasks.workunit.client.1.vm09.stdout:1/461: write d8/d22/f61 [774525,59758] 0
2026-03-09T15:00:51.667 INFO:tasks.workunit.client.1.vm09.stdout:0/634: chown da/dc/d1c/d46/d63/faa 1156 1
2026-03-09T15:00:51.669 INFO:tasks.workunit.client.1.vm09.stdout:8/548: link df/l19 df/d5b/d98/l9d 0
2026-03-09T15:00:51.672 INFO:tasks.workunit.client.1.vm09.stdout:2/556: symlink df/d1f/d6d/d8f/la6 0
2026-03-09T15:00:51.674 INFO:tasks.workunit.client.1.vm09.stdout:2/557: dread df/f5b [0,4194304] 0
2026-03-09T15:00:51.674 INFO:tasks.workunit.client.1.vm09.stdout:2/558: chown df/f16 8483470 1
2026-03-09T15:00:51.674 INFO:tasks.workunit.client.1.vm09.stdout:3/560: read - d3/d3a/d2b/d39/f70 zero size
2026-03-09T15:00:51.678 INFO:tasks.workunit.client.1.vm09.stdout:2/559: read df/d58/d67/f46 [499194,92196] 0
2026-03-09T15:00:51.679 INFO:tasks.workunit.client.1.vm09.stdout:6/492: symlink d6/d20/d38/d4e/d55/l9e 0
2026-03-09T15:00:51.681 INFO:tasks.workunit.client.1.vm09.stdout:5/524: creat d2/d37/d3c/d36/fbd x:0 0 0
2026-03-09T15:00:51.684 INFO:tasks.workunit.client.1.vm09.stdout:2/560: sync
2026-03-09T15:00:51.685 INFO:tasks.workunit.client.1.vm09.stdout:2/561: chown df/d1f/d6d/d8f/d5f/f72 210476 1
2026-03-09T15:00:51.686 INFO:tasks.workunit.client.1.vm09.stdout:9/480: mknod d1/d6e/ca1 0
2026-03-09T15:00:51.687 INFO:tasks.workunit.client.1.vm09.stdout:9/481: read d1/d58/f75 [432309,80704] 0
2026-03-09T15:00:51.689 INFO:tasks.workunit.client.1.vm09.stdout:2/562: dwrite df/f4a [4194304,4194304] 0
2026-03-09T15:00:51.698 INFO:tasks.workunit.client.1.vm09.stdout:1/462: rmdir d8/d10/d24/d48 39
2026-03-09T15:00:51.699 INFO:tasks.workunit.client.1.vm09.stdout:1/463: dread - d8/d10/f69 zero size
2026-03-09T15:00:51.701 INFO:tasks.workunit.client.1.vm09.stdout:8/549: symlink df/d2d/d46/d73/d81/d60/l9e 0
2026-03-09T15:00:51.702 INFO:tasks.workunit.client.1.vm09.stdout:1/464: dwrite d8/f42 [0,4194304] 0
2026-03-09T15:00:51.711 INFO:tasks.workunit.client.1.vm09.stdout:4/540: creat db/d12/d9e/fae x:0 0 0
2026-03-09T15:00:51.714 INFO:tasks.workunit.client.1.vm09.stdout:4/541: dread db/f1c [0,4194304] 0
2026-03-09T15:00:51.715 INFO:tasks.workunit.client.1.vm09.stdout:5/525: creat d2/d37/d53/d86/d88/fbe x:0 0 0
2026-03-09T15:00:51.718 INFO:tasks.workunit.client.1.vm09.stdout:9/482: symlink d1/d6e/la2 0
2026-03-09T15:00:51.719 INFO:tasks.workunit.client.1.vm09.stdout:7/470: getdents d3/db/d46 0
2026-03-09T15:00:51.719 INFO:tasks.workunit.client.1.vm09.stdout:9/483: stat d1/d7/d1e/d2b/d40 0
2026-03-09T15:00:51.720 INFO:tasks.workunit.client.1.vm09.stdout:2/563: mkdir df/d1f/d47/d84/da1/da7 0
2026-03-09T15:00:51.720 INFO:tasks.workunit.client.1.vm09.stdout:7/471: read - d3/db/d15/d5f/d44/f82 zero size
2026-03-09T15:00:51.722 INFO:tasks.workunit.client.1.vm09.stdout:8/550: creat df/d24/d56/f9f x:0 0 0
2026-03-09T15:00:51.723 INFO:tasks.workunit.client.1.vm09.stdout:1/465: chown d8/d10/d24/f8f 164715392 1
2026-03-09T15:00:51.729 INFO:tasks.workunit.client.1.vm09.stdout:5/526: write d2/f5e [416472,16032] 0
2026-03-09T15:00:51.729 INFO:tasks.workunit.client.1.vm09.stdout:8/551: rmdir df/d5c 39
2026-03-09T15:00:51.729 INFO:tasks.workunit.client.1.vm09.stdout:7/472: chown d3/db/c54 509125 1
2026-03-09T15:00:51.731 INFO:tasks.workunit.client.1.vm09.stdout:1/466: dread d8/d10/d73/f54 [0,4194304] 0
2026-03-09T15:00:51.733 INFO:tasks.workunit.client.1.vm09.stdout:0/635: getdents da/d80 0
2026-03-09T15:00:51.735 INFO:tasks.workunit.client.1.vm09.stdout:2/564: creat df/d1f/d47/d84/da1/da7/fa8 x:0 0 0
2026-03-09T15:00:51.735 INFO:tasks.workunit.client.1.vm09.stdout:2/565: chown df/d1f 89411 1
2026-03-09T15:00:51.735 INFO:tasks.workunit.client.1.vm09.stdout:2/566: read df/f42 [1042381,31374] 0
2026-03-09T15:00:51.736 INFO:tasks.workunit.client.1.vm09.stdout:2/567: dread df/f5b [0,4194304] 0
2026-03-09T15:00:51.739 INFO:tasks.workunit.client.1.vm09.stdout:5/527: mkdir d2/d37/d3c/dbf 0
2026-03-09T15:00:51.740 INFO:tasks.workunit.client.1.vm09.stdout:8/552: symlink df/d38/d64/d5f/la0 0
2026-03-09T15:00:51.744 INFO:tasks.workunit.client.1.vm09.stdout:0/636: chown da/dc/d1c/l37 108046015 1
2026-03-09T15:00:51.753 INFO:tasks.workunit.client.1.vm09.stdout:0/637: write da/dc/d1c/d3c/f81 [586923,75238] 0
2026-03-09T15:00:51.753 INFO:tasks.workunit.client.1.vm09.stdout:9/484: creat d1/d4f/fa3 x:0 0 0
2026-03-09T15:00:51.754 INFO:tasks.workunit.client.1.vm09.stdout:9/485: read d1/d7/d1e/f5d [706276,102264] 0
2026-03-09T15:00:51.754 INFO:tasks.workunit.client.1.vm09.stdout:5/528: creat d2/d37/d67/fc0 x:0 0 0
2026-03-09T15:00:51.755 INFO:tasks.workunit.client.1.vm09.stdout:8/553: fsync df/f23 0
2026-03-09T15:00:51.757 INFO:tasks.workunit.client.1.vm09.stdout:7/473: creat d3/db/d25/d7d/f8c x:0 0 0
2026-03-09T15:00:51.758 INFO:tasks.workunit.client.1.vm09.stdout:6/493: dwrite d6/d20/f52 [0,4194304] 0
2026-03-09T15:00:51.763 INFO:tasks.workunit.client.1.vm09.stdout:1/467: mknod d8/d10/d73/d5d/d68/c94 0
2026-03-09T15:00:51.770 INFO:tasks.workunit.client.1.vm09.stdout:8/554: read df/d38/f8a [3792445,36735] 0
2026-03-09T15:00:51.771 INFO:tasks.workunit.client.1.vm09.stdout:6/494: dwrite d6/d20/d38/d4e/d55/f77 [4194304,4194304] 0
2026-03-09T15:00:51.775 INFO:tasks.workunit.client.1.vm09.stdout:0/638: dread da/dc/d92/d9e/fa2 [0,4194304] 0
2026-03-09T15:00:51.780 INFO:tasks.workunit.client.1.vm09.stdout:3/561: truncate d3/f29 3419720 0
2026-03-09T15:00:51.791 INFO:tasks.workunit.client.1.vm09.stdout:1/468: dread d8/d10/f2f [0,4194304] 0
2026-03-09T15:00:51.791 INFO:tasks.workunit.client.1.vm09.stdout:3/562: write d3/d5b/d79/d9d/faf [438265,41347] 0
2026-03-09T15:00:51.791 INFO:tasks.workunit.client.1.vm09.stdout:3/563: chown d3/d3a/d2b/d31/d4a/d62/l7 907 1
2026-03-09T15:00:51.791 INFO:tasks.workunit.client.1.vm09.stdout:4/542: dwrite db/d12/f2b [0,4194304] 0
2026-03-09T15:00:51.791 INFO:tasks.workunit.client.1.vm09.stdout:5/529: mknod d2/d37/d3c/d55/cc1 0
2026-03-09T15:00:51.800 INFO:tasks.workunit.client.1.vm09.stdout:7/474: mkdir d3/db/d25/d8d 0
2026-03-09T15:00:51.800 INFO:tasks.workunit.client.1.vm09.stdout:2/568: mkdir df/d20/d29/da9 0
2026-03-09T15:00:51.801 INFO:tasks.workunit.client.1.vm09.stdout:0/639: dwrite da/fa9 [0,4194304] 0
2026-03-09T15:00:51.810 INFO:tasks.workunit.client.1.vm09.stdout:9/486: creat d1/d7/d9f/fa4 x:0 0 0
2026-03-09T15:00:51.811 INFO:tasks.workunit.client.1.vm09.stdout:5/530: dwrite d2/d37/d3c/d36/d45/d5c/f91 [4194304,4194304] 0
2026-03-09T15:00:51.812 INFO:tasks.workunit.client.1.vm09.stdout:3/564: creat d3/d3a/d2b/d7b/fbe x:0 0 0
2026-03-09T15:00:51.819 INFO:tasks.workunit.client.1.vm09.stdout:8/555: creat df/d2d/d4f/fa1 x:0 0 0
2026-03-09T15:00:51.819 INFO:tasks.workunit.client.1.vm09.stdout:7/475: unlink d3/db/d25/c45 0
2026-03-09T15:00:51.820 INFO:tasks.workunit.client.1.vm09.stdout:5/531: stat d2/d37/d3c/d36/d4c/fa6 0
2026-03-09T15:00:51.822 INFO:tasks.workunit.client.1.vm09.stdout:2/569: write df/d58/d74/f88 [782570,35309] 0
2026-03-09T15:00:51.824 INFO:tasks.workunit.client.1.vm09.stdout:3/565: creat d3/d3a/d2b/d31/fbf x:0 0 0
2026-03-09T15:00:51.827 INFO:tasks.workunit.client.1.vm09.stdout:7/476: rmdir d3 39
2026-03-09T15:00:51.833 INFO:tasks.workunit.client.1.vm09.stdout:3/566: write d3/d3a/f6b [983887,101306] 0
2026-03-09T15:00:51.833 INFO:tasks.workunit.client.1.vm09.stdout:8/556: dwrite df/d24/f86 [0,4194304] 0
2026-03-09T15:00:51.833 INFO:tasks.workunit.client.1.vm09.stdout:5/532: readlink d2/d37/l52 0
2026-03-09T15:00:51.833 INFO:tasks.workunit.client.1.vm09.stdout:6/495: rename d6/db/d10/l3e to d6/d20/d2a/l9f 0
2026-03-09T15:00:51.833 INFO:tasks.workunit.client.1.vm09.stdout:3/567: creat d3/d5b/fc0 x:0 0 0
2026-03-09T15:00:51.833 INFO:tasks.workunit.client.1.vm09.stdout:3/568: readlink d3/d3a/d2b/l42 0
2026-03-09T15:00:51.833 INFO:tasks.workunit.client.1.vm09.stdout:2/570: creat df/d1f/d47/d5d/d90/faa x:0 0 0
2026-03-09T15:00:51.836 INFO:tasks.workunit.client.1.vm09.stdout:4/543: dread db/d19/d23/d71/f4e [0,4194304] 0
2026-03-09T15:00:51.838 INFO:tasks.workunit.client.1.vm09.stdout:0/640: dread da/fb [0,4194304] 0
2026-03-09T15:00:51.838 INFO:tasks.workunit.client.1.vm09.stdout:8/557: write df/d2d/d46/f6d [1230407,22011] 0
2026-03-09T15:00:51.848 INFO:tasks.workunit.client.1.vm09.stdout:7/477: unlink d3/f3f 0
2026-03-09T15:00:51.848 INFO:tasks.workunit.client.1.vm09.stdout:5/533: dwrite d2/d37/d3c/d55/f68 [0,4194304] 0
2026-03-09T15:00:51.848 INFO:tasks.workunit.client.1.vm09.stdout:7/478: dwrite d3/d3d/f51 [0,4194304] 0
2026-03-09T15:00:51.854 INFO:tasks.workunit.client.1.vm09.stdout:9/487: getdents d1/d4f/d8f 0
2026-03-09T15:00:51.856 INFO:tasks.workunit.client.1.vm09.stdout:6/496: getdents d6/d20/d2a/d3b/d91 0
2026-03-09T15:00:51.863 INFO:tasks.workunit.client.1.vm09.stdout:4/544: creat db/d12/d16/d5b/da5/faf x:0 0 0
2026-03-09T15:00:51.871 INFO:tasks.workunit.client.1.vm09.stdout:0/641: symlink da/dc/d84/db8/ld2 0
2026-03-09T15:00:51.876 INFO:tasks.workunit.client.1.vm09.stdout:9/488: creat d1/d4f/d8f/d91/fa5 x:0 0 0
2026-03-09T15:00:51.876 INFO:tasks.workunit.client.1.vm09.stdout:9/489: dread - d1/d4f/d52/f94 zero size
2026-03-09T15:00:51.880 INFO:tasks.workunit.client.1.vm09.stdout:1/469: rename d8/d22 to d8/d50/d39/d95 0
2026-03-09T15:00:51.883 INFO:tasks.workunit.client.1.vm09.stdout:4/545: symlink db/d12/d16/lb0 0
2026-03-09T15:00:51.884 INFO:tasks.workunit.client.1.vm09.stdout:4/546: chown db/d19/d23/d71/d53/fa9 120863 1
2026-03-09T15:00:51.886 INFO:tasks.workunit.client.1.vm09.stdout:4/547: chown db/d12/d16/f26 0 1
2026-03-09T15:00:51.886 INFO:tasks.workunit.client.1.vm09.stdout:5/534: fdatasync d2/d37/d3c/d36/d45/f6e 0
2026-03-09T15:00:51.887 INFO:tasks.workunit.client.1.vm09.stdout:7/479: mknod d3/db/d15/d5f/d6e/d83/c8e 0
2026-03-09T15:00:51.889 INFO:tasks.workunit.client.1.vm09.stdout:3/569: link d3/d3a/d2b/d36/l38 d3/d3a/d2b/lc1 0
2026-03-09T15:00:51.897 INFO:tasks.workunit.client.1.vm09.stdout:4/548: sync
2026-03-09T15:00:51.902 INFO:tasks.workunit.client.1.vm09.stdout:5/535: creat d2/d37/d67/fc2 x:0 0 0
2026-03-09T15:00:51.912 INFO:tasks.workunit.client.1.vm09.stdout:9/490: mkdir d1/d7/da6 0
2026-03-09T15:00:51.912 INFO:tasks.workunit.client.1.vm09.stdout:1/470: rmdir d8/d10/d24/d48 39
2026-03-09T15:00:51.912 INFO:tasks.workunit.client.1.vm09.stdout:7/480: creat d3/d1d/d2d/f8f x:0 0 0
2026-03-09T15:00:51.912 INFO:tasks.workunit.client.1.vm09.stdout:6/497: creat d6/db/d10/fa0 x:0 0 0
2026-03-09T15:00:51.912 INFO:tasks.workunit.client.1.vm09.stdout:0/642: link da/dc/d1c/d46/d63/d86/dcd/lc4 da/dc/d22/d76/ld3 0
2026-03-09T15:00:51.912 INFO:tasks.workunit.client.1.vm09.stdout:3/570:
creat d3/d9a/fc2 x:0 0 0 2026-03-09T15:00:51.912 INFO:tasks.workunit.client.1.vm09.stdout:5/536: rename d2/d37/d3c/d55 to d2/d37/d3c/d36/d45/dae/dc3 0 2026-03-09T15:00:51.912 INFO:tasks.workunit.client.1.vm09.stdout:4/549: chown db/d12/l30 83193 1 2026-03-09T15:00:51.916 INFO:tasks.workunit.client.1.vm09.stdout:6/498: symlink d6/d20/d38/d4e/la1 0 2026-03-09T15:00:51.917 INFO:tasks.workunit.client.1.vm09.stdout:1/471: dread d8/f6b [0,4194304] 0 2026-03-09T15:00:51.920 INFO:tasks.workunit.client.1.vm09.stdout:0/643: dread da/dc/d22/f53 [0,4194304] 0 2026-03-09T15:00:51.927 INFO:tasks.workunit.client.1.vm09.stdout:7/481: dread d3/db/d46/f5b [0,4194304] 0 2026-03-09T15:00:51.946 INFO:tasks.workunit.client.1.vm09.stdout:0/644: rmdir da/dc/d1c/d3c/d44/dba 39 2026-03-09T15:00:51.949 INFO:tasks.workunit.client.1.vm09.stdout:8/558: write df/d38/f58 [529005,74988] 0 2026-03-09T15:00:51.949 INFO:tasks.workunit.client.1.vm09.stdout:4/550: symlink db/d19/d23/d44/d7c/d7d/d97/da8/lb1 0 2026-03-09T15:00:51.952 INFO:tasks.workunit.client.1.vm09.stdout:2/571: truncate df/d58/d74/f88 593661 0 2026-03-09T15:00:51.953 INFO:tasks.workunit.client.1.vm09.stdout:9/491: rename d1/d7/d1e/d2b/l47 to d1/d7/d1e/la7 0 2026-03-09T15:00:51.957 INFO:tasks.workunit.client.1.vm09.stdout:9/492: chown d1/f1f 218508990 1 2026-03-09T15:00:51.957 INFO:tasks.workunit.client.1.vm09.stdout:8/559: dwrite df/d5b/d65/d1d/f6e [4194304,4194304] 0 2026-03-09T15:00:51.958 INFO:tasks.workunit.client.1.vm09.stdout:8/560: fdatasync df/f89 0 2026-03-09T15:00:51.965 INFO:tasks.workunit.client.1.vm09.stdout:4/551: write db/d19/d23/d44/d7c/d7d/d97/da3/fad [678903,41165] 0 2026-03-09T15:00:51.965 INFO:tasks.workunit.client.1.vm09.stdout:3/571: rename d3/d3a/d2b/d31/d4a/f7f to d3/d74/fc3 0 2026-03-09T15:00:51.968 INFO:tasks.workunit.client.1.vm09.stdout:4/552: truncate db/d12/da1/fa6 87318 0 2026-03-09T15:00:51.970 INFO:tasks.workunit.client.1.vm09.stdout:4/553: chown db/d19/d81/c3a 44 1 2026-03-09T15:00:51.971 
INFO:tasks.workunit.client.1.vm09.stdout:4/554: dread db/d12/da1/fa6 [0,4194304] 0 2026-03-09T15:00:51.971 INFO:tasks.workunit.client.1.vm09.stdout:0/645: mkdir da/dc/dcb/dd4 0 2026-03-09T15:00:51.983 INFO:tasks.workunit.client.1.vm09.stdout:9/493: dwrite d1/d7/d1e/f20 [0,4194304] 0 2026-03-09T15:00:51.983 INFO:tasks.workunit.client.1.vm09.stdout:8/561: dread df/d5b/d65/d1d/f68 [0,4194304] 0 2026-03-09T15:00:51.988 INFO:tasks.workunit.client.1.vm09.stdout:3/572: dread d3/d5b/d79/d9d/fb3 [0,4194304] 0 2026-03-09T15:00:51.990 INFO:tasks.workunit.client.1.vm09.stdout:9/494: dwrite d1/d4f/f64 [0,4194304] 0 2026-03-09T15:00:52.002 INFO:tasks.workunit.client.1.vm09.stdout:2/572: symlink df/d93/da3/lab 0 2026-03-09T15:00:52.015 INFO:tasks.workunit.client.1.vm09.stdout:0/646: dread da/f4c [0,4194304] 0 2026-03-09T15:00:52.018 INFO:tasks.workunit.client.1.vm09.stdout:0/647: fsync da/dc/d1c/d46/d63/d86/dcd/d7b/fa8 0 2026-03-09T15:00:52.022 INFO:tasks.workunit.client.1.vm09.stdout:8/562: dread df/d38/f8a [0,4194304] 0 2026-03-09T15:00:52.023 INFO:tasks.workunit.client.1.vm09.stdout:1/472: creat d8/d50/d39/f96 x:0 0 0 2026-03-09T15:00:52.034 INFO:tasks.workunit.client.1.vm09.stdout:2/573: symlink df/d1f/d47/d71/lac 0 2026-03-09T15:00:52.034 INFO:tasks.workunit.client.1.vm09.stdout:1/473: write d8/d50/d39/d95/d56/f85 [126506,100084] 0 2026-03-09T15:00:52.034 INFO:tasks.workunit.client.1.vm09.stdout:2/574: rename df/d58/d67/l3d to df/d1f/d47/d5d/d90/lad 0 2026-03-09T15:00:52.034 INFO:tasks.workunit.client.1.vm09.stdout:2/575: write df/f13 [4334,98528] 0 2026-03-09T15:00:52.034 INFO:tasks.workunit.client.1.vm09.stdout:6/499: dwrite f0 [0,4194304] 0 2026-03-09T15:00:52.034 INFO:tasks.workunit.client.1.vm09.stdout:5/537: write d2/d37/f43 [451918,37805] 0 2026-03-09T15:00:52.035 INFO:tasks.workunit.client.1.vm09.stdout:2/576: write df/d58/d74/f95 [850217,119823] 0 2026-03-09T15:00:52.035 INFO:tasks.workunit.client.1.vm09.stdout:6/500: write d6/d20/d38/d4e/d55/f8a [2160323,45663] 0 
2026-03-09T15:00:52.043 INFO:tasks.workunit.client.1.vm09.stdout:1/474: mknod d8/d50/d39/d95/d56/c97 0 2026-03-09T15:00:52.059 INFO:tasks.workunit.client.1.vm09.stdout:7/482: write d3/d28/f29 [409756,20183] 0 2026-03-09T15:00:52.059 INFO:tasks.workunit.client.1.vm09.stdout:7/483: chown d3/db/d25/d8d 1 1 2026-03-09T15:00:52.059 INFO:tasks.workunit.client.1.vm09.stdout:7/484: stat d3/db/d25/d5c/f8a 0 2026-03-09T15:00:52.059 INFO:tasks.workunit.client.1.vm09.stdout:2/577: symlink df/d1f/d6d/lae 0 2026-03-09T15:00:52.059 INFO:tasks.workunit.client.1.vm09.stdout:5/538: truncate d2/d37/d3c/d36/d45/f6e 3178763 0 2026-03-09T15:00:52.059 INFO:tasks.workunit.client.1.vm09.stdout:1/475: symlink d8/d50/d5b/l98 0 2026-03-09T15:00:52.059 INFO:tasks.workunit.client.1.vm09.stdout:6/501: symlink d6/df/la2 0 2026-03-09T15:00:52.059 INFO:tasks.workunit.client.1.vm09.stdout:5/539: write d2/d37/d3c/f4e [1505795,112413] 0 2026-03-09T15:00:52.059 INFO:tasks.workunit.client.1.vm09.stdout:2/578: mknod df/d1f/d6d/d8f/d5f/caf 0 2026-03-09T15:00:52.065 INFO:tasks.workunit.client.1.vm09.stdout:6/502: link d6/d20/d44/f4a d6/d20/d38/fa3 0 2026-03-09T15:00:52.084 INFO:tasks.workunit.client.1.vm09.stdout:1/476: truncate d8/d10/d73/f41 2388544 0 2026-03-09T15:00:52.084 INFO:tasks.workunit.client.1.vm09.stdout:5/540: dwrite d2/d37/d3c/d36/d4c/d51/f5f [0,4194304] 0 2026-03-09T15:00:52.084 INFO:tasks.workunit.client.1.vm09.stdout:5/541: chown d2/d37/d3c/d36/d4c/d51/l54 47633 1 2026-03-09T15:00:52.084 INFO:tasks.workunit.client.1.vm09.stdout:2/579: truncate df/d20/d2e/f4c 1954773 0 2026-03-09T15:00:52.084 INFO:tasks.workunit.client.1.vm09.stdout:2/580: write df/d20/f9d [325924,93267] 0 2026-03-09T15:00:52.084 INFO:tasks.workunit.client.1.vm09.stdout:5/542: mkdir d2/d37/d53/dc4 0 2026-03-09T15:00:52.084 INFO:tasks.workunit.client.1.vm09.stdout:6/503: symlink d6/la4 0 2026-03-09T15:00:52.084 INFO:tasks.workunit.client.1.vm09.stdout:5/543: dread - d2/d37/d3c/d36/fb3 zero size 2026-03-09T15:00:52.084 
INFO:tasks.workunit.client.1.vm09.stdout:6/504: mkdir d6/d20/d24/da5 0 2026-03-09T15:00:52.084 INFO:tasks.workunit.client.1.vm09.stdout:5/544: write d2/d37/d53/f79 [3215713,65012] 0 2026-03-09T15:00:52.084 INFO:tasks.workunit.client.1.vm09.stdout:2/581: dwrite f0 [0,4194304] 0 2026-03-09T15:00:52.084 INFO:tasks.workunit.client.1.vm09.stdout:2/582: truncate df/f9b 1298665 0 2026-03-09T15:00:52.088 INFO:tasks.workunit.client.1.vm09.stdout:0/648: sync 2026-03-09T15:00:52.091 INFO:tasks.workunit.client.1.vm09.stdout:5/545: creat d2/d37/d3c/d36/d4c/d89/fc5 x:0 0 0 2026-03-09T15:00:52.091 INFO:tasks.workunit.client.1.vm09.stdout:0/649: write da/dc/d1c/d3c/d78/d7a/fb2 [793015,53914] 0 2026-03-09T15:00:52.097 INFO:tasks.workunit.client.1.vm09.stdout:7/485: dread d3/d61/f6c [0,4194304] 0 2026-03-09T15:00:52.103 INFO:tasks.workunit.client.1.vm09.stdout:7/486: readlink d3/db/d46/l67 0 2026-03-09T15:00:52.103 INFO:tasks.workunit.client.1.vm09.stdout:0/650: fsync da/dc/d10/f16 0 2026-03-09T15:00:52.105 INFO:tasks.workunit.client.1.vm09.stdout:0/651: write da/fa9 [2486759,76644] 0 2026-03-09T15:00:52.110 INFO:tasks.workunit.client.1.vm09.stdout:5/546: getdents d2/d37/d67/d95/db8 0 2026-03-09T15:00:52.110 INFO:tasks.workunit.client.1.vm09.stdout:7/487: creat d3/d61/f90 x:0 0 0 2026-03-09T15:00:52.111 INFO:tasks.workunit.client.1.vm09.stdout:7/488: write d3/d1d/d65/f6f [2903318,82993] 0 2026-03-09T15:00:52.112 INFO:tasks.workunit.client.1.vm09.stdout:7/489: write d3/d61/f86 [895835,122222] 0 2026-03-09T15:00:52.126 INFO:tasks.workunit.client.1.vm09.stdout:2/583: link df/l19 df/d1f/d47/d84/lb0 0 2026-03-09T15:00:52.126 INFO:tasks.workunit.client.1.vm09.stdout:5/547: unlink d2/d37/d3c/l69 0 2026-03-09T15:00:52.127 INFO:tasks.workunit.client.1.vm09.stdout:0/652: getdents da/dc/d10/dd0 0 2026-03-09T15:00:52.127 INFO:tasks.workunit.client.1.vm09.stdout:5/548: write d2/d37/d3c/d36/d4c/d51/d96/fa4 [670146,105680] 0 2026-03-09T15:00:52.129 INFO:tasks.workunit.client.1.vm09.stdout:2/584: 
fsync fb 0 2026-03-09T15:00:52.133 INFO:tasks.workunit.client.1.vm09.stdout:5/549: fdatasync d2/d37/f6c 0 2026-03-09T15:00:52.134 INFO:tasks.workunit.client.1.vm09.stdout:0/653: rename da/dc/d1c/d46/d5b/f6b to da/dc/d84/fd5 0 2026-03-09T15:00:52.135 INFO:tasks.workunit.client.1.vm09.stdout:7/490: getdents d3/d3d 0 2026-03-09T15:00:52.139 INFO:tasks.workunit.client.1.vm09.stdout:2/585: dwrite df/d1f/f39 [0,4194304] 0 2026-03-09T15:00:52.149 INFO:tasks.workunit.client.1.vm09.stdout:5/550: link d2/d37/d3c/d36/d4c/d51/d96/l13 d2/d37/d3c/d36/lc6 0 2026-03-09T15:00:52.150 INFO:tasks.workunit.client.1.vm09.stdout:0/654: link da/dc/d22/d76/ld3 da/dc/dc0/ld6 0 2026-03-09T15:00:52.153 INFO:tasks.workunit.client.1.vm09.stdout:0/655: rename da/l2e to da/d57/ld7 0 2026-03-09T15:00:52.154 INFO:tasks.workunit.client.1.vm09.stdout:5/551: fdatasync d2/d37/d3c/d36/d45/dae/dc3/f7b 0 2026-03-09T15:00:52.180 INFO:tasks.workunit.client.1.vm09.stdout:5/552: creat d2/d37/d3c/d36/d4c/d51/fc7 x:0 0 0 2026-03-09T15:00:52.180 INFO:tasks.workunit.client.1.vm09.stdout:0/656: creat da/dc/d1c/d46/fd8 x:0 0 0 2026-03-09T15:00:52.186 INFO:tasks.workunit.client.1.vm09.stdout:5/553: dread d2/d37/d3c/d36/d4c/d51/d96/f16 [0,4194304] 0 2026-03-09T15:00:52.221 INFO:tasks.workunit.client.1.vm09.stdout:5/554: rename d2/d37/d3c/d36/d45/l9f to d2/d37/d3c/d36/d4c/d51/lc8 0 2026-03-09T15:00:52.221 INFO:tasks.workunit.client.1.vm09.stdout:5/555: mkdir d2/d37/d53/d86/d88/dc9 0 2026-03-09T15:00:52.221 INFO:tasks.workunit.client.1.vm09.stdout:5/556: chown d2/d37/f43 370813823 1 2026-03-09T15:00:52.251 INFO:tasks.workunit.client.1.vm09.stdout:3/573: chown d3/d3a/d2b/d31/d4a/d62 4378068 1 2026-03-09T15:00:52.253 INFO:tasks.workunit.client.1.vm09.stdout:3/574: mkdir d3/d3a/d2b/d7b/d90/dc4 0 2026-03-09T15:00:52.253 INFO:tasks.workunit.client.1.vm09.stdout:3/575: stat d3/d5b/fc0 0 2026-03-09T15:00:52.253 INFO:tasks.workunit.client.1.vm09.stdout:3/576: chown d3/d3a/d2b/d39/l50 51080 1 2026-03-09T15:00:52.254 
INFO:tasks.workunit.client.1.vm09.stdout:4/555: write db/d19/d52/d76/d3b/f69 [112832,39555] 0 2026-03-09T15:00:52.255 INFO:tasks.workunit.client.1.vm09.stdout:3/577: mkdir d3/d3a/d2b/d39/d48/dc5 0 2026-03-09T15:00:52.255 INFO:tasks.workunit.client.1.vm09.stdout:4/556: stat db/d19/d23/d44/d7c 0 2026-03-09T15:00:52.255 INFO:tasks.workunit.client.1.vm09.stdout:3/578: dread - d3/d3a/d2b/d36/f68 zero size 2026-03-09T15:00:52.256 INFO:tasks.workunit.client.1.vm09.stdout:4/557: dread - db/d12/d16/d5b/d78/d7f/f9d zero size 2026-03-09T15:00:52.256 INFO:tasks.workunit.client.1.vm09.stdout:9/495: write d1/d7/d1e/f5d [1069974,65483] 0 2026-03-09T15:00:52.259 INFO:tasks.workunit.client.1.vm09.stdout:9/496: mkdir d1/d58/da8 0 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:4/558: mknod db/d19/d52/d76/cb2 0 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:8/563: truncate fe 6640012 0 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:4/559: creat db/d19/d23/d71/fb3 x:0 0 0 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:9/497: symlink d1/d4f/la9 0 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:9/498: write d1/d4f/d52/f94 [475593,1175] 0 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:4/560: rename db/d12/l30 to db/d19/d23/d44/d7c/lb4 0 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:9/499: chown d1/d7/d1e/d2b/d2e/f12 0 1 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:8/564: dwrite df/f51 [0,4194304] 0 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:8/565: mknod df/d2d/d46/ca2 0 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:8/566: unlink df/d38/d64/d5f/f69 0 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:8/567: write df/d2d/d46/f7f [680462,55058] 0 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:8/568: fdatasync f8 0 2026-03-09T15:00:52.314 
INFO:tasks.workunit.client.1.vm09.stdout:8/569: write df/d5b/d65/d1d/f41 [903402,83029] 0 2026-03-09T15:00:52.314 INFO:tasks.workunit.client.1.vm09.stdout:8/570: readlink df/d2d/d46/d73/d81/d60/l77 0 2026-03-09T15:00:52.350 INFO:tasks.workunit.client.1.vm09.stdout:1/477: write d8/d10/f44 [343679,84827] 0 2026-03-09T15:00:52.351 INFO:tasks.workunit.client.1.vm09.stdout:1/478: chown d8/d10/d24/d83 3813836 1 2026-03-09T15:00:52.355 INFO:tasks.workunit.client.1.vm09.stdout:1/479: creat d8/d90/f99 x:0 0 0 2026-03-09T15:00:52.403 INFO:tasks.workunit.client.1.vm09.stdout:3/579: sync 2026-03-09T15:00:52.404 INFO:tasks.workunit.client.1.vm09.stdout:3/580: write d3/d9a/fc2 [102795,89890] 0 2026-03-09T15:00:52.410 INFO:tasks.workunit.client.1.vm09.stdout:3/581: dwrite d3/d3a/d2b/d7b/fbe [0,4194304] 0 2026-03-09T15:00:52.416 INFO:tasks.workunit.client.1.vm09.stdout:2/586: dread df/d20/f6a [0,4194304] 0 2026-03-09T15:00:52.418 INFO:tasks.workunit.client.1.vm09.stdout:3/582: dread d3/f3e [0,4194304] 0 2026-03-09T15:00:52.425 INFO:tasks.workunit.client.1.vm09.stdout:7/491: truncate d3/d28/f69 665671 0 2026-03-09T15:00:52.430 INFO:tasks.workunit.client.1.vm09.stdout:7/492: chown d3/db/d15/d5f/d44/l71 4001291 1 2026-03-09T15:00:52.432 INFO:tasks.workunit.client.1.vm09.stdout:3/583: rename d3/d3a/d2b/d36/l38 to d3/d3a/d2b/d39/d6a/lc6 0 2026-03-09T15:00:52.432 INFO:tasks.workunit.client.1.vm09.stdout:7/493: chown d3/d3d 27169243 1 2026-03-09T15:00:52.433 INFO:tasks.workunit.client.1.vm09.stdout:3/584: read d3/f9 [2977043,62080] 0 2026-03-09T15:00:52.434 INFO:tasks.workunit.client.1.vm09.stdout:3/585: fdatasync d3/d3a/d2b/f92 0 2026-03-09T15:00:52.435 INFO:tasks.workunit.client.1.vm09.stdout:3/586: chown d3/d3a/d2b/d36/dac 315530 1 2026-03-09T15:00:52.437 INFO:tasks.workunit.client.1.vm09.stdout:7/494: truncate d3/db/d25/d5c/d75/f7e 577902 0 2026-03-09T15:00:52.437 INFO:tasks.workunit.client.1.vm09.stdout:7/495: fsync d3/d3d/f51 0 2026-03-09T15:00:52.440 
INFO:tasks.workunit.client.1.vm09.stdout:6/505: dread d6/d20/d38/d4e/d55/f8a [0,4194304] 0 2026-03-09T15:00:52.442 INFO:tasks.workunit.client.1.vm09.stdout:6/506: dread - d6/db/d10/d7a/f80 zero size 2026-03-09T15:00:52.442 INFO:tasks.workunit.client.1.vm09.stdout:0/657: truncate da/dc/d1c/d3c/d44/f89 22007 0 2026-03-09T15:00:52.443 INFO:tasks.workunit.client.1.vm09.stdout:3/587: creat d3/d3a/d2b/d7b/db0/fc7 x:0 0 0 2026-03-09T15:00:52.449 INFO:tasks.workunit.client.1.vm09.stdout:3/588: dwrite d3/d3a/d2b/d31/d4a/d62/f1b [4194304,4194304] 0 2026-03-09T15:00:52.451 INFO:tasks.workunit.client.1.vm09.stdout:5/557: write d2/d37/d3c/d36/d4c/d51/f59 [261131,11630] 0 2026-03-09T15:00:52.459 INFO:tasks.workunit.client.1.vm09.stdout:2/587: dread df/d20/d29/f51 [0,4194304] 0 2026-03-09T15:00:52.466 INFO:tasks.workunit.client.1.vm09.stdout:0/658: read da/dc/d1c/d46/f52 [608618,66521] 0 2026-03-09T15:00:52.468 INFO:tasks.workunit.client.1.vm09.stdout:3/589: creat d3/d3a/d2b/d31/d9e/fc8 x:0 0 0 2026-03-09T15:00:52.470 INFO:tasks.workunit.client.1.vm09.stdout:2/588: mknod df/d6e/cb1 0 2026-03-09T15:00:52.471 INFO:tasks.workunit.client.1.vm09.stdout:8/571: truncate df/d24/f7a 1384139 0 2026-03-09T15:00:52.472 INFO:tasks.workunit.client.1.vm09.stdout:0/659: chown da/dc/c45 211292 1 2026-03-09T15:00:52.474 INFO:tasks.workunit.client.1.vm09.stdout:9/500: dwrite d1/d7/f83 [0,4194304] 0 2026-03-09T15:00:52.475 INFO:tasks.workunit.client.1.vm09.stdout:1/480: truncate d8/d10/f29 1875764 0 2026-03-09T15:00:52.478 INFO:tasks.workunit.client.1.vm09.stdout:3/590: sync 2026-03-09T15:00:52.479 INFO:tasks.workunit.client.1.vm09.stdout:2/589: dread f5 [0,4194304] 0 2026-03-09T15:00:52.490 INFO:tasks.workunit.client.1.vm09.stdout:0/660: rmdir da/dc/d84 39 2026-03-09T15:00:52.494 INFO:tasks.workunit.client.1.vm09.stdout:5/558: dread d2/d37/d3c/d36/f4a [0,4194304] 0 2026-03-09T15:00:52.494 INFO:tasks.workunit.client.1.vm09.stdout:9/501: dwrite d1/d7/f67 [8388608,4194304] 0 2026-03-09T15:00:52.497 
INFO:tasks.workunit.client.1.vm09.stdout:3/591: rename d3/d3a/d2b/d7b/fbe to d3/fc9 0 2026-03-09T15:00:52.499 INFO:tasks.workunit.client.1.vm09.stdout:2/590: symlink df/d1f/d47/d5d/lb2 0 2026-03-09T15:00:52.500 INFO:tasks.workunit.client.1.vm09.stdout:8/572: symlink df/d2d/d42/d79/d9a/la3 0 2026-03-09T15:00:52.501 INFO:tasks.workunit.client.1.vm09.stdout:1/481: creat d8/d10/d24/d48/f9a x:0 0 0 2026-03-09T15:00:52.502 INFO:tasks.workunit.client.1.vm09.stdout:8/573: readlink df/d2d/d46/l91 0 2026-03-09T15:00:52.503 INFO:tasks.workunit.client.1.vm09.stdout:9/502: mkdir d1/d7/d9f/daa 0 2026-03-09T15:00:52.503 INFO:tasks.workunit.client.1.vm09.stdout:5/559: rmdir d2/d37/d3c/d36/d4c/d51/d96 39 2026-03-09T15:00:52.505 INFO:tasks.workunit.client.1.vm09.stdout:5/560: write d2/d37/d67/d95/db5/fb6 [149232,69015] 0 2026-03-09T15:00:52.506 INFO:tasks.workunit.client.1.vm09.stdout:0/661: write da/dc/d1c/d46/d63/d86/dcd/d7b/fb9 [849584,47244] 0 2026-03-09T15:00:52.506 INFO:tasks.workunit.client.1.vm09.stdout:9/503: creat d1/d7/d1e/d2b/d2e/d56/d6d/fab x:0 0 0 2026-03-09T15:00:52.506 INFO:tasks.workunit.client.1.vm09.stdout:3/592: mknod d3/d3a/d2b/d39/d48/dc5/cca 0 2026-03-09T15:00:52.510 INFO:tasks.workunit.client.1.vm09.stdout:4/561: fdatasync db/d19/d52/d76/d3b/f69 0 2026-03-09T15:00:52.511 INFO:tasks.workunit.client.1.vm09.stdout:0/662: write da/dc/d1c/d3c/d78/d7a/fb2 [312748,91880] 0 2026-03-09T15:00:52.511 INFO:tasks.workunit.client.1.vm09.stdout:3/593: truncate d3/d74/f88 1058065 0 2026-03-09T15:00:52.515 INFO:tasks.workunit.client.1.vm09.stdout:2/591: dwrite df/d2d/f3c [0,4194304] 0 2026-03-09T15:00:52.515 INFO:tasks.workunit.client.1.vm09.stdout:9/504: dwrite d1/d6e/f74 [0,4194304] 0 2026-03-09T15:00:52.515 INFO:tasks.workunit.client.1.vm09.stdout:9/505: readlink d1/d7/d1e/d2b/d2e/l61 0 2026-03-09T15:00:52.517 INFO:tasks.workunit.client.1.vm09.stdout:9/506: dread - d1/d7/d9f/fa4 zero size 2026-03-09T15:00:52.522 INFO:tasks.workunit.client.1.vm09.stdout:3/594: mknod 
d3/d3a/d2b/d7b/d90/ccb 0 2026-03-09T15:00:52.524 INFO:tasks.workunit.client.1.vm09.stdout:2/592: readlink df/d1f/l8e 0 2026-03-09T15:00:52.529 INFO:tasks.workunit.client.1.vm09.stdout:0/663: dwrite da/dc/d1c/d46/d63/f7f [0,4194304] 0 2026-03-09T15:00:52.533 INFO:tasks.workunit.client.1.vm09.stdout:0/664: stat da/dc/d1c/d3c/f4f 0 2026-03-09T15:00:52.535 INFO:tasks.workunit.client.1.vm09.stdout:4/562: dwrite db/d12/d16/f26 [0,4194304] 0 2026-03-09T15:00:52.546 INFO:tasks.workunit.client.1.vm09.stdout:9/507: rename d1/d7/d1e/d2b/d2e/c6c to d1/d4f/d8f/cac 0 2026-03-09T15:00:52.548 INFO:tasks.workunit.client.1.vm09.stdout:3/595: symlink d3/d5b/lcc 0 2026-03-09T15:00:52.549 INFO:tasks.workunit.client.1.vm09.stdout:0/665: mkdir da/dc/dcb/dd9 0 2026-03-09T15:00:52.553 INFO:tasks.workunit.client.1.vm09.stdout:4/563: creat db/d19/d52/fb5 x:0 0 0 2026-03-09T15:00:52.557 INFO:tasks.workunit.client.1.vm09.stdout:9/508: getdents d1/d7/da6 0 2026-03-09T15:00:52.558 INFO:tasks.workunit.client.1.vm09.stdout:9/509: rename d1/d7/d1e/d2b to d1/d7/d1e/d2b/d2e/d56/dad 22 2026-03-09T15:00:52.559 INFO:tasks.workunit.client.1.vm09.stdout:9/510: chown d1/d7/d9f/daa 88476300 1 2026-03-09T15:00:52.559 INFO:tasks.workunit.client.1.vm09.stdout:4/564: truncate db/d19/d52/d76/d3b/f48 3952580 0 2026-03-09T15:00:52.560 INFO:tasks.workunit.client.1.vm09.stdout:8/574: dread df/f23 [0,4194304] 0 2026-03-09T15:00:52.561 INFO:tasks.workunit.client.1.vm09.stdout:8/575: chown df/d2d/d46/d73 40941067 1 2026-03-09T15:00:52.561 INFO:tasks.workunit.client.1.vm09.stdout:9/511: fsync d1/d58/f72 0 2026-03-09T15:00:52.561 INFO:tasks.workunit.client.1.vm09.stdout:8/576: fsync df/d2d/d4f/fa1 0 2026-03-09T15:00:52.562 INFO:tasks.workunit.client.1.vm09.stdout:8/577: readlink df/d5b/l4c 0 2026-03-09T15:00:52.562 INFO:tasks.workunit.client.1.vm09.stdout:9/512: write d1/d7/d1e/f34 [4468725,85223] 0 2026-03-09T15:00:52.563 INFO:tasks.workunit.client.1.vm09.stdout:4/565: truncate db/d12/d16/f54 73677 0 
2026-03-09T15:00:52.563 INFO:tasks.workunit.client.1.vm09.stdout:9/513: fsync d1/d58/f72 0 2026-03-09T15:00:52.567 INFO:tasks.workunit.client.1.vm09.stdout:9/514: chown d1/d4f/d8f 28961 1 2026-03-09T15:00:52.569 INFO:tasks.workunit.client.1.vm09.stdout:4/566: symlink db/d12/da1/lb6 0 2026-03-09T15:00:52.570 INFO:tasks.workunit.client.1.vm09.stdout:4/567: mkdir db/d19/d23/d44/d7c/d7d/db7 0 2026-03-09T15:00:52.570 INFO:tasks.workunit.client.1.vm09.stdout:4/568: fdatasync db/d19/d23/f91 0 2026-03-09T15:00:52.580 INFO:tasks.workunit.client.1.vm09.stdout:3/596: dread d3/d3a/f6b [0,4194304] 0 2026-03-09T15:00:52.580 INFO:tasks.workunit.client.1.vm09.stdout:3/597: write d3/d3a/d2b/d31/f33 [213182,37832] 0 2026-03-09T15:00:52.585 INFO:tasks.workunit.client.1.vm09.stdout:3/598: rename d3/d9a/cad to d3/d3a/d2b/d31/d9e/ccd 0 2026-03-09T15:00:52.586 INFO:tasks.workunit.client.1.vm09.stdout:9/515: dread d1/d7/d1e/f9e [0,4194304] 0 2026-03-09T15:00:52.590 INFO:tasks.workunit.client.1.vm09.stdout:3/599: symlink d3/d9a/lce 0 2026-03-09T15:00:52.592 INFO:tasks.workunit.client.1.vm09.stdout:3/600: creat d3/d3a/d2b/d7b/d90/fcf x:0 0 0 2026-03-09T15:00:52.593 INFO:tasks.workunit.client.1.vm09.stdout:3/601: chown d3/d9a/f97 29 1 2026-03-09T15:00:52.595 INFO:tasks.workunit.client.1.vm09.stdout:9/516: truncate d1/f1f 4073717 0 2026-03-09T15:00:52.596 INFO:tasks.workunit.client.1.vm09.stdout:3/602: rename d3/d3a/d2b/d39/d6a/l6c to d3/ld0 0 2026-03-09T15:00:52.601 INFO:tasks.workunit.client.1.vm09.stdout:3/603: rename d3/d5b/d79/d9d/cbb to d3/d5b/d79/d9d/cd1 0 2026-03-09T15:00:52.608 INFO:tasks.workunit.client.1.vm09.stdout:3/604: creat d3/d3a/d2b/d31/d4a/fd2 x:0 0 0 2026-03-09T15:00:52.611 INFO:tasks.workunit.client.1.vm09.stdout:3/605: mkdir d3/d3a/d2b/d7b/dd3 0 2026-03-09T15:00:52.616 INFO:tasks.workunit.client.1.vm09.stdout:3/606: dread d3/d3a/d2b/d31/d9e/fb7 [0,4194304] 0 2026-03-09T15:00:52.619 INFO:tasks.workunit.client.1.vm09.stdout:3/607: creat d3/d3a/d2b/d31/fd4 x:0 0 0 
2026-03-09T15:00:52.622 INFO:tasks.workunit.client.1.vm09.stdout:3/608: mkdir d3/d3a/d2b/d39/d6a/dd5 0 2026-03-09T15:00:52.623 INFO:tasks.workunit.client.1.vm09.stdout:3/609: creat d3/d3a/d2b/d39/d6a/fd6 x:0 0 0 2026-03-09T15:00:52.625 INFO:tasks.workunit.client.1.vm09.stdout:3/610: rmdir d3/d3a/d2b/d39/d48/da0 39 2026-03-09T15:00:52.626 INFO:tasks.workunit.client.1.vm09.stdout:3/611: write d3/d5b/d79/f89 [475853,129713] 0 2026-03-09T15:00:52.628 INFO:tasks.workunit.client.1.vm09.stdout:3/612: mknod d3/d9a/d80/cd7 0 2026-03-09T15:00:52.631 INFO:tasks.workunit.client.1.vm09.stdout:7/496: dwrite d3/f8 [0,4194304] 0 2026-03-09T15:00:52.637 INFO:tasks.workunit.client.1.vm09.stdout:6/507: dwrite d6/db/d10/f1c [0,4194304] 0 2026-03-09T15:00:52.639 INFO:tasks.workunit.client.1.vm09.stdout:7/497: read d3/d1d/d2d/f81 [12354969,105237] 0 2026-03-09T15:00:52.641 INFO:tasks.workunit.client.1.vm09.stdout:6/508: dwrite d6/db/d8b/f73 [0,4194304] 0 2026-03-09T15:00:52.643 INFO:tasks.workunit.client.1.vm09.stdout:3/613: creat d3/d3a/d2b/d7b/db0/fd8 x:0 0 0 2026-03-09T15:00:52.650 INFO:tasks.workunit.client.1.vm09.stdout:7/498: write d3/d1d/f11 [1365900,129603] 0 2026-03-09T15:00:52.666 INFO:tasks.workunit.client.1.vm09.stdout:6/509: truncate d6/d20/d2a/d3d/f79 183534 0 2026-03-09T15:00:52.668 INFO:tasks.workunit.client.1.vm09.stdout:3/614: mkdir d3/d3a/d2b/d7b/db6/dd9 0 2026-03-09T15:00:52.668 INFO:tasks.workunit.client.1.vm09.stdout:5/561: truncate d2/f4f 3495592 0 2026-03-09T15:00:52.669 INFO:tasks.workunit.client.1.vm09.stdout:5/562: dread - d2/d37/d3c/fac zero size 2026-03-09T15:00:52.669 INFO:tasks.workunit.client.1.vm09.stdout:5/563: stat d2/d37/d53/c71 0 2026-03-09T15:00:52.672 INFO:tasks.workunit.client.1.vm09.stdout:3/615: mknod d3/cda 0 2026-03-09T15:00:52.675 INFO:tasks.workunit.client.1.vm09.stdout:7/499: rename d3/d1d/c60 to d3/d61/c91 0 2026-03-09T15:00:52.675 INFO:tasks.workunit.client.1.vm09.stdout:5/564: readlink d2/l49 0 2026-03-09T15:00:52.676 
INFO:tasks.workunit.client.1.vm09.stdout:2/593: write df/f23 [1146916,33954] 0 2026-03-09T15:00:52.677 INFO:tasks.workunit.client.1.vm09.stdout:1/482: dread d8/d10/f29 [0,4194304] 0 2026-03-09T15:00:52.678 INFO:tasks.workunit.client.1.vm09.stdout:7/500: chown d3/db/d25/l2c 3087 1 2026-03-09T15:00:52.678 INFO:tasks.workunit.client.1.vm09.stdout:3/616: dwrite d3/d3a/d2b/d7b/d90/fcf [0,4194304] 0 2026-03-09T15:00:52.680 INFO:tasks.workunit.client.1.vm09.stdout:7/501: stat d3/db/d25/l49 0 2026-03-09T15:00:52.686 INFO:tasks.workunit.client.1.vm09.stdout:0/666: rmdir da/dc 39 2026-03-09T15:00:52.689 INFO:tasks.workunit.client.1.vm09.stdout:6/510: dread d6/db/f66 [0,4194304] 0 2026-03-09T15:00:52.695 INFO:tasks.workunit.client.1.vm09.stdout:5/565: stat d2/d37/d3c/d36/d45/fa0 0 2026-03-09T15:00:52.696 INFO:tasks.workunit.client.1.vm09.stdout:2/594: write df/f42 [1115763,98181] 0 2026-03-09T15:00:52.704 INFO:tasks.workunit.client.1.vm09.stdout:0/667: write da/dc/d22/f9d [367902,47607] 0 2026-03-09T15:00:52.706 INFO:tasks.workunit.client.1.vm09.stdout:6/511: mknod d6/d20/d38/d56/d65/d68/d6f/ca6 0 2026-03-09T15:00:52.707 INFO:tasks.workunit.client.1.vm09.stdout:5/566: dread d2/d37/d3c/d36/d45/dae/dc3/f57 [0,4194304] 0 2026-03-09T15:00:52.708 INFO:tasks.workunit.client.1.vm09.stdout:6/512: chown d6/db/d8b 5 1 2026-03-09T15:00:52.709 INFO:tasks.workunit.client.1.vm09.stdout:5/567: write d2/d37/d3c/d36/d45/fa0 [122852,115624] 0 2026-03-09T15:00:52.710 INFO:tasks.workunit.client.1.vm09.stdout:8/578: write df/d5b/f35 [1073555,53047] 0 2026-03-09T15:00:52.711 INFO:tasks.workunit.client.1.vm09.stdout:3/617: unlink d3/d3a/d2b/d31/d4a/d62/f57 0 2026-03-09T15:00:52.722 INFO:tasks.workunit.client.1.vm09.stdout:4/569: write db/d19/f38 [4072756,17944] 0 2026-03-09T15:00:52.724 INFO:tasks.workunit.client.1.vm09.stdout:6/513: creat d6/d20/d38/d56/d65/d68/d6f/fa7 x:0 0 0 2026-03-09T15:00:52.725 INFO:tasks.workunit.client.1.vm09.stdout:8/579: rmdir df/d2d/d46/d73 39 2026-03-09T15:00:52.727 
INFO:tasks.workunit.client.1.vm09.stdout:8/580: dread df/d2d/d46/d33/f8e [0,4194304] 0 2026-03-09T15:00:52.733 INFO:tasks.workunit.client.1.vm09.stdout:4/570: read db/d19/d52/d76/f3e [666034,43640] 0 2026-03-09T15:00:52.734 INFO:tasks.workunit.client.1.vm09.stdout:6/514: write d6/d20/d38/d56/d65/d68/d6f/f85 [2302619,34943] 0 2026-03-09T15:00:52.735 INFO:tasks.workunit.client.1.vm09.stdout:6/515: dread - d6/df/d23/f76 zero size 2026-03-09T15:00:52.743 INFO:tasks.workunit.client.1.vm09.stdout:9/517: write d1/d7/d1e/d2b/d40/f57 [688875,11348] 0 2026-03-09T15:00:52.745 INFO:tasks.workunit.client.1.vm09.stdout:6/516: mknod d6/df/d23/ca8 0 2026-03-09T15:00:52.746 INFO:tasks.workunit.client.1.vm09.stdout:6/517: chown d6/db/d10/d7a/f80 6 1 2026-03-09T15:00:52.748 INFO:tasks.workunit.client.1.vm09.stdout:5/568: creat d2/d37/d3c/d36/d4c/d51/fca x:0 0 0 2026-03-09T15:00:52.751 INFO:tasks.workunit.client.1.vm09.stdout:4/571: unlink db/d19/d23/l59 0 2026-03-09T15:00:52.752 INFO:tasks.workunit.client.1.vm09.stdout:4/572: chown db/d19/d81/d5d/f77 543579935 1 2026-03-09T15:00:52.752 INFO:tasks.workunit.client.1.vm09.stdout:6/518: dwrite d6/db/d10/fa0 [0,4194304] 0 2026-03-09T15:00:52.755 INFO:tasks.workunit.client.1.vm09.stdout:5/569: dwrite d2/f3d [0,4194304] 0 2026-03-09T15:00:52.755 INFO:tasks.workunit.client.1.vm09.stdout:4/573: stat db/d19/d81/d5d 0 2026-03-09T15:00:52.772 INFO:tasks.workunit.client.1.vm09.stdout:9/518: creat d1/d7/d9f/daa/fae x:0 0 0 2026-03-09T15:00:52.772 INFO:tasks.workunit.client.1.vm09.stdout:9/519: write d1/d58/f72 [3638175,34161] 0 2026-03-09T15:00:52.773 INFO:tasks.workunit.client.1.vm09.stdout:6/519: fsync d6/df/d23/f6d 0 2026-03-09T15:00:52.773 INFO:tasks.workunit.client.1.vm09.stdout:9/520: readlink d1/d4f/l54 0 2026-03-09T15:00:52.786 INFO:tasks.workunit.client.1.vm09.stdout:9/521: rmdir d1/d4f/d8f/d91 39 2026-03-09T15:00:52.787 INFO:tasks.workunit.client.1.vm09.stdout:8/581: dread df/d24/f7a [0,4194304] 0 2026-03-09T15:00:52.791 
INFO:tasks.workunit.client.1.vm09.stdout:5/570: dread d2/d37/d3c/d36/d4c/d51/d96/f73 [0,4194304] 0 2026-03-09T15:00:52.796 INFO:tasks.workunit.client.1.vm09.stdout:4/574: creat db/d12/fb8 x:0 0 0 2026-03-09T15:00:52.796 INFO:tasks.workunit.client.1.vm09.stdout:4/575: read - db/d12/d16/f9c zero size 2026-03-09T15:00:52.802 INFO:tasks.workunit.client.1.vm09.stdout:8/582: fsync df/d24/f7a 0 2026-03-09T15:00:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:52 vm05.local ceph-mon[50611]: pgmap v152: 65 pgs: 65 active+clean; 1.2 GiB data, 4.0 GiB used, 116 GiB / 120 GiB avail; 40 MiB/s rd, 151 MiB/s wr, 364 op/s 2026-03-09T15:00:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:52 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:00:52.813 INFO:tasks.workunit.client.1.vm09.stdout:4/576: dread - db/d19/d52/d76/d3b/f49 zero size 2026-03-09T15:00:52.813 INFO:tasks.workunit.client.1.vm09.stdout:4/577: stat db/c62 0 2026-03-09T15:00:52.813 INFO:tasks.workunit.client.1.vm09.stdout:4/578: readlink db/d19/d23/d71/l80 0 2026-03-09T15:00:52.814 INFO:tasks.workunit.client.1.vm09.stdout:4/579: write db/f29 [431463,37658] 0 2026-03-09T15:00:52.817 INFO:tasks.workunit.client.1.vm09.stdout:1/483: dwrite d8/d10/d73/f41 [0,4194304] 0 2026-03-09T15:00:52.819 INFO:tasks.workunit.client.1.vm09.stdout:8/583: fsync df/f26 0 2026-03-09T15:00:52.831 INFO:tasks.workunit.client.1.vm09.stdout:7/502: dwrite d3/d1d/d2d/f84 [0,4194304] 0 2026-03-09T15:00:52.836 INFO:tasks.workunit.client.1.vm09.stdout:2/595: dwrite df/d20/f6a [0,4194304] 0 2026-03-09T15:00:52.838 INFO:tasks.workunit.client.1.vm09.stdout:2/596: read - df/d20/d2e/f54 zero size 2026-03-09T15:00:52.839 INFO:tasks.workunit.client.1.vm09.stdout:9/522: creat d1/d7/faf x:0 0 0 2026-03-09T15:00:52.850 INFO:tasks.workunit.client.1.vm09.stdout:3/618: dwrite d3/d3a/d54/f58 [4194304,4194304] 0 
2026-03-09T15:00:52.851 INFO:tasks.workunit.client.1.vm09.stdout:8/584: dread df/d2d/f57 [0,4194304] 0 2026-03-09T15:00:52.852 INFO:tasks.workunit.client.1.vm09.stdout:4/580: creat db/d19/d23/d44/d7c/d7d/fb9 x:0 0 0 2026-03-09T15:00:52.854 INFO:tasks.workunit.client.1.vm09.stdout:4/581: fdatasync db/d19/d23/d71/d53/fa9 0 2026-03-09T15:00:52.859 INFO:tasks.workunit.client.1.vm09.stdout:0/668: truncate da/dc/d1c/d46/d5b/f6a 243945 0 2026-03-09T15:00:52.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:52 vm09.local ceph-mon[59673]: pgmap v152: 65 pgs: 65 active+clean; 1.2 GiB data, 4.0 GiB used, 116 GiB / 120 GiB avail; 40 MiB/s rd, 151 MiB/s wr, 364 op/s 2026-03-09T15:00:52.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:52 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:00:52.869 INFO:tasks.workunit.client.1.vm09.stdout:4/582: dread db/d12/f3d [0,4194304] 0 2026-03-09T15:00:52.876 INFO:tasks.workunit.client.1.vm09.stdout:1/484: rename d8/d10/d73/d5d to d8/d10/d24/d48/d9b 0 2026-03-09T15:00:52.878 INFO:tasks.workunit.client.1.vm09.stdout:7/503: creat d3/d1d/d65/f92 x:0 0 0 2026-03-09T15:00:52.883 INFO:tasks.workunit.client.1.vm09.stdout:7/504: chown d3/db/c55 4 1 2026-03-09T15:00:52.883 INFO:tasks.workunit.client.1.vm09.stdout:9/523: write d1/d7/d1e/f9e [588331,105432] 0 2026-03-09T15:00:52.891 INFO:tasks.workunit.client.1.vm09.stdout:3/619: mknod d3/d3a/d2b/d7b/db6/dd9/cdb 0 2026-03-09T15:00:52.892 INFO:tasks.workunit.client.1.vm09.stdout:8/585: mkdir df/d24/d95/da4 0 2026-03-09T15:00:52.895 INFO:tasks.workunit.client.1.vm09.stdout:7/505: mknod d3/db/c93 0 2026-03-09T15:00:52.896 INFO:tasks.workunit.client.1.vm09.stdout:8/586: fdatasync df/d5b/d65/d1d/f68 0 2026-03-09T15:00:52.899 INFO:tasks.workunit.client.1.vm09.stdout:9/524: creat d1/d7/fb0 x:0 0 0 2026-03-09T15:00:52.901 INFO:tasks.workunit.client.1.vm09.stdout:6/520: 
write d6/d20/d44/d8f/f9d [395975,30280] 0 2026-03-09T15:00:52.903 INFO:tasks.workunit.client.1.vm09.stdout:8/587: fsync df/d38/f39 0 2026-03-09T15:00:52.918 INFO:tasks.workunit.client.1.vm09.stdout:7/506: dwrite d3/db/d15/f68 [0,4194304] 0 2026-03-09T15:00:52.919 INFO:tasks.workunit.client.1.vm09.stdout:5/571: dwrite d2/d37/d53/d86/f87 [0,4194304] 0 2026-03-09T15:00:52.919 INFO:tasks.workunit.client.1.vm09.stdout:8/588: mknod df/d2d/ca5 0 2026-03-09T15:00:52.919 INFO:tasks.workunit.client.1.vm09.stdout:5/572: rename d2/d37/d3c/d36/l3f to d2/d37/d67/d95/db8/lcb 0 2026-03-09T15:00:52.919 INFO:tasks.workunit.client.1.vm09.stdout:8/589: write df/d38/d64/d5f/f6f [4404268,25698] 0 2026-03-09T15:00:52.921 INFO:tasks.workunit.client.1.vm09.stdout:7/507: mkdir d3/d1d/d94 0 2026-03-09T15:00:52.922 INFO:tasks.workunit.client.1.vm09.stdout:7/508: dread - d3/db/d15/d5f/d44/f82 zero size 2026-03-09T15:00:52.923 INFO:tasks.workunit.client.1.vm09.stdout:5/573: creat d2/d37/d3c/d36/fcc x:0 0 0 2026-03-09T15:00:52.924 INFO:tasks.workunit.client.1.vm09.stdout:5/574: stat d2/l18 0 2026-03-09T15:00:52.928 INFO:tasks.workunit.client.1.vm09.stdout:5/575: readlink d2/d37/d3c/d36/d45/l64 0 2026-03-09T15:00:52.929 INFO:tasks.workunit.client.1.vm09.stdout:7/509: dwrite d3/d3d/f51 [0,4194304] 0 2026-03-09T15:00:52.930 INFO:tasks.workunit.client.1.vm09.stdout:8/590: creat df/d2d/d46/d73/d81/fa6 x:0 0 0 2026-03-09T15:00:52.933 INFO:tasks.workunit.client.1.vm09.stdout:8/591: creat df/d38/d64/fa7 x:0 0 0 2026-03-09T15:00:52.939 INFO:tasks.workunit.client.1.vm09.stdout:8/592: dwrite df/d5b/d65/d1d/f41 [0,4194304] 0 2026-03-09T15:00:52.939 INFO:tasks.workunit.client.1.vm09.stdout:3/620: sync 2026-03-09T15:00:52.939 INFO:tasks.workunit.client.1.vm09.stdout:1/485: sync 2026-03-09T15:00:52.958 INFO:tasks.workunit.client.1.vm09.stdout:3/621: sync 2026-03-09T15:00:52.960 INFO:tasks.workunit.client.1.vm09.stdout:1/486: symlink d8/d10/d24/d45/d5f/l9c 0 2026-03-09T15:00:52.965 
INFO:tasks.workunit.client.1.vm09.stdout:3/622: mknod d3/d3a/d2b/d7b/cdc 0 2026-03-09T15:00:52.965 INFO:tasks.workunit.client.1.vm09.stdout:1/487: symlink d8/d50/d5b/l9d 0 2026-03-09T15:00:52.965 INFO:tasks.workunit.client.1.vm09.stdout:3/623: unlink d3/d3a/d2b/d7b/fb5 0 2026-03-09T15:00:52.965 INFO:tasks.workunit.client.1.vm09.stdout:8/593: getdents df/d2d/d46/d73/d81/d60 0 2026-03-09T15:00:52.965 INFO:tasks.workunit.client.1.vm09.stdout:1/488: symlink d8/d50/d39/d95/l9e 0 2026-03-09T15:00:52.966 INFO:tasks.workunit.client.1.vm09.stdout:2/597: write df/d58/d67/f4e [456236,842] 0 2026-03-09T15:00:52.967 INFO:tasks.workunit.client.1.vm09.stdout:8/594: symlink df/d5b/d65/d1d/la8 0 2026-03-09T15:00:52.967 INFO:tasks.workunit.client.1.vm09.stdout:4/583: write db/d19/d52/d76/f3e [259964,70990] 0 2026-03-09T15:00:52.969 INFO:tasks.workunit.client.1.vm09.stdout:1/489: write d8/d10/d24/d45/f6c [111042,10809] 0 2026-03-09T15:00:52.970 INFO:tasks.workunit.client.1.vm09.stdout:2/598: creat df/d1f/d6d/fb3 x:0 0 0 2026-03-09T15:00:52.973 INFO:tasks.workunit.client.1.vm09.stdout:1/490: creat d8/d50/d39/d95/d56/f9f x:0 0 0 2026-03-09T15:00:52.973 INFO:tasks.workunit.client.1.vm09.stdout:2/599: write df/f13 [2856944,33818] 0 2026-03-09T15:00:52.974 INFO:tasks.workunit.client.1.vm09.stdout:2/600: read df/d58/d67/f4e [1250592,29625] 0 2026-03-09T15:00:52.975 INFO:tasks.workunit.client.1.vm09.stdout:1/491: rmdir d8/d50/d39/d95 39 2026-03-09T15:00:52.976 INFO:tasks.workunit.client.1.vm09.stdout:8/595: rename df/d38/f39 to df/d2d/d46/fa9 0 2026-03-09T15:00:52.977 INFO:tasks.workunit.client.1.vm09.stdout:2/601: mknod df/d93/da3/cb4 0 2026-03-09T15:00:52.983 INFO:tasks.workunit.client.1.vm09.stdout:8/596: dread df/f51 [0,4194304] 0 2026-03-09T15:00:52.990 INFO:tasks.workunit.client.1.vm09.stdout:6/521: write d6/d20/d24/f6c [80195,47916] 0 2026-03-09T15:00:52.992 INFO:tasks.workunit.client.1.vm09.stdout:9/525: dwrite d1/d7/d1e/d2b/f32 [0,4194304] 0 2026-03-09T15:00:52.992 
INFO:tasks.workunit.client.1.vm09.stdout:0/669: dwrite da/d30/f3d [0,4194304] 0 2026-03-09T15:00:52.994 INFO:tasks.workunit.client.1.vm09.stdout:0/670: chown da/dc/d1c/d46/f7d 439804706 1 2026-03-09T15:00:52.997 INFO:tasks.workunit.client.1.vm09.stdout:9/526: chown d1/d7/d1e/d2b/c66 1924 1 2026-03-09T15:00:53.010 INFO:tasks.workunit.client.1.vm09.stdout:8/597: dwrite df/f1a [4194304,4194304] 0 2026-03-09T15:00:53.018 INFO:tasks.workunit.client.1.vm09.stdout:5/576: dwrite d2/d37/d3c/d36/d4c/d51/f62 [0,4194304] 0 2026-03-09T15:00:53.019 INFO:tasks.workunit.client.1.vm09.stdout:4/584: dread f3 [0,4194304] 0 2026-03-09T15:00:53.019 INFO:tasks.workunit.client.1.vm09.stdout:0/671: mknod da/dc/d1c/d46/d5b/d9f/cda 0 2026-03-09T15:00:53.023 INFO:tasks.workunit.client.1.vm09.stdout:5/577: write d2/d37/d3c/d36/d45/d5c/f90 [859544,37180] 0 2026-03-09T15:00:53.024 INFO:tasks.workunit.client.1.vm09.stdout:6/522: creat d6/df/d23/d5b/fa9 x:0 0 0 2026-03-09T15:00:53.029 INFO:tasks.workunit.client.1.vm09.stdout:9/527: creat d1/d7/d1e/d2b/d2e/d56/d6d/fb1 x:0 0 0 2026-03-09T15:00:53.033 INFO:tasks.workunit.client.1.vm09.stdout:7/510: dwrite d3/d28/f69 [0,4194304] 0 2026-03-09T15:00:53.045 INFO:tasks.workunit.client.1.vm09.stdout:0/672: read da/dc/d1c/d3c/d44/f89 [9042,104890] 0 2026-03-09T15:00:53.047 INFO:tasks.workunit.client.1.vm09.stdout:3/624: write d3/f29 [1990075,55475] 0 2026-03-09T15:00:53.048 INFO:tasks.workunit.client.1.vm09.stdout:4/585: creat db/d19/d23/d44/d7c/d7d/d97/da3/fba x:0 0 0 2026-03-09T15:00:53.049 INFO:tasks.workunit.client.1.vm09.stdout:2/602: rmdir df/d1f 39 2026-03-09T15:00:53.051 INFO:tasks.workunit.client.1.vm09.stdout:5/578: rename d2/da9/lbb to d2/d37/d67/d95/db8/lcd 0 2026-03-09T15:00:53.054 INFO:tasks.workunit.client.1.vm09.stdout:6/523: symlink d6/d20/d38/d56/laa 0 2026-03-09T15:00:53.056 INFO:tasks.workunit.client.1.vm09.stdout:1/492: write d8/d50/d5b/f89 [253497,60246] 0 2026-03-09T15:00:53.062 INFO:tasks.workunit.client.1.vm09.stdout:1/493: read 
d8/f7e [346723,121592] 0 2026-03-09T15:00:53.062 INFO:tasks.workunit.client.1.vm09.stdout:9/528: dread d1/d58/f75 [0,4194304] 0 2026-03-09T15:00:53.064 INFO:tasks.workunit.client.1.vm09.stdout:9/529: truncate d1/d58/f72 4658296 0 2026-03-09T15:00:53.069 INFO:tasks.workunit.client.1.vm09.stdout:2/603: rename f0 to df/d20/d2e/fb5 0 2026-03-09T15:00:53.071 INFO:tasks.workunit.client.1.vm09.stdout:6/524: stat d6/d20/d2a/f5e 0 2026-03-09T15:00:53.083 INFO:tasks.workunit.client.1.vm09.stdout:9/530: mknod d1/d7/d1e/d2b/d2e/d56/d6d/cb2 0 2026-03-09T15:00:53.084 INFO:tasks.workunit.client.1.vm09.stdout:2/604: rmdir df/d58/d74 39 2026-03-09T15:00:53.089 INFO:tasks.workunit.client.1.vm09.stdout:6/525: mknod d6/d20/d38/d56/d65/d68/d6f/cab 0 2026-03-09T15:00:53.091 INFO:tasks.workunit.client.1.vm09.stdout:7/511: creat d3/d28/f95 x:0 0 0 2026-03-09T15:00:53.091 INFO:tasks.workunit.client.1.vm09.stdout:2/605: dwrite df/d20/d2e/f59 [0,4194304] 0 2026-03-09T15:00:53.101 INFO:tasks.workunit.client.1.vm09.stdout:1/494: creat d8/d10/d24/d45/d5f/d8d/fa0 x:0 0 0 2026-03-09T15:00:53.103 INFO:tasks.workunit.client.1.vm09.stdout:7/512: dread d3/d1d/d2d/f84 [0,4194304] 0 2026-03-09T15:00:53.111 INFO:tasks.workunit.client.1.vm09.stdout:0/673: truncate da/dc/d1c/d46/d63/d86/dcd/d7b/fb9 290712 0 2026-03-09T15:00:53.113 INFO:tasks.workunit.client.1.vm09.stdout:4/586: truncate db/d19/d23/d44/f45 53039 0 2026-03-09T15:00:53.113 INFO:tasks.workunit.client.1.vm09.stdout:3/625: write d3/d3a/d2b/d31/d4a/d62/f26 [464181,80366] 0 2026-03-09T15:00:53.120 INFO:tasks.workunit.client.1.vm09.stdout:0/674: dwrite da/dc/d1c/d46/d63/f7f [0,4194304] 0 2026-03-09T15:00:53.122 INFO:tasks.workunit.client.1.vm09.stdout:3/626: sync 2026-03-09T15:00:53.123 INFO:tasks.workunit.client.1.vm09.stdout:3/627: read d3/d3a/d54/f58 [6879884,54910] 0 2026-03-09T15:00:53.132 INFO:tasks.workunit.client.1.vm09.stdout:8/598: truncate df/d5c/f8b 3011354 0 2026-03-09T15:00:53.133 INFO:tasks.workunit.client.1.vm09.stdout:8/599: chown 
l5 119 1 2026-03-09T15:00:53.133 INFO:tasks.workunit.client.1.vm09.stdout:8/600: dread - df/d24/d56/f9f zero size 2026-03-09T15:00:53.134 INFO:tasks.workunit.client.1.vm09.stdout:8/601: readlink df/d5b/d65/d1d/la8 0 2026-03-09T15:00:53.138 INFO:tasks.workunit.client.1.vm09.stdout:5/579: creat d2/d37/d3c/d36/d4c/d51/fce x:0 0 0 2026-03-09T15:00:53.140 INFO:tasks.workunit.client.1.vm09.stdout:5/580: chown d2/c39 232418 1 2026-03-09T15:00:53.140 INFO:tasks.workunit.client.1.vm09.stdout:6/526: creat d6/d20/d38/d4e/d55/fac x:0 0 0 2026-03-09T15:00:53.141 INFO:tasks.workunit.client.1.vm09.stdout:6/527: write d6/d20/d24/f6c [1164700,41377] 0 2026-03-09T15:00:53.143 INFO:tasks.workunit.client.1.vm09.stdout:0/675: dread da/f4c [0,4194304] 0 2026-03-09T15:00:53.145 INFO:tasks.workunit.client.1.vm09.stdout:4/587: creat db/d19/d23/d44/d7c/d7d/d97/da3/fbb x:0 0 0 2026-03-09T15:00:53.147 INFO:tasks.workunit.client.1.vm09.stdout:3/628: rename d3/d3a/d2b/d39/f81 to d3/d9a/d80/fdd 0 2026-03-09T15:00:53.150 INFO:tasks.workunit.client.1.vm09.stdout:5/581: unlink d2/d37/d53/c71 0 2026-03-09T15:00:53.152 INFO:tasks.workunit.client.1.vm09.stdout:5/582: truncate d2/d37/d3c/d36/d45/d5c/f90 2010678 0 2026-03-09T15:00:53.153 INFO:tasks.workunit.client.1.vm09.stdout:3/629: dwrite d3/d3a/f1d [8388608,4194304] 0 2026-03-09T15:00:53.166 INFO:tasks.workunit.client.1.vm09.stdout:2/606: symlink df/d1f/d47/d84/da1/da7/lb6 0 2026-03-09T15:00:53.170 INFO:tasks.workunit.client.1.vm09.stdout:9/531: dwrite d1/d7/f77 [0,4194304] 0 2026-03-09T15:00:53.171 INFO:tasks.workunit.client.1.vm09.stdout:1/495: write d8/d10/d73/f41 [1928138,99772] 0 2026-03-09T15:00:53.172 INFO:tasks.workunit.client.1.vm09.stdout:0/676: stat da/dc/d1c/c40 0 2026-03-09T15:00:53.172 INFO:tasks.workunit.client.1.vm09.stdout:7/513: truncate d3/d1d/f37 4492665 0 2026-03-09T15:00:53.180 INFO:tasks.workunit.client.1.vm09.stdout:8/602: rename df/f30 to df/d24/faa 0 2026-03-09T15:00:53.186 INFO:tasks.workunit.client.1.vm09.stdout:9/532: 
dwrite d1/d7/d1e/d2b/d40/f4d [0,4194304] 0 2026-03-09T15:00:53.186 INFO:tasks.workunit.client.1.vm09.stdout:3/630: readlink d3/d3a/d2b/l37 0 2026-03-09T15:00:53.188 INFO:tasks.workunit.client.1.vm09.stdout:6/528: mknod d6/db/d10/cad 0 2026-03-09T15:00:53.188 INFO:tasks.workunit.client.1.vm09.stdout:5/583: dwrite d2/d37/d3c/d36/d45/f8c [0,4194304] 0 2026-03-09T15:00:53.188 INFO:tasks.workunit.client.1.vm09.stdout:6/529: write f0 [604033,80371] 0 2026-03-09T15:00:53.196 INFO:tasks.workunit.client.1.vm09.stdout:2/607: mkdir df/d1f/d47/d84/db7 0 2026-03-09T15:00:53.197 INFO:tasks.workunit.client.1.vm09.stdout:1/496: dread - d8/d50/d39/d95/f6e zero size 2026-03-09T15:00:53.198 INFO:tasks.workunit.client.1.vm09.stdout:7/514: symlink d3/db/d15/d5f/d6e/l96 0 2026-03-09T15:00:53.206 INFO:tasks.workunit.client.1.vm09.stdout:7/515: dwrite d3/db/d25/d7d/f8c [0,4194304] 0 2026-03-09T15:00:53.217 INFO:tasks.workunit.client.1.vm09.stdout:7/516: dwrite d3/db/d25/d5c/d75/f7e [0,4194304] 0 2026-03-09T15:00:53.217 INFO:tasks.workunit.client.1.vm09.stdout:6/530: dread d6/d20/d2a/d3d/f79 [0,4194304] 0 2026-03-09T15:00:53.217 INFO:tasks.workunit.client.1.vm09.stdout:5/584: creat d2/d37/d3c/d36/d4c/d89/fcf x:0 0 0 2026-03-09T15:00:53.220 INFO:tasks.workunit.client.1.vm09.stdout:9/533: mkdir d1/d7/da6/db3 0 2026-03-09T15:00:53.224 INFO:tasks.workunit.client.1.vm09.stdout:0/677: creat da/fdb x:0 0 0 2026-03-09T15:00:53.224 INFO:tasks.workunit.client.1.vm09.stdout:2/608: unlink df/l19 0 2026-03-09T15:00:53.226 INFO:tasks.workunit.client.1.vm09.stdout:2/609: readlink df/d1f/d6d/d8f/la6 0 2026-03-09T15:00:53.227 INFO:tasks.workunit.client.1.vm09.stdout:4/588: dread db/d12/f27 [0,4194304] 0 2026-03-09T15:00:53.232 INFO:tasks.workunit.client.1.vm09.stdout:6/531: creat d6/df/d23/fae x:0 0 0 2026-03-09T15:00:53.233 INFO:tasks.workunit.client.1.vm09.stdout:5/585: dwrite d2/d37/d3c/d36/d45/f8c [0,4194304] 0 2026-03-09T15:00:53.233 INFO:tasks.workunit.client.1.vm09.stdout:3/631: dwrite 
d3/d3a/d2b/d31/d4a/fa9 [0,4194304] 0 2026-03-09T15:00:53.233 INFO:tasks.workunit.client.1.vm09.stdout:3/632: chown d3/d3a/d2b/d7b/d90/ccb 61 1 2026-03-09T15:00:53.233 INFO:tasks.workunit.client.1.vm09.stdout:3/633: chown d3/d5b/d79/f89 0 1 2026-03-09T15:00:53.233 INFO:tasks.workunit.client.1.vm09.stdout:3/634: write d3/d5b/d79/f89 [987516,73357] 0 2026-03-09T15:00:53.240 INFO:tasks.workunit.client.1.vm09.stdout:8/603: getdents df/d2d/d46/d33 0 2026-03-09T15:00:53.243 INFO:tasks.workunit.client.1.vm09.stdout:7/517: creat d3/f97 x:0 0 0 2026-03-09T15:00:53.245 INFO:tasks.workunit.client.1.vm09.stdout:5/586: write d2/d37/d3c/d36/d4c/d51/fb0 [2694387,32611] 0 2026-03-09T15:00:53.246 INFO:tasks.workunit.client.1.vm09.stdout:5/587: dread - d2/d37/d3c/d36/d45/fab zero size 2026-03-09T15:00:53.249 INFO:tasks.workunit.client.1.vm09.stdout:0/678: creat da/fdc x:0 0 0 2026-03-09T15:00:53.249 INFO:tasks.workunit.client.1.vm09.stdout:9/534: dread d1/d4f/d52/f94 [0,4194304] 0 2026-03-09T15:00:53.254 INFO:tasks.workunit.client.1.vm09.stdout:5/588: chown d2/d37/d3c/c44 403 1 2026-03-09T15:00:53.255 INFO:tasks.workunit.client.1.vm09.stdout:2/610: creat df/d20/fb8 x:0 0 0 2026-03-09T15:00:53.256 INFO:tasks.workunit.client.1.vm09.stdout:7/518: dwrite f1 [4194304,4194304] 0 2026-03-09T15:00:53.261 INFO:tasks.workunit.client.1.vm09.stdout:4/589: getdents db/d19/d81 0 2026-03-09T15:00:53.268 INFO:tasks.workunit.client.1.vm09.stdout:2/611: rename df/d2d/f4f to df/d93/fb9 0 2026-03-09T15:00:53.269 INFO:tasks.workunit.client.1.vm09.stdout:8/604: dwrite df/d24/faa [0,4194304] 0 2026-03-09T15:00:53.269 INFO:tasks.workunit.client.1.vm09.stdout:0/679: mknod da/dc/d22/d64/cdd 0 2026-03-09T15:00:53.270 INFO:tasks.workunit.client.1.vm09.stdout:4/590: symlink db/d19/d52/d76/d3b/lbc 0 2026-03-09T15:00:53.270 INFO:tasks.workunit.client.1.vm09.stdout:3/635: dread d3/f3e [0,4194304] 0 2026-03-09T15:00:53.274 INFO:tasks.workunit.client.1.vm09.stdout:8/605: chown df/d2d/d46/d73/d81/l5a 38956687 1 
2026-03-09T15:00:53.274 INFO:tasks.workunit.client.1.vm09.stdout:7/519: dread d3/d3d/f51 [0,4194304] 0 2026-03-09T15:00:53.278 INFO:tasks.workunit.client.1.vm09.stdout:3/636: chown d3/d3a/d54/f58 3251 1 2026-03-09T15:00:53.287 INFO:tasks.workunit.client.1.vm09.stdout:9/535: rename d1/d4f/c7c to d1/d7/d1e/d2b/d2e/d56/d6d/cb4 0 2026-03-09T15:00:53.288 INFO:tasks.workunit.client.1.vm09.stdout:3/637: creat d3/d60/fde x:0 0 0 2026-03-09T15:00:53.288 INFO:tasks.workunit.client.1.vm09.stdout:4/591: rename db to db/d19/dbd 22 2026-03-09T15:00:53.288 INFO:tasks.workunit.client.1.vm09.stdout:8/606: symlink df/d2d/d42/d79/lab 0 2026-03-09T15:00:53.289 INFO:tasks.workunit.client.1.vm09.stdout:7/520: stat d3/d61/c91 0 2026-03-09T15:00:53.290 INFO:tasks.workunit.client.1.vm09.stdout:8/607: chown df/f34 9657 1 2026-03-09T15:00:53.310 INFO:tasks.workunit.client.1.vm09.stdout:0/680: mknod da/dc/d10/dd0/cde 0 2026-03-09T15:00:53.317 INFO:tasks.workunit.client.1.vm09.stdout:9/536: read - d1/d7/d1e/d2b/f81 zero size 2026-03-09T15:00:53.319 INFO:tasks.workunit.client.1.vm09.stdout:7/521: sync 2026-03-09T15:00:53.324 INFO:tasks.workunit.client.1.vm09.stdout:4/592: mknod db/d19/d23/d44/d7c/cbe 0 2026-03-09T15:00:53.330 INFO:tasks.workunit.client.1.vm09.stdout:8/608: dread df/d24/f83 [0,4194304] 0 2026-03-09T15:00:53.330 INFO:tasks.workunit.client.1.vm09.stdout:4/593: read db/d19/d23/d71/d5f/f87 [2155209,130473] 0 2026-03-09T15:00:53.330 INFO:tasks.workunit.client.1.vm09.stdout:9/537: dwrite d1/d4f/fa3 [0,4194304] 0 2026-03-09T15:00:53.330 INFO:tasks.workunit.client.1.vm09.stdout:4/594: readlink db/d12/l9a 0 2026-03-09T15:00:53.337 INFO:tasks.workunit.client.1.vm09.stdout:7/522: dwrite d3/f16 [4194304,4194304] 0 2026-03-09T15:00:53.338 INFO:tasks.workunit.client.1.vm09.stdout:0/681: symlink da/dc/d1c/d3c/d78/d7a/dbb/ldf 0 2026-03-09T15:00:53.338 INFO:tasks.workunit.client.1.vm09.stdout:3/638: dread d3/d3a/d2b/f72 [0,4194304] 0 2026-03-09T15:00:53.344 
INFO:tasks.workunit.client.1.vm09.stdout:7/523: readlink d3/l18 0 2026-03-09T15:00:53.353 INFO:tasks.workunit.client.1.vm09.stdout:8/609: write df/d24/f83 [1783494,46704] 0 2026-03-09T15:00:53.353 INFO:tasks.workunit.client.1.vm09.stdout:9/538: symlink d1/d6e/lb5 0 2026-03-09T15:00:53.355 INFO:tasks.workunit.client.1.vm09.stdout:3/639: symlink d3/d9a/d80/ldf 0 2026-03-09T15:00:53.357 INFO:tasks.workunit.client.1.vm09.stdout:9/539: dread - d1/d7/d1e/d2b/d2e/d56/d6d/fb1 zero size 2026-03-09T15:00:53.359 INFO:tasks.workunit.client.1.vm09.stdout:0/682: mknod da/dc/d22/ce0 0 2026-03-09T15:00:53.360 INFO:tasks.workunit.client.1.vm09.stdout:4/595: dwrite db/d19/d81/d5d/f77 [0,4194304] 0 2026-03-09T15:00:53.361 INFO:tasks.workunit.client.1.vm09.stdout:7/524: mknod d3/db/d25/d7d/c98 0 2026-03-09T15:00:53.364 INFO:tasks.workunit.client.1.vm09.stdout:8/610: rename lc to df/d2d/d46/d73/lac 0 2026-03-09T15:00:53.368 INFO:tasks.workunit.client.1.vm09.stdout:4/596: truncate db/d12/d16/d5b/fac 93922 0 2026-03-09T15:00:53.373 INFO:tasks.workunit.client.1.vm09.stdout:7/525: rename d3/db/d15/d5f/d44/c85 to d3/d1d/d65/c99 0 2026-03-09T15:00:53.389 INFO:tasks.workunit.client.1.vm09.stdout:4/597: dwrite db/d12/d16/d5b/da5/faf [0,4194304] 0 2026-03-09T15:00:53.389 INFO:tasks.workunit.client.1.vm09.stdout:8/611: creat df/d2d/d46/d73/d81/fad x:0 0 0 2026-03-09T15:00:53.389 INFO:tasks.workunit.client.1.vm09.stdout:7/526: mkdir d3/d1d/d65/d9a 0 2026-03-09T15:00:53.389 INFO:tasks.workunit.client.1.vm09.stdout:3/640: read d3/f77 [982041,62433] 0 2026-03-09T15:00:53.389 INFO:tasks.workunit.client.1.vm09.stdout:7/527: mkdir d3/d3d/d9b 0 2026-03-09T15:00:53.389 INFO:tasks.workunit.client.1.vm09.stdout:1/497: truncate d8/d10/d24/d48/f7f 2587114 0 2026-03-09T15:00:53.393 INFO:tasks.workunit.client.1.vm09.stdout:1/498: readlink d8/d10/d24/d48/l8e 0 2026-03-09T15:00:53.393 INFO:tasks.workunit.client.1.vm09.stdout:4/598: dwrite f3 [0,4194304] 0 2026-03-09T15:00:53.395 
INFO:tasks.workunit.client.1.vm09.stdout:6/532: write d6/d20/d24/f67 [1685002,19123] 0 2026-03-09T15:00:53.396 INFO:tasks.workunit.client.1.vm09.stdout:2/612: write df/d58/d67/f46 [649488,88802] 0 2026-03-09T15:00:53.396 INFO:tasks.workunit.client.1.vm09.stdout:2/613: truncate df/d2d/f87 428680 0 2026-03-09T15:00:53.403 INFO:tasks.workunit.client.1.vm09.stdout:5/589: dwrite d2/d37/d3c/d36/f4a [0,4194304] 0 2026-03-09T15:00:53.404 INFO:tasks.workunit.client.1.vm09.stdout:5/590: chown d2/d37/d3c/d36/d4c 68 1 2026-03-09T15:00:53.406 INFO:tasks.workunit.client.1.vm09.stdout:1/499: dwrite d8/ff [0,4194304] 0 2026-03-09T15:00:53.410 INFO:tasks.workunit.client.1.vm09.stdout:9/540: link d1/f1f d1/fb6 0 2026-03-09T15:00:53.414 INFO:tasks.workunit.client.1.vm09.stdout:2/614: dwrite df/d2d/f87 [0,4194304] 0 2026-03-09T15:00:53.415 INFO:tasks.workunit.client.1.vm09.stdout:4/599: chown db/d19/d23/d44/ca4 501 1 2026-03-09T15:00:53.418 INFO:tasks.workunit.client.1.vm09.stdout:8/612: rename df/d5b/d98 to df/d5b/d65/dae 0 2026-03-09T15:00:53.420 INFO:tasks.workunit.client.1.vm09.stdout:4/600: fsync db/d19/d23/d44/d7c/d7d/d97/da3/fbb 0 2026-03-09T15:00:53.423 INFO:tasks.workunit.client.1.vm09.stdout:8/613: write df/d5b/f35 [469107,25516] 0 2026-03-09T15:00:53.423 INFO:tasks.workunit.client.1.vm09.stdout:4/601: read - db/d19/d81/d5d/f8a zero size 2026-03-09T15:00:53.437 INFO:tasks.workunit.client.1.vm09.stdout:4/602: creat db/d19/d23/d44/d7c/d7d/d97/fbf x:0 0 0 2026-03-09T15:00:53.437 INFO:tasks.workunit.client.1.vm09.stdout:4/603: fsync db/d12/d16/f83 0 2026-03-09T15:00:53.437 INFO:tasks.workunit.client.1.vm09.stdout:4/604: write db/d12/f6b [3735048,63686] 0 2026-03-09T15:00:53.439 INFO:tasks.workunit.client.1.vm09.stdout:6/533: mkdir d6/d20/d24/da5/daf 0 2026-03-09T15:00:53.446 INFO:tasks.workunit.client.1.vm09.stdout:6/534: dwrite d6/d20/f52 [0,4194304] 0 2026-03-09T15:00:53.448 INFO:tasks.workunit.client.1.vm09.stdout:3/641: getdents d3/d3a/d2b/d36 0 2026-03-09T15:00:53.457 
INFO:tasks.workunit.client.1.vm09.stdout:5/591: creat d2/d37/d3c/d36/d4c/d51/fd0 x:0 0 0 2026-03-09T15:00:53.459 INFO:tasks.workunit.client.1.vm09.stdout:1/500: creat d8/d10/d73/fa1 x:0 0 0 2026-03-09T15:00:53.461 INFO:tasks.workunit.client.1.vm09.stdout:6/535: mkdir d6/db/d8b/db0 0 2026-03-09T15:00:53.462 INFO:tasks.workunit.client.1.vm09.stdout:4/605: dread db/d19/d23/d71/f6c [0,4194304] 0 2026-03-09T15:00:53.470 INFO:tasks.workunit.client.1.vm09.stdout:4/606: chown db/d19/d23/d71/l4c 634105 1 2026-03-09T15:00:53.470 INFO:tasks.workunit.client.1.vm09.stdout:4/607: dread db/d19/d23/d44/d7c/f88 [0,4194304] 0 2026-03-09T15:00:53.470 INFO:tasks.workunit.client.1.vm09.stdout:3/642: mknod d3/d5b/d79/d9d/ce0 0 2026-03-09T15:00:53.470 INFO:tasks.workunit.client.1.vm09.stdout:3/643: chown d3/d5b 556 1 2026-03-09T15:00:53.470 INFO:tasks.workunit.client.1.vm09.stdout:1/501: fdatasync d8/d10/d24/d45/d5f/f60 0 2026-03-09T15:00:53.470 INFO:tasks.workunit.client.1.vm09.stdout:4/608: symlink db/d19/d23/d44/d7c/d7d/lc0 0 2026-03-09T15:00:53.470 INFO:tasks.workunit.client.1.vm09.stdout:5/592: unlink d2/d37/d3c/d36/d45/l94 0 2026-03-09T15:00:53.473 INFO:tasks.workunit.client.1.vm09.stdout:6/536: creat d6/d20/d38/d56/fb1 x:0 0 0 2026-03-09T15:00:53.476 INFO:tasks.workunit.client.1.vm09.stdout:1/502: creat d8/d10/d24/d48/d9b/d78/fa2 x:0 0 0 2026-03-09T15:00:53.484 INFO:tasks.workunit.client.1.vm09.stdout:9/541: dread d1/d7/f3e [0,4194304] 0 2026-03-09T15:00:53.485 INFO:tasks.workunit.client.1.vm09.stdout:9/542: dread - d1/d7/fb0 zero size 2026-03-09T15:00:53.486 INFO:tasks.workunit.client.1.vm09.stdout:0/683: dwrite da/dc/d1c/d3c/d44/f67 [4194304,4194304] 0 2026-03-09T15:00:53.487 INFO:tasks.workunit.client.1.vm09.stdout:6/537: unlink d6/d20/d38/d56/d65/d68/d6f/l95 0 2026-03-09T15:00:53.487 INFO:tasks.workunit.client.1.vm09.stdout:1/503: creat d8/d10/d24/d48/d9b/d78/fa3 x:0 0 0 2026-03-09T15:00:53.489 INFO:tasks.workunit.client.1.vm09.stdout:0/684: write da/dc/d22/f73 [4906854,58901] 
0 2026-03-09T15:00:53.494 INFO:tasks.workunit.client.1.vm09.stdout:9/543: truncate d1/d7/d1e/f9e 4399408 0 2026-03-09T15:00:53.499 INFO:tasks.workunit.client.1.vm09.stdout:6/538: dwrite d6/d20/d38/d4e/d55/f5c [0,4194304] 0 2026-03-09T15:00:53.509 INFO:tasks.workunit.client.1.vm09.stdout:7/528: dwrite d3/fd [0,4194304] 0 2026-03-09T15:00:53.520 INFO:tasks.workunit.client.1.vm09.stdout:4/609: creat db/d12/fc1 x:0 0 0 2026-03-09T15:00:53.520 INFO:tasks.workunit.client.1.vm09.stdout:4/610: fsync db/f29 0 2026-03-09T15:00:53.521 INFO:tasks.workunit.client.1.vm09.stdout:1/504: symlink d8/d10/d24/d48/la4 0 2026-03-09T15:00:53.524 INFO:tasks.workunit.client.1.vm09.stdout:9/544: unlink d1/d7/d1e/d2b/d2e/l7e 0 2026-03-09T15:00:53.526 INFO:tasks.workunit.client.1.vm09.stdout:7/529: dread d3/f9 [4194304,4194304] 0 2026-03-09T15:00:53.527 INFO:tasks.workunit.client.1.vm09.stdout:3/644: getdents d3/d74 0 2026-03-09T15:00:53.529 INFO:tasks.workunit.client.1.vm09.stdout:6/539: creat d6/d20/d38/d56/d65/fb2 x:0 0 0 2026-03-09T15:00:53.533 INFO:tasks.workunit.client.1.vm09.stdout:8/614: dwrite df/d24/f7a [0,4194304] 0 2026-03-09T15:00:53.535 INFO:tasks.workunit.client.1.vm09.stdout:8/615: chown df/d5b/d65/l93 110607771 1 2026-03-09T15:00:53.544 INFO:tasks.workunit.client.1.vm09.stdout:1/505: unlink d8/d50/d39/d95/d56/f85 0 2026-03-09T15:00:53.547 INFO:tasks.workunit.client.1.vm09.stdout:1/506: write d8/d50/d39/d95/f61 [728926,62444] 0 2026-03-09T15:00:53.547 INFO:tasks.workunit.client.1.vm09.stdout:2/615: truncate df/d20/f24 265880 0 2026-03-09T15:00:53.548 INFO:tasks.workunit.client.1.vm09.stdout:4/611: rename db/d12/d16/lb0 to db/d12/d16/lc2 0 2026-03-09T15:00:53.551 INFO:tasks.workunit.client.1.vm09.stdout:9/545: creat d1/d7/d9f/fb7 x:0 0 0 2026-03-09T15:00:53.553 INFO:tasks.workunit.client.1.vm09.stdout:3/645: unlink d3/d3a/d2b/d31/fbf 0 2026-03-09T15:00:53.553 INFO:tasks.workunit.client.1.vm09.stdout:7/530: symlink d3/db/d25/d5c/d75/l9c 0 2026-03-09T15:00:53.554 
INFO:tasks.workunit.client.1.vm09.stdout:2/616: dwrite df/d1f/d47/d5d/f96 [0,4194304] 0 2026-03-09T15:00:53.557 INFO:tasks.workunit.client.1.vm09.stdout:8/616: mkdir df/d38/d64/daf 0 2026-03-09T15:00:53.570 INFO:tasks.workunit.client.1.vm09.stdout:1/507: mknod d8/d10/d24/d45/d5f/ca5 0 2026-03-09T15:00:53.571 INFO:tasks.workunit.client.1.vm09.stdout:4/612: creat db/d12/d16/d5b/fc3 x:0 0 0 2026-03-09T15:00:53.581 INFO:tasks.workunit.client.1.vm09.stdout:7/531: mknod d3/d61/c9d 0 2026-03-09T15:00:53.581 INFO:tasks.workunit.client.1.vm09.stdout:5/593: truncate d2/d37/d3c/d36/d4c/d51/fb0 3661478 0 2026-03-09T15:00:53.581 INFO:tasks.workunit.client.1.vm09.stdout:2/617: symlink df/d2d/lba 0 2026-03-09T15:00:53.584 INFO:tasks.workunit.client.1.vm09.stdout:3/646: rename d3/d3a/d2b/d31/d9e/fc8 to d3/d9a/d80/fe1 0 2026-03-09T15:00:53.584 INFO:tasks.workunit.client.1.vm09.stdout:8/617: symlink df/d2d/d46/d73/d81/lb0 0 2026-03-09T15:00:53.586 INFO:tasks.workunit.client.1.vm09.stdout:8/618: dread - df/d2d/d42/f8c zero size 2026-03-09T15:00:53.589 INFO:tasks.workunit.client.1.vm09.stdout:5/594: dwrite d2/d37/d53/d86/f87 [0,4194304] 0 2026-03-09T15:00:53.598 INFO:tasks.workunit.client.1.vm09.stdout:5/595: fdatasync d2/d37/d67/fc2 0 2026-03-09T15:00:53.598 INFO:tasks.workunit.client.1.vm09.stdout:9/546: dread d1/d7/d1e/f22 [0,4194304] 0 2026-03-09T15:00:53.598 INFO:tasks.workunit.client.1.vm09.stdout:6/540: link d6/f7f d6/db/fb3 0 2026-03-09T15:00:53.602 INFO:tasks.workunit.client.1.vm09.stdout:2/618: rename df/d20/d2e/f59 to df/d20/d2e/fbb 0 2026-03-09T15:00:53.606 INFO:tasks.workunit.client.1.vm09.stdout:0/685: truncate da/dc/d1c/d3c/d78/f88 2607338 0 2026-03-09T15:00:53.609 INFO:tasks.workunit.client.1.vm09.stdout:2/619: truncate df/d1f/d6d/d8f/f99 294688 0 2026-03-09T15:00:53.613 INFO:tasks.workunit.client.1.vm09.stdout:6/541: dwrite d6/d20/f6e [4194304,4194304] 0 2026-03-09T15:00:53.613 INFO:tasks.workunit.client.1.vm09.stdout:5/596: dwrite d2/d37/d3c/d36/d45/dae/dc3/f57 
[0,4194304] 0 2026-03-09T15:00:53.613 INFO:tasks.workunit.client.1.vm09.stdout:2/620: fdatasync df/d1f/d47/d5d/f96 0 2026-03-09T15:00:53.615 INFO:tasks.workunit.client.1.vm09.stdout:6/542: write d6/df/d23/f2f [3947845,70257] 0 2026-03-09T15:00:53.622 INFO:tasks.workunit.client.1.vm09.stdout:6/543: rename d6/d20/d44/d45 to d6/d20/d44/d45/db4 22 2026-03-09T15:00:53.634 INFO:tasks.workunit.client.1.vm09.stdout:3/647: dread d3/d5b/f8b [0,4194304] 0 2026-03-09T15:00:53.634 INFO:tasks.workunit.client.1.vm09.stdout:3/648: write d3/d3a/d2b/f92 [118452,17982] 0 2026-03-09T15:00:53.636 INFO:tasks.workunit.client.1.vm09.stdout:3/649: chown d3/d3a/d2b/d31/d4a/fa9 1 1 2026-03-09T15:00:53.644 INFO:tasks.workunit.client.1.vm09.stdout:9/547: mkdir d1/d7/db8 0 2026-03-09T15:00:53.648 INFO:tasks.workunit.client.1.vm09.stdout:5/597: symlink d2/d37/d67/d95/db8/ld1 0 2026-03-09T15:00:53.651 INFO:tasks.workunit.client.1.vm09.stdout:6/544: chown d6/lc 7721 1 2026-03-09T15:00:53.652 INFO:tasks.workunit.client.1.vm09.stdout:2/621: mkdir df/d1f/d47/d5d/dbc 0 2026-03-09T15:00:53.652 INFO:tasks.workunit.client.1.vm09.stdout:4/613: write db/d19/d23/d44/d7c/f88 [1417678,31342] 0 2026-03-09T15:00:53.656 INFO:tasks.workunit.client.1.vm09.stdout:0/686: creat da/dc/dcb/dd4/fe1 x:0 0 0 2026-03-09T15:00:53.656 INFO:tasks.workunit.client.1.vm09.stdout:5/598: dwrite d2/d37/d3c/d36/d4c/d89/fcf [0,4194304] 0 2026-03-09T15:00:53.665 INFO:tasks.workunit.client.1.vm09.stdout:5/599: rename d2/d37/d53/d86/d88 to d2/d37/d53/d86/d88/dc9/dd2 22 2026-03-09T15:00:53.666 INFO:tasks.workunit.client.1.vm09.stdout:7/532: link d3/l43 d3/d3d/d9b/l9e 0 2026-03-09T15:00:53.666 INFO:tasks.workunit.client.1.vm09.stdout:9/548: mknod d1/d7/d9f/daa/cb9 0 2026-03-09T15:00:53.676 INFO:tasks.workunit.client.1.vm09.stdout:8/619: dwrite df/d2d/f57 [4194304,4194304] 0 2026-03-09T15:00:53.676 INFO:tasks.workunit.client.1.vm09.stdout:2/622: truncate df/f14 3216146 0 2026-03-09T15:00:53.676 
INFO:tasks.workunit.client.1.vm09.stdout:4/614: dread db/d19/d52/f6d [4194304,4194304] 0 2026-03-09T15:00:53.687 INFO:tasks.workunit.client.1.vm09.stdout:5/600: mkdir d2/d37/d3c/d36/d45/dae/dd3 0 2026-03-09T15:00:53.689 INFO:tasks.workunit.client.1.vm09.stdout:1/508: truncate d8/d10/d73/f37 174270 0 2026-03-09T15:00:53.690 INFO:tasks.workunit.client.1.vm09.stdout:6/545: unlink d6/d20/d2a/l9f 0 2026-03-09T15:00:53.698 INFO:tasks.workunit.client.1.vm09.stdout:6/546: chown d6/d20/d38/d56/d65 451197507 1 2026-03-09T15:00:53.698 INFO:tasks.workunit.client.1.vm09.stdout:6/547: chown d6/d20/d2a/d3d/d46/f84 115 1 2026-03-09T15:00:53.698 INFO:tasks.workunit.client.1.vm09.stdout:2/623: symlink df/d6e/lbd 0 2026-03-09T15:00:53.700 INFO:tasks.workunit.client.1.vm09.stdout:4/615: creat db/d19/d23/fc4 x:0 0 0 2026-03-09T15:00:53.711 INFO:tasks.workunit.client.1.vm09.stdout:1/509: stat d8/d10/d24/d45/d5f/l7d 0 2026-03-09T15:00:53.712 INFO:tasks.workunit.client.1.vm09.stdout:6/548: rmdir d6/d20/d38/d4e 39 2026-03-09T15:00:53.713 INFO:tasks.workunit.client.1.vm09.stdout:6/549: dread - d6/df/d23/fae zero size 2026-03-09T15:00:53.719 INFO:tasks.workunit.client.1.vm09.stdout:4/616: rename l8 to db/d19/d23/lc5 0 2026-03-09T15:00:53.720 INFO:tasks.workunit.client.1.vm09.stdout:9/549: creat d1/d7/fba x:0 0 0 2026-03-09T15:00:53.720 INFO:tasks.workunit.client.1.vm09.stdout:4/617: stat db/f21 0 2026-03-09T15:00:53.720 INFO:tasks.workunit.client.1.vm09.stdout:3/650: dwrite d3/d74/fb4 [0,4194304] 0 2026-03-09T15:00:53.721 INFO:tasks.workunit.client.1.vm09.stdout:5/601: creat d2/d37/d3c/dbf/fd4 x:0 0 0 2026-03-09T15:00:53.724 INFO:tasks.workunit.client.1.vm09.stdout:4/618: write db/d19/d52/d76/d3b/f69 [1141514,14973] 0 2026-03-09T15:00:53.725 INFO:tasks.workunit.client.1.vm09.stdout:7/533: link d3/db/d25/d5c/f8a d3/d1d/f9f 0 2026-03-09T15:00:53.735 INFO:tasks.workunit.client.1.vm09.stdout:0/687: getdents da/dc/dcb 0 2026-03-09T15:00:53.740 INFO:tasks.workunit.client.1.vm09.stdout:5/602: mknod 
d2/d37/d3c/dbf/cd5 0 2026-03-09T15:00:53.741 INFO:tasks.workunit.client.1.vm09.stdout:4/619: creat db/d12/da1/fc6 x:0 0 0 2026-03-09T15:00:53.742 INFO:tasks.workunit.client.1.vm09.stdout:5/603: read - d2/d37/d3c/d36/d4c/d51/fd0 zero size 2026-03-09T15:00:53.744 INFO:tasks.workunit.client.1.vm09.stdout:8/620: getdents df/d2d/d42 0 2026-03-09T15:00:53.744 INFO:tasks.workunit.client.1.vm09.stdout:4/620: dread db/d19/d23/d71/f6c [0,4194304] 0 2026-03-09T15:00:53.744 INFO:tasks.workunit.client.1.vm09.stdout:7/534: readlink d3/d1d/d2d/l39 0 2026-03-09T15:00:53.746 INFO:tasks.workunit.client.1.vm09.stdout:4/621: chown db/d19/d23/d44 0 1 2026-03-09T15:00:53.746 INFO:tasks.workunit.client.1.vm09.stdout:5/604: creat d2/d37/d53/d86/dad/fd6 x:0 0 0 2026-03-09T15:00:53.746 INFO:tasks.workunit.client.1.vm09.stdout:4/622: readlink db/d19/l1f 0 2026-03-09T15:00:53.747 INFO:tasks.workunit.client.1.vm09.stdout:4/623: stat db/d19/d81/l41 0 2026-03-09T15:00:53.748 INFO:tasks.workunit.client.1.vm09.stdout:6/550: link d6/d20/f27 d6/db/d8b/db0/fb5 0 2026-03-09T15:00:53.748 INFO:tasks.workunit.client.1.vm09.stdout:4/624: chown db/d19/d23/d44/d7c/d7d/lc0 26762 1 2026-03-09T15:00:53.748 INFO:tasks.workunit.client.1.vm09.stdout:2/624: link df/d1f/c21 df/d1f/d47/d5d/d90/cbe 0 2026-03-09T15:00:53.749 INFO:tasks.workunit.client.1.vm09.stdout:6/551: dread - d6/df/d23/f76 zero size 2026-03-09T15:00:53.749 INFO:tasks.workunit.client.1.vm09.stdout:5/605: truncate d2/d37/d3c/d36/d4c/d51/fc7 498498 0 2026-03-09T15:00:53.750 INFO:tasks.workunit.client.1.vm09.stdout:8/621: rename df/d2d/d46/d73 to df/d24/d99/db1 0 2026-03-09T15:00:53.752 INFO:tasks.workunit.client.1.vm09.stdout:4/625: mknod db/d19/d23/d44/d7c/d7d/d97/da3/cc7 0 2026-03-09T15:00:53.752 INFO:tasks.workunit.client.1.vm09.stdout:9/550: creat d1/d7/fbb x:0 0 0 2026-03-09T15:00:53.753 INFO:tasks.workunit.client.1.vm09.stdout:5/606: mkdir d2/d37/d53/d86/d88/dd7 0 2026-03-09T15:00:53.753 INFO:tasks.workunit.client.1.vm09.stdout:4/626: chown 
db/d19/d23/d71/fb3 63096 1 2026-03-09T15:00:53.754 INFO:tasks.workunit.client.1.vm09.stdout:2/625: mknod df/d20/d29/da9/cbf 0 2026-03-09T15:00:53.754 INFO:tasks.workunit.client.1.vm09.stdout:5/607: dread - d2/d37/d3c/d36/d4c/d51/fce zero size 2026-03-09T15:00:53.755 INFO:tasks.workunit.client.1.vm09.stdout:6/552: fdatasync d6/d20/d38/d4e/f75 0 2026-03-09T15:00:53.756 INFO:tasks.workunit.client.1.vm09.stdout:5/608: dread - d2/d37/d3c/d36/f97 zero size 2026-03-09T15:00:53.756 INFO:tasks.workunit.client.1.vm09.stdout:4/627: readlink db/d12/l9a 0 2026-03-09T15:00:53.764 INFO:tasks.workunit.client.1.vm09.stdout:4/628: dwrite db/d12/d16/d5b/d78/d7f/f9d [0,4194304] 0 2026-03-09T15:00:53.767 INFO:tasks.workunit.client.1.vm09.stdout:5/609: mknod d2/d37/d53/cd8 0 2026-03-09T15:00:53.770 INFO:tasks.workunit.client.1.vm09.stdout:4/629: read db/f29 [3356294,57985] 0 2026-03-09T15:00:53.770 INFO:tasks.workunit.client.1.vm09.stdout:8/622: dwrite df/d2d/d46/f94 [0,4194304] 0 2026-03-09T15:00:53.773 INFO:tasks.workunit.client.1.vm09.stdout:6/553: mkdir d6/d20/d38/d56/d65/d68/d6f/db6 0 2026-03-09T15:00:53.777 INFO:tasks.workunit.client.1.vm09.stdout:6/554: chown d6/d20/d38/d4e/f87 224707 1 2026-03-09T15:00:53.788 INFO:tasks.workunit.client.1.vm09.stdout:2/626: creat df/d20/d29/fc0 x:0 0 0 2026-03-09T15:00:53.796 INFO:tasks.workunit.client.1.vm09.stdout:3/651: truncate d3/d9a/f7e 1106577 0 2026-03-09T15:00:53.798 INFO:tasks.workunit.client.1.vm09.stdout:3/652: stat d3/l23 0 2026-03-09T15:00:53.800 INFO:tasks.workunit.client.1.vm09.stdout:7/535: dread d3/d1d/f79 [0,4194304] 0 2026-03-09T15:00:53.801 INFO:tasks.workunit.client.1.vm09.stdout:3/653: write d3/d3a/d54/fbd [17142,126282] 0 2026-03-09T15:00:53.802 INFO:tasks.workunit.client.1.vm09.stdout:6/555: write d6/df/d23/f78 [1178185,1075] 0 2026-03-09T15:00:53.803 INFO:tasks.workunit.client.1.vm09.stdout:1/510: dwrite d8/d10/d73/f54 [0,4194304] 0 2026-03-09T15:00:53.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:53 
vm05.local ceph-mon[50611]: pgmap v153: 65 pgs: 65 active+clean; 1.2 GiB data, 4.0 GiB used, 116 GiB / 120 GiB avail; 37 MiB/s rd, 115 MiB/s wr, 281 op/s 2026-03-09T15:00:53.811 INFO:tasks.workunit.client.1.vm09.stdout:9/551: rename d1/d7/l3c to d1/d7/d1e/d2b/d2e/d56/d5e/lbc 0 2026-03-09T15:00:53.815 INFO:tasks.workunit.client.1.vm09.stdout:7/536: mknod d3/db/d25/ca0 0 2026-03-09T15:00:53.815 INFO:tasks.workunit.client.1.vm09.stdout:3/654: creat d3/d9a/fe2 x:0 0 0 2026-03-09T15:00:53.816 INFO:tasks.workunit.client.1.vm09.stdout:1/511: dwrite d8/d50/d39/d95/f61 [0,4194304] 0 2026-03-09T15:00:53.817 INFO:tasks.workunit.client.1.vm09.stdout:5/610: link d2/f14 d2/da9/fd9 0 2026-03-09T15:00:53.818 INFO:tasks.workunit.client.1.vm09.stdout:5/611: write d2/d37/d3c/d36/d4c/d51/d96/fa4 [1047016,43742] 0 2026-03-09T15:00:53.827 INFO:tasks.workunit.client.1.vm09.stdout:5/612: dwrite d2/f34 [4194304,4194304] 0 2026-03-09T15:00:53.828 INFO:tasks.workunit.client.1.vm09.stdout:5/613: write d2/d37/d3c/d36/d45/d5c/f9c [126723,83026] 0 2026-03-09T15:00:53.828 INFO:tasks.workunit.client.1.vm09.stdout:1/512: dwrite d8/f42 [0,4194304] 0 2026-03-09T15:00:53.829 INFO:tasks.workunit.client.1.vm09.stdout:5/614: stat d2/d37/d53/d86/dad 0 2026-03-09T15:00:53.829 INFO:tasks.workunit.client.1.vm09.stdout:5/615: dread - d2/d37/d3c/d36/fbd zero size 2026-03-09T15:00:53.830 INFO:tasks.workunit.client.1.vm09.stdout:9/552: unlink d1/d58/c71 0 2026-03-09T15:00:53.831 INFO:tasks.workunit.client.1.vm09.stdout:5/616: write d2/d37/d67/d95/db5/fb6 [53725,81250] 0 2026-03-09T15:00:53.838 INFO:tasks.workunit.client.1.vm09.stdout:3/655: rename d3/d3a/d2b/d7b/d90 to d3/d9a/de3 0 2026-03-09T15:00:53.841 INFO:tasks.workunit.client.1.vm09.stdout:8/623: sync 2026-03-09T15:00:53.843 INFO:tasks.workunit.client.1.vm09.stdout:1/513: write d8/d10/f29 [928453,19335] 0 2026-03-09T15:00:53.858 INFO:tasks.workunit.client.1.vm09.stdout:0/688: truncate da/f12 1856317 0 2026-03-09T15:00:53.858 
INFO:tasks.workunit.client.1.vm09.stdout:5/617: symlink d2/d37/d3c/d36/lda 0 2026-03-09T15:00:53.859 INFO:tasks.workunit.client.1.vm09.stdout:7/537: unlink d3/d28/l34 0 2026-03-09T15:00:53.861 INFO:tasks.workunit.client.1.vm09.stdout:3/656: read - d3/d3a/d2b/d39/d48/f5f zero size 2026-03-09T15:00:53.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:53 vm09.local ceph-mon[59673]: pgmap v153: 65 pgs: 65 active+clean; 1.2 GiB data, 4.0 GiB used, 116 GiB / 120 GiB avail; 37 MiB/s rd, 115 MiB/s wr, 281 op/s 2026-03-09T15:00:53.870 INFO:tasks.workunit.client.1.vm09.stdout:6/556: getdents d6/d20/d38/d56/d65/d68 0 2026-03-09T15:00:53.872 INFO:tasks.workunit.client.1.vm09.stdout:6/557: read - d6/df/d23/f6d zero size 2026-03-09T15:00:53.872 INFO:tasks.workunit.client.1.vm09.stdout:1/514: mknod d8/d10/d24/d48/d9b/d78/d8b/ca6 0 2026-03-09T15:00:53.874 INFO:tasks.workunit.client.1.vm09.stdout:0/689: mknod da/dc/d8c/ce2 0 2026-03-09T15:00:53.875 INFO:tasks.workunit.client.1.vm09.stdout:3/657: dwrite d3/f3b [0,4194304] 0 2026-03-09T15:00:53.876 INFO:tasks.workunit.client.1.vm09.stdout:7/538: dread d3/db/d46/f5b [0,4194304] 0 2026-03-09T15:00:53.881 INFO:tasks.workunit.client.1.vm09.stdout:3/658: dread d3/d9a/fc2 [0,4194304] 0 2026-03-09T15:00:53.882 INFO:tasks.workunit.client.1.vm09.stdout:9/553: dread d1/fb6 [0,4194304] 0 2026-03-09T15:00:53.884 INFO:tasks.workunit.client.1.vm09.stdout:6/558: symlink d6/d20/d38/d4e/lb7 0 2026-03-09T15:00:53.889 INFO:tasks.workunit.client.1.vm09.stdout:0/690: read da/dc/d61/f66 [328237,6391] 0 2026-03-09T15:00:53.890 INFO:tasks.workunit.client.1.vm09.stdout:3/659: read - d3/d3a/d2b/d31/d4a/fa8 zero size 2026-03-09T15:00:53.890 INFO:tasks.workunit.client.1.vm09.stdout:6/559: fsync d6/db/d10/d7a/f80 0 2026-03-09T15:00:53.890 INFO:tasks.workunit.client.1.vm09.stdout:6/560: dread - d6/d20/d38/d56/d65/d68/d6f/fa7 zero size 2026-03-09T15:00:53.893 INFO:tasks.workunit.client.1.vm09.stdout:9/554: creat d1/d7/d1e/d2b/d2e/d56/d6d/fbd x:0 0 0 
2026-03-09T15:00:53.901 INFO:tasks.workunit.client.1.vm09.stdout:4/630: write db/d12/d16/f36 [1312157,1196] 0 2026-03-09T15:00:53.901 INFO:tasks.workunit.client.1.vm09.stdout:7/539: dwrite d3/d28/f69 [0,4194304] 0 2026-03-09T15:00:53.901 INFO:tasks.workunit.client.1.vm09.stdout:2/627: truncate df/f16 783828 0 2026-03-09T15:00:53.906 INFO:tasks.workunit.client.1.vm09.stdout:7/540: chown d3/d1d/d2d/f81 85361 1 2026-03-09T15:00:53.907 INFO:tasks.workunit.client.1.vm09.stdout:7/541: chown d3/db/d15/l6a 1605326 1 2026-03-09T15:00:53.909 INFO:tasks.workunit.client.1.vm09.stdout:4/631: dread db/d19/f8e [0,4194304] 0 2026-03-09T15:00:53.918 INFO:tasks.workunit.client.1.vm09.stdout:4/632: dwrite db/d12/d16/f2a [0,4194304] 0 2026-03-09T15:00:53.920 INFO:tasks.workunit.client.1.vm09.stdout:3/660: symlink d3/d74/le4 0 2026-03-09T15:00:53.924 INFO:tasks.workunit.client.1.vm09.stdout:4/633: dread - db/d19/d52/d76/d3b/f49 zero size 2026-03-09T15:00:53.925 INFO:tasks.workunit.client.1.vm09.stdout:7/542: dread d3/d61/f6c [0,4194304] 0 2026-03-09T15:00:53.929 INFO:tasks.workunit.client.1.vm09.stdout:7/543: truncate d3/db/d15/d5f/d6e/f7b 1011292 0 2026-03-09T15:00:53.931 INFO:tasks.workunit.client.1.vm09.stdout:7/544: write d3/d3d/f5a [2251803,129881] 0 2026-03-09T15:00:53.934 INFO:tasks.workunit.client.1.vm09.stdout:2/628: mknod df/d6e/cc1 0 2026-03-09T15:00:53.943 INFO:tasks.workunit.client.1.vm09.stdout:3/661: creat d3/d3a/d2b/d7b/fe5 x:0 0 0 2026-03-09T15:00:53.943 INFO:tasks.workunit.client.1.vm09.stdout:4/634: rename db/d12/d16/f9c to db/d19/d23/d44/d7c/d7d/d97/da3/fc8 0 2026-03-09T15:00:53.943 INFO:tasks.workunit.client.1.vm09.stdout:7/545: write d3/db/d46/f5b [2431607,10792] 0 2026-03-09T15:00:53.943 INFO:tasks.workunit.client.1.vm09.stdout:2/629: rmdir df/d20/d2e 39 2026-03-09T15:00:53.943 INFO:tasks.workunit.client.1.vm09.stdout:3/662: unlink d3/d9a/de3/ccb 0 2026-03-09T15:00:53.943 INFO:tasks.workunit.client.1.vm09.stdout:6/561: link d6/f7f d6/d20/fb8 0 
2026-03-09T15:00:53.944 INFO:tasks.workunit.client.1.vm09.stdout:2/630: read df/d1f/d6d/d8f/d5f/f63 [1084107,107270] 0 2026-03-09T15:00:53.945 INFO:tasks.workunit.client.1.vm09.stdout:7/546: mknod d3/db/d15/d5f/ca1 0 2026-03-09T15:00:53.947 INFO:tasks.workunit.client.1.vm09.stdout:7/547: write d3/d1d/f79 [5914555,74510] 0 2026-03-09T15:00:53.949 INFO:tasks.workunit.client.1.vm09.stdout:7/548: fsync d3/d1d/d65/f92 0 2026-03-09T15:00:53.950 INFO:tasks.workunit.client.1.vm09.stdout:9/555: dread d1/d7/d1e/d2b/d2e/f16 [0,4194304] 0 2026-03-09T15:00:53.950 INFO:tasks.workunit.client.1.vm09.stdout:4/635: dwrite db/d12/d9e/fae [0,4194304] 0 2026-03-09T15:00:53.953 INFO:tasks.workunit.client.1.vm09.stdout:6/562: dwrite d6/df/d23/f2f [0,4194304] 0 2026-03-09T15:00:53.963 INFO:tasks.workunit.client.1.vm09.stdout:9/556: dread d1/d7/d1e/d2b/d2e/f16 [0,4194304] 0 2026-03-09T15:00:53.963 INFO:tasks.workunit.client.1.vm09.stdout:4/636: dread db/d12/da1/fa6 [0,4194304] 0 2026-03-09T15:00:53.963 INFO:tasks.workunit.client.1.vm09.stdout:6/563: creat d6/d20/d24/d7e/fb9 x:0 0 0 2026-03-09T15:00:53.966 INFO:tasks.workunit.client.1.vm09.stdout:7/549: fdatasync d3/db/d25/d5c/f8a 0 2026-03-09T15:00:53.971 INFO:tasks.workunit.client.1.vm09.stdout:4/637: unlink db/d19/d23/d71/d53/f99 0 2026-03-09T15:00:53.971 INFO:tasks.workunit.client.1.vm09.stdout:7/550: chown f1 1 1 2026-03-09T15:00:53.972 INFO:tasks.workunit.client.1.vm09.stdout:6/564: truncate d6/df/d23/fae 444834 0 2026-03-09T15:00:53.972 INFO:tasks.workunit.client.1.vm09.stdout:0/691: dread da/dc/d1c/d3c/d44/f71 [4194304,4194304] 0 2026-03-09T15:00:53.974 INFO:tasks.workunit.client.1.vm09.stdout:4/638: creat db/d12/d16/d5b/d78/d7f/fc9 x:0 0 0 2026-03-09T15:00:53.975 INFO:tasks.workunit.client.1.vm09.stdout:0/692: symlink da/dc/d1c/d46/d63/le3 0 2026-03-09T15:00:53.986 INFO:tasks.workunit.client.1.vm09.stdout:7/551: dread d3/f7a [0,4194304] 0 2026-03-09T15:00:53.986 INFO:tasks.workunit.client.1.vm09.stdout:7/552: fsync d3/db/d46/f66 0 
2026-03-09T15:00:53.986 INFO:tasks.workunit.client.1.vm09.stdout:6/565: chown d6/d20/d38/d4e/f5a 155070792 1 2026-03-09T15:00:53.986 INFO:tasks.workunit.client.1.vm09.stdout:4/639: symlink db/d12/d16/d5b/lca 0 2026-03-09T15:00:53.986 INFO:tasks.workunit.client.1.vm09.stdout:4/640: write db/d19/d81/d5d/f77 [4349708,39996] 0 2026-03-09T15:00:53.986 INFO:tasks.workunit.client.1.vm09.stdout:6/566: mkdir d6/db/d8b/db0/dba 0 2026-03-09T15:00:53.986 INFO:tasks.workunit.client.1.vm09.stdout:0/693: link da/dc/d1c/d46/lc6 da/dc/dcb/dd4/le4 0 2026-03-09T15:00:53.986 INFO:tasks.workunit.client.1.vm09.stdout:6/567: creat d6/d20/d38/d56/d65/d68/d6f/fbb x:0 0 0 2026-03-09T15:00:53.986 INFO:tasks.workunit.client.1.vm09.stdout:0/694: unlink da/dc/l2b 0 2026-03-09T15:00:53.986 INFO:tasks.workunit.client.1.vm09.stdout:4/641: creat db/d19/d23/d44/d7c/d7d/d97/da3/fcb x:0 0 0 2026-03-09T15:00:53.990 INFO:tasks.workunit.client.1.vm09.stdout:6/568: link d6/d20/d38/d56/fb1 d6/d20/d24/d7e/fbc 0 2026-03-09T15:00:53.990 INFO:tasks.workunit.client.1.vm09.stdout:7/553: dread d3/d1d/f33 [0,4194304] 0 2026-03-09T15:00:53.993 INFO:tasks.workunit.client.1.vm09.stdout:4/642: mknod db/d19/d23/d44/d7c/d7d/d97/ccc 0 2026-03-09T15:00:53.994 INFO:tasks.workunit.client.1.vm09.stdout:7/554: chown d3/d1d/c78 32644 1 2026-03-09T15:00:53.995 INFO:tasks.workunit.client.1.vm09.stdout:4/643: mkdir db/d19/dcd 0 2026-03-09T15:00:53.995 INFO:tasks.workunit.client.1.vm09.stdout:7/555: write d3/d61/f90 [216872,108545] 0 2026-03-09T15:00:53.999 INFO:tasks.workunit.client.1.vm09.stdout:7/556: rename d3/db/d15/f80 to d3/d3d/d9b/fa2 0 2026-03-09T15:00:53.999 INFO:tasks.workunit.client.1.vm09.stdout:4/644: write db/d19/d23/d44/d7c/d7d/d97/da3/fc8 [568656,2163] 0 2026-03-09T15:00:54.000 INFO:tasks.workunit.client.1.vm09.stdout:7/557: write d3/db/d15/d5f/f89 [319165,52700] 0 2026-03-09T15:00:54.003 INFO:tasks.workunit.client.1.vm09.stdout:7/558: write d3/d61/f6c [1536343,38418] 0 2026-03-09T15:00:54.003 
INFO:tasks.workunit.client.1.vm09.stdout:4/645: dwrite db/d12/f6b [0,4194304] 0 2026-03-09T15:00:54.014 INFO:tasks.workunit.client.1.vm09.stdout:4/646: link db/d19/d52/fb5 db/d19/dcd/fce 0 2026-03-09T15:00:54.015 INFO:tasks.workunit.client.1.vm09.stdout:7/559: mkdir d3/d1d/d65/da3 0 2026-03-09T15:00:54.018 INFO:tasks.workunit.client.1.vm09.stdout:0/695: dread da/dc/f28 [0,4194304] 0 2026-03-09T15:00:54.022 INFO:tasks.workunit.client.1.vm09.stdout:7/560: rmdir d3/d1d 39 2026-03-09T15:00:54.023 INFO:tasks.workunit.client.1.vm09.stdout:0/696: chown da/dc/d1c/d3c/d78/d7a/fb2 302147640 1 2026-03-09T15:00:54.026 INFO:tasks.workunit.client.1.vm09.stdout:7/561: creat d3/d1d/d94/fa4 x:0 0 0 2026-03-09T15:00:54.028 INFO:tasks.workunit.client.1.vm09.stdout:0/697: getdents da 0 2026-03-09T15:00:54.030 INFO:tasks.workunit.client.1.vm09.stdout:0/698: read da/d57/f60 [800954,105463] 0 2026-03-09T15:00:54.033 INFO:tasks.workunit.client.1.vm09.stdout:0/699: fsync da/dc/d1c/d3c/d44/fca 0 2026-03-09T15:00:54.033 INFO:tasks.workunit.client.1.vm09.stdout:7/562: dwrite d3/db/d15/d5f/d6e/f7b [0,4194304] 0 2026-03-09T15:00:54.035 INFO:tasks.workunit.client.1.vm09.stdout:3/663: dread d3/d9a/f97 [0,4194304] 0 2026-03-09T15:00:54.037 INFO:tasks.workunit.client.1.vm09.stdout:7/563: write d3/d28/f95 [99979,27474] 0 2026-03-09T15:00:54.037 INFO:tasks.workunit.client.1.vm09.stdout:0/700: mkdir da/dc/d10/de5 0 2026-03-09T15:00:54.038 INFO:tasks.workunit.client.1.vm09.stdout:0/701: dread - da/dc/d1c/d46/d63/faa zero size 2026-03-09T15:00:54.038 INFO:tasks.workunit.client.1.vm09.stdout:7/564: write d3/db/d15/f68 [2498102,63062] 0 2026-03-09T15:00:54.039 INFO:tasks.workunit.client.1.vm09.stdout:3/664: creat d3/fe6 x:0 0 0 2026-03-09T15:00:54.048 INFO:tasks.workunit.client.1.vm09.stdout:0/702: mkdir da/dc/d1c/d46/d63/de6 0 2026-03-09T15:00:54.053 INFO:tasks.workunit.client.1.vm09.stdout:8/624: write df/f26 [2042096,14701] 0 2026-03-09T15:00:54.054 INFO:tasks.workunit.client.1.vm09.stdout:3/665: 
symlink d3/d5b/le7 0 2026-03-09T15:00:54.055 INFO:tasks.workunit.client.1.vm09.stdout:7/565: symlink d3/d1d/d65/d9a/la5 0 2026-03-09T15:00:54.057 INFO:tasks.workunit.client.1.vm09.stdout:7/566: fdatasync d3/db/d25/d5c/f5e 0 2026-03-09T15:00:54.064 INFO:tasks.workunit.client.1.vm09.stdout:3/666: dwrite d3/f9 [0,4194304] 0 2026-03-09T15:00:54.065 INFO:tasks.workunit.client.1.vm09.stdout:7/567: unlink d3/db/d15/d5f/d44/l73 0 2026-03-09T15:00:54.065 INFO:tasks.workunit.client.1.vm09.stdout:5/618: write d2/d37/f75 [282422,14595] 0 2026-03-09T15:00:54.066 INFO:tasks.workunit.client.1.vm09.stdout:3/667: dread d3/d9a/fc2 [0,4194304] 0 2026-03-09T15:00:54.066 INFO:tasks.workunit.client.1.vm09.stdout:5/619: readlink d2/le 0 2026-03-09T15:00:54.074 INFO:tasks.workunit.client.1.vm09.stdout:7/568: unlink d3/d28/f7f 0 2026-03-09T15:00:54.076 INFO:tasks.workunit.client.1.vm09.stdout:1/515: write d8/d50/d39/d95/f61 [5223002,126406] 0 2026-03-09T15:00:54.077 INFO:tasks.workunit.client.1.vm09.stdout:7/569: dread d3/db/d46/f66 [0,4194304] 0 2026-03-09T15:00:54.079 INFO:tasks.workunit.client.1.vm09.stdout:3/668: write d3/d5b/f8b [3002449,106899] 0 2026-03-09T15:00:54.081 INFO:tasks.workunit.client.1.vm09.stdout:1/516: rmdir d8/d50/d39/d95 39 2026-03-09T15:00:54.082 INFO:tasks.workunit.client.1.vm09.stdout:7/570: write d3/d1d/d2d/f81 [1633847,9310] 0 2026-03-09T15:00:54.083 INFO:tasks.workunit.client.1.vm09.stdout:5/620: link d2/c39 d2/d37/d67/d95/db5/cdb 0 2026-03-09T15:00:54.088 INFO:tasks.workunit.client.1.vm09.stdout:1/517: write d8/d50/d39/d95/f4c [3995539,44396] 0 2026-03-09T15:00:54.095 INFO:tasks.workunit.client.1.vm09.stdout:5/621: dwrite d2/d37/d3c/d36/d4c/f82 [0,4194304] 0 2026-03-09T15:00:54.109 INFO:tasks.workunit.client.1.vm09.stdout:3/669: rename d3/d3a/d2b/d31/d4a/d62/l2a to d3/d3a/d2b/d39/d48/le8 0 2026-03-09T15:00:54.109 INFO:tasks.workunit.client.1.vm09.stdout:7/571: dread - d3/d1d/d65/f76 zero size 2026-03-09T15:00:54.111 
INFO:tasks.workunit.client.1.vm09.stdout:3/670: write d3/d3a/d2b/d39/f84 [130200,72428] 0 2026-03-09T15:00:54.111 INFO:tasks.workunit.client.1.vm09.stdout:3/671: truncate d3/d5b/fc0 852276 0 2026-03-09T15:00:54.113 INFO:tasks.workunit.client.1.vm09.stdout:7/572: dread d3/db/d15/d5f/f89 [0,4194304] 0 2026-03-09T15:00:54.114 INFO:tasks.workunit.client.1.vm09.stdout:1/518: fsync d8/d50/d5b/f6f 0 2026-03-09T15:00:54.117 INFO:tasks.workunit.client.1.vm09.stdout:1/519: fsync d8/d10/d24/d48/d9b/d78/f7c 0 2026-03-09T15:00:54.118 INFO:tasks.workunit.client.1.vm09.stdout:5/622: mkdir d2/d37/d3c/d36/d45/d5c/ddc 0 2026-03-09T15:00:54.119 INFO:tasks.workunit.client.1.vm09.stdout:7/573: unlink d3/d1d/d2d/c3e 0 2026-03-09T15:00:54.120 INFO:tasks.workunit.client.1.vm09.stdout:7/574: fdatasync d3/f97 0 2026-03-09T15:00:54.120 INFO:tasks.workunit.client.1.vm09.stdout:1/520: creat d8/d50/fa7 x:0 0 0 2026-03-09T15:00:54.123 INFO:tasks.workunit.client.1.vm09.stdout:5/623: rename d2/d37/d3c/d36/d45/d5c/c8b to d2/d37/d67/d95/cdd 0 2026-03-09T15:00:54.124 INFO:tasks.workunit.client.1.vm09.stdout:7/575: rename d3/d61/f6c to d3/db/d46/fa6 0 2026-03-09T15:00:54.125 INFO:tasks.workunit.client.1.vm09.stdout:7/576: readlink d3/db/d15/d5f/d6e/l96 0 2026-03-09T15:00:54.126 INFO:tasks.workunit.client.1.vm09.stdout:1/521: rename d8/d50/d39/d95/d56/c5a to d8/d10/d24/d45/ca8 0 2026-03-09T15:00:54.126 INFO:tasks.workunit.client.1.vm09.stdout:7/577: mknod d3/db/d25/d5c/d75/ca7 0 2026-03-09T15:00:54.126 INFO:tasks.workunit.client.1.vm09.stdout:1/522: readlink d8/d10/d24/d45/d5f/l9c 0 2026-03-09T15:00:54.127 INFO:tasks.workunit.client.1.vm09.stdout:7/578: truncate d3/d61/f86 1459823 0 2026-03-09T15:00:54.127 INFO:tasks.workunit.client.1.vm09.stdout:5/624: rename d2/d37/d67/d95/db8/ld1 to d2/db1/db2/lde 0 2026-03-09T15:00:54.130 INFO:tasks.workunit.client.1.vm09.stdout:7/579: mknod d3/db/d25/d7d/ca8 0 2026-03-09T15:00:54.132 INFO:tasks.workunit.client.1.vm09.stdout:5/625: rename d2/d37/d3c/d36/d4c/fa6 to 
d2/d37/d3c/d36/d45/dae/dd3/fdf 0 2026-03-09T15:00:54.132 INFO:tasks.workunit.client.1.vm09.stdout:1/523: dwrite d8/f17 [4194304,4194304] 0 2026-03-09T15:00:54.134 INFO:tasks.workunit.client.1.vm09.stdout:7/580: readlink d3/d1d/l27 0 2026-03-09T15:00:54.140 INFO:tasks.workunit.client.1.vm09.stdout:3/672: sync 2026-03-09T15:00:54.141 INFO:tasks.workunit.client.1.vm09.stdout:5/626: mknod d2/d37/d3c/d36/d45/dae/dc3/ce0 0 2026-03-09T15:00:54.142 INFO:tasks.workunit.client.1.vm09.stdout:7/581: mkdir d3/d3d/d9b/da9 0 2026-03-09T15:00:54.145 INFO:tasks.workunit.client.1.vm09.stdout:3/673: chown d3/d74/fb4 1147864 1 2026-03-09T15:00:54.149 INFO:tasks.workunit.client.1.vm09.stdout:1/524: rename d8/d10/d24/d48/d9b/l6a to d8/d50/d39/la9 0 2026-03-09T15:00:54.151 INFO:tasks.workunit.client.1.vm09.stdout:5/627: mknod d2/d37/d67/ce1 0 2026-03-09T15:00:54.152 INFO:tasks.workunit.client.1.vm09.stdout:3/674: mknod d3/d3a/d2b/d36/dac/ce9 0 2026-03-09T15:00:54.152 INFO:tasks.workunit.client.1.vm09.stdout:7/582: dwrite d3/d1d/f11 [0,4194304] 0 2026-03-09T15:00:54.158 INFO:tasks.workunit.client.1.vm09.stdout:5/628: creat d2/d37/d67/d95/db8/fe2 x:0 0 0 2026-03-09T15:00:54.158 INFO:tasks.workunit.client.1.vm09.stdout:1/525: dread - d8/d50/d39/d95/d72/f77 zero size 2026-03-09T15:00:54.160 INFO:tasks.workunit.client.1.vm09.stdout:3/675: dread d3/d9a/f97 [0,4194304] 0 2026-03-09T15:00:54.164 INFO:tasks.workunit.client.1.vm09.stdout:7/583: unlink d3/db/fe 0 2026-03-09T15:00:54.173 INFO:tasks.workunit.client.1.vm09.stdout:3/676: dwrite d3/d5b/d79/f83 [0,4194304] 0 2026-03-09T15:00:54.177 INFO:tasks.workunit.client.1.vm09.stdout:3/677: chown d3/d3a/d2b 12211433 1 2026-03-09T15:00:54.179 INFO:tasks.workunit.client.1.vm09.stdout:1/526: truncate d8/d50/d5b/f6f 60775 0 2026-03-09T15:00:54.182 INFO:tasks.workunit.client.1.vm09.stdout:3/678: read d3/d3a/d2b/d36/f8a [835087,83854] 0 2026-03-09T15:00:54.193 INFO:tasks.workunit.client.1.vm09.stdout:3/679: mkdir d3/dea 0 2026-03-09T15:00:54.194 
INFO:tasks.workunit.client.1.vm09.stdout:2/631: truncate df/d20/d2e/fbb 3080188 0 2026-03-09T15:00:54.195 INFO:tasks.workunit.client.1.vm09.stdout:7/584: dread d3/d28/f35 [0,4194304] 0 2026-03-09T15:00:54.196 INFO:tasks.workunit.client.1.vm09.stdout:3/680: fsync d3/d3a/d2b/d39/f70 0 2026-03-09T15:00:54.199 INFO:tasks.workunit.client.1.vm09.stdout:2/632: rmdir df/d1f/d47/d84 39 2026-03-09T15:00:54.208 INFO:tasks.workunit.client.1.vm09.stdout:7/585: dwrite d3/d1d/d2d/f81 [8388608,4194304] 0 2026-03-09T15:00:54.208 INFO:tasks.workunit.client.1.vm09.stdout:2/633: creat df/d58/fc2 x:0 0 0 2026-03-09T15:00:54.208 INFO:tasks.workunit.client.1.vm09.stdout:2/634: chown df/d20/d2e/f54 17289 1 2026-03-09T15:00:54.208 INFO:tasks.workunit.client.1.vm09.stdout:3/681: rename d3/d3a/d2b/d31/d4a/l5c to d3/d3a/d2b/d31/d4a/d62/leb 0 2026-03-09T15:00:54.208 INFO:tasks.workunit.client.1.vm09.stdout:3/682: readlink d3/d3a/d2b/d39/d48/l4b 0 2026-03-09T15:00:54.215 INFO:tasks.workunit.client.1.vm09.stdout:7/586: mkdir d3/d3d/d9b/da9/daa 0 2026-03-09T15:00:54.217 INFO:tasks.workunit.client.1.vm09.stdout:3/683: creat d3/d9a/de3/dc4/fec x:0 0 0 2026-03-09T15:00:54.218 INFO:tasks.workunit.client.1.vm09.stdout:7/587: creat d3/d1d/fab x:0 0 0 2026-03-09T15:00:54.220 INFO:tasks.workunit.client.1.vm09.stdout:2/635: dwrite df/d58/d67/f4e [0,4194304] 0 2026-03-09T15:00:54.225 INFO:tasks.workunit.client.1.vm09.stdout:2/636: dwrite df/d2d/f87 [0,4194304] 0 2026-03-09T15:00:54.249 INFO:tasks.workunit.client.1.vm09.stdout:9/557: write d1/d58/f75 [1196025,31934] 0 2026-03-09T15:00:54.249 INFO:tasks.workunit.client.1.vm09.stdout:7/588: rmdir d3/db/d25/d8d 0 2026-03-09T15:00:54.251 INFO:tasks.workunit.client.1.vm09.stdout:7/589: write d3/db/d25/d5c/f88 [889768,78504] 0 2026-03-09T15:00:54.252 INFO:tasks.workunit.client.1.vm09.stdout:7/590: fsync d3/d28/f29 0 2026-03-09T15:00:54.252 INFO:tasks.workunit.client.1.vm09.stdout:7/591: dread - d3/db/d25/d5c/f8a zero size 2026-03-09T15:00:54.255 
INFO:tasks.workunit.client.1.vm09.stdout:9/558: rmdir d1/d7/d1e/d2b/d2e 39 2026-03-09T15:00:54.256 INFO:tasks.workunit.client.1.vm09.stdout:7/592: dread d3/db/d46/f66 [0,4194304] 0 2026-03-09T15:00:54.261 INFO:tasks.workunit.client.1.vm09.stdout:9/559: dread d1/d58/f99 [0,4194304] 0 2026-03-09T15:00:54.261 INFO:tasks.workunit.client.1.vm09.stdout:2/637: rename df/d1f/d47/d84/da1 to df/d1f/d47/d84/db7/dc3 0 2026-03-09T15:00:54.262 INFO:tasks.workunit.client.1.vm09.stdout:9/560: write d1/d7/f77 [3426369,25494] 0 2026-03-09T15:00:54.264 INFO:tasks.workunit.client.1.vm09.stdout:2/638: unlink df/d1f/c91 0 2026-03-09T15:00:54.271 INFO:tasks.workunit.client.1.vm09.stdout:2/639: mkdir df/d1f/d47/d84/db7/dc3/dc4 0 2026-03-09T15:00:54.272 INFO:tasks.workunit.client.1.vm09.stdout:2/640: mknod df/d20/d29/da9/cc5 0 2026-03-09T15:00:54.277 INFO:tasks.workunit.client.1.vm09.stdout:2/641: creat df/da0/fc6 x:0 0 0 2026-03-09T15:00:54.277 INFO:tasks.workunit.client.1.vm09.stdout:9/561: dread d1/f28 [0,4194304] 0 2026-03-09T15:00:54.278 INFO:tasks.workunit.client.1.vm09.stdout:9/562: fdatasync d1/d4f/fa3 0 2026-03-09T15:00:54.292 INFO:tasks.workunit.client.1.vm09.stdout:6/569: truncate d6/df/d23/f2f 3110465 0 2026-03-09T15:00:54.295 INFO:tasks.workunit.client.1.vm09.stdout:9/563: dread d1/d7/d1e/f20 [0,4194304] 0 2026-03-09T15:00:54.301 INFO:tasks.workunit.client.1.vm09.stdout:9/564: dread d1/d7/d1e/d2b/f30 [0,4194304] 0 2026-03-09T15:00:54.305 INFO:tasks.workunit.client.1.vm09.stdout:9/565: mknod d1/d7/d1e/cbe 0 2026-03-09T15:00:54.306 INFO:tasks.workunit.client.1.vm09.stdout:6/570: sync 2026-03-09T15:00:54.308 INFO:tasks.workunit.client.1.vm09.stdout:4/647: dwrite db/d12/f5a [4194304,4194304] 0 2026-03-09T15:00:54.309 INFO:tasks.workunit.client.1.vm09.stdout:8/625: write df/d5b/d65/d1d/f44 [312402,129192] 0 2026-03-09T15:00:54.309 INFO:tasks.workunit.client.1.vm09.stdout:4/648: dread - db/d19/d23/fc4 zero size 2026-03-09T15:00:54.313 INFO:tasks.workunit.client.1.vm09.stdout:9/566: 
write d1/d7/d1e/d2b/d2e/f95 [719150,79124] 0 2026-03-09T15:00:54.314 INFO:tasks.workunit.client.1.vm09.stdout:8/626: creat df/d38/d64/fb2 x:0 0 0 2026-03-09T15:00:54.315 INFO:tasks.workunit.client.1.vm09.stdout:4/649: mkdir db/d19/d23/d71/d53/dcf 0 2026-03-09T15:00:54.315 INFO:tasks.workunit.client.1.vm09.stdout:9/567: rmdir d1 39 2026-03-09T15:00:54.320 INFO:tasks.workunit.client.1.vm09.stdout:4/650: readlink db/d19/d52/d76/d3b/lbc 0 2026-03-09T15:00:54.320 INFO:tasks.workunit.client.1.vm09.stdout:8/627: creat df/d5b/d65/fb3 x:0 0 0 2026-03-09T15:00:54.321 INFO:tasks.workunit.client.1.vm09.stdout:9/568: write d1/d7/f67 [7837586,35891] 0 2026-03-09T15:00:54.323 INFO:tasks.workunit.client.1.vm09.stdout:0/703: truncate da/dc/d10/f2d 3391746 0 2026-03-09T15:00:54.327 INFO:tasks.workunit.client.1.vm09.stdout:8/628: rmdir df/d2d/d42 39 2026-03-09T15:00:54.327 INFO:tasks.workunit.client.1.vm09.stdout:4/651: mkdir db/d12/d9e/dd0 0 2026-03-09T15:00:54.327 INFO:tasks.workunit.client.1.vm09.stdout:0/704: chown da/dc/d1c/d3c/d44/lb4 27144241 1 2026-03-09T15:00:54.327 INFO:tasks.workunit.client.1.vm09.stdout:9/569: rmdir d1/d7/d1e/d2b/d2e/d56/d6d 39 2026-03-09T15:00:54.327 INFO:tasks.workunit.client.1.vm09.stdout:4/652: unlink db/d19/d81/d5d/f77 0 2026-03-09T15:00:54.327 INFO:tasks.workunit.client.1.vm09.stdout:8/629: rename df/l3e to df/d2d/d42/d70/lb4 0 2026-03-09T15:00:54.330 INFO:tasks.workunit.client.1.vm09.stdout:8/630: creat df/d2d/d42/d79/d9a/fb5 x:0 0 0 2026-03-09T15:00:54.331 INFO:tasks.workunit.client.1.vm09.stdout:6/571: dread d6/db/f1f [4194304,4194304] 0 2026-03-09T15:00:54.331 INFO:tasks.workunit.client.1.vm09.stdout:8/631: write df/f26 [1070776,112135] 0 2026-03-09T15:00:54.336 INFO:tasks.workunit.client.1.vm09.stdout:8/632: chown df/d38/c3b 0 1 2026-03-09T15:00:54.348 INFO:tasks.workunit.client.1.vm09.stdout:6/572: link d6/d20/f52 d6/d20/d24/da5/fbd 0 2026-03-09T15:00:54.354 INFO:tasks.workunit.client.1.vm09.stdout:8/633: rename df/d24/d99/db1/d81 to 
df/d24/d99/db6 0 2026-03-09T15:00:54.358 INFO:tasks.workunit.client.1.vm09.stdout:6/573: mknod d6/d20/d38/d56/d65/d68/d6f/cbe 0 2026-03-09T15:00:54.361 INFO:tasks.workunit.client.1.vm09.stdout:8/634: mkdir df/d24/d99/db6/d60/db7 0 2026-03-09T15:00:54.366 INFO:tasks.workunit.client.1.vm09.stdout:8/635: mknod df/d24/d56/cb8 0 2026-03-09T15:00:54.368 INFO:tasks.workunit.client.1.vm09.stdout:6/574: dread d6/d20/d38/d4e/d55/f77 [4194304,4194304] 0 2026-03-09T15:00:54.370 INFO:tasks.workunit.client.1.vm09.stdout:1/527: rmdir d8/d10/d24/d48 39 2026-03-09T15:00:54.377 INFO:tasks.workunit.client.1.vm09.stdout:8/636: dread df/f26 [0,4194304] 0 2026-03-09T15:00:54.377 INFO:tasks.workunit.client.1.vm09.stdout:6/575: getdents d6/d20/d38/d56/d65/d68/d86 0 2026-03-09T15:00:54.377 INFO:tasks.workunit.client.1.vm09.stdout:5/629: write d2/d37/d3c/d36/d4c/d51/fb0 [1095095,12094] 0 2026-03-09T15:00:54.380 INFO:tasks.workunit.client.1.vm09.stdout:8/637: write df/d2d/f2f [3848618,18686] 0 2026-03-09T15:00:54.385 INFO:tasks.workunit.client.1.vm09.stdout:6/576: creat d6/d20/d38/d56/fbf x:0 0 0 2026-03-09T15:00:54.386 INFO:tasks.workunit.client.1.vm09.stdout:6/577: stat d6/d20/d24/da5/fbd 0 2026-03-09T15:00:54.389 INFO:tasks.workunit.client.1.vm09.stdout:5/630: rename d2/d37/d3c/d36/lda to d2/d37/le3 0 2026-03-09T15:00:54.390 INFO:tasks.workunit.client.1.vm09.stdout:8/638: dwrite df/d38/d64/fb2 [0,4194304] 0 2026-03-09T15:00:54.392 INFO:tasks.workunit.client.1.vm09.stdout:3/684: truncate d3/d3a/f1d 6223170 0 2026-03-09T15:00:54.396 INFO:tasks.workunit.client.1.vm09.stdout:5/631: sync 2026-03-09T15:00:54.398 INFO:tasks.workunit.client.1.vm09.stdout:8/639: mknod df/d24/d99/cb9 0 2026-03-09T15:00:54.399 INFO:tasks.workunit.client.1.vm09.stdout:3/685: rmdir d3/d9a/d80 39 2026-03-09T15:00:54.402 INFO:tasks.workunit.client.1.vm09.stdout:5/632: dwrite d2/d37/d3c/d36/f97 [0,4194304] 0 2026-03-09T15:00:54.404 INFO:tasks.workunit.client.1.vm09.stdout:5/633: chown d2/daa/laf 51773 1 
2026-03-09T15:00:54.404 INFO:tasks.workunit.client.1.vm09.stdout:5/634: chown d2/l18 649419334 1 2026-03-09T15:00:54.407 INFO:tasks.workunit.client.1.vm09.stdout:3/686: symlink d3/d3a/d2b/d7b/dd3/led 0 2026-03-09T15:00:54.407 INFO:tasks.workunit.client.1.vm09.stdout:2/642: write df/d20/f24 [484840,118257] 0 2026-03-09T15:00:54.407 INFO:tasks.workunit.client.1.vm09.stdout:7/593: dwrite d3/db/d25/d5c/f8a [0,4194304] 0 2026-03-09T15:00:54.410 INFO:tasks.workunit.client.1.vm09.stdout:8/640: sync 2026-03-09T15:00:54.416 INFO:tasks.workunit.client.1.vm09.stdout:7/594: dwrite f1 [0,4194304] 0 2026-03-09T15:00:54.420 INFO:tasks.workunit.client.1.vm09.stdout:7/595: write d3/d1d/fab [492662,76593] 0 2026-03-09T15:00:54.421 INFO:tasks.workunit.client.1.vm09.stdout:5/635: link d2/d37/d67/fc0 d2/d37/d3c/d36/d45/dae/dd3/fe4 0 2026-03-09T15:00:54.421 INFO:tasks.workunit.client.1.vm09.stdout:9/570: truncate d1/d7/d1e/d2b/f32 3994758 0 2026-03-09T15:00:54.424 INFO:tasks.workunit.client.1.vm09.stdout:2/643: creat df/d1f/d47/d84/db7/dc3/fc7 x:0 0 0 2026-03-09T15:00:54.426 INFO:tasks.workunit.client.1.vm09.stdout:2/644: write df/d1f/d47/f89 [620890,64200] 0 2026-03-09T15:00:54.427 INFO:tasks.workunit.client.1.vm09.stdout:4/653: dwrite db/d19/f8e [0,4194304] 0 2026-03-09T15:00:54.434 INFO:tasks.workunit.client.1.vm09.stdout:5/636: creat d2/d37/d3c/d36/d45/dae/fe5 x:0 0 0 2026-03-09T15:00:54.434 INFO:tasks.workunit.client.1.vm09.stdout:8/641: link df/f51 df/d5c/fba 0 2026-03-09T15:00:54.437 INFO:tasks.workunit.client.1.vm09.stdout:9/571: dread d1/d7/d1e/d2b/d2e/f95 [0,4194304] 0 2026-03-09T15:00:54.438 INFO:tasks.workunit.client.1.vm09.stdout:2/645: read - df/d1f/d6d/d8f/d5f/f72 zero size 2026-03-09T15:00:54.445 INFO:tasks.workunit.client.1.vm09.stdout:4/654: creat db/d19/d52/d76/d3b/fd1 x:0 0 0 2026-03-09T15:00:54.447 INFO:tasks.workunit.client.1.vm09.stdout:1/528: truncate d8/d50/d39/d95/f4c 830392 0 2026-03-09T15:00:54.448 INFO:tasks.workunit.client.1.vm09.stdout:5/637: symlink 
d2/d37/d3c/dbf/le6 0 2026-03-09T15:00:54.449 INFO:tasks.workunit.client.1.vm09.stdout:5/638: fsync d2/d37/d67/d95/db5/fb6 0 2026-03-09T15:00:54.449 INFO:tasks.workunit.client.1.vm09.stdout:7/596: link d3/f5 d3/d3d/d9b/fac 0 2026-03-09T15:00:54.450 INFO:tasks.workunit.client.1.vm09.stdout:7/597: readlink d3/db/d25/l70 0 2026-03-09T15:00:54.451 INFO:tasks.workunit.client.1.vm09.stdout:7/598: chown d3/db/d15/d5f/ca1 471446826 1 2026-03-09T15:00:54.453 INFO:tasks.workunit.client.1.vm09.stdout:2/646: creat df/d1f/d47/d5d/fc8 x:0 0 0 2026-03-09T15:00:54.455 INFO:tasks.workunit.client.1.vm09.stdout:4/655: mkdir db/d19/d23/d44/dd2 0 2026-03-09T15:00:54.457 INFO:tasks.workunit.client.1.vm09.stdout:5/639: read d2/f5e [79758,7818] 0 2026-03-09T15:00:54.457 INFO:tasks.workunit.client.1.vm09.stdout:4/656: chown db/d19/d23/d44/d7c/d7d/db7 43 1 2026-03-09T15:00:54.473 INFO:tasks.workunit.client.1.vm09.stdout:2/647: dread df/d20/f24 [0,4194304] 0 2026-03-09T15:00:54.494 INFO:tasks.workunit.client.1.vm09.stdout:6/578: dwrite d6/db/fb3 [0,4194304] 0 2026-03-09T15:00:54.496 INFO:tasks.workunit.client.1.vm09.stdout:6/579: write d6/df/d23/f78 [923241,99701] 0 2026-03-09T15:00:54.500 INFO:tasks.workunit.client.1.vm09.stdout:5/640: rename d2/l49 to d2/d37/d3c/d36/d45/d5c/ddc/le7 0 2026-03-09T15:00:54.505 INFO:tasks.workunit.client.1.vm09.stdout:4/657: mkdir db/d19/d52/d76/d3b/dd3 0 2026-03-09T15:00:54.505 INFO:tasks.workunit.client.1.vm09.stdout:2/648: mknod df/d20/d29/da9/cc9 0 2026-03-09T15:00:54.508 INFO:tasks.workunit.client.1.vm09.stdout:0/705: write da/dc/d1c/d3c/d44/f71 [2528126,9087] 0 2026-03-09T15:00:54.508 INFO:tasks.workunit.client.1.vm09.stdout:4/658: dread db/d19/d52/f6d [4194304,4194304] 0 2026-03-09T15:00:54.510 INFO:tasks.workunit.client.1.vm09.stdout:4/659: write db/d12/f1b [757618,41399] 0 2026-03-09T15:00:54.510 INFO:tasks.workunit.client.1.vm09.stdout:8/642: rmdir df/d2d/d42/d79/d9b 0 2026-03-09T15:00:54.515 INFO:tasks.workunit.client.1.vm09.stdout:6/580: write 
d6/d20/d2a/f98 [774994,43100] 0 2026-03-09T15:00:54.520 INFO:tasks.workunit.client.1.vm09.stdout:3/687: dwrite d3/d3a/d2b/d39/d48/f5f [0,4194304] 0 2026-03-09T15:00:54.530 INFO:tasks.workunit.client.1.vm09.stdout:5/641: symlink d2/daa/le8 0 2026-03-09T15:00:54.533 INFO:tasks.workunit.client.1.vm09.stdout:5/642: stat d2/d37/d3c/fac 0 2026-03-09T15:00:54.533 INFO:tasks.workunit.client.1.vm09.stdout:5/643: dread d2/d37/f6c [0,4194304] 0 2026-03-09T15:00:54.537 INFO:tasks.workunit.client.1.vm09.stdout:0/706: symlink da/dc/dcb/dd4/le7 0 2026-03-09T15:00:54.538 INFO:tasks.workunit.client.1.vm09.stdout:0/707: write da/fdc [1037705,9900] 0 2026-03-09T15:00:54.550 INFO:tasks.workunit.client.1.vm09.stdout:3/688: read d3/d60/f6e [14842,14180] 0 2026-03-09T15:00:54.552 INFO:tasks.workunit.client.1.vm09.stdout:5/644: chown d2/db1/db2/lde 691657 1 2026-03-09T15:00:54.554 INFO:tasks.workunit.client.1.vm09.stdout:0/708: mknod da/dc/d1c/d46/d63/d86/ce8 0 2026-03-09T15:00:54.556 INFO:tasks.workunit.client.1.vm09.stdout:2/649: link df/d1f/f38 df/d20/d29/da9/fca 0 2026-03-09T15:00:54.559 INFO:tasks.workunit.client.1.vm09.stdout:2/650: symlink df/d93/lcb 0 2026-03-09T15:00:54.560 INFO:tasks.workunit.client.1.vm09.stdout:5/645: truncate d2/d37/d3c/d36/d45/f6e 2635955 0 2026-03-09T15:00:54.561 INFO:tasks.workunit.client.1.vm09.stdout:0/709: truncate da/dc/d1c/d3c/d78/f88 2171122 0 2026-03-09T15:00:54.562 INFO:tasks.workunit.client.1.vm09.stdout:0/710: write da/dc/d1c/d3c/d78/d7a/fb2 [1636269,76597] 0 2026-03-09T15:00:54.570 INFO:tasks.workunit.client.1.vm09.stdout:0/711: creat da/dc/d8c/fe9 x:0 0 0 2026-03-09T15:00:54.571 INFO:tasks.workunit.client.1.vm09.stdout:5/646: dread d2/f4f [0,4194304] 0 2026-03-09T15:00:54.572 INFO:tasks.workunit.client.1.vm09.stdout:0/712: truncate da/dc/d84/fd5 5112433 0 2026-03-09T15:00:54.575 INFO:tasks.workunit.client.1.vm09.stdout:0/713: creat da/dc/d61/fea x:0 0 0 2026-03-09T15:00:54.579 INFO:tasks.workunit.client.1.vm09.stdout:0/714: mknod 
da/dc/d1c/d3c/d78/d7a/d9c/ceb 0 2026-03-09T15:00:54.580 INFO:tasks.workunit.client.1.vm09.stdout:5/647: dwrite d2/d37/d3c/d36/d4c/d51/fd0 [0,4194304] 0 2026-03-09T15:00:54.608 INFO:tasks.workunit.client.1.vm09.stdout:4/660: mknod db/d19/d23/d44/d7c/cd4 0 2026-03-09T15:00:54.627 INFO:tasks.workunit.client.1.vm09.stdout:8/643: rmdir df/d5b/d65/dae 39 2026-03-09T15:00:54.679 INFO:tasks.workunit.client.1.vm09.stdout:7/599: dwrite d3/d3d/f51 [0,4194304] 0 2026-03-09T15:00:54.692 INFO:tasks.workunit.client.1.vm09.stdout:1/529: write d8/d10/f12 [950378,9169] 0 2026-03-09T15:00:54.694 INFO:tasks.workunit.client.1.vm09.stdout:1/530: symlink d8/d10/d24/laa 0 2026-03-09T15:00:54.695 INFO:tasks.workunit.client.1.vm09.stdout:1/531: truncate d8/d90/f99 220743 0 2026-03-09T15:00:54.697 INFO:tasks.workunit.client.1.vm09.stdout:1/532: getdents d8/d50 0 2026-03-09T15:00:54.698 INFO:tasks.workunit.client.1.vm09.stdout:1/533: chown d8/f57 16 1 2026-03-09T15:00:54.698 INFO:tasks.workunit.client.1.vm09.stdout:1/534: rmdir d8/d10/d24/d45/d5f 39 2026-03-09T15:00:54.700 INFO:tasks.workunit.client.1.vm09.stdout:1/535: chown d8/f59 69 1 2026-03-09T15:00:54.714 INFO:tasks.workunit.client.1.vm09.stdout:1/536: dread d8/f6b [0,4194304] 0 2026-03-09T15:00:54.718 INFO:tasks.workunit.client.1.vm09.stdout:1/537: dwrite d8/d10/d24/d45/f6c [0,4194304] 0 2026-03-09T15:00:54.730 INFO:tasks.workunit.client.1.vm09.stdout:6/581: dwrite d6/f7f [4194304,4194304] 0 2026-03-09T15:00:54.734 INFO:tasks.workunit.client.1.vm09.stdout:6/582: read d6/d20/d44/f4a [1556341,12339] 0 2026-03-09T15:00:54.740 INFO:tasks.workunit.client.1.vm09.stdout:6/583: mkdir d6/d20/d38/d56/d65/d68/d86/dc0 0 2026-03-09T15:00:54.781 INFO:tasks.workunit.client.1.vm09.stdout:9/572: creat d1/fbf x:0 0 0 2026-03-09T15:00:54.781 INFO:tasks.workunit.client.1.vm09.stdout:9/573: write d1/d7/d1e/d2b/d40/f4d [2984182,10504] 0 2026-03-09T15:00:54.792 INFO:tasks.workunit.client.1.vm09.stdout:9/574: dread d1/d4f/fa3 [0,4194304] 0 
2026-03-09T15:00:54.793 INFO:tasks.workunit.client.1.vm09.stdout:9/575: write d1/d7/d1e/d2b/f5f [2831737,56508] 0 2026-03-09T15:00:54.796 INFO:tasks.workunit.client.1.vm09.stdout:9/576: mkdir d1/d4f/d8f/dc0 0 2026-03-09T15:00:54.798 INFO:tasks.workunit.client.1.vm09.stdout:9/577: mknod d1/d7/d9f/daa/cc1 0 2026-03-09T15:00:54.798 INFO:tasks.workunit.client.1.vm09.stdout:9/578: fdatasync d1/d4f/f89 0 2026-03-09T15:00:54.811 INFO:tasks.workunit.client.1.vm09.stdout:1/538: symlink d8/d10/lab 0 2026-03-09T15:00:54.812 INFO:tasks.workunit.client.1.vm09.stdout:1/539: write d8/f6b [4392809,102302] 0 2026-03-09T15:00:54.814 INFO:tasks.workunit.client.1.vm09.stdout:1/540: symlink d8/d50/d39/d95/d56/lac 0 2026-03-09T15:00:54.816 INFO:tasks.workunit.client.1.vm09.stdout:1/541: symlink d8/d10/d24/d48/d9b/d78/lad 0 2026-03-09T15:00:54.818 INFO:tasks.workunit.client.1.vm09.stdout:1/542: dread d8/d50/d39/d95/f61 [0,4194304] 0 2026-03-09T15:00:54.833 INFO:tasks.workunit.client.1.vm09.stdout:9/579: sync 2026-03-09T15:00:54.843 INFO:tasks.workunit.client.1.vm09.stdout:9/580: dwrite d1/d7/d1e/d2b/d2e/d56/d6d/fb1 [0,4194304] 0 2026-03-09T15:00:54.847 INFO:tasks.workunit.client.1.vm09.stdout:9/581: unlink d1/d4f/d52/f8b 0 2026-03-09T15:00:54.878 INFO:tasks.workunit.client.1.vm09.stdout:0/715: truncate da/dc/f28 1765074 0 2026-03-09T15:00:54.878 INFO:tasks.workunit.client.1.vm09.stdout:0/716: fsync da/dc/d10/f2d 0 2026-03-09T15:00:54.880 INFO:tasks.workunit.client.1.vm09.stdout:0/717: fdatasync da/dc/d22/f47 0 2026-03-09T15:00:54.881 INFO:tasks.workunit.client.1.vm09.stdout:0/718: chown da/d30/f6f 7798 1 2026-03-09T15:00:54.881 INFO:tasks.workunit.client.1.vm09.stdout:0/719: write da/dc/d84/fd5 [1715007,15449] 0 2026-03-09T15:00:54.882 INFO:tasks.workunit.client.1.vm09.stdout:0/720: chown da/dc/d1c/l96 241579 1 2026-03-09T15:00:54.883 INFO:tasks.workunit.client.1.vm09.stdout:0/721: chown da/dc/d1c/lbd 8116 1 2026-03-09T15:00:54.884 INFO:tasks.workunit.client.1.vm09.stdout:5/648: truncate 
d2/d37/d3c/d36/d45/d5c/f91 574564 0 2026-03-09T15:00:54.885 INFO:tasks.workunit.client.1.vm09.stdout:5/649: mknod d2/d37/d53/d86/ce9 0 2026-03-09T15:00:54.886 INFO:tasks.workunit.client.1.vm09.stdout:5/650: chown d2/d37/f75 83277 1 2026-03-09T15:00:54.888 INFO:tasks.workunit.client.1.vm09.stdout:0/722: read da/d30/f6f [667426,1752] 0 2026-03-09T15:00:54.914 INFO:tasks.workunit.client.1.vm09.stdout:6/584: symlink d6/d20/d38/d56/d65/lc1 0 2026-03-09T15:00:54.915 INFO:tasks.workunit.client.1.vm09.stdout:6/585: stat d6/d20/d24/d7e 0 2026-03-09T15:00:54.918 INFO:tasks.workunit.client.1.vm09.stdout:2/651: rename df/f4a to df/d1f/d47/d84/db7/dc3/fcc 0 2026-03-09T15:00:54.919 INFO:tasks.workunit.client.1.vm09.stdout:2/652: chown df/d20/d2e/f48 144268 1 2026-03-09T15:00:54.920 INFO:tasks.workunit.client.1.vm09.stdout:3/689: mkdir d3/d3a/d2b/dee 0 2026-03-09T15:00:54.921 INFO:tasks.workunit.client.1.vm09.stdout:8/644: rename df/d2d/d42/d79/lab to df/d2d/d90/lbb 0 2026-03-09T15:00:54.922 INFO:tasks.workunit.client.1.vm09.stdout:6/586: mknod d6/db/d8b/db0/dba/cc2 0 2026-03-09T15:00:54.923 INFO:tasks.workunit.client.1.vm09.stdout:8/645: write df/d24/d99/db6/f59 [395543,102022] 0 2026-03-09T15:00:54.925 INFO:tasks.workunit.client.1.vm09.stdout:8/646: write df/d5b/d65/d1d/f41 [3620593,19935] 0 2026-03-09T15:00:54.925 INFO:tasks.workunit.client.1.vm09.stdout:3/690: unlink d3/d9a/de3/c95 0 2026-03-09T15:00:54.927 INFO:tasks.workunit.client.1.vm09.stdout:5/651: rename d2/l2c to d2/d37/d3c/d36/d4c/d89/lea 0 2026-03-09T15:00:54.927 INFO:tasks.workunit.client.1.vm09.stdout:3/691: dread - d3/d9a/de3/dc4/fec zero size 2026-03-09T15:00:54.928 INFO:tasks.workunit.client.1.vm09.stdout:5/652: fsync d2/d37/d67/d95/f99 0 2026-03-09T15:00:54.930 INFO:tasks.workunit.client.1.vm09.stdout:8/647: read df/d5b/d65/d1d/f68 [3079798,106958] 0 2026-03-09T15:00:54.932 INFO:tasks.workunit.client.1.vm09.stdout:5/653: write d2/d37/d3c/dbf/fd4 [99989,128820] 0 2026-03-09T15:00:54.932 
INFO:tasks.workunit.client.1.vm09.stdout:3/692: truncate d3/d3a/d2b/d31/d4a/fd2 334390 0 2026-03-09T15:00:54.937 INFO:tasks.workunit.client.1.vm09.stdout:2/653: truncate df/d20/d29/da9/fca 3835979 0 2026-03-09T15:00:54.939 INFO:tasks.workunit.client.1.vm09.stdout:5/654: creat d2/d37/d67/feb x:0 0 0 2026-03-09T15:00:54.940 INFO:tasks.workunit.client.1.vm09.stdout:6/587: dwrite d6/d20/d38/d4e/d55/fac [0,4194304] 0 2026-03-09T15:00:54.952 INFO:tasks.workunit.client.1.vm09.stdout:6/588: chown d6/df/l30 1721652 1 2026-03-09T15:00:54.953 INFO:tasks.workunit.client.1.vm09.stdout:2/654: creat df/d58/d74/fcd x:0 0 0 2026-03-09T15:00:54.957 INFO:tasks.workunit.client.1.vm09.stdout:2/655: symlink df/d1f/d47/d5d/lce 0 2026-03-09T15:00:54.958 INFO:tasks.workunit.client.1.vm09.stdout:6/589: fsync d6/d20/f27 0 2026-03-09T15:00:54.960 INFO:tasks.workunit.client.1.vm09.stdout:2/656: unlink df/f17 0 2026-03-09T15:00:54.964 INFO:tasks.workunit.client.1.vm09.stdout:4/661: unlink db/d12/d16/lc2 0 2026-03-09T15:00:54.965 INFO:tasks.workunit.client.1.vm09.stdout:4/662: truncate db/d19/d23/d44/d7c/d7d/fb9 953180 0 2026-03-09T15:00:54.968 INFO:tasks.workunit.client.1.vm09.stdout:6/590: dread d6/df/f16 [0,4194304] 0 2026-03-09T15:00:54.973 INFO:tasks.workunit.client.1.vm09.stdout:2/657: truncate df/d1f/f55 1159838 0 2026-03-09T15:00:54.975 INFO:tasks.workunit.client.1.vm09.stdout:4/663: creat db/d19/d23/d44/d84/fd5 x:0 0 0 2026-03-09T15:00:54.979 INFO:tasks.workunit.client.1.vm09.stdout:4/664: mknod db/d19/d52/d76/cd6 0 2026-03-09T15:00:54.980 INFO:tasks.workunit.client.1.vm09.stdout:6/591: sync 2026-03-09T15:00:54.984 INFO:tasks.workunit.client.1.vm09.stdout:1/543: getdents d8/d10/d24 0 2026-03-09T15:00:54.986 INFO:tasks.workunit.client.1.vm09.stdout:6/592: dread - d6/d20/d2a/d3d/d46/f84 zero size 2026-03-09T15:00:54.989 INFO:tasks.workunit.client.1.vm09.stdout:1/544: write d8/d50/d39/d95/d56/f9f [619217,48325] 0 2026-03-09T15:00:54.989 INFO:tasks.workunit.client.1.vm09.stdout:7/600: 
dwrite d3/d1d/d2d/f57 [0,4194304] 0 2026-03-09T15:00:54.992 INFO:tasks.workunit.client.1.vm09.stdout:1/545: dread - d8/d10/d24/d45/f92 zero size 2026-03-09T15:00:54.992 INFO:tasks.workunit.client.1.vm09.stdout:7/601: chown d3/db/d25/l2c 283844 1 2026-03-09T15:00:54.995 INFO:tasks.workunit.client.1.vm09.stdout:7/602: write d3/d1d/fab [1560539,51800] 0 2026-03-09T15:00:54.995 INFO:tasks.workunit.client.1.vm09.stdout:7/603: chown d3/db/c93 61339015 1 2026-03-09T15:00:54.996 INFO:tasks.workunit.client.1.vm09.stdout:4/665: dwrite db/d12/d16/f2a [0,4194304] 0 2026-03-09T15:00:55.004 INFO:tasks.workunit.client.1.vm09.stdout:6/593: dread d6/d20/f70 [0,4194304] 0 2026-03-09T15:00:55.015 INFO:tasks.workunit.client.1.vm09.stdout:6/594: dread d6/d20/d38/d4e/d55/f77 [0,4194304] 0 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:6/595: write d6/d20/d38/d56/d65/f7b [4491520,103668] 0 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:1/546: creat d8/d10/d24/d48/d9b/d68/fae x:0 0 0 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:1/547: stat d8/d10/d24/d48/c7a 0 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:1/548: truncate d8/d10/f69 763925 0 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:8/648: rename df/f34 to df/d24/d99/db6/d60/db7/fbc 0 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:1/549: chown d8/d10/d24/c66 3 1 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:8/649: stat df/d38/d64/f50 0 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:6/596: creat d6/d20/d38/d56/d65/d68/d6f/fc3 x:0 0 0 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:1/550: fsync d8/f17 0 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:4/666: symlink db/d12/d9e/dd0/ld7 0 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:3/693: rename d3/d9a/fe2 to d3/d3a/d2b/d39/d48/da0/fef 0 2026-03-09T15:00:55.025 
INFO:tasks.workunit.client.1.vm09.stdout:1/551: dread - d8/d50/d39/f96 zero size 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:3/694: truncate d3/d3a/fb8 37154 0 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:8/650: creat df/d2d/d42/fbd x:0 0 0 2026-03-09T15:00:55.025 INFO:tasks.workunit.client.1.vm09.stdout:5/655: rename d2/d37/d3c/d36/d45/dae/dc3/cc1 to d2/d37/d53/d86/d88/cec 0 2026-03-09T15:00:55.028 INFO:tasks.workunit.client.1.vm09.stdout:4/667: symlink db/d19/d23/d44/d7c/d7d/d97/da3/ld8 0 2026-03-09T15:00:55.032 INFO:tasks.workunit.client.1.vm09.stdout:4/668: creat db/d12/d9e/fd9 x:0 0 0 2026-03-09T15:00:55.035 INFO:tasks.workunit.client.1.vm09.stdout:1/552: dread d8/f57 [4194304,4194304] 0 2026-03-09T15:00:55.037 INFO:tasks.workunit.client.1.vm09.stdout:4/669: dread db/d12/f5a [4194304,4194304] 0 2026-03-09T15:00:55.043 INFO:tasks.workunit.client.1.vm09.stdout:3/695: dread d3/d5b/d79/d9d/faf [0,4194304] 0 2026-03-09T15:00:55.050 INFO:tasks.workunit.client.1.vm09.stdout:1/553: chown d8/d10/d73/c4d 0 1 2026-03-09T15:00:55.052 INFO:tasks.workunit.client.1.vm09.stdout:4/670: dread db/d19/d23/d44/d7c/d7d/d97/da3/fc8 [0,4194304] 0 2026-03-09T15:00:55.052 INFO:tasks.workunit.client.1.vm09.stdout:4/671: fdatasync db/d19/d23/d71/f43 0 2026-03-09T15:00:55.060 INFO:tasks.workunit.client.1.vm09.stdout:9/582: dwrite d1/d7/d1e/f2a [0,4194304] 0 2026-03-09T15:00:55.060 INFO:tasks.workunit.client.1.vm09.stdout:3/696: stat d3/d3a/d2b/d31/d4a/d62/f16 0 2026-03-09T15:00:55.061 INFO:tasks.workunit.client.1.vm09.stdout:3/697: write d3/d3a/d2b/d7b/f8c [3471179,81200] 0 2026-03-09T15:00:55.066 INFO:tasks.workunit.client.1.vm09.stdout:9/583: stat d1/d7/d1e/d2b/d2e/d56/l65 0 2026-03-09T15:00:55.073 INFO:tasks.workunit.client.1.vm09.stdout:4/672: read db/d19/d52/d76/d3b/f48 [255222,71129] 0 2026-03-09T15:00:55.074 INFO:tasks.workunit.client.1.vm09.stdout:5/656: dread d2/f15 [0,4194304] 0 2026-03-09T15:00:55.075 
INFO:tasks.workunit.client.1.vm09.stdout:0/723: dwrite da/dc/d22/f3b [0,4194304] 0 2026-03-09T15:00:55.075 INFO:tasks.workunit.client.1.vm09.stdout:9/584: readlink d1/d4f/d8f/d91/l9a 0 2026-03-09T15:00:55.079 INFO:tasks.workunit.client.1.vm09.stdout:5/657: readlink d2/d37/l78 0 2026-03-09T15:00:55.082 INFO:tasks.workunit.client.1.vm09.stdout:6/597: rename d6/db/d8b/db0 to d6/d20/d2a/dc4 0 2026-03-09T15:00:55.082 INFO:tasks.workunit.client.1.vm09.stdout:6/598: chown d6/d20/d38/d4e/f75 64259 1 2026-03-09T15:00:55.083 INFO:tasks.workunit.client.1.vm09.stdout:3/698: link d3/d3a/d2b/d31/f45 d3/d9a/de3/dc4/ff0 0 2026-03-09T15:00:55.084 INFO:tasks.workunit.client.1.vm09.stdout:3/699: dread - d3/d9a/de3/dc4/fec zero size 2026-03-09T15:00:55.098 INFO:tasks.workunit.client.1.vm09.stdout:0/724: dwrite da/dc/d1c/d46/fd8 [0,4194304] 0 2026-03-09T15:00:55.098 INFO:tasks.workunit.client.1.vm09.stdout:7/604: dwrite d3/d1d/f30 [0,4194304] 0 2026-03-09T15:00:55.105 INFO:tasks.workunit.client.1.vm09.stdout:7/605: write d3/f97 [471463,10421] 0 2026-03-09T15:00:55.107 INFO:tasks.workunit.client.1.vm09.stdout:8/651: dwrite df/d5c/f72 [0,4194304] 0 2026-03-09T15:00:55.109 INFO:tasks.workunit.client.1.vm09.stdout:2/658: dwrite df/f14 [0,4194304] 0 2026-03-09T15:00:55.109 INFO:tasks.workunit.client.1.vm09.stdout:7/606: read d3/fd [347739,35255] 0 2026-03-09T15:00:55.116 INFO:tasks.workunit.client.1.vm09.stdout:7/607: read d3/db/d15/d5f/f36 [4118292,58715] 0 2026-03-09T15:00:55.120 INFO:tasks.workunit.client.1.vm09.stdout:9/585: link d1/d4f/la9 d1/d7/db8/lc2 0 2026-03-09T15:00:55.123 INFO:tasks.workunit.client.1.vm09.stdout:6/599: link d6/d20/d38/d4e/d55/l9e d6/d20/d44/d8f/lc5 0 2026-03-09T15:00:55.126 INFO:tasks.workunit.client.1.vm09.stdout:8/652: mkdir df/d24/dbe 0 2026-03-09T15:00:55.141 INFO:tasks.workunit.client.1.vm09.stdout:7/608: rename d3/db/d15/c3b to d3/d61/cad 0 2026-03-09T15:00:55.141 INFO:tasks.workunit.client.1.vm09.stdout:6/600: write d6/d20/d24/f49 [1830296,79991] 0 
2026-03-09T15:00:55.141 INFO:tasks.workunit.client.1.vm09.stdout:8/653: creat df/d38/d64/fbf x:0 0 0 2026-03-09T15:00:55.141 INFO:tasks.workunit.client.1.vm09.stdout:1/554: write d8/d10/f5c [1022706,6430] 0 2026-03-09T15:00:55.142 INFO:tasks.workunit.client.1.vm09.stdout:8/654: mkdir df/d2d/d42/d70/dc0 0 2026-03-09T15:00:55.143 INFO:tasks.workunit.client.1.vm09.stdout:1/555: chown d8/d50/d39/d95/d56/c71 94082297 1 2026-03-09T15:00:55.145 INFO:tasks.workunit.client.1.vm09.stdout:9/586: creat d1/d4f/fc3 x:0 0 0 2026-03-09T15:00:55.145 INFO:tasks.workunit.client.1.vm09.stdout:0/725: getdents da/dc/d92/d9e 0 2026-03-09T15:00:55.148 INFO:tasks.workunit.client.1.vm09.stdout:6/601: dwrite d6/d20/d38/d4e/d55/f77 [4194304,4194304] 0 2026-03-09T15:00:55.155 INFO:tasks.workunit.client.1.vm09.stdout:6/602: fsync d6/d20/d38/d56/d65/d68/d6f/fc3 0 2026-03-09T15:00:55.165 INFO:tasks.workunit.client.1.vm09.stdout:8/655: creat df/d2d/d42/d70/fc1 x:0 0 0 2026-03-09T15:00:55.166 INFO:tasks.workunit.client.1.vm09.stdout:7/609: rename d3/d1d/l38 to d3/db/d15/d5f/lae 0 2026-03-09T15:00:55.166 INFO:tasks.workunit.client.1.vm09.stdout:4/673: dwrite db/d12/da1/fa6 [0,4194304] 0 2026-03-09T15:00:55.166 INFO:tasks.workunit.client.1.vm09.stdout:9/587: rename d1/d7/d1e/d2b/d8d to d1/d7/d1e/d2b/d8d/dc4 22 2026-03-09T15:00:55.169 INFO:tasks.workunit.client.1.vm09.stdout:8/656: dwrite df/d38/d64/fa7 [0,4194304] 0 2026-03-09T15:00:55.170 INFO:tasks.workunit.client.1.vm09.stdout:1/556: dread d8/d10/d24/f2a [0,4194304] 0 2026-03-09T15:00:55.181 INFO:tasks.workunit.client.1.vm09.stdout:6/603: symlink d6/db/lc6 0 2026-03-09T15:00:55.181 INFO:tasks.workunit.client.1.vm09.stdout:8/657: write df/d24/f86 [2884594,353] 0 2026-03-09T15:00:55.182 INFO:tasks.workunit.client.1.vm09.stdout:0/726: creat da/d30/fec x:0 0 0 2026-03-09T15:00:55.187 INFO:tasks.workunit.client.1.vm09.stdout:7/610: creat d3/db/d25/d5c/d75/faf x:0 0 0 2026-03-09T15:00:55.197 INFO:tasks.workunit.client.1.vm09.stdout:6/604: dread 
d6/db/fb3 [4194304,4194304] 0 2026-03-09T15:00:55.197 INFO:tasks.workunit.client.1.vm09.stdout:3/700: dread d3/d74/fb4 [0,4194304] 0 2026-03-09T15:00:55.198 INFO:tasks.workunit.client.1.vm09.stdout:6/605: write d6/d20/d38/d56/d65/d68/d6f/fbb [154408,9960] 0 2026-03-09T15:00:55.198 INFO:tasks.workunit.client.1.vm09.stdout:3/701: write d3/fe6 [219096,65059] 0 2026-03-09T15:00:55.199 INFO:tasks.workunit.client.1.vm09.stdout:8/658: unlink df/d5c/f78 0 2026-03-09T15:00:55.203 INFO:tasks.workunit.client.1.vm09.stdout:5/658: dwrite d2/d37/d67/fc0 [0,4194304] 0 2026-03-09T15:00:55.205 INFO:tasks.workunit.client.1.vm09.stdout:4/674: mknod db/d19/d23/d71/d53/dcf/cda 0 2026-03-09T15:00:55.210 INFO:tasks.workunit.client.1.vm09.stdout:0/727: rename da/dc/d1c/d3c/d78/d7a/d9c/fce to da/dc/d1c/d3c/d78/d7a/fed 0 2026-03-09T15:00:55.213 INFO:tasks.workunit.client.1.vm09.stdout:9/588: creat d1/d4f/d8f/fc5 x:0 0 0 2026-03-09T15:00:55.215 INFO:tasks.workunit.client.1.vm09.stdout:7/611: dwrite d3/d3d/d9b/fa2 [0,4194304] 0 2026-03-09T15:00:55.216 INFO:tasks.workunit.client.1.vm09.stdout:5/659: mknod d2/db1/db2/ced 0 2026-03-09T15:00:55.216 INFO:tasks.workunit.client.1.vm09.stdout:5/660: chown d2/d37/d3c/d36/d45/dae/dd3 48072 1 2026-03-09T15:00:55.219 INFO:tasks.workunit.client.1.vm09.stdout:4/675: symlink db/d12/d16/ldb 0 2026-03-09T15:00:55.219 INFO:tasks.workunit.client.1.vm09.stdout:6/606: creat d6/d20/d24/da5/daf/fc7 x:0 0 0 2026-03-09T15:00:55.220 INFO:tasks.workunit.client.1.vm09.stdout:0/728: rename da/dc/d61/c74 to da/dc/d22/d76/cee 0 2026-03-09T15:00:55.221 INFO:tasks.workunit.client.1.vm09.stdout:0/729: chown da 14565972 1 2026-03-09T15:00:55.222 INFO:tasks.workunit.client.1.vm09.stdout:7/612: creat d3/d1d/d65/fb0 x:0 0 0 2026-03-09T15:00:55.242 INFO:tasks.workunit.client.1.vm09.stdout:7/613: write d3/d28/f69 [1821673,94523] 0 2026-03-09T15:00:55.242 INFO:tasks.workunit.client.1.vm09.stdout:5/661: unlink d2/d37/d3c/c44 0 2026-03-09T15:00:55.242 
INFO:tasks.workunit.client.1.vm09.stdout:5/662: fsync d2/d37/d3c/d36/d45/fa0 0 2026-03-09T15:00:55.242 INFO:tasks.workunit.client.1.vm09.stdout:6/607: rename d6/d20/fb8 to d6/d20/d24/da5/fc8 0 2026-03-09T15:00:55.242 INFO:tasks.workunit.client.1.vm09.stdout:9/589: symlink d1/d4f/lc6 0 2026-03-09T15:00:55.242 INFO:tasks.workunit.client.1.vm09.stdout:0/730: mknod da/dc/d1c/d46/d5b/cef 0 2026-03-09T15:00:55.242 INFO:tasks.workunit.client.1.vm09.stdout:5/663: symlink d2/d37/d3c/d36/d4c/d89/lee 0 2026-03-09T15:00:55.242 INFO:tasks.workunit.client.1.vm09.stdout:4/676: rename db/d19/d52/d76/d3b/l8b to db/d19/d23/d44/dd2/ldc 0 2026-03-09T15:00:55.242 INFO:tasks.workunit.client.1.vm09.stdout:6/608: read - d6/d20/d38/d56/f8c zero size 2026-03-09T15:00:55.242 INFO:tasks.workunit.client.1.vm09.stdout:6/609: stat d6/d20/d38/d56/d65/l9a 0 2026-03-09T15:00:55.242 INFO:tasks.workunit.client.1.vm09.stdout:8/659: getdents df/d24/d99/db6/d60 0 2026-03-09T15:00:55.243 INFO:tasks.workunit.client.1.vm09.stdout:0/731: write da/dc/d10/f4a [8675730,59039] 0 2026-03-09T15:00:55.245 INFO:tasks.workunit.client.1.vm09.stdout:4/677: symlink db/d19/d23/d71/ldd 0 2026-03-09T15:00:55.245 INFO:tasks.workunit.client.1.vm09.stdout:8/660: readlink df/d5b/d65/d1d/la8 0 2026-03-09T15:00:55.246 INFO:tasks.workunit.client.1.vm09.stdout:3/702: dread d3/d74/f88 [0,4194304] 0 2026-03-09T15:00:55.246 INFO:tasks.workunit.client.1.vm09.stdout:9/590: mknod d1/d7/d9f/cc7 0 2026-03-09T15:00:55.246 INFO:tasks.workunit.client.1.vm09.stdout:6/610: mknod d6/d20/d24/da5/cc9 0 2026-03-09T15:00:55.248 INFO:tasks.workunit.client.1.vm09.stdout:6/611: truncate d6/d20/d38/d56/d65/d68/d6f/fbb 502623 0 2026-03-09T15:00:55.248 INFO:tasks.workunit.client.1.vm09.stdout:7/614: rename d3/d1d/l24 to d3/d3d/lb1 0 2026-03-09T15:00:55.251 INFO:tasks.workunit.client.1.vm09.stdout:0/732: mkdir da/dc/d22/df0 0 2026-03-09T15:00:55.259 INFO:tasks.workunit.client.1.vm09.stdout:3/703: mknod d3/d9a/de3/cf1 0 2026-03-09T15:00:55.264 
INFO:tasks.workunit.client.1.vm09.stdout:4/678: rename db/d19/d52/d76/f3e to db/d19/dcd/fde 0 2026-03-09T15:00:55.265 INFO:tasks.workunit.client.1.vm09.stdout:4/679: readlink l6 0 2026-03-09T15:00:55.266 INFO:tasks.workunit.client.1.vm09.stdout:4/680: write db/d19/d23/d44/d7c/d7d/d97/da3/fbb [647419,118197] 0 2026-03-09T15:00:55.267 INFO:tasks.workunit.client.1.vm09.stdout:7/615: mkdir d3/db/d46/db2 0 2026-03-09T15:00:55.271 INFO:tasks.workunit.client.1.vm09.stdout:8/661: mkdir df/d2d/d42/d70/dc0/dc2 0 2026-03-09T15:00:55.279 INFO:tasks.workunit.client.1.vm09.stdout:6/612: mknod d6/d20/d2a/d3b/d91/cca 0 2026-03-09T15:00:55.279 INFO:tasks.workunit.client.1.vm09.stdout:6/613: chown d6/d20/d38/d56/d65/d68/d6f/cbe 14 1 2026-03-09T15:00:55.279 INFO:tasks.workunit.client.1.vm09.stdout:4/681: mkdir db/d19/d23/d71/ddf 0 2026-03-09T15:00:55.279 INFO:tasks.workunit.client.1.vm09.stdout:5/664: link d2/d37/d67/d95/db5/cdb d2/d37/d3c/cef 0 2026-03-09T15:00:55.279 INFO:tasks.workunit.client.1.vm09.stdout:4/682: chown db/d12/fc1 542 1 2026-03-09T15:00:55.280 INFO:tasks.workunit.client.1.vm09.stdout:5/665: truncate d2/d37/d3c/d36/d4c/d51/fce 134495 0 2026-03-09T15:00:55.285 INFO:tasks.workunit.client.1.vm09.stdout:3/704: creat d3/d3a/d2b/d31/d4a/ff2 x:0 0 0 2026-03-09T15:00:55.287 INFO:tasks.workunit.client.1.vm09.stdout:5/666: chown d2/d37/d3c/d36/d4c/d51/d96/c8 1 1 2026-03-09T15:00:55.289 INFO:tasks.workunit.client.1.vm09.stdout:7/616: link d3/db/d25/d7d/f8c d3/d28/fb3 0 2026-03-09T15:00:55.290 INFO:tasks.workunit.client.1.vm09.stdout:2/659: stat df/d1f/f38 0 2026-03-09T15:00:55.295 INFO:tasks.workunit.client.1.vm09.stdout:7/617: chown d3/fd 1 1 2026-03-09T15:00:55.298 INFO:tasks.workunit.client.1.vm09.stdout:0/733: creat da/d30/d36/ff1 x:0 0 0 2026-03-09T15:00:55.299 INFO:tasks.workunit.client.1.vm09.stdout:9/591: sync 2026-03-09T15:00:55.300 INFO:tasks.workunit.client.1.vm09.stdout:4/683: sync 2026-03-09T15:00:55.300 INFO:tasks.workunit.client.1.vm09.stdout:0/734: chown 
da/dc/d61/f66 64828 1 2026-03-09T15:00:55.306 INFO:tasks.workunit.client.1.vm09.stdout:2/660: mkdir df/d93/da3/dcf 0 2026-03-09T15:00:55.306 INFO:tasks.workunit.client.1.vm09.stdout:2/661: write df/d20/d29/fc0 [70924,56462] 0 2026-03-09T15:00:55.306 INFO:tasks.workunit.client.1.vm09.stdout:4/684: sync 2026-03-09T15:00:55.310 INFO:tasks.workunit.client.1.vm09.stdout:0/735: truncate da/dc/d1c/d3c/d78/fb1 5796 0 2026-03-09T15:00:55.310 INFO:tasks.workunit.client.1.vm09.stdout:6/614: dread d6/db/f42 [0,4194304] 0 2026-03-09T15:00:55.312 INFO:tasks.workunit.client.1.vm09.stdout:2/662: rmdir df/d20/d29 39 2026-03-09T15:00:55.314 INFO:tasks.workunit.client.1.vm09.stdout:7/618: mkdir d3/db/d25/d5c/d75/db4 0 2026-03-09T15:00:55.314 INFO:tasks.workunit.client.1.vm09.stdout:5/667: link d2/d37/d3c/d36/d4c/d51/d96/l5 d2/d37/d53/d86/lf0 0 2026-03-09T15:00:55.315 INFO:tasks.workunit.client.1.vm09.stdout:7/619: chown d3/db/d15/l4a 1 1 2026-03-09T15:00:55.320 INFO:tasks.workunit.client.1.vm09.stdout:7/620: fdatasync f1 0 2026-03-09T15:00:55.320 INFO:tasks.workunit.client.1.vm09.stdout:4/685: creat db/d12/d9e/fe0 x:0 0 0 2026-03-09T15:00:55.323 INFO:tasks.workunit.client.1.vm09.stdout:3/705: read d3/d3a/d2b/d7b/f8d [147371,5996] 0 2026-03-09T15:00:55.326 INFO:tasks.workunit.client.1.vm09.stdout:2/663: mknod df/d1f/d6d/d8f/d5f/cd0 0 2026-03-09T15:00:55.327 INFO:tasks.workunit.client.1.vm09.stdout:6/615: symlink d6/d20/d38/d56/d65/d68/lcb 0 2026-03-09T15:00:55.328 INFO:tasks.workunit.client.1.vm09.stdout:1/557: write d8/d50/d5b/f6f [553482,87262] 0 2026-03-09T15:00:55.330 INFO:tasks.workunit.client.1.vm09.stdout:1/558: write d8/d10/f44 [1353825,34477] 0 2026-03-09T15:00:55.339 INFO:tasks.workunit.client.1.vm09.stdout:8/662: dread df/d5b/d65/d1d/f6e [4194304,4194304] 0 2026-03-09T15:00:55.339 INFO:tasks.workunit.client.1.vm09.stdout:4/686: fdatasync db/d19/d23/d71/d5f/f66 0 2026-03-09T15:00:55.343 INFO:tasks.workunit.client.1.vm09.stdout:7/621: write d3/db/d25/d7d/f8c [4771486,71550] 0 
2026-03-09T15:00:55.346 INFO:tasks.workunit.client.1.vm09.stdout:2/664: write df/d20/d29/f31 [2562752,66549] 0 2026-03-09T15:00:55.346 INFO:tasks.workunit.client.1.vm09.stdout:1/559: dread d8/d50/d5b/f6f [0,4194304] 0 2026-03-09T15:00:55.348 INFO:tasks.workunit.client.1.vm09.stdout:8/663: dwrite df/d2d/d42/f7c [0,4194304] 0 2026-03-09T15:00:55.349 INFO:tasks.workunit.client.1.vm09.stdout:2/665: dread - df/d20/fb8 zero size 2026-03-09T15:00:55.354 INFO:tasks.workunit.client.1.vm09.stdout:2/666: chown df/d1f/d6d/d8f/la6 9 1 2026-03-09T15:00:55.354 INFO:tasks.workunit.client.1.vm09.stdout:5/668: rmdir d2/d37/d3c 39 2026-03-09T15:00:55.354 INFO:tasks.workunit.client.1.vm09.stdout:0/736: dread da/dc/d1c/d3c/d78/fb1 [0,4194304] 0 2026-03-09T15:00:55.368 INFO:tasks.workunit.client.1.vm09.stdout:9/592: dwrite d1/d6e/f9b [0,4194304] 0 2026-03-09T15:00:55.369 INFO:tasks.workunit.client.1.vm09.stdout:3/706: dread d3/d3a/d2b/d31/d4a/d62/f8 [0,4194304] 0 2026-03-09T15:00:55.372 INFO:tasks.workunit.client.1.vm09.stdout:3/707: chown d3/d3a/d2b/d31/d9e/fb7 723717 1 2026-03-09T15:00:55.372 INFO:tasks.workunit.client.1.vm09.stdout:2/667: rename df/d6e to df/d1f/d6d/dd1 0 2026-03-09T15:00:55.374 INFO:tasks.workunit.client.1.vm09.stdout:0/737: mknod da/cf2 0 2026-03-09T15:00:55.375 INFO:tasks.workunit.client.1.vm09.stdout:6/616: mknod d6/d20/d38/d56/d65/d68/d86/dc0/ccc 0 2026-03-09T15:00:55.375 INFO:tasks.workunit.client.1.vm09.stdout:3/708: write d3/d3a/d2b/f64 [2194867,91498] 0 2026-03-09T15:00:55.375 INFO:tasks.workunit.client.1.vm09.stdout:4/687: mkdir db/d19/d23/d44/d7c/d7d/db7/de1 0 2026-03-09T15:00:55.376 INFO:tasks.workunit.client.1.vm09.stdout:7/622: mknod d3/db/cb5 0 2026-03-09T15:00:55.380 INFO:tasks.workunit.client.1.vm09.stdout:5/669: rename d2/f56 to d2/d37/d3c/d36/d4c/d89/ff1 0 2026-03-09T15:00:55.381 INFO:tasks.workunit.client.1.vm09.stdout:5/670: write d2/d37/d3c/f4e [5558365,121726] 0 2026-03-09T15:00:55.381 INFO:tasks.workunit.client.1.vm09.stdout:4/688: unlink 
db/d19/d23/d44/l7a 0 2026-03-09T15:00:55.382 INFO:tasks.workunit.client.1.vm09.stdout:5/671: truncate d2/f93 1452942 0 2026-03-09T15:00:55.383 INFO:tasks.workunit.client.1.vm09.stdout:2/668: dwrite df/d20/d29/fc0 [0,4194304] 0 2026-03-09T15:00:55.385 INFO:tasks.workunit.client.1.vm09.stdout:2/669: truncate df/d1f/d47/d84/db7/dc3/fc7 572291 0 2026-03-09T15:00:55.402 INFO:tasks.workunit.client.1.vm09.stdout:8/664: rmdir df/d24/dbe 0 2026-03-09T15:00:55.404 INFO:tasks.workunit.client.1.vm09.stdout:5/672: dread d2/d37/d53/f79 [0,4194304] 0 2026-03-09T15:00:55.406 INFO:tasks.workunit.client.1.vm09.stdout:3/709: dread - d3/d9a/de3/fa7 zero size 2026-03-09T15:00:55.407 INFO:tasks.workunit.client.1.vm09.stdout:6/617: rename d6/d20/d2a/l4b to d6/df/d23/d5b/lcd 0 2026-03-09T15:00:55.409 INFO:tasks.workunit.client.1.vm09.stdout:4/689: mkdir db/d12/d16/d5b/d78/d7f/de2 0 2026-03-09T15:00:55.411 INFO:tasks.workunit.client.1.vm09.stdout:4/690: chown db/d19/d52/d76/d3b/dd3 14 1 2026-03-09T15:00:55.425 INFO:tasks.workunit.client.1.vm09.stdout:0/738: chown da/dc/d22/d64/cc7 5910 1 2026-03-09T15:00:55.427 INFO:tasks.workunit.client.1.vm09.stdout:7/623: mknod d3/cb6 0 2026-03-09T15:00:55.428 INFO:tasks.workunit.client.1.vm09.stdout:1/560: link d8/d10/d24/d45/d5f/ca5 d8/d50/d5b/caf 0 2026-03-09T15:00:55.428 INFO:tasks.workunit.client.1.vm09.stdout:0/739: write da/dc/d1c/d46/fd8 [348903,78976] 0 2026-03-09T15:00:55.432 INFO:tasks.workunit.client.1.vm09.stdout:3/710: rename d3/d3a/d2b/d39/d48/l4b to d3/d3a/d2b/d7b/db6/lf3 0 2026-03-09T15:00:55.433 INFO:tasks.workunit.client.1.vm09.stdout:4/691: mkdir db/d12/d16/d5b/d78/de3 0 2026-03-09T15:00:55.436 INFO:tasks.workunit.client.1.vm09.stdout:9/593: getdents d1/d7/d1e/d2b 0 2026-03-09T15:00:55.437 INFO:tasks.workunit.client.1.vm09.stdout:9/594: chown d1/d7/d9f/fb7 6 1 2026-03-09T15:00:55.437 INFO:tasks.workunit.client.1.vm09.stdout:1/561: unlink d8/f6b 0 2026-03-09T15:00:55.439 INFO:tasks.workunit.client.1.vm09.stdout:1/562: fsync 
d8/d50/d39/d95/d72/d64/f82 0 2026-03-09T15:00:55.440 INFO:tasks.workunit.client.1.vm09.stdout:0/740: rename da/dc/d1c/d3c/d78/d7a/dbb/dc8 to da/dc/d22/d64/df3 0 2026-03-09T15:00:55.440 INFO:tasks.workunit.client.1.vm09.stdout:9/595: mkdir d1/d7/d1e/d2b/d8d/dc8 0 2026-03-09T15:00:55.442 INFO:tasks.workunit.client.1.vm09.stdout:4/692: symlink db/d19/d23/d44/le4 0 2026-03-09T15:00:55.445 INFO:tasks.workunit.client.1.vm09.stdout:1/563: chown d8/d50/d39/d95/d56/f9f 14 1 2026-03-09T15:00:55.445 INFO:tasks.workunit.client.1.vm09.stdout:8/665: getdents df/d2d/d42 0 2026-03-09T15:00:55.445 INFO:tasks.workunit.client.1.vm09.stdout:8/666: chown df/d2d/d46/fa9 57806 1 2026-03-09T15:00:55.447 INFO:tasks.workunit.client.1.vm09.stdout:8/667: unlink df/d2d/d46/f6d 0 2026-03-09T15:00:55.448 INFO:tasks.workunit.client.1.vm09.stdout:0/741: write da/dc/d1c/d3c/d44/fca [611817,51856] 0 2026-03-09T15:00:55.449 INFO:tasks.workunit.client.1.vm09.stdout:9/596: unlink d1/d4f/la9 0 2026-03-09T15:00:55.450 INFO:tasks.workunit.client.1.vm09.stdout:8/668: fdatasync df/f1a 0 2026-03-09T15:00:55.453 INFO:tasks.workunit.client.1.vm09.stdout:1/564: sync 2026-03-09T15:00:55.453 INFO:tasks.workunit.client.1.vm09.stdout:8/669: sync 2026-03-09T15:00:55.453 INFO:tasks.workunit.client.1.vm09.stdout:3/711: dread d3/d3a/d2b/f32 [0,4194304] 0 2026-03-09T15:00:55.455 INFO:tasks.workunit.client.1.vm09.stdout:8/670: chown df/d5c/f72 29586029 1 2026-03-09T15:00:55.457 INFO:tasks.workunit.client.1.vm09.stdout:0/742: dwrite da/dc/d1c/d46/d63/f91 [0,4194304] 0 2026-03-09T15:00:55.471 INFO:tasks.workunit.client.1.vm09.stdout:2/670: write df/f42 [1955380,102518] 0 2026-03-09T15:00:55.471 INFO:tasks.workunit.client.1.vm09.stdout:2/671: fdatasync df/d1f/d47/d84/db7/dc3/fc7 0 2026-03-09T15:00:55.471 INFO:tasks.workunit.client.1.vm09.stdout:6/618: write d6/d20/d24/d7e/fbc [201366,13668] 0 2026-03-09T15:00:55.471 INFO:tasks.workunit.client.1.vm09.stdout:5/673: write d2/d37/d3c/d36/d45/dae/dc3/f7b [3667249,56964] 0 
2026-03-09T15:00:55.471 INFO:tasks.workunit.client.1.vm09.stdout:7/624: write d3/f32 [3758551,40659] 0 2026-03-09T15:00:55.473 INFO:tasks.workunit.client.1.vm09.stdout:9/597: mknod d1/d6e/cc9 0 2026-03-09T15:00:55.477 INFO:tasks.workunit.client.1.vm09.stdout:1/565: mkdir d8/d50/d39/d95/d56/db0 0 2026-03-09T15:00:55.478 INFO:tasks.workunit.client.1.vm09.stdout:3/712: creat d3/d3a/d2b/d31/d9e/ff4 x:0 0 0 2026-03-09T15:00:55.478 INFO:tasks.workunit.client.1.vm09.stdout:8/671: creat df/d24/d56/fc3 x:0 0 0 2026-03-09T15:00:55.479 INFO:tasks.workunit.client.1.vm09.stdout:0/743: chown da/dc/d1c/d3c/d78/f88 1232135 1 2026-03-09T15:00:55.481 INFO:tasks.workunit.client.1.vm09.stdout:4/693: mknod db/d19/d23/d44/d7c/d7d/db7/de1/ce5 0 2026-03-09T15:00:55.482 INFO:tasks.workunit.client.1.vm09.stdout:2/672: chown df/d20/l4b 30974328 1 2026-03-09T15:00:55.487 INFO:tasks.workunit.client.1.vm09.stdout:4/694: fsync db/d19/d81/d5d/f8a 0 2026-03-09T15:00:55.487 INFO:tasks.workunit.client.1.vm09.stdout:2/673: chown df/d1f/d6d/dd1 33 1 2026-03-09T15:00:55.488 INFO:tasks.workunit.client.1.vm09.stdout:7/625: unlink d3/d1d/fab 0 2026-03-09T15:00:55.488 INFO:tasks.workunit.client.1.vm09.stdout:4/695: fsync db/d12/d16/f36 0 2026-03-09T15:00:55.488 INFO:tasks.workunit.client.1.vm09.stdout:5/674: creat d2/d37/d53/d86/dad/ff2 x:0 0 0 2026-03-09T15:00:55.488 INFO:tasks.workunit.client.1.vm09.stdout:0/744: dwrite da/dc/d1c/d3c/d44/fca [0,4194304] 0 2026-03-09T15:00:55.488 INFO:tasks.workunit.client.1.vm09.stdout:7/626: fdatasync d3/d28/f95 0 2026-03-09T15:00:55.488 INFO:tasks.workunit.client.1.vm09.stdout:4/696: truncate db/d19/d23/d71/fb3 79672 0 2026-03-09T15:00:55.488 INFO:tasks.workunit.client.1.vm09.stdout:2/674: read - df/d20/d2e/f54 zero size 2026-03-09T15:00:55.495 INFO:tasks.workunit.client.1.vm09.stdout:3/713: dread - d3/d5b/d79/d9d/f9f zero size 2026-03-09T15:00:55.500 INFO:tasks.workunit.client.1.vm09.stdout:4/697: dread db/d12/f3d [0,4194304] 0 2026-03-09T15:00:55.501 
INFO:tasks.workunit.client.1.vm09.stdout:7/627: dread d3/d28/f35 [0,4194304] 0 2026-03-09T15:00:55.501 INFO:tasks.workunit.client.1.vm09.stdout:4/698: stat db/d12/d16/d5b/d78/d7f/f9d 0 2026-03-09T15:00:55.502 INFO:tasks.workunit.client.1.vm09.stdout:0/745: chown da/d30/cb0 498 1 2026-03-09T15:00:55.505 INFO:tasks.workunit.client.1.vm09.stdout:9/598: symlink d1/d4f/d8f/lca 0 2026-03-09T15:00:55.512 INFO:tasks.workunit.client.1.vm09.stdout:5/675: stat d2/d37/d3c/d36/d4c/d89/lea 0 2026-03-09T15:00:55.512 INFO:tasks.workunit.client.1.vm09.stdout:8/672: mknod df/cc4 0 2026-03-09T15:00:55.514 INFO:tasks.workunit.client.1.vm09.stdout:7/628: sync 2026-03-09T15:00:55.518 INFO:tasks.workunit.client.1.vm09.stdout:4/699: creat db/d19/d23/d71/fe6 x:0 0 0 2026-03-09T15:00:55.519 INFO:tasks.workunit.client.1.vm09.stdout:7/629: read - d3/d1d/d65/fb0 zero size 2026-03-09T15:00:55.520 INFO:tasks.workunit.client.1.vm09.stdout:8/673: write df/d5b/d65/d1d/f6e [223295,117919] 0 2026-03-09T15:00:55.524 INFO:tasks.workunit.client.1.vm09.stdout:4/700: dread - db/d19/d23/d71/fe6 zero size 2026-03-09T15:00:55.527 INFO:tasks.workunit.client.1.vm09.stdout:0/746: dread da/dc/d92/d9e/fa2 [0,4194304] 0 2026-03-09T15:00:55.531 INFO:tasks.workunit.client.1.vm09.stdout:2/675: creat df/d1f/d47/d84/fd2 x:0 0 0 2026-03-09T15:00:55.537 INFO:tasks.workunit.client.1.vm09.stdout:7/630: mkdir d3/db/d25/db7 0 2026-03-09T15:00:55.538 INFO:tasks.workunit.client.1.vm09.stdout:0/747: mknod da/dc/d1c/d46/d63/d86/cf4 0 2026-03-09T15:00:55.538 INFO:tasks.workunit.client.1.vm09.stdout:2/676: mkdir df/d93/dd3 0 2026-03-09T15:00:55.542 INFO:tasks.workunit.client.1.vm09.stdout:2/677: truncate df/d20/d29/f31 4379054 0 2026-03-09T15:00:55.542 INFO:tasks.workunit.client.1.vm09.stdout:9/599: dread d1/d58/f72 [0,4194304] 0 2026-03-09T15:00:55.542 INFO:tasks.workunit.client.1.vm09.stdout:3/714: link d3/d3a/d2b/d39/d48/le8 d3/d60/lf5 0 2026-03-09T15:00:55.543 INFO:tasks.workunit.client.1.vm09.stdout:6/619: truncate 
d6/d20/d38/d4e/d55/f77 1848052 0 2026-03-09T15:00:55.543 INFO:tasks.workunit.client.1.vm09.stdout:2/678: stat df/d1f/d6d/d8f/d5f/f72 0 2026-03-09T15:00:55.546 INFO:tasks.workunit.client.1.vm09.stdout:0/748: fdatasync da/dc/d92/d9e/fa2 0 2026-03-09T15:00:55.547 INFO:tasks.workunit.client.1.vm09.stdout:7/631: link d3/d1d/f9f d3/d1d/d2d/fb8 0 2026-03-09T15:00:55.548 INFO:tasks.workunit.client.1.vm09.stdout:2/679: truncate df/d58/d74/f88 1468022 0 2026-03-09T15:00:55.549 INFO:tasks.workunit.client.1.vm09.stdout:1/566: write d8/d10/d73/f37 [271283,75547] 0 2026-03-09T15:00:55.552 INFO:tasks.workunit.client.1.vm09.stdout:3/715: fdatasync d3/d9a/de3/fa7 0 2026-03-09T15:00:55.556 INFO:tasks.workunit.client.1.vm09.stdout:1/567: sync 2026-03-09T15:00:55.556 INFO:tasks.workunit.client.1.vm09.stdout:9/600: dread - d1/d7/d1e/d2b/d2e/d56/d6d/f87 zero size 2026-03-09T15:00:55.557 INFO:tasks.workunit.client.1.vm09.stdout:1/568: read d8/d90/f99 [198447,109660] 0 2026-03-09T15:00:55.557 INFO:tasks.workunit.client.1.vm09.stdout:0/749: fsync da/dc/d1c/d46/f52 0 2026-03-09T15:00:55.560 INFO:tasks.workunit.client.1.vm09.stdout:2/680: symlink df/d2d/ld4 0 2026-03-09T15:00:55.560 INFO:tasks.workunit.client.1.vm09.stdout:9/601: dread - d1/d7/d9f/daa/fae zero size 2026-03-09T15:00:55.564 INFO:tasks.workunit.client.1.vm09.stdout:8/674: stat df/d38/f52 0 2026-03-09T15:00:55.566 INFO:tasks.workunit.client.1.vm09.stdout:8/675: sync 2026-03-09T15:00:55.568 INFO:tasks.workunit.client.1.vm09.stdout:8/676: write df/d24/f83 [660264,75646] 0 2026-03-09T15:00:55.569 INFO:tasks.workunit.client.1.vm09.stdout:5/676: write d2/d37/d3c/d36/d4c/d89/ff1 [2880938,105371] 0 2026-03-09T15:00:55.569 INFO:tasks.workunit.client.1.vm09.stdout:4/701: write db/d12/d16/f63 [322079,86673] 0 2026-03-09T15:00:55.569 INFO:tasks.workunit.client.1.vm09.stdout:0/750: symlink da/dc/d92/d9e/lf5 0 2026-03-09T15:00:55.578 INFO:tasks.workunit.client.1.vm09.stdout:1/569: dread d8/ff [4194304,4194304] 0 2026-03-09T15:00:55.587 
INFO:tasks.workunit.client.1.vm09.stdout:8/677: unlink df/d2d/d46/f94 0 2026-03-09T15:00:55.587 INFO:tasks.workunit.client.1.vm09.stdout:8/678: readlink l5 0 2026-03-09T15:00:55.587 INFO:tasks.workunit.client.1.vm09.stdout:4/702: mkdir db/d19/d52/d76/d3b/de7 0 2026-03-09T15:00:55.588 INFO:tasks.workunit.client.1.vm09.stdout:4/703: readlink db/d19/d23/d44/l4a 0 2026-03-09T15:00:55.591 INFO:tasks.workunit.client.1.vm09.stdout:0/751: mknod da/dc/d1c/d46/d5b/cf6 0 2026-03-09T15:00:55.591 INFO:tasks.workunit.client.1.vm09.stdout:7/632: creat d3/d1d/fb9 x:0 0 0 2026-03-09T15:00:55.605 INFO:tasks.workunit.client.1.vm09.stdout:6/620: dwrite d6/db/fb3 [4194304,4194304] 0 2026-03-09T15:00:55.609 INFO:tasks.workunit.client.1.vm09.stdout:8/679: rename df/d2d/d42/d70/dc0/dc2 to df/d2d/d46/d33/dc5 0 2026-03-09T15:00:55.610 INFO:tasks.workunit.client.1.vm09.stdout:7/633: dread d3/d1d/d2d/f81 [4194304,4194304] 0 2026-03-09T15:00:55.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:55 vm09.local ceph-mon[59673]: pgmap v154: 65 pgs: 65 active+clean; 1.3 GiB data, 4.7 GiB used, 115 GiB / 120 GiB avail; 55 MiB/s rd, 150 MiB/s wr, 390 op/s 2026-03-09T15:00:55.617 INFO:tasks.workunit.client.1.vm09.stdout:3/716: write d3/d3a/d2b/d31/f34 [1659632,2468] 0 2026-03-09T15:00:55.622 INFO:tasks.workunit.client.1.vm09.stdout:9/602: creat d1/d4f/d8f/fcb x:0 0 0 2026-03-09T15:00:55.622 INFO:tasks.workunit.client.1.vm09.stdout:6/621: rmdir d6/d20/d2a 39 2026-03-09T15:00:55.623 INFO:tasks.workunit.client.1.vm09.stdout:0/752: mknod da/dc/d10/de5/cf7 0 2026-03-09T15:00:55.625 INFO:tasks.workunit.client.1.vm09.stdout:7/634: mknod d3/d1d/d2d/cba 0 2026-03-09T15:00:55.626 INFO:tasks.workunit.client.1.vm09.stdout:2/681: getdents df/d1f/d47/d84 0 2026-03-09T15:00:55.630 INFO:tasks.workunit.client.1.vm09.stdout:7/635: dread - d3/db/d15/d5f/d44/f82 zero size 2026-03-09T15:00:55.630 INFO:tasks.workunit.client.1.vm09.stdout:3/717: mkdir d3/d60/df6 0 2026-03-09T15:00:55.632 
INFO:tasks.workunit.client.1.vm09.stdout:9/603: dwrite d1/d7/faf [0,4194304] 0 2026-03-09T15:00:55.635 INFO:tasks.workunit.client.1.vm09.stdout:5/677: getdents d2/d37/d3c/d36/d4c/d51/d96 0 2026-03-09T15:00:55.638 INFO:tasks.workunit.client.1.vm09.stdout:8/680: link df/d2d/d42/f96 df/d24/d99/db6/d60/fc6 0 2026-03-09T15:00:55.644 INFO:tasks.workunit.client.1.vm09.stdout:6/622: readlink d6/d20/d44/l9b 0 2026-03-09T15:00:55.645 INFO:tasks.workunit.client.1.vm09.stdout:0/753: symlink da/dc/dcb/dd4/lf8 0 2026-03-09T15:00:55.648 INFO:tasks.workunit.client.1.vm09.stdout:2/682: dread df/d1f/f55 [0,4194304] 0 2026-03-09T15:00:55.653 INFO:tasks.workunit.client.1.vm09.stdout:1/570: getdents d8/d10/d24/d48/d9b 0 2026-03-09T15:00:55.653 INFO:tasks.workunit.client.1.vm09.stdout:7/636: dwrite d3/f7a [0,4194304] 0 2026-03-09T15:00:55.659 INFO:tasks.workunit.client.1.vm09.stdout:9/604: symlink d1/d7/d1e/d2b/d2e/d56/d5e/lcc 0 2026-03-09T15:00:55.660 INFO:tasks.workunit.client.1.vm09.stdout:4/704: getdents db/d19/d23/d44/d7c/d7d/d97/da8 0 2026-03-09T15:00:55.664 INFO:tasks.workunit.client.1.vm09.stdout:1/571: mknod d8/d10/d24/d48/d9b/d78/d8b/cb1 0 2026-03-09T15:00:55.665 INFO:tasks.workunit.client.1.vm09.stdout:2/683: dread df/d20/f9d [0,4194304] 0 2026-03-09T15:00:55.668 INFO:tasks.workunit.client.1.vm09.stdout:9/605: symlink d1/d7/d9f/lcd 0 2026-03-09T15:00:55.669 INFO:tasks.workunit.client.1.vm09.stdout:9/606: write d1/d7/faf [305197,94044] 0 2026-03-09T15:00:55.670 INFO:tasks.workunit.client.1.vm09.stdout:4/705: dwrite db/d19/d81/d5d/f8a [0,4194304] 0 2026-03-09T15:00:55.675 INFO:tasks.workunit.client.1.vm09.stdout:3/718: creat d3/d3a/d2b/ff7 x:0 0 0 2026-03-09T15:00:55.676 INFO:tasks.workunit.client.1.vm09.stdout:4/706: fsync db/d19/d23/d44/d7c/d7d/d97/da3/fcb 0 2026-03-09T15:00:55.683 INFO:tasks.workunit.client.1.vm09.stdout:6/623: creat d6/df/fce x:0 0 0 2026-03-09T15:00:55.684 INFO:tasks.workunit.client.1.vm09.stdout:9/607: unlink d1/d7/c62 0 2026-03-09T15:00:55.685 
INFO:tasks.workunit.client.1.vm09.stdout:2/684: read df/d20/f24 [260812,66486] 0 2026-03-09T15:00:55.686 INFO:tasks.workunit.client.1.vm09.stdout:1/572: symlink d8/d10/d24/d83/lb2 0 2026-03-09T15:00:55.699 INFO:tasks.workunit.client.1.vm09.stdout:8/681: getdents df/d24 0 2026-03-09T15:00:55.699 INFO:tasks.workunit.client.1.vm09.stdout:4/707: dwrite db/d19/d23/d71/d53/fa9 [0,4194304] 0 2026-03-09T15:00:55.699 INFO:tasks.workunit.client.1.vm09.stdout:8/682: readlink df/d24/d99/db6/l88 0 2026-03-09T15:00:55.699 INFO:tasks.workunit.client.1.vm09.stdout:4/708: dread - db/d19/dcd/fce zero size 2026-03-09T15:00:55.699 INFO:tasks.workunit.client.1.vm09.stdout:8/683: dread - df/d2d/d42/f96 zero size 2026-03-09T15:00:55.699 INFO:tasks.workunit.client.1.vm09.stdout:9/608: dwrite d1/d7/d1e/f2a [0,4194304] 0 2026-03-09T15:00:55.699 INFO:tasks.workunit.client.1.vm09.stdout:8/684: fsync df/d2d/d42/d79/d9a/fb5 0 2026-03-09T15:00:55.699 INFO:tasks.workunit.client.1.vm09.stdout:7/637: dread d3/d1d/f72 [0,4194304] 0 2026-03-09T15:00:55.702 INFO:tasks.workunit.client.1.vm09.stdout:2/685: symlink df/d1f/d6d/dd1/ld5 0 2026-03-09T15:00:55.704 INFO:tasks.workunit.client.1.vm09.stdout:6/624: rmdir d6/db/d8b 39 2026-03-09T15:00:55.704 INFO:tasks.workunit.client.1.vm09.stdout:3/719: creat d3/d74/ff8 x:0 0 0 2026-03-09T15:00:55.704 INFO:tasks.workunit.client.1.vm09.stdout:1/573: mknod d8/d10/d24/d48/d9b/d78/d8b/cb3 0 2026-03-09T15:00:55.708 INFO:tasks.workunit.client.1.vm09.stdout:4/709: mknod db/d19/d23/d71/d53/ce8 0 2026-03-09T15:00:55.709 INFO:tasks.workunit.client.1.vm09.stdout:7/638: stat d3/d3d/d9b/fac 0 2026-03-09T15:00:55.710 INFO:tasks.workunit.client.1.vm09.stdout:7/639: write d3/f32 [627104,83875] 0 2026-03-09T15:00:55.711 INFO:tasks.workunit.client.1.vm09.stdout:2/686: mknod df/d93/da3/cd6 0 2026-03-09T15:00:55.711 INFO:tasks.workunit.client.1.vm09.stdout:2/687: fdatasync df/d1f/f38 0 2026-03-09T15:00:55.712 INFO:tasks.workunit.client.1.vm09.stdout:6/625: symlink 
d6/d20/d38/d56/d65/d68/lcf 0 2026-03-09T15:00:55.721 INFO:tasks.workunit.client.1.vm09.stdout:6/626: dwrite d6/d20/d24/da5/fc8 [0,4194304] 0 2026-03-09T15:00:55.722 INFO:tasks.workunit.client.1.vm09.stdout:4/710: dwrite db/d19/d81/d5d/f8a [0,4194304] 0 2026-03-09T15:00:55.723 INFO:tasks.workunit.client.1.vm09.stdout:3/720: truncate d3/d60/f6e 1077198 0 2026-03-09T15:00:55.724 INFO:tasks.workunit.client.1.vm09.stdout:1/574: mkdir d8/d10/d24/d48/d9b/d78/db4 0 2026-03-09T15:00:55.724 INFO:tasks.workunit.client.1.vm09.stdout:3/721: read - d3/d9a/de3/fa7 zero size 2026-03-09T15:00:55.727 INFO:tasks.workunit.client.1.vm09.stdout:9/609: creat d1/d7/da6/db3/fce x:0 0 0 2026-03-09T15:00:55.728 INFO:tasks.workunit.client.1.vm09.stdout:7/640: unlink d3/db/d15/d5f/d6e/l96 0 2026-03-09T15:00:55.728 INFO:tasks.workunit.client.1.vm09.stdout:7/641: chown d3/d1d/d65 0 1 2026-03-09T15:00:55.731 INFO:tasks.workunit.client.1.vm09.stdout:2/688: creat df/d1f/d47/d84/fd7 x:0 0 0 2026-03-09T15:00:55.736 INFO:tasks.workunit.client.1.vm09.stdout:6/627: unlink d6/db/d10/l11 0 2026-03-09T15:00:55.736 INFO:tasks.workunit.client.1.vm09.stdout:6/628: write d6/df/d23/f76 [846601,125396] 0 2026-03-09T15:00:55.737 INFO:tasks.workunit.client.1.vm09.stdout:1/575: mkdir d8/d50/d39/d95/d56/db5 0 2026-03-09T15:00:55.738 INFO:tasks.workunit.client.1.vm09.stdout:3/722: mkdir d3/d5b/d79/d9d/df9 0 2026-03-09T15:00:55.739 INFO:tasks.workunit.client.1.vm09.stdout:8/685: mknod df/d38/d64/cc7 0 2026-03-09T15:00:55.739 INFO:tasks.workunit.client.1.vm09.stdout:7/642: rmdir d3/db/d15/d5f/d6e/d83 39 2026-03-09T15:00:55.741 INFO:tasks.workunit.client.1.vm09.stdout:6/629: write d6/d20/d24/f60 [5030333,112877] 0 2026-03-09T15:00:55.741 INFO:tasks.workunit.client.1.vm09.stdout:1/576: creat d8/d90/fb6 x:0 0 0 2026-03-09T15:00:55.742 INFO:tasks.workunit.client.1.vm09.stdout:3/723: fdatasync d3/d3a/d2b/d31/d9e/fb7 0 2026-03-09T15:00:55.743 INFO:tasks.workunit.client.1.vm09.stdout:7/643: creat d3/db/d25/fbb x:0 0 0 
2026-03-09T15:00:55.745 INFO:tasks.workunit.client.1.vm09.stdout:2/689: rename f5 to df/d1f/d47/d5d/fd8 0 2026-03-09T15:00:55.746 INFO:tasks.workunit.client.1.vm09.stdout:6/630: symlink d6/d20/d44/d45/ld0 0 2026-03-09T15:00:55.748 INFO:tasks.workunit.client.1.vm09.stdout:9/610: creat d1/d4f/d8f/fcf x:0 0 0 2026-03-09T15:00:55.749 INFO:tasks.workunit.client.1.vm09.stdout:4/711: link db/d12/da1/lb6 db/d19/dcd/le9 0 2026-03-09T15:00:55.749 INFO:tasks.workunit.client.1.vm09.stdout:9/611: readlink d1/d6e/l7f 0 2026-03-09T15:00:55.749 INFO:tasks.workunit.client.1.vm09.stdout:8/686: rename df/d2d/d46/f92 to df/d5b/d65/d1d/fc8 0 2026-03-09T15:00:55.752 INFO:tasks.workunit.client.1.vm09.stdout:7/644: link d3/d28/f69 d3/db/d15/d5f/d44/fbc 0 2026-03-09T15:00:55.753 INFO:tasks.workunit.client.1.vm09.stdout:9/612: chown d1/d7/d1e/d2b/d2e/d56 104886659 1 2026-03-09T15:00:55.754 INFO:tasks.workunit.client.1.vm09.stdout:7/645: fsync d3/d1d/f79 0 2026-03-09T15:00:55.760 INFO:tasks.workunit.client.1.vm09.stdout:8/687: truncate df/d24/d56/fc3 863633 0 2026-03-09T15:00:55.760 INFO:tasks.workunit.client.1.vm09.stdout:1/577: dread d8/d10/d73/f21 [0,4194304] 0 2026-03-09T15:00:55.765 INFO:tasks.workunit.client.1.vm09.stdout:4/712: dwrite db/d19/d23/d71/fe6 [0,4194304] 0 2026-03-09T15:00:55.767 INFO:tasks.workunit.client.1.vm09.stdout:4/713: chown db/d19/d81/d5d/l98 18 1 2026-03-09T15:00:55.776 INFO:tasks.workunit.client.1.vm09.stdout:8/688: write df/d5c/f72 [5133518,28161] 0 2026-03-09T15:00:55.776 INFO:tasks.workunit.client.1.vm09.stdout:2/690: dwrite df/d1f/d47/d5d/fc8 [0,4194304] 0 2026-03-09T15:00:55.776 INFO:tasks.workunit.client.1.vm09.stdout:1/578: dwrite d8/d10/d73/f21 [0,4194304] 0 2026-03-09T15:00:55.783 INFO:tasks.workunit.client.1.vm09.stdout:1/579: rmdir d8/d50/d39/d95/d72/d64 39 2026-03-09T15:00:55.790 INFO:tasks.workunit.client.1.vm09.stdout:9/613: dread d1/d4f/d52/f94 [0,4194304] 0 2026-03-09T15:00:55.790 INFO:tasks.workunit.client.1.vm09.stdout:4/714: getdents 
db/d19/d23/d71/d5f 0 2026-03-09T15:00:55.790 INFO:tasks.workunit.client.1.vm09.stdout:9/614: unlink d1/d7/d1e/d2b/d2e/f8a 0 2026-03-09T15:00:55.790 INFO:tasks.workunit.client.1.vm09.stdout:9/615: unlink d1/d6e/f74 0 2026-03-09T15:00:55.798 INFO:tasks.workunit.client.1.vm09.stdout:0/754: write da/dc/d1c/d46/d63/d86/dcd/f93 [1422574,51142] 0 2026-03-09T15:00:55.800 INFO:tasks.workunit.client.1.vm09.stdout:9/616: dwrite d1/d7/fba [0,4194304] 0 2026-03-09T15:00:55.802 INFO:tasks.workunit.client.1.vm09.stdout:9/617: write d1/d7/d1e/f2a [3690919,57344] 0 2026-03-09T15:00:55.814 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:55 vm05.local ceph-mon[50611]: pgmap v154: 65 pgs: 65 active+clean; 1.3 GiB data, 4.7 GiB used, 115 GiB / 120 GiB avail; 55 MiB/s rd, 150 MiB/s wr, 390 op/s 2026-03-09T15:00:55.840 INFO:tasks.workunit.client.1.vm09.stdout:7/646: dread d3/f97 [0,4194304] 0 2026-03-09T15:00:55.842 INFO:tasks.workunit.client.1.vm09.stdout:7/647: creat d3/db/d25/d5c/fbd x:0 0 0 2026-03-09T15:00:55.844 INFO:tasks.workunit.client.1.vm09.stdout:1/580: sync 2026-03-09T15:00:55.845 INFO:tasks.workunit.client.1.vm09.stdout:1/581: creat d8/d90/fb7 x:0 0 0 2026-03-09T15:00:55.846 INFO:tasks.workunit.client.1.vm09.stdout:9/618: dread d1/d58/f99 [0,4194304] 0 2026-03-09T15:00:55.847 INFO:tasks.workunit.client.1.vm09.stdout:7/648: getdents d3/d3d/d9b 0 2026-03-09T15:00:55.848 INFO:tasks.workunit.client.1.vm09.stdout:7/649: readlink d3/db/d46/l67 0 2026-03-09T15:00:55.849 INFO:tasks.workunit.client.1.vm09.stdout:9/619: symlink d1/d7/d1e/d2b/d2e/d56/d6d/ld0 0 2026-03-09T15:00:55.849 INFO:tasks.workunit.client.1.vm09.stdout:1/582: write d8/ff [3227306,4764] 0 2026-03-09T15:00:55.850 INFO:tasks.workunit.client.1.vm09.stdout:6/631: dread f0 [0,4194304] 0 2026-03-09T15:00:55.851 INFO:tasks.workunit.client.1.vm09.stdout:1/583: truncate d8/d10/d24/d45/f92 590138 0 2026-03-09T15:00:55.851 INFO:tasks.workunit.client.1.vm09.stdout:9/620: truncate d1/d58/f72 4555343 0 
2026-03-09T15:00:55.852 INFO:tasks.workunit.client.1.vm09.stdout:6/632: write f0 [684846,88716] 0 2026-03-09T15:00:55.856 INFO:tasks.workunit.client.1.vm09.stdout:0/755: sync 2026-03-09T15:00:55.858 INFO:tasks.workunit.client.1.vm09.stdout:1/584: rename d8/d10/d73/f21 to d8/d10/d24/d48/d9b/d78/fb8 0 2026-03-09T15:00:55.860 INFO:tasks.workunit.client.1.vm09.stdout:6/633: mkdir d6/d20/d38/d56/dd1 0 2026-03-09T15:00:55.864 INFO:tasks.workunit.client.1.vm09.stdout:7/650: sync 2026-03-09T15:00:55.864 INFO:tasks.workunit.client.1.vm09.stdout:8/689: dread df/d5b/f40 [0,4194304] 0 2026-03-09T15:00:55.866 INFO:tasks.workunit.client.1.vm09.stdout:0/756: creat da/dc/d92/ff9 x:0 0 0 2026-03-09T15:00:55.866 INFO:tasks.workunit.client.1.vm09.stdout:6/634: mkdir d6/d20/d38/d4e/d55/dd2 0 2026-03-09T15:00:55.867 INFO:tasks.workunit.client.1.vm09.stdout:7/651: unlink d3/d1d/d2d/f81 0 2026-03-09T15:00:55.874 INFO:tasks.workunit.client.1.vm09.stdout:6/635: dread - d6/d20/d38/d56/f8c zero size 2026-03-09T15:00:55.875 INFO:tasks.workunit.client.1.vm09.stdout:1/585: dwrite d8/d50/d39/d95/f6e [0,4194304] 0 2026-03-09T15:00:55.888 INFO:tasks.workunit.client.1.vm09.stdout:0/757: creat da/ffa x:0 0 0 2026-03-09T15:00:55.888 INFO:tasks.workunit.client.1.vm09.stdout:1/586: dread - d8/d10/d73/fa1 zero size 2026-03-09T15:00:55.890 INFO:tasks.workunit.client.1.vm09.stdout:5/678: dwrite d2/d37/d3c/d36/d45/d5c/f91 [0,4194304] 0 2026-03-09T15:00:55.892 INFO:tasks.workunit.client.1.vm09.stdout:7/652: dwrite f1 [4194304,4194304] 0 2026-03-09T15:00:55.897 INFO:tasks.workunit.client.1.vm09.stdout:1/587: mknod d8/d10/d24/d48/d9b/d78/cb9 0 2026-03-09T15:00:55.906 INFO:tasks.workunit.client.1.vm09.stdout:5/679: creat d2/d37/d3c/d36/ff3 x:0 0 0 2026-03-09T15:00:55.912 INFO:tasks.workunit.client.1.vm09.stdout:1/588: dwrite d8/d10/d24/d48/d9b/d78/fa2 [0,4194304] 0 2026-03-09T15:00:55.917 INFO:tasks.workunit.client.1.vm09.stdout:7/653: creat d3/d1d/d65/fbe x:0 0 0 2026-03-09T15:00:55.924 
INFO:tasks.workunit.client.1.vm09.stdout:7/654: write d3/db/d15/f68 [2954205,12130] 0 2026-03-09T15:00:55.924 INFO:tasks.workunit.client.1.vm09.stdout:5/680: truncate d2/d37/d3c/d36/d4c/d51/d96/f73 2714131 0 2026-03-09T15:00:55.924 INFO:tasks.workunit.client.1.vm09.stdout:1/589: creat d8/d50/d39/d95/fba x:0 0 0 2026-03-09T15:00:55.928 INFO:tasks.workunit.client.1.vm09.stdout:5/681: unlink d2/le 0 2026-03-09T15:00:55.932 INFO:tasks.workunit.client.1.vm09.stdout:5/682: creat d2/d37/d3c/d36/d4c/ff4 x:0 0 0 2026-03-09T15:00:55.936 INFO:tasks.workunit.client.1.vm09.stdout:5/683: rename d2/daa to d2/d37/d67/d95/db8/df5 0 2026-03-09T15:00:55.945 INFO:tasks.workunit.client.1.vm09.stdout:5/684: read d2/d37/d67/d95/db5/fb6 [71660,113022] 0 2026-03-09T15:00:55.947 INFO:tasks.workunit.client.1.vm09.stdout:6/636: dread d6/db/d8b/f73 [0,4194304] 0 2026-03-09T15:00:55.948 INFO:tasks.workunit.client.1.vm09.stdout:5/685: rmdir d2/d37/d3c/d36/d45/d5c/ddc 39 2026-03-09T15:00:55.949 INFO:tasks.workunit.client.1.vm09.stdout:6/637: creat d6/d20/d38/d56/fd3 x:0 0 0 2026-03-09T15:00:55.949 INFO:tasks.workunit.client.1.vm09.stdout:5/686: readlink d2/d37/d3c/d36/d45/l8e 0 2026-03-09T15:00:55.950 INFO:tasks.workunit.client.1.vm09.stdout:6/638: symlink d6/d20/d44/d8f/ld4 0 2026-03-09T15:00:55.952 INFO:tasks.workunit.client.1.vm09.stdout:5/687: getdents d2/da9 0 2026-03-09T15:00:55.954 INFO:tasks.workunit.client.1.vm09.stdout:5/688: rename d2/d37/d67/d95/db5 to d2/d37/d67/df6 0 2026-03-09T15:00:55.955 INFO:tasks.workunit.client.1.vm09.stdout:5/689: symlink d2/d37/d53/d86/lf7 0 2026-03-09T15:00:55.958 INFO:tasks.workunit.client.1.vm09.stdout:1/590: dread d8/d10/f2f [0,4194304] 0 2026-03-09T15:00:55.959 INFO:tasks.workunit.client.1.vm09.stdout:5/690: link d2/d37/d3c/d36/d45/l7e d2/d37/d53/d86/d88/dc9/lf8 0 2026-03-09T15:00:55.959 INFO:tasks.workunit.client.1.vm09.stdout:1/591: dread - d8/d50/d39/d95/d72/f77 zero size 2026-03-09T15:00:55.962 INFO:tasks.workunit.client.1.vm09.stdout:6/639: sync 
2026-03-09T15:00:55.969 INFO:tasks.workunit.client.1.vm09.stdout:6/640: chown d6/d20/c54 12 1 2026-03-09T15:00:55.983 INFO:tasks.workunit.client.1.vm09.stdout:6/641: write d6/d20/d24/f49 [1581197,83612] 0 2026-03-09T15:00:55.983 INFO:tasks.workunit.client.1.vm09.stdout:5/691: dwrite d2/d37/d3c/d36/d45/f66 [0,4194304] 0 2026-03-09T15:00:55.986 INFO:tasks.workunit.client.1.vm09.stdout:1/592: dwrite d8/d50/d39/d95/d72/d64/f82 [0,4194304] 0 2026-03-09T15:00:55.987 INFO:tasks.workunit.client.1.vm09.stdout:5/692: sync 2026-03-09T15:00:55.987 INFO:tasks.workunit.client.1.vm09.stdout:5/693: sync 2026-03-09T15:00:55.996 INFO:tasks.workunit.client.1.vm09.stdout:6/642: dwrite d6/db/d10/fa0 [0,4194304] 0 2026-03-09T15:00:55.999 INFO:tasks.workunit.client.1.vm09.stdout:6/643: sync 2026-03-09T15:00:56.000 INFO:tasks.workunit.client.1.vm09.stdout:6/644: fdatasync d6/d20/d24/f60 0 2026-03-09T15:00:56.003 INFO:tasks.workunit.client.1.vm09.stdout:7/655: dread d3/d1d/f33 [0,4194304] 0 2026-03-09T15:00:56.004 INFO:tasks.workunit.client.1.vm09.stdout:6/645: write d6/d20/d38/d56/fbf [623318,119061] 0 2026-03-09T15:00:56.004 INFO:tasks.workunit.client.1.vm09.stdout:7/656: dread - d3/d1d/d65/f92 zero size 2026-03-09T15:00:56.004 INFO:tasks.workunit.client.1.vm09.stdout:6/646: readlink d6/d20/d38/d4e/l96 0 2026-03-09T15:00:56.004 INFO:tasks.workunit.client.1.vm09.stdout:7/657: write d3/db/d15/d5f/d6e/f7b [2706821,120505] 0 2026-03-09T15:00:56.008 INFO:tasks.workunit.client.1.vm09.stdout:7/658: write d3/d61/f86 [1071652,77366] 0 2026-03-09T15:00:56.010 INFO:tasks.workunit.client.1.vm09.stdout:1/593: fdatasync d8/fa 0 2026-03-09T15:00:56.016 INFO:tasks.workunit.client.1.vm09.stdout:5/694: fdatasync d2/d37/d3c/d36/d4c/d51/d96/f23 0 2026-03-09T15:00:56.034 INFO:tasks.workunit.client.1.vm09.stdout:7/659: rename d3/d1d/d65/d9a/la5 to d3/db/d25/lbf 0 2026-03-09T15:00:56.038 INFO:tasks.workunit.client.1.vm09.stdout:1/594: symlink d8/d10/d24/d48/d9b/d78/d8b/lbb 0 2026-03-09T15:00:56.039 
INFO:tasks.workunit.client.1.vm09.stdout:1/595: dread - d8/d50/d39/d95/fba zero size 2026-03-09T15:00:56.039 INFO:tasks.workunit.client.1.vm09.stdout:5/695: symlink d2/d37/d67/d95/lf9 0 2026-03-09T15:00:56.041 INFO:tasks.workunit.client.1.vm09.stdout:7/660: creat d3/d1d/d65/fc0 x:0 0 0 2026-03-09T15:00:56.043 INFO:tasks.workunit.client.1.vm09.stdout:3/724: write d3/d9a/de3/fa3 [829388,70329] 0 2026-03-09T15:00:56.043 INFO:tasks.workunit.client.1.vm09.stdout:5/696: mknod d2/d37/d3c/d36/d4c/cfa 0 2026-03-09T15:00:56.047 INFO:tasks.workunit.client.1.vm09.stdout:2/691: mkdir df/d1f/d47/d84/db7/dc3/dd9 0 2026-03-09T15:00:56.047 INFO:tasks.workunit.client.1.vm09.stdout:1/596: symlink d8/d50/d39/d95/d56/db0/lbc 0 2026-03-09T15:00:56.047 INFO:tasks.workunit.client.1.vm09.stdout:5/697: mknod d2/d37/d53/cfb 0 2026-03-09T15:00:56.048 INFO:tasks.workunit.client.1.vm09.stdout:7/661: read d3/db/d15/f68 [7812222,111506] 0 2026-03-09T15:00:56.050 INFO:tasks.workunit.client.1.vm09.stdout:7/662: readlink d3/db/d25/l49 0 2026-03-09T15:00:56.050 INFO:tasks.workunit.client.1.vm09.stdout:1/597: chown d8/d10/d24/d45/d5f/f60 449 1 2026-03-09T15:00:56.051 INFO:tasks.workunit.client.1.vm09.stdout:3/725: creat d3/d3a/d2b/d39/ffa x:0 0 0 2026-03-09T15:00:56.052 INFO:tasks.workunit.client.1.vm09.stdout:3/726: chown d3/d3a/d2b/d7b/db6 181323 1 2026-03-09T15:00:56.053 INFO:tasks.workunit.client.1.vm09.stdout:1/598: dread d8/d10/d24/d45/f92 [0,4194304] 0 2026-03-09T15:00:56.053 INFO:tasks.workunit.client.1.vm09.stdout:3/727: readlink d3/d3a/d2b/d31/d4a/d62/l7 0 2026-03-09T15:00:56.058 INFO:tasks.workunit.client.1.vm09.stdout:6/647: rename d6/d20/d2a/d3b/l93 to d6/d20/d2a/ld5 0 2026-03-09T15:00:56.059 INFO:tasks.workunit.client.1.vm09.stdout:5/698: dwrite d2/d37/d3c/d36/f4a [0,4194304] 0 2026-03-09T15:00:56.068 INFO:tasks.workunit.client.1.vm09.stdout:5/699: dwrite d2/d37/f75 [0,4194304] 0 2026-03-09T15:00:56.072 INFO:tasks.workunit.client.1.vm09.stdout:3/728: mkdir d3/d9a/d80/dfb 0 
2026-03-09T15:00:56.073 INFO:tasks.workunit.client.1.vm09.stdout:2/692: getdents df/d20/d29/da9 0 2026-03-09T15:00:56.073 INFO:tasks.workunit.client.1.vm09.stdout:6/648: unlink d6/d20/d24/da5/daf/fc7 0 2026-03-09T15:00:56.077 INFO:tasks.workunit.client.1.vm09.stdout:5/700: readlink d2/lb 0 2026-03-09T15:00:56.079 INFO:tasks.workunit.client.1.vm09.stdout:2/693: mkdir df/d2d/dda 0 2026-03-09T15:00:56.084 INFO:tasks.workunit.client.1.vm09.stdout:7/663: rename d3/d28/c3c to d3/db/cc1 0 2026-03-09T15:00:56.085 INFO:tasks.workunit.client.1.vm09.stdout:6/649: creat d6/d20/d38/fd6 x:0 0 0 2026-03-09T15:00:56.088 INFO:tasks.workunit.client.1.vm09.stdout:5/701: rename d2/d37/d3c/d36/d45/d5c/f91 to d2/da9/ffc 0 2026-03-09T15:00:56.088 INFO:tasks.workunit.client.1.vm09.stdout:3/729: getdents d3/d3a/d2b/d7b/db0 0 2026-03-09T15:00:56.092 INFO:tasks.workunit.client.1.vm09.stdout:2/694: dread df/d20/d2e/f48 [0,4194304] 0 2026-03-09T15:00:56.098 INFO:tasks.workunit.client.1.vm09.stdout:5/702: mkdir d2/d37/d3c/d36/d45/dfd 0 2026-03-09T15:00:56.101 INFO:tasks.workunit.client.1.vm09.stdout:4/715: dwrite db/d19/d23/d44/f95 [0,4194304] 0 2026-03-09T15:00:56.103 INFO:tasks.workunit.client.1.vm09.stdout:7/664: dwrite d3/db/d25/d7d/f8c [0,4194304] 0 2026-03-09T15:00:56.111 INFO:tasks.workunit.client.1.vm09.stdout:5/703: creat d2/d37/d3c/d36/d4c/ffe x:0 0 0 2026-03-09T15:00:56.113 INFO:tasks.workunit.client.1.vm09.stdout:3/730: rename d3/d3a/d2b/d31/d4a/f7c to d3/d5b/d79/d9d/ffc 0 2026-03-09T15:00:56.118 INFO:tasks.workunit.client.1.vm09.stdout:2/695: rename df/d20/d29/da9/cc5 to df/d1f/d47/d84/db7/dc3/dd9/cdb 0 2026-03-09T15:00:56.126 INFO:tasks.workunit.client.1.vm09.stdout:7/665: truncate d3/d1d/f9f 4147958 0 2026-03-09T15:00:56.126 INFO:tasks.workunit.client.1.vm09.stdout:3/731: fsync d3/d3a/f6b 0 2026-03-09T15:00:56.126 INFO:tasks.workunit.client.1.vm09.stdout:5/704: creat d2/d37/d3c/d36/d45/dfd/fff x:0 0 0 2026-03-09T15:00:56.126 INFO:tasks.workunit.client.1.vm09.stdout:3/732: symlink 
d3/d3a/d2b/d39/d48/da0/lfd 0 2026-03-09T15:00:56.126 INFO:tasks.workunit.client.1.vm09.stdout:2/696: creat df/d1f/d6d/d8f/fdc x:0 0 0 2026-03-09T15:00:56.129 INFO:tasks.workunit.client.1.vm09.stdout:3/733: creat d3/d3a/d2b/d7b/dd3/ffe x:0 0 0 2026-03-09T15:00:56.130 INFO:tasks.workunit.client.1.vm09.stdout:7/666: creat d3/db/fc2 x:0 0 0 2026-03-09T15:00:56.130 INFO:tasks.workunit.client.1.vm09.stdout:2/697: dwrite df/d1f/f7e [4194304,4194304] 0 2026-03-09T15:00:56.131 INFO:tasks.workunit.client.1.vm09.stdout:5/705: link d2/d37/f6c d2/d37/d3c/d36/d45/dfd/f100 0 2026-03-09T15:00:56.133 INFO:tasks.workunit.client.1.vm09.stdout:2/698: rename df/d1f/d6d/d8f to df/d1f/d6d/d8f/d5f/ddd 22 2026-03-09T15:00:56.135 INFO:tasks.workunit.client.1.vm09.stdout:5/706: write d2/d37/fa5 [824467,65404] 0 2026-03-09T15:00:56.140 INFO:tasks.workunit.client.1.vm09.stdout:3/734: write d3/d3a/d2b/d36/f44 [3906339,10160] 0 2026-03-09T15:00:56.143 INFO:tasks.workunit.client.1.vm09.stdout:5/707: dwrite d2/d37/d67/d95/db8/fe2 [0,4194304] 0 2026-03-09T15:00:56.147 INFO:tasks.workunit.client.1.vm09.stdout:3/735: chown d3/d9a/c71 53116 1 2026-03-09T15:00:56.147 INFO:tasks.workunit.client.1.vm09.stdout:5/708: stat d2/d37/d53/d86/d88/dc9 0 2026-03-09T15:00:56.151 INFO:tasks.workunit.client.1.vm09.stdout:3/736: mknod d3/d3a/d2b/dee/cff 0 2026-03-09T15:00:56.153 INFO:tasks.workunit.client.1.vm09.stdout:7/667: sync 2026-03-09T15:00:56.156 INFO:tasks.workunit.client.1.vm09.stdout:5/709: getdents d2/d37/d3c/d36/d45/dae/dd3 0 2026-03-09T15:00:56.157 INFO:tasks.workunit.client.1.vm09.stdout:5/710: write d2/d37/d3c/f4e [8658999,42450] 0 2026-03-09T15:00:56.159 INFO:tasks.workunit.client.1.vm09.stdout:7/668: creat d3/db/d15/d5f/d6e/fc3 x:0 0 0 2026-03-09T15:00:56.162 INFO:tasks.workunit.client.1.vm09.stdout:5/711: symlink d2/d37/d3c/d36/d45/dae/dd3/l101 0 2026-03-09T15:00:56.165 INFO:tasks.workunit.client.1.vm09.stdout:7/669: dwrite d3/d28/f35 [0,4194304] 0 2026-03-09T15:00:56.167 
INFO:tasks.workunit.client.1.vm09.stdout:7/670: fsync d3/db/d25/fbb 0 2026-03-09T15:00:56.175 INFO:tasks.workunit.client.1.vm09.stdout:7/671: chown d3/d3d/f51 33 1 2026-03-09T15:00:56.206 INFO:tasks.workunit.client.1.vm09.stdout:8/690: dwrite df/d5b/f85 [0,4194304] 0 2026-03-09T15:00:56.206 INFO:tasks.workunit.client.1.vm09.stdout:8/691: chown df/d5b/c7d 8486501 1 2026-03-09T15:00:56.208 INFO:tasks.workunit.client.1.vm09.stdout:9/621: dwrite d1/d7/d1e/d2b/d2e/d56/d6d/f87 [0,4194304] 0 2026-03-09T15:00:56.209 INFO:tasks.workunit.client.1.vm09.stdout:8/692: mkdir df/d2d/d42/d70/dc0/dc9 0 2026-03-09T15:00:56.210 INFO:tasks.workunit.client.1.vm09.stdout:8/693: chown df/d38/d64/fbf 13 1 2026-03-09T15:00:56.212 INFO:tasks.workunit.client.1.vm09.stdout:8/694: mkdir df/d24/d99/db6/dca 0 2026-03-09T15:00:56.236 INFO:tasks.workunit.client.1.vm09.stdout:8/695: dread df/d24/d99/db1/f87 [0,4194304] 0 2026-03-09T15:00:56.237 INFO:tasks.workunit.client.1.vm09.stdout:8/696: truncate df/f51 5232553 0 2026-03-09T15:00:56.237 INFO:tasks.workunit.client.1.vm09.stdout:8/697: write df/d24/d99/db6/fad [377519,44766] 0 2026-03-09T15:00:56.239 INFO:tasks.workunit.client.1.vm09.stdout:8/698: chown df/d24/d99/db6/d60/fc6 2 1 2026-03-09T15:00:56.241 INFO:tasks.workunit.client.1.vm09.stdout:8/699: read df/f26 [3857584,70064] 0 2026-03-09T15:00:56.262 INFO:tasks.workunit.client.1.vm09.stdout:0/758: dwrite da/d80/f98 [0,4194304] 0 2026-03-09T15:00:56.271 INFO:tasks.workunit.client.1.vm09.stdout:0/759: creat da/dc/dcb/dd9/ffb x:0 0 0 2026-03-09T15:00:56.289 INFO:tasks.workunit.client.1.vm09.stdout:0/760: sync 2026-03-09T15:00:56.292 INFO:tasks.workunit.client.1.vm09.stdout:0/761: creat da/dc/d1c/d3c/d78/d7a/dbb/ffc x:0 0 0 2026-03-09T15:00:56.300 INFO:tasks.workunit.client.1.vm09.stdout:0/762: mkdir da/dc/d1c/d3c/d78/d7a/d9c/dfd 0 2026-03-09T15:00:56.338 INFO:tasks.workunit.client.1.vm09.stdout:1/599: write d8/d10/d24/d45/d5f/f60 [4393061,62100] 0 2026-03-09T15:00:56.344 
INFO:tasks.workunit.client.1.vm09.stdout:1/600: creat d8/d10/d24/d48/d9b/fbd x:0 0 0 2026-03-09T15:00:56.345 INFO:tasks.workunit.client.1.vm09.stdout:1/601: stat d8/d10/d24/d45/d5f/c6d 0 2026-03-09T15:00:56.353 INFO:tasks.workunit.client.1.vm09.stdout:1/602: mknod d8/d10/d24/d45/d5f/d8d/cbe 0 2026-03-09T15:00:56.355 INFO:tasks.workunit.client.1.vm09.stdout:1/603: fdatasync d8/f59 0 2026-03-09T15:00:56.356 INFO:tasks.workunit.client.1.vm09.stdout:4/716: truncate db/d19/d52/f6a 683186 0 2026-03-09T15:00:56.359 INFO:tasks.workunit.client.1.vm09.stdout:6/650: dwrite d6/db/d10/f2c [0,4194304] 0 2026-03-09T15:00:56.362 INFO:tasks.workunit.client.1.vm09.stdout:4/717: write db/d19/d81/d5d/f8a [635476,21176] 0 2026-03-09T15:00:56.372 INFO:tasks.workunit.client.1.vm09.stdout:1/604: dwrite d8/d10/d24/d45/d5f/d8d/fa0 [0,4194304] 0 2026-03-09T15:00:56.377 INFO:tasks.workunit.client.1.vm09.stdout:3/737: rename d3/d3a/d2b/d39 to d3/d100 0 2026-03-09T15:00:56.377 INFO:tasks.workunit.client.1.vm09.stdout:5/712: write d2/d37/d67/d95/db8/fe2 [4951017,115214] 0 2026-03-09T15:00:56.377 INFO:tasks.workunit.client.1.vm09.stdout:2/699: write df/d1f/d47/d84/db7/dc3/fcc [7885197,76763] 0 2026-03-09T15:00:56.380 INFO:tasks.workunit.client.1.vm09.stdout:2/700: fdatasync df/d1f/d47/d84/db7/dc3/da7/fa8 0 2026-03-09T15:00:56.383 INFO:tasks.workunit.client.1.vm09.stdout:2/701: read df/d20/f6a [3296077,64170] 0 2026-03-09T15:00:56.384 INFO:tasks.workunit.client.1.vm09.stdout:4/718: symlink db/d19/d23/d44/d7c/d7d/db7/lea 0 2026-03-09T15:00:56.387 INFO:tasks.workunit.client.1.vm09.stdout:4/719: write db/d19/d23/d71/f43 [9434101,51487] 0 2026-03-09T15:00:56.387 INFO:tasks.workunit.client.1.vm09.stdout:1/605: unlink d8/d10/d73/f54 0 2026-03-09T15:00:56.389 INFO:tasks.workunit.client.1.vm09.stdout:7/672: rename d3/d1d/f11 to d3/db/d25/d7d/fc4 0 2026-03-09T15:00:56.394 INFO:tasks.workunit.client.1.vm09.stdout:1/606: dwrite d8/d10/d73/f41 [0,4194304] 0 2026-03-09T15:00:56.410 
INFO:tasks.workunit.client.1.vm09.stdout:5/713: creat d2/d37/d3c/d36/d4c/d51/f102 x:0 0 0 2026-03-09T15:00:56.410 INFO:tasks.workunit.client.1.vm09.stdout:5/714: chown d2 46 1 2026-03-09T15:00:56.412 INFO:tasks.workunit.client.1.vm09.stdout:7/673: link d3/f7a d3/db/d25/d5c/d75/db4/fc5 0 2026-03-09T15:00:56.412 INFO:tasks.workunit.client.1.vm09.stdout:3/738: getdents d3/d100/d48/da0 0 2026-03-09T15:00:56.415 INFO:tasks.workunit.client.1.vm09.stdout:5/715: rename d2/d37/fa5 to d2/d37/d3c/d36/d45/dae/dc3/f103 0 2026-03-09T15:00:56.416 INFO:tasks.workunit.client.1.vm09.stdout:3/739: write d3/d3a/d2b/d31/fa1 [332875,84317] 0 2026-03-09T15:00:56.417 INFO:tasks.workunit.client.1.vm09.stdout:3/740: readlink d3/d100/l9c 0 2026-03-09T15:00:56.419 INFO:tasks.workunit.client.1.vm09.stdout:7/674: dwrite d3/d1d/f79 [0,4194304] 0 2026-03-09T15:00:56.424 INFO:tasks.workunit.client.1.vm09.stdout:5/716: write d2/f93 [1017145,67819] 0 2026-03-09T15:00:56.427 INFO:tasks.workunit.client.1.vm09.stdout:4/720: dread db/d19/d52/d76/f65 [0,4194304] 0 2026-03-09T15:00:56.428 INFO:tasks.workunit.client.1.vm09.stdout:7/675: mkdir d3/db/d25/d7d/dc6 0 2026-03-09T15:00:56.431 INFO:tasks.workunit.client.1.vm09.stdout:5/717: mknod d2/d37/d53/c104 0 2026-03-09T15:00:56.433 INFO:tasks.workunit.client.1.vm09.stdout:7/676: symlink d3/d1d/d94/lc7 0 2026-03-09T15:00:56.438 INFO:tasks.workunit.client.1.vm09.stdout:7/677: mknod d3/d1d/d2d/cc8 0 2026-03-09T15:00:56.438 INFO:tasks.workunit.client.1.vm09.stdout:4/721: getdents db/d19/d52/d76/d3b/de7 0 2026-03-09T15:00:56.442 INFO:tasks.workunit.client.1.vm09.stdout:7/678: dread d3/db/d46/f66 [0,4194304] 0 2026-03-09T15:00:56.445 INFO:tasks.workunit.client.1.vm09.stdout:4/722: rename db/d12/d16/d5b/fac to db/d12/d9e/feb 0 2026-03-09T15:00:56.446 INFO:tasks.workunit.client.1.vm09.stdout:4/723: dread - db/d19/d52/d76/d3b/f49 zero size 2026-03-09T15:00:56.446 INFO:tasks.workunit.client.1.vm09.stdout:7/679: rename d3/db/d15/d5f/d44 to d3/db/d46/dc9 0 
2026-03-09T15:00:56.446 INFO:tasks.workunit.client.1.vm09.stdout:4/724: write db/d19/d23/d44/d7c/d7d/d97/fbf [953186,46822] 0 2026-03-09T15:00:56.450 INFO:tasks.workunit.client.1.vm09.stdout:4/725: symlink db/d19/d52/lec 0 2026-03-09T15:00:56.450 INFO:tasks.workunit.client.1.vm09.stdout:4/726: chown db/d19/c1e 1402333510 1 2026-03-09T15:00:56.453 INFO:tasks.workunit.client.1.vm09.stdout:4/727: stat db/d12/d16/d5b/d78/d7f/de2 0 2026-03-09T15:00:56.462 INFO:tasks.workunit.client.1.vm09.stdout:9/622: dwrite d1/d58/f80 [0,4194304] 0 2026-03-09T15:00:56.472 INFO:tasks.workunit.client.1.vm09.stdout:9/623: dwrite d1/d7/d1e/d2b/f30 [0,4194304] 0 2026-03-09T15:00:56.475 INFO:tasks.workunit.client.1.vm09.stdout:9/624: dread - d1/d4f/d8f/fcb zero size 2026-03-09T15:00:56.476 INFO:tasks.workunit.client.1.vm09.stdout:9/625: fdatasync d1/d4f/f89 0 2026-03-09T15:00:56.484 INFO:tasks.workunit.client.1.vm09.stdout:9/626: rename d1/f24 to d1/d7/da6/fd1 0 2026-03-09T15:00:56.487 INFO:tasks.workunit.client.1.vm09.stdout:4/728: getdents db/d19/d52/d76/d3b 0 2026-03-09T15:00:56.491 INFO:tasks.workunit.client.1.vm09.stdout:9/627: dwrite d1/d7/da6/db3/fce [0,4194304] 0 2026-03-09T15:00:56.502 INFO:tasks.workunit.client.1.vm09.stdout:4/729: unlink db/d12/f5a 0 2026-03-09T15:00:56.517 INFO:tasks.workunit.client.1.vm09.stdout:4/730: dread db/d12/d16/f2a [0,4194304] 0 2026-03-09T15:00:56.519 INFO:tasks.workunit.client.1.vm09.stdout:4/731: write db/d12/d16/d5b/d78/d7f/fc9 [884414,42892] 0 2026-03-09T15:00:56.521 INFO:tasks.workunit.client.1.vm09.stdout:4/732: mkdir db/d19/d23/d71/d53/ded 0 2026-03-09T15:00:56.522 INFO:tasks.workunit.client.1.vm09.stdout:4/733: readlink db/d12/d16/ldb 0 2026-03-09T15:00:56.523 INFO:tasks.workunit.client.1.vm09.stdout:4/734: rmdir db/d19/d23/d71/d53/dcf 39 2026-03-09T15:00:56.524 INFO:tasks.workunit.client.1.vm09.stdout:8/700: write df/f23 [842051,77004] 0 2026-03-09T15:00:56.526 INFO:tasks.workunit.client.1.vm09.stdout:8/701: mknod df/d2d/d42/d79/ccb 0 
2026-03-09T15:00:56.530 INFO:tasks.workunit.client.1.vm09.stdout:8/702: mkdir df/d24/d99/db1/dcc 0
2026-03-09T15:00:56.530 INFO:tasks.workunit.client.1.vm09.stdout:8/703: fdatasync df/d24/d99/db6/f59 0
2026-03-09T15:00:56.530 INFO:tasks.workunit.client.1.vm09.stdout:4/735: link db/d19/d52/d76/cd6 db/d19/d23/d44/d7c/d7d/d97/da3/cee 0
2026-03-09T15:00:56.536 INFO:tasks.workunit.client.1.vm09.stdout:8/704: dwrite df/d24/d99/db6/fad [0,4194304] 0
2026-03-09T15:00:56.537 INFO:tasks.workunit.client.1.vm09.stdout:0/763: truncate da/dc/d1c/d46/d63/f7f 3607783 0
2026-03-09T15:00:56.540 INFO:tasks.workunit.client.1.vm09.stdout:4/736: dread db/d19/d23/d44/d7c/d7d/d97/da3/fc8 [0,4194304] 0
2026-03-09T15:00:56.541 INFO:tasks.workunit.client.1.vm09.stdout:0/764: symlink da/dc/d1c/d3c/d78/d7a/dbb/lfe 0
2026-03-09T15:00:56.544 INFO:tasks.workunit.client.1.vm09.stdout:4/737: readlink db/d19/d23/d44/d7c/lb4 0
2026-03-09T15:00:56.544 INFO:tasks.workunit.client.1.vm09.stdout:8/705: dwrite df/d2d/d42/d70/fc1 [0,4194304] 0
2026-03-09T15:00:56.549 INFO:tasks.workunit.client.1.vm09.stdout:4/738: dread - db/d19/d23/d44/d84/fd5 zero size
2026-03-09T15:00:56.551 INFO:tasks.workunit.client.1.vm09.stdout:4/739: truncate db/d19/d23/d71/f43 10083080 0
2026-03-09T15:00:56.554 INFO:tasks.workunit.client.1.vm09.stdout:8/706: dwrite df/d24/d99/db6/f59 [0,4194304] 0
2026-03-09T15:00:56.558 INFO:tasks.workunit.client.1.vm09.stdout:1/607: rmdir d8/d10/d24/d48/d9b 39
2026-03-09T15:00:56.558 INFO:tasks.workunit.client.1.vm09.stdout:1/608: write d8/f42 [3994176,38817] 0
2026-03-09T15:00:56.561 INFO:tasks.workunit.client.1.vm09.stdout:4/740: dread db/d19/d23/d71/d53/fa9 [0,4194304] 0
2026-03-09T15:00:56.562 INFO:tasks.workunit.client.1.vm09.stdout:8/707: dwrite df/d24/d99/db6/fad [0,4194304] 0
2026-03-09T15:00:56.567 INFO:tasks.workunit.client.1.vm09.stdout:1/609: unlink d8/d50/d39/d95/d72/d64/f82 0
2026-03-09T15:00:56.569 INFO:tasks.workunit.client.1.vm09.stdout:6/651: dwrite d6/d20/d38/d56/d65/d68/d86/f92 [0,4194304] 0
2026-03-09T15:00:56.570 INFO:tasks.workunit.client.1.vm09.stdout:6/652: chown d6/db/d10/f2c 3 1
2026-03-09T15:00:56.570 INFO:tasks.workunit.client.1.vm09.stdout:0/765: rename da/d30/cb0 to da/cff 0
2026-03-09T15:00:56.585 INFO:tasks.workunit.client.1.vm09.stdout:2/702: truncate df/f13 2873243 0
2026-03-09T15:00:56.585 INFO:tasks.workunit.client.1.vm09.stdout:1/610: creat d8/d10/fbf x:0 0 0
2026-03-09T15:00:56.586 INFO:tasks.workunit.client.1.vm09.stdout:1/611: write d8/d10/d24/d45/d5f/d8d/fa0 [4415948,115205] 0
2026-03-09T15:00:56.589 INFO:tasks.workunit.client.1.vm09.stdout:4/741: mknod db/d19/d23/d71/ddf/cef 0
2026-03-09T15:00:56.589 INFO:tasks.workunit.client.1.vm09.stdout:4/742: chown db/d19/d23/d71/d53/ce8 36411 1
2026-03-09T15:00:56.593 INFO:tasks.workunit.client.1.vm09.stdout:5/718: write d2/d37/f6d [943390,54235] 0
2026-03-09T15:00:56.594 INFO:tasks.workunit.client.1.vm09.stdout:3/741: dwrite d3/d9a/de3/dc4/ff0 [0,4194304] 0
2026-03-09T15:00:56.595 INFO:tasks.workunit.client.1.vm09.stdout:3/742: write d3/d60/fde [865463,61845] 0
2026-03-09T15:00:56.601 INFO:tasks.workunit.client.1.vm09.stdout:4/743: dwrite db/d12/d16/f36 [0,4194304] 0
2026-03-09T15:00:56.616 INFO:tasks.workunit.client.1.vm09.stdout:5/719: dwrite d2/d37/d3c/d36/d45/dae/dd3/fe4 [0,4194304] 0
2026-03-09T15:00:56.619 INFO:tasks.workunit.client.1.vm09.stdout:5/720: write d2/da9/fb9 [775964,85220] 0
2026-03-09T15:00:56.623 INFO:tasks.workunit.client.1.vm09.stdout:5/721: readlink d2/d37/d3c/d36/d4c/d51/l54 0
2026-03-09T15:00:56.623 INFO:tasks.workunit.client.1.vm09.stdout:7/680: truncate d3/f32 3942035 0
2026-03-09T15:00:56.624 INFO:tasks.workunit.client.1.vm09.stdout:5/722: fdatasync d2/d37/d3c/d36/d4c/d89/ff1 0
2026-03-09T15:00:56.634 INFO:tasks.workunit.client.1.vm09.stdout:4/744: mkdir db/d19/d23/d44/d7c/d7d/d97/df0 0
2026-03-09T15:00:56.635 INFO:tasks.workunit.client.1.vm09.stdout:6/653: rmdir d6/d20/d38/d56/dd1 0
2026-03-09T15:00:56.635 INFO:tasks.workunit.client.1.vm09.stdout:4/745: fdatasync db/d12/d16/f63 0
2026-03-09T15:00:56.637 INFO:tasks.workunit.client.1.vm09.stdout:5/723: mknod d2/db1/c105 0
2026-03-09T15:00:56.637 INFO:tasks.workunit.client.1.vm09.stdout:6/654: dread - d6/d20/d38/d56/f8c zero size
2026-03-09T15:00:56.641 INFO:tasks.workunit.client.1.vm09.stdout:6/655: write d6/d20/d24/f60 [1334593,107624] 0
2026-03-09T15:00:56.643 INFO:tasks.workunit.client.1.vm09.stdout:1/612: unlink d8/d10/d24/d48/d9b/d78/d8b/cb3 0
2026-03-09T15:00:56.646 INFO:tasks.workunit.client.1.vm09.stdout:2/703: creat df/d20/fde x:0 0 0
2026-03-09T15:00:56.647 INFO:tasks.workunit.client.1.vm09.stdout:9/628: dread d1/d4f/f89 [0,4194304] 0
2026-03-09T15:00:56.649 INFO:tasks.workunit.client.1.vm09.stdout:7/681: dread d3/db/d46/fa6 [0,4194304] 0
2026-03-09T15:00:56.650 INFO:tasks.workunit.client.1.vm09.stdout:3/743: mknod d3/d3a/c101 0
2026-03-09T15:00:56.652 INFO:tasks.workunit.client.1.vm09.stdout:4/746: unlink db/d12/d16/f36 0
2026-03-09T15:00:56.652 INFO:tasks.workunit.client.1.vm09.stdout:4/747: readlink db/d19/d81/l41 0
2026-03-09T15:00:56.653 INFO:tasks.workunit.client.1.vm09.stdout:5/724: dread - d2/d37/d3c/d36/f98 zero size
2026-03-09T15:00:56.654 INFO:tasks.workunit.client.1.vm09.stdout:6/656: rmdir d6/db/d8b 39
2026-03-09T15:00:56.655 INFO:tasks.workunit.client.1.vm09.stdout:2/704: mkdir df/d1f/d6d/d8f/d5f/ddf 0
2026-03-09T15:00:56.657 INFO:tasks.workunit.client.1.vm09.stdout:3/744: rmdir d3/d3a/d2b/d7b/db6/dd9 39
2026-03-09T15:00:56.658 INFO:tasks.workunit.client.1.vm09.stdout:6/657: dread - d6/df/d23/d89/f8e zero size
2026-03-09T15:00:56.659 INFO:tasks.workunit.client.1.vm09.stdout:2/705: creat df/d1f/d47/d84/db7/dc3/fe0 x:0 0 0
2026-03-09T15:00:56.662 INFO:tasks.workunit.client.1.vm09.stdout:6/658: write f0 [3351052,78265] 0
2026-03-09T15:00:56.676 INFO:tasks.workunit.client.1.vm09.stdout:7/682: creat d3/db/d46/db2/fca x:0 0 0
2026-03-09T15:00:56.676 INFO:tasks.workunit.client.1.vm09.stdout:7/683: fsync d3/f8 0
2026-03-09T15:00:56.676 INFO:tasks.workunit.client.1.vm09.stdout:7/684: mkdir d3/db/d25/d5c/d75/dcb 0
2026-03-09T15:00:56.676 INFO:tasks.workunit.client.1.vm09.stdout:7/685: chown d3/d61/c9d 580470799 1
2026-03-09T15:00:56.677 INFO:tasks.workunit.client.1.vm09.stdout:6/659: mknod d6/d20/d24/cd7 0
2026-03-09T15:00:56.681 INFO:tasks.workunit.client.1.vm09.stdout:5/725: creat d2/d37/d3c/d36/d45/f106 x:0 0 0
2026-03-09T15:00:56.684 INFO:tasks.workunit.client.1.vm09.stdout:3/745: link d3/d100/l8e d3/l102 0
2026-03-09T15:00:56.685 INFO:tasks.workunit.client.1.vm09.stdout:4/748: dread db/d12/f6b [0,4194304] 0
2026-03-09T15:00:56.685 INFO:tasks.workunit.client.1.vm09.stdout:7/686: creat d3/db/d15/fcc x:0 0 0
2026-03-09T15:00:56.685 INFO:tasks.workunit.client.1.vm09.stdout:6/660: mkdir d6/d20/d2a/dc4/dba/dd8 0
2026-03-09T15:00:56.685 INFO:tasks.workunit.client.1.vm09.stdout:5/726: mknod d2/d37/d67/df6/c107 0
2026-03-09T15:00:56.685 INFO:tasks.workunit.client.1.vm09.stdout:4/749: write db/d19/d81/d5d/f8a [5187409,107516] 0
2026-03-09T15:00:56.685 INFO:tasks.workunit.client.1.vm09.stdout:5/727: write d2/d37/f43 [991081,51906] 0
2026-03-09T15:00:56.685 INFO:tasks.workunit.client.1.vm09.stdout:7/687: fsync d3/db/d46/fa6 0
2026-03-09T15:00:56.685 INFO:tasks.workunit.client.1.vm09.stdout:3/746: symlink d3/d100/d48/l103 0
2026-03-09T15:00:56.687 INFO:tasks.workunit.client.1.vm09.stdout:7/688: chown d3/db/d25/d5c/d75 54108166 1
2026-03-09T15:00:56.688 INFO:tasks.workunit.client.1.vm09.stdout:6/661: mknod d6/d20/d24/da5/cd9 0
2026-03-09T15:00:56.694 INFO:tasks.workunit.client.1.vm09.stdout:5/728: chown d2/d37/d3c/d36/d4c/l63 1 1
2026-03-09T15:00:56.694 INFO:tasks.workunit.client.1.vm09.stdout:4/750: mkdir db/d19/d23/d44/d7c/d7d/d97/df0/df1 0
2026-03-09T15:00:56.695 INFO:tasks.workunit.client.1.vm09.stdout:3/747: dwrite d3/fe6 [0,4194304] 0
2026-03-09T15:00:56.697 INFO:tasks.workunit.client.1.vm09.stdout:5/729: write d2/d37/d3c/d36/d4c/d89/ff1 [2791584,9020] 0
2026-03-09T15:00:56.698 INFO:tasks.workunit.client.1.vm09.stdout:5/730: fdatasync d2/d37/d67/d95/f99 0
2026-03-09T15:00:56.706 INFO:tasks.workunit.client.1.vm09.stdout:7/689: dwrite d3/db/d15/d5f/d6e/fc3 [0,4194304] 0
2026-03-09T15:00:56.708 INFO:tasks.workunit.client.1.vm09.stdout:3/748: rename d3/l23 to d3/d3a/d2b/d31/d9e/l104 0
2026-03-09T15:00:56.708 INFO:tasks.workunit.client.1.vm09.stdout:3/749: read - d3/d3a/d2b/d31/d4a/fa8 zero size
2026-03-09T15:00:56.711 INFO:tasks.workunit.client.1.vm09.stdout:7/690: rmdir d3/d3d/d9b 39
2026-03-09T15:00:56.711 INFO:tasks.workunit.client.1.vm09.stdout:3/750: write d3/d3a/d54/fbd [750876,92161] 0
2026-03-09T15:00:56.712 INFO:tasks.workunit.client.1.vm09.stdout:5/731: dwrite d2/da9/fb9 [0,4194304] 0
2026-03-09T15:00:56.717 INFO:tasks.workunit.client.1.vm09.stdout:3/751: mknod d3/d100/d48/da0/c105 0
2026-03-09T15:00:56.719 INFO:tasks.workunit.client.1.vm09.stdout:7/691: creat d3/d28/fcd x:0 0 0
2026-03-09T15:00:56.727 INFO:tasks.workunit.client.1.vm09.stdout:7/692: symlink d3/db/d25/d5c/d75/dcb/lce 0
2026-03-09T15:00:56.728 INFO:tasks.workunit.client.1.vm09.stdout:5/732: dwrite d2/d37/d3c/d36/d45/dae/dc3/f57 [4194304,4194304] 0
2026-03-09T15:00:56.729 INFO:tasks.workunit.client.1.vm09.stdout:5/733: stat d2/d37/d67/f9e 0
2026-03-09T15:00:56.729 INFO:tasks.workunit.client.1.vm09.stdout:3/752: dwrite d3/d3a/d2b/ff7 [0,4194304] 0
2026-03-09T15:00:56.736 INFO:tasks.workunit.client.1.vm09.stdout:7/693: unlink d3/db/d25/l2c 0
2026-03-09T15:00:56.743 INFO:tasks.workunit.client.1.vm09.stdout:7/694: fdatasync d3/fd 0
2026-03-09T15:00:56.743 INFO:tasks.workunit.client.1.vm09.stdout:5/734: creat d2/d37/d53/dc4/f108 x:0 0 0
2026-03-09T15:00:56.743 INFO:tasks.workunit.client.1.vm09.stdout:3/753: read d3/d60/fde [484999,77130] 0
2026-03-09T15:00:56.745 INFO:tasks.workunit.client.1.vm09.stdout:5/735: truncate d2/d37/d3c/d36/d4c/ff4 30546 0
2026-03-09T15:00:56.747 INFO:tasks.workunit.client.1.vm09.stdout:3/754: creat d3/d5b/d79/d9d/f106 x:0 0 0
2026-03-09T15:00:56.750 INFO:tasks.workunit.client.1.vm09.stdout:7/695: dwrite d3/d1d/f33 [4194304,4194304] 0
2026-03-09T15:00:56.750 INFO:tasks.workunit.client.1.vm09.stdout:3/755: chown d3/d9a/lce 359353511 1
2026-03-09T15:00:56.751 INFO:tasks.workunit.client.1.vm09.stdout:5/736: link d2/f29 d2/d37/d3c/dbf/f109 0
2026-03-09T15:00:56.751 INFO:tasks.workunit.client.1.vm09.stdout:5/737: chown d2/d37/d67/df6/c107 12 1
2026-03-09T15:00:56.753 INFO:tasks.workunit.client.1.vm09.stdout:3/756: write d3/d5b/d79/d9d/fb3 [4155278,36275] 0
2026-03-09T15:00:56.756 INFO:tasks.workunit.client.1.vm09.stdout:3/757: creat d3/d100/d48/dc5/f107 x:0 0 0
2026-03-09T15:00:56.790 INFO:tasks.workunit.client.1.vm09.stdout:7/696: getdents d3/d1d 0
2026-03-09T15:00:56.790 INFO:tasks.workunit.client.1.vm09.stdout:7/697: creat d3/db/d25/d5c/d75/dcb/fcf x:0 0 0
2026-03-09T15:00:56.790 INFO:tasks.workunit.client.1.vm09.stdout:7/698: dwrite d3/db/d15/d5f/d6e/fc3 [0,4194304] 0
2026-03-09T15:00:56.790 INFO:tasks.workunit.client.1.vm09.stdout:7/699: mknod d3/d3d/d9b/da9/cd0 0
2026-03-09T15:00:56.790 INFO:tasks.workunit.client.1.vm09.stdout:7/700: truncate d3/db/d25/d5c/f7c 783970 0
2026-03-09T15:00:56.790 INFO:tasks.workunit.client.1.vm09.stdout:7/701: creat d3/db/d15/d5f/d6e/d83/fd1 x:0 0 0
2026-03-09T15:00:56.790 INFO:tasks.workunit.client.1.vm09.stdout:7/702: creat d3/db/d25/db7/fd2 x:0 0 0
2026-03-09T15:00:56.790 INFO:tasks.workunit.client.1.vm09.stdout:7/703: creat d3/db/d25/d5c/fd3 x:0 0 0
2026-03-09T15:00:56.790 INFO:tasks.workunit.client.1.vm09.stdout:7/704: mkdir d3/db/d25/db7/dd4 0
2026-03-09T15:00:56.790 INFO:tasks.workunit.client.1.vm09.stdout:7/705: write d3/db/d25/db7/fd2 [1009031,18262] 0
2026-03-09T15:00:56.790 INFO:tasks.workunit.client.1.vm09.stdout:7/706: mkdir d3/db/d25/d7d/dd5 0
2026-03-09T15:00:56.922 INFO:tasks.workunit.client.1.vm09.stdout:1/613: sync
2026-03-09T15:00:56.922 INFO:tasks.workunit.client.1.vm09.stdout:4/751: sync
2026-03-09T15:00:56.922 INFO:tasks.workunit.client.1.vm09.stdout:9/629: sync
2026-03-09T15:00:56.924 INFO:tasks.workunit.client.1.vm09.stdout:4/752: rmdir db/d19/d23/d44/d7c/d7d/d97/da8 39
2026-03-09T15:00:56.927 INFO:tasks.workunit.client.1.vm09.stdout:1/614: symlink d8/d50/d39/d95/d72/lc0 0
2026-03-09T15:00:56.927 INFO:tasks.workunit.client.1.vm09.stdout:4/753: symlink db/d12/da1/lf2 0
2026-03-09T15:00:56.927 INFO:tasks.workunit.client.1.vm09.stdout:9/630: getdents d1/d7/d1e/d2b/d8d/dc8 0
2026-03-09T15:00:56.927 INFO:tasks.workunit.client.1.vm09.stdout:1/615: readlink d8/d50/d5b/l9d 0
2026-03-09T15:00:56.929 INFO:tasks.workunit.client.1.vm09.stdout:1/616: unlink d8/d10/f44 0
2026-03-09T15:00:56.929 INFO:tasks.workunit.client.1.vm09.stdout:4/754: symlink db/d19/d23/d44/d7c/d7d/lf3 0
2026-03-09T15:00:56.930 INFO:tasks.workunit.client.1.vm09.stdout:1/617: dread - d8/d90/fb6 zero size
2026-03-09T15:00:56.931 INFO:tasks.workunit.client.1.vm09.stdout:9/631: creat d1/d7/d1e/fd2 x:0 0 0
2026-03-09T15:00:56.932 INFO:tasks.workunit.client.1.vm09.stdout:9/632: truncate d1/d7/d9f/fa4 325199 0
2026-03-09T15:00:56.932 INFO:tasks.workunit.client.1.vm09.stdout:4/755: rename db/c9f to db/d12/d16/d5b/d78/cf4 0
2026-03-09T15:00:56.941 INFO:tasks.workunit.client.1.vm09.stdout:1/618: link d8/f57 d8/d90/fc1 0
2026-03-09T15:00:56.944 INFO:tasks.workunit.client.1.vm09.stdout:4/756: symlink db/d19/d23/d71/d53/lf5 0
2026-03-09T15:00:56.947 INFO:tasks.workunit.client.1.vm09.stdout:4/757: creat db/d19/d23/d71/d5f/ff6 x:0 0 0
2026-03-09T15:00:56.952 INFO:tasks.workunit.client.1.vm09.stdout:1/619: creat d8/d10/d24/d48/fc2 x:0 0 0
2026-03-09T15:00:56.952 INFO:tasks.workunit.client.1.vm09.stdout:9/633: dread d1/d4f/fa3 [0,4194304] 0
2026-03-09T15:00:56.953 INFO:tasks.workunit.client.1.vm09.stdout:9/634: chown d1/d4f/d52 980 1
2026-03-09T15:00:56.954 INFO:tasks.workunit.client.1.vm09.stdout:4/758: creat db/d12/d16/d5b/d78/de3/ff7 x:0 0 0
2026-03-09T15:00:56.955 INFO:tasks.workunit.client.1.vm09.stdout:1/620: creat d8/d50/d39/d95/d56/fc3 x:0 0 0
2026-03-09T15:00:56.957 INFO:tasks.workunit.client.1.vm09.stdout:9/635: symlink d1/d7/d1e/d2b/d40/ld3 0
2026-03-09T15:00:56.991 INFO:tasks.workunit.client.1.vm09.stdout:1/621: sync
2026-03-09T15:00:56.992 INFO:tasks.workunit.client.1.vm09.stdout:1/622: write d8/d10/d24/d48/fc2 [27818,87573] 0
2026-03-09T15:00:56.999 INFO:tasks.workunit.client.1.vm09.stdout:6/662: dread d6/df/d23/f78 [0,4194304] 0
2026-03-09T15:00:57.009 INFO:tasks.workunit.client.1.vm09.stdout:6/663: dread d6/d20/f59 [0,4194304] 0
2026-03-09T15:00:57.010 INFO:tasks.workunit.client.1.vm09.stdout:1/623: dread d8/d10/d24/f2a [0,4194304] 0
2026-03-09T15:00:57.013 INFO:tasks.workunit.client.1.vm09.stdout:1/624: unlink d8/d90/fb7 0
2026-03-09T15:00:57.013 INFO:tasks.workunit.client.1.vm09.stdout:1/625: truncate d8/d10/f12 2908488 0
2026-03-09T15:00:57.014 INFO:tasks.workunit.client.1.vm09.stdout:6/664: creat d6/d20/d24/d7e/d88/fda x:0 0 0
2026-03-09T15:00:57.014 INFO:tasks.workunit.client.1.vm09.stdout:6/665: readlink d6/db/lc6 0
2026-03-09T15:00:57.036 INFO:tasks.workunit.client.1.vm09.stdout:6/666: dread d6/d20/d2a/dc4/fb5 [0,4194304] 0
2026-03-09T15:00:57.042 INFO:tasks.workunit.client.1.vm09.stdout:6/667: dread d6/d20/f27 [0,4194304] 0
2026-03-09T15:00:57.078 INFO:tasks.workunit.client.1.vm09.stdout:8/708: write df/d24/d99/db6/d60/db7/fbc [1618519,118802] 0
2026-03-09T15:00:57.080 INFO:tasks.workunit.client.1.vm09.stdout:8/709: creat df/d5b/d65/d1d/fcd x:0 0 0
2026-03-09T15:00:57.081 INFO:tasks.workunit.client.1.vm09.stdout:8/710: readlink df/d24/d99/db6/l5a 0
2026-03-09T15:00:57.086 INFO:tasks.workunit.client.1.vm09.stdout:8/711: dwrite df/d24/f83 [0,4194304] 0
2026-03-09T15:00:57.092 INFO:tasks.workunit.client.1.vm09.stdout:8/712: symlink df/d24/d99/lce 0
2026-03-09T15:00:57.094 INFO:tasks.workunit.client.1.vm09.stdout:8/713: creat df/d2d/d42/d79/fcf x:0 0 0
2026-03-09T15:00:57.106 INFO:tasks.workunit.client.1.vm09.stdout:0/766: dwrite da/d30/d36/f99 [0,4194304] 0
2026-03-09T15:00:57.125 INFO:tasks.workunit.client.1.vm09.stdout:8/714: sync
2026-03-09T15:00:57.126 INFO:tasks.workunit.client.1.vm09.stdout:8/715: write df/f1a [7800163,19065] 0
2026-03-09T15:00:57.127 INFO:tasks.workunit.client.1.vm09.stdout:8/716: write df/d2d/f2f [8351010,32847] 0
2026-03-09T15:00:57.130 INFO:tasks.workunit.client.1.vm09.stdout:2/706: dwrite df/d20/d29/fc0 [4194304,4194304] 0
2026-03-09T15:00:57.138 INFO:tasks.workunit.client.1.vm09.stdout:6/668: getdents d6/d20/d2a/dc4/dba 0
2026-03-09T15:00:57.144 INFO:tasks.workunit.client.1.vm09.stdout:8/717: write df/d5b/f82 [3934642,123001] 0
2026-03-09T15:00:57.152 INFO:tasks.workunit.client.1.vm09.stdout:6/669: rename d6/df/d23/d5b to d6/d20/d38/d56/d65/d68/d86/dc0/ddb 0
2026-03-09T15:00:57.153 INFO:tasks.workunit.client.1.vm09.stdout:2/707: link df/d1f/d47/f60 df/d1f/d47/d84/fe1 0
2026-03-09T15:00:57.157 INFO:tasks.workunit.client.1.vm09.stdout:8/718: link df/f23 df/d5b/fd0 0
2026-03-09T15:00:57.165 INFO:tasks.workunit.client.1.vm09.stdout:8/719: creat df/d5b/d65/fd1 x:0 0 0
2026-03-09T15:00:57.166 INFO:tasks.workunit.client.1.vm09.stdout:8/720: creat df/d2d/d42/fd2 x:0 0 0
2026-03-09T15:00:57.167 INFO:tasks.workunit.client.1.vm09.stdout:8/721: chown df/d2d/d46/ca2 115963193 1
2026-03-09T15:00:57.168 INFO:tasks.workunit.client.1.vm09.stdout:3/758: rmdir d3/d3a/d2b 39
2026-03-09T15:00:57.168 INFO:tasks.workunit.client.1.vm09.stdout:8/722: chown df/d24/d99/db6/d60/fc6 1269591 1
2026-03-09T15:00:57.168 INFO:tasks.workunit.client.1.vm09.stdout:8/723: fsync f8 0
2026-03-09T15:00:57.170 INFO:tasks.workunit.client.1.vm09.stdout:8/724: creat df/d2d/d42/fd3 x:0 0 0
2026-03-09T15:00:57.173 INFO:tasks.workunit.client.1.vm09.stdout:8/725: rename df/d2d/d42/d70/fc1 to df/d2d/d90/fd4 0
2026-03-09T15:00:57.174 INFO:tasks.workunit.client.1.vm09.stdout:8/726: chown df/d5b/c7d 68 1
2026-03-09T15:00:57.174 INFO:tasks.workunit.client.1.vm09.stdout:5/738: truncate d2/d37/d3c/d36/d45/fa1 2909946 0
2026-03-09T15:00:57.175 INFO:tasks.workunit.client.1.vm09.stdout:8/727: chown df/d24/d99/db6/c6c 464 1
2026-03-09T15:00:57.176 INFO:tasks.workunit.client.1.vm09.stdout:5/739: chown d2/d37/d3c/d36/d45/dae/dc3/f103 6 1
2026-03-09T15:00:57.181 INFO:tasks.workunit.client.1.vm09.stdout:8/728: dwrite df/d24/d99/db6/f59 [4194304,4194304] 0
2026-03-09T15:00:57.182 INFO:tasks.workunit.client.1.vm09.stdout:5/740: sync
2026-03-09T15:00:57.183 INFO:tasks.workunit.client.1.vm09.stdout:7/707: dwrite d3/f5 [0,4194304] 0
2026-03-09T15:00:57.184 INFO:tasks.workunit.client.1.vm09.stdout:8/729: dread - df/d2d/d42/f96 zero size
2026-03-09T15:00:57.184 INFO:tasks.workunit.client.1.vm09.stdout:5/741: chown d2/d37/d3c/f4b 1325 1
2026-03-09T15:00:57.195 INFO:tasks.workunit.client.1.vm09.stdout:9/636: truncate d1/d58/f75 906848 0
2026-03-09T15:00:57.195 INFO:tasks.workunit.client.1.vm09.stdout:4/759: dwrite db/d19/d52/d76/f65 [0,4194304] 0
2026-03-09T15:00:57.198 INFO:tasks.workunit.client.1.vm09.stdout:9/637: truncate d1/d4f/d52/f94 831302 0
2026-03-09T15:00:57.202 INFO:tasks.workunit.client.1.vm09.stdout:7/708: mknod d3/d1d/d65/cd6 0
2026-03-09T15:00:57.202 INFO:tasks.workunit.client.1.vm09.stdout:7/709: dread - d3/d1d/d65/fc0 zero size
2026-03-09T15:00:57.202 INFO:tasks.workunit.client.1.vm09.stdout:5/742: mkdir d2/d37/d3c/d36/d45/dae/dc3/d10a 0
2026-03-09T15:00:57.202 INFO:tasks.workunit.client.1.vm09.stdout:5/743: readlink d2/d37/d3c/d36/d4c/d51/l54 0
2026-03-09T15:00:57.204 INFO:tasks.workunit.client.1.vm09.stdout:1/626: dwrite d8/d10/f13 [0,4194304] 0
2026-03-09T15:00:57.211 INFO:tasks.workunit.client.1.vm09.stdout:5/744: chown d2/d37/d3c/d36/d4c/d51/d96/l13 16210 1
2026-03-09T15:00:57.216 INFO:tasks.workunit.client.1.vm09.stdout:5/745: dwrite d2/d37/d53/d86/dad/ff2 [0,4194304] 0
2026-03-09T15:00:57.217 INFO:tasks.workunit.client.1.vm09.stdout:4/760: creat db/d12/d16/d5b/d78/d7f/de2/ff8 x:0 0 0
2026-03-09T15:00:57.219 INFO:tasks.workunit.client.1.vm09.stdout:9/638: mknod d1/d7/d1e/cd4 0
2026-03-09T15:00:57.223 INFO:tasks.workunit.client.1.vm09.stdout:7/710: creat d3/db/d25/d7d/dd5/fd7 x:0 0 0
2026-03-09T15:00:57.223 INFO:tasks.workunit.client.1.vm09.stdout:8/730: link df/d24/d99/db6/lb0 df/d24/d56/ld5 0
2026-03-09T15:00:57.225 INFO:tasks.workunit.client.1.vm09.stdout:5/746: symlink d2/d37/d67/d95/db8/df5/l10b 0
2026-03-09T15:00:57.230 INFO:tasks.workunit.client.1.vm09.stdout:8/731: truncate df/d5b/f67 5478699 0
2026-03-09T15:00:57.231 INFO:tasks.workunit.client.1.vm09.stdout:9/639: mkdir d1/d7/d1e/d2b/d8d/dd5 0
2026-03-09T15:00:57.237 INFO:tasks.workunit.client.1.vm09.stdout:0/767: dwrite da/dc/d1c/d3c/d44/f89 [0,4194304] 0
2026-03-09T15:00:57.240 INFO:tasks.workunit.client.1.vm09.stdout:9/640: rmdir d1/d7/d9f/daa 39
2026-03-09T15:00:57.244 INFO:tasks.workunit.client.1.vm09.stdout:6/670: write d6/d20/d2a/d3d/d46/f84 [463631,83313] 0
2026-03-09T15:00:57.244 INFO:tasks.workunit.client.1.vm09.stdout:8/732: rename df/d5b/d65/d1d/f6e to df/d5c/fd6 0
2026-03-09T15:00:57.244 INFO:tasks.workunit.client.1.vm09.stdout:9/641: dread - d1/d7/d1e/d2b/d2e/d56/d6d/fbd zero size
2026-03-09T15:00:57.248 INFO:tasks.workunit.client.1.vm09.stdout:9/642: write d1/d7/fbb [181723,130775] 0
2026-03-09T15:00:57.249 INFO:tasks.workunit.client.1.vm09.stdout:2/708: write df/d1f/d47/d5d/fd8 [4989210,14375] 0
2026-03-09T15:00:57.249 INFO:tasks.workunit.client.1.vm09.stdout:0/768: mknod da/dc/d22/c100 0
2026-03-09T15:00:57.249 INFO:tasks.workunit.client.1.vm09.stdout:8/733: read df/d5b/d65/d1d/f41 [2847554,41107] 0
2026-03-09T15:00:57.254 INFO:tasks.workunit.client.1.vm09.stdout:3/759: dwrite d3/d3a/d2b/f66 [0,4194304] 0
2026-03-09T15:00:57.260 INFO:tasks.workunit.client.1.vm09.stdout:6/671: creat d6/d20/d2a/d3b/fdc x:0 0 0
2026-03-09T15:00:57.260 INFO:tasks.workunit.client.1.vm09.stdout:9/643: rmdir d1/d7/d1e/d2b/d2e/d56 39
2026-03-09T15:00:57.260 INFO:tasks.workunit.client.1.vm09.stdout:8/734: symlink df/d5b/d65/d1d/ld7 0
2026-03-09T15:00:57.260 INFO:tasks.workunit.client.1.vm09.stdout:2/709: readlink df/d1f/l57 0
2026-03-09T15:00:57.261 INFO:tasks.workunit.client.1.vm09.stdout:2/710: fsync df/d1f/d47/d5d/fc8 0
2026-03-09T15:00:57.262 INFO:tasks.workunit.client.1.vm09.stdout:5/747: getdents d2/db1/db2 0
2026-03-09T15:00:57.264 INFO:tasks.workunit.client.1.vm09.stdout:3/760: link d3/d3a/d2b/d31/f3f d3/d100/d6a/dd5/f108 0
2026-03-09T15:00:57.264 INFO:tasks.workunit.client.1.vm09.stdout:6/672: mkdir d6/db/d10/d4f/ddd 0
2026-03-09T15:00:57.264 INFO:tasks.workunit.client.1.vm09.stdout:3/761: stat d3/d5b/fc0 0
2026-03-09T15:00:57.265 INFO:tasks.workunit.client.1.vm09.stdout:0/769: creat da/f101 x:0 0 0
2026-03-09T15:00:57.270 INFO:tasks.workunit.client.1.vm09.stdout:1/627: dread d8/d10/d24/d48/d9b/d78/f7c [0,4194304] 0
2026-03-09T15:00:57.273 INFO:tasks.workunit.client.1.vm09.stdout:5/748: creat d2/d37/d53/d86/d88/dc9/f10c x:0 0 0
2026-03-09T15:00:57.279 INFO:tasks.workunit.client.1.vm09.stdout:9/644: dwrite d1/d4f/d52/f94 [0,4194304] 0
2026-03-09T15:00:57.280 INFO:tasks.workunit.client.1.vm09.stdout:2/711: symlink df/d1f/d47/d84/db7/dc3/dd9/le2 0
2026-03-09T15:00:57.280 INFO:tasks.workunit.client.1.vm09.stdout:8/735: truncate f8 1463712 0
2026-03-09T15:00:57.280 INFO:tasks.workunit.client.1.vm09.stdout:9/645: chown d1/d7/d9f 3 1
2026-03-09T15:00:57.283 INFO:tasks.workunit.client.1.vm09.stdout:3/762: dwrite d3/f29 [0,4194304] 0
2026-03-09T15:00:57.286 INFO:tasks.workunit.client.1.vm09.stdout:0/770: rename da/d57/lc9 to da/dc/d61/l102 0
2026-03-09T15:00:57.289 INFO:tasks.workunit.client.1.vm09.stdout:5/749: dwrite d2/d37/d67/df6/fb6 [0,4194304] 0
2026-03-09T15:00:57.290 INFO:tasks.workunit.client.1.vm09.stdout:5/750: readlink d2/d37/d3c/d36/d45/dae/dd3/l101 0
2026-03-09T15:00:57.294 INFO:tasks.workunit.client.1.vm09.stdout:9/646: rename d1/d7/d1e/cd4 to d1/d4f/d52/cd6 0
2026-03-09T15:00:57.294 INFO:tasks.workunit.client.1.vm09.stdout:9/647: dread - d1/fbf zero size
2026-03-09T15:00:57.306 INFO:tasks.workunit.client.1.vm09.stdout:5/751: creat d2/d37/d67/d95/f10d x:0 0 0
2026-03-09T15:00:57.307 INFO:tasks.workunit.client.1.vm09.stdout:2/712: fdatasync df/d20/f52 0
2026-03-09T15:00:57.307 INFO:tasks.workunit.client.1.vm09.stdout:8/736: symlink df/d24/d99/db1/dcc/ld8 0
2026-03-09T15:00:57.307 INFO:tasks.workunit.client.1.vm09.stdout:0/771: symlink da/dc/d1c/d46/l103 0
2026-03-09T15:00:57.308 INFO:tasks.workunit.client.1.vm09.stdout:3/763: write d3/d100/d6a/dd5/f108 [1667578,31182] 0
2026-03-09T15:00:57.309 INFO:tasks.workunit.client.1.vm09.stdout:2/713: chown df/d1f/f9c 2 1
2026-03-09T15:00:57.309 INFO:tasks.workunit.client.1.vm09.stdout:5/752: chown d2/d37/c6f 565 1
2026-03-09T15:00:57.310 INFO:tasks.workunit.client.1.vm09.stdout:0/772: dread - da/dc/d1c/f6d zero size
2026-03-09T15:00:57.315 INFO:tasks.workunit.client.1.vm09.stdout:3/764: fdatasync d3/d3a/d2b/d31/d4a/fa9 0
2026-03-09T15:00:57.315 INFO:tasks.workunit.client.1.vm09.stdout:3/765: write d3/d100/d6a/dd5/f108 [548070,70526] 0
2026-03-09T15:00:57.316 INFO:tasks.workunit.client.1.vm09.stdout:8/737: truncate df/d38/f52 486783 0
2026-03-09T15:00:57.316 INFO:tasks.workunit.client.1.vm09.stdout:9/648: creat d1/d7/d1e/d2b/d2e/d56/d5e/fd7 x:0 0 0
2026-03-09T15:00:57.321 INFO:tasks.workunit.client.1.vm09.stdout:5/753: dwrite d2/d37/d3c/f4e [4194304,4194304] 0
2026-03-09T15:00:57.323 INFO:tasks.workunit.client.1.vm09.stdout:8/738: symlink df/d24/d99/db1/dcc/ld9 0
2026-03-09T15:00:57.329 INFO:tasks.workunit.client.1.vm09.stdout:0/773: link da/d30/f3d da/dc/d22/df0/f104 0
2026-03-09T15:00:57.329 INFO:tasks.workunit.client.1.vm09.stdout:5/754: mknod d2/da9/c10e 0
2026-03-09T15:00:57.330 INFO:tasks.workunit.client.1.vm09.stdout:0/774: dread - da/f101 zero size
2026-03-09T15:00:57.338 INFO:tasks.workunit.client.1.vm09.stdout:8/739: link df/d5b/d65/d1d/f44 df/d24/d99/db6/fda 0
2026-03-09T15:00:57.338 INFO:tasks.workunit.client.1.vm09.stdout:2/714: link df/d20/d2e/l7c df/le3 0
2026-03-09T15:00:57.338 INFO:tasks.workunit.client.1.vm09.stdout:2/715: readlink df/d58/d67/l43 0
2026-03-09T15:00:57.338 INFO:tasks.workunit.client.1.vm09.stdout:0/775: mknod da/dc/d22/df0/c105 0
2026-03-09T15:00:57.339 INFO:tasks.workunit.client.1.vm09.stdout:5/755: mknod d2/d37/d53/d86/d88/dd7/c10f 0
2026-03-09T15:00:57.340 INFO:tasks.workunit.client.1.vm09.stdout:5/756: stat d2/d37/d67/d95/cb7 0
2026-03-09T15:00:57.342 INFO:tasks.workunit.client.1.vm09.stdout:3/766: read d3/d3a/d2b/f72 [1955255,25123] 0
2026-03-09T15:00:57.347 INFO:tasks.workunit.client.1.vm09.stdout:0/776: symlink da/d30/l106 0
2026-03-09T15:00:57.348 INFO:tasks.workunit.client.1.vm09.stdout:0/777: dread - da/dc/d61/fea zero size
2026-03-09T15:00:57.352 INFO:tasks.workunit.client.1.vm09.stdout:5/757: dwrite d2/d37/d3c/d36/d45/fab [0,4194304] 0
2026-03-09T15:00:57.358 INFO:tasks.workunit.client.1.vm09.stdout:5/758: rename c1 to d2/d37/d3c/d36/d45/dfd/c110 0
2026-03-09T15:00:57.363 INFO:tasks.workunit.client.1.vm09.stdout:5/759: dread - d2/d37/d3c/d36/fb3 zero size
2026-03-09T15:00:57.363 INFO:tasks.workunit.client.1.vm09.stdout:5/760: truncate d2/f2e 658742 0
2026-03-09T15:00:57.363 INFO:tasks.workunit.client.1.vm09.stdout:5/761: rename d2/d37/d3c/d36/d4c/l7a to d2/db1/db2/l111 0
2026-03-09T15:00:57.363 INFO:tasks.workunit.client.1.vm09.stdout:5/762: fdatasync d2/d37/d3c/d36/d4c/d51/f62 0
2026-03-09T15:00:57.378 INFO:tasks.workunit.client.1.vm09.stdout:3/767: sync
2026-03-09T15:00:57.380 INFO:tasks.workunit.client.1.vm09.stdout:3/768: chown d3/d3a/d2b/d31/d9e/ff4 3888754 1
2026-03-09T15:00:57.381 INFO:tasks.workunit.client.1.vm09.stdout:3/769: symlink d3/d100/d48/dc5/l109 0
2026-03-09T15:00:57.384 INFO:tasks.workunit.client.1.vm09.stdout:3/770: chown d3/d3a/d2b/d31/d4a/l5d 53266 1
2026-03-09T15:00:57.385 INFO:tasks.workunit.client.1.vm09.stdout:3/771: chown d3/f29 63 1
2026-03-09T15:00:57.391 INFO:tasks.workunit.client.1.vm09.stdout:9/649: dwrite d1/d58/f75 [0,4194304] 0
2026-03-09T15:00:57.396 INFO:tasks.workunit.client.1.vm09.stdout:9/650: link d1/d7/d1e/d2b/d2e/d56/d5e/l76 d1/d58/da8/ld8 0
2026-03-09T15:00:57.399 INFO:tasks.workunit.client.1.vm09.stdout:7/711: dwrite d3/d1d/f37 [0,4194304] 0
2026-03-09T15:00:57.402 INFO:tasks.workunit.client.1.vm09.stdout:4/761: truncate db/d19/d52/d76/f65 1329382 0
2026-03-09T15:00:57.403 INFO:tasks.workunit.client.1.vm09.stdout:4/762: write db/d12/d16/f63 [526029,104223] 0
2026-03-09T15:00:57.404 INFO:tasks.workunit.client.1.vm09.stdout:7/712: rmdir d3/db/d25/d5c/d75/dcb 39
2026-03-09T15:00:57.406 INFO:tasks.workunit.client.1.vm09.stdout:7/713: stat d3/db/c54 0
2026-03-09T15:00:57.407 INFO:tasks.workunit.client.1.vm09.stdout:7/714: read d3/f5 [115922,38944] 0
2026-03-09T15:00:57.410 INFO:tasks.workunit.client.1.vm09.stdout:4/763: rename db/d12/d16/d5b/da5/faf to db/d12/d16/d5b/da5/ff9 0
2026-03-09T15:00:57.411 INFO:tasks.workunit.client.1.vm09.stdout:9/651: dwrite d1/d7/da6/fd1 [0,4194304] 0
2026-03-09T15:00:57.414 INFO:tasks.workunit.client.1.vm09.stdout:0/778: read da/dc/d1c/d3c/d44/f51 [426068,91378] 0
2026-03-09T15:00:57.414 INFO:tasks.workunit.client.1.vm09.stdout:7/715: creat d3/d1d/fd8 x:0 0 0
2026-03-09T15:00:57.415 INFO:tasks.workunit.client.1.vm09.stdout:9/652: chown d1/d4f/d8f 5447 1
2026-03-09T15:00:57.426 INFO:tasks.workunit.client.1.vm09.stdout:9/653: readlink d1/d7/d1e/d2b/d2e/l4c 0
2026-03-09T15:00:57.428 INFO:tasks.workunit.client.1.vm09.stdout:4/764: dwrite db/d19/d23/d44/d7c/d7d/fb9 [0,4194304] 0
2026-03-09T15:00:57.432 INFO:tasks.workunit.client.1.vm09.stdout:9/654: rename d1/f4 to d1/d7/d1e/d2b/d40/fd9 0
2026-03-09T15:00:57.433 INFO:tasks.workunit.client.1.vm09.stdout:4/765: write db/d12/d9e/fd9 [1011520,115360] 0
2026-03-09T15:00:57.435 INFO:tasks.workunit.client.1.vm09.stdout:7/716: dwrite d3/db/f4d [0,4194304] 0
2026-03-09T15:00:57.436 INFO:tasks.workunit.client.1.vm09.stdout:7/717: fsync d3/d1d/f33 0
2026-03-09T15:00:57.441 INFO:tasks.workunit.client.1.vm09.stdout:9/655: mknod d1/d7/da6/cda 0
2026-03-09T15:00:57.444 INFO:tasks.workunit.client.1.vm09.stdout:7/718: unlink d3/db/d25/c74 0
2026-03-09T15:00:57.445 INFO:tasks.workunit.client.1.vm09.stdout:9/656: rename d1/d7/d1e/d2b/d2e/d56/d6d/fb1 to d1/d7/d1e/d2b/d8d/dd5/fdb 0
2026-03-09T15:00:57.445 INFO:tasks.workunit.client.1.vm09.stdout:7/719: chown f1 25 1
2026-03-09T15:00:57.446 INFO:tasks.workunit.client.1.vm09.stdout:4/766: creat db/d19/d23/d44/d7c/d7d/d97/ffa x:0 0 0
2026-03-09T15:00:57.446 INFO:tasks.workunit.client.1.vm09.stdout:4/767: dread - db/d19/d52/fb5 zero size
2026-03-09T15:00:57.449 INFO:tasks.workunit.client.1.vm09.stdout:4/768: rmdir db/d19/d23/d71 39
2026-03-09T15:00:57.450 INFO:tasks.workunit.client.1.vm09.stdout:9/657: mknod d1/d7/d9f/daa/cdc 0
2026-03-09T15:00:57.454 INFO:tasks.workunit.client.1.vm09.stdout:4/769: mkdir db/d19/d23/d71/d53/dcf/dfb 0
2026-03-09T15:00:57.454 INFO:tasks.workunit.client.1.vm09.stdout:4/770: chown db/d19/d23/d71/d53/ce8 240 1
2026-03-09T15:00:57.460 INFO:tasks.workunit.client.1.vm09.stdout:4/771: dwrite db/d19/d23/d44/d7c/d7d/d97/da3/fcb [0,4194304] 0
2026-03-09T15:00:57.462 INFO:tasks.workunit.client.1.vm09.stdout:4/772: mknod db/d19/d52/d76/cfc 0
2026-03-09T15:00:57.469 INFO:tasks.workunit.client.1.vm09.stdout:9/658: sync
2026-03-09T15:00:57.486 INFO:tasks.workunit.client.1.vm09.stdout:4/773: dread - db/d19/d52/d76/d3b/f49 zero size
2026-03-09T15:00:57.486 INFO:tasks.workunit.client.1.vm09.stdout:9/659: unlink d1/d7/d1e/d2b/d40/c84 0
2026-03-09T15:00:57.489 INFO:tasks.workunit.client.1.vm09.stdout:9/660: rmdir d1/d7/d1e/d2b 39
2026-03-09T15:00:57.493 INFO:tasks.workunit.client.1.vm09.stdout:9/661: creat d1/d7/db8/fdd x:0 0 0
2026-03-09T15:00:57.497 INFO:tasks.workunit.client.1.vm09.stdout:9/662: dwrite d1/d58/f80 [0,4194304] 0
2026-03-09T15:00:57.518 INFO:tasks.workunit.client.1.vm09.stdout:6/673: dwrite d6/d20/d44/d45/f4c [0,4194304] 0
2026-03-09T15:00:57.527 INFO:tasks.workunit.client.1.vm09.stdout:9/663: creat d1/d7/d1e/d2b/d8d/dc8/fde x:0 0 0
2026-03-09T15:00:57.530 INFO:tasks.workunit.client.1.vm09.stdout:9/664: truncate d1/d7/d1e/d2b/f32 3225499 0
2026-03-09T15:00:57.530 INFO:tasks.workunit.client.1.vm09.stdout:6/674: dwrite d6/d20/d24/f67 [0,4194304] 0
2026-03-09T15:00:57.531 INFO:tasks.workunit.client.1.vm09.stdout:9/665: creat d1/d58/fdf x:0 0 0
2026-03-09T15:00:57.532 INFO:tasks.workunit.client.1.vm09.stdout:9/666: chown d1/d4f 128 1
2026-03-09T15:00:57.535 INFO:tasks.workunit.client.1.vm09.stdout:6/675: mkdir d6/d20/d2a/dde 0
2026-03-09T15:00:57.544 INFO:tasks.workunit.client.1.vm09.stdout:6/676: creat d6/db/d10/d4f/fdf x:0 0 0
2026-03-09T15:00:57.550 INFO:tasks.workunit.client.1.vm09.stdout:6/677: creat d6/d20/d24/d7e/fe0 x:0 0 0
2026-03-09T15:00:57.554 INFO:tasks.workunit.client.1.vm09.stdout:6/678: dwrite d6/f7f [0,4194304] 0
2026-03-09T15:00:57.555 INFO:tasks.workunit.client.1.vm09.stdout:6/679: chown d6/db/d10/cad 482 1
2026-03-09T15:00:57.559 INFO:tasks.workunit.client.1.vm09.stdout:6/680: creat d6/df/d23/d89/fe1 x:0 0 0
2026-03-09T15:00:57.565 INFO:tasks.workunit.client.1.vm09.stdout:6/681: dwrite d6/db/f66 [0,4194304] 0
2026-03-09T15:00:57.567 INFO:tasks.workunit.client.1.vm09.stdout:6/682: write d6/db/f66 [2151519,79781] 0
2026-03-09T15:00:57.572 INFO:tasks.workunit.client.1.vm09.stdout:6/683: chown d6/d20/d44/d8f/ld4 26987457 1
2026-03-09T15:00:57.583 INFO:tasks.workunit.client.1.vm09.stdout:6/684: getdents d6/d20/d38/d56/d65/d68/d86/dc0 0
2026-03-09T15:00:57.585 INFO:tasks.workunit.client.1.vm09.stdout:6/685: symlink d6/df/le2 0
2026-03-09T15:00:57.588 INFO:tasks.workunit.client.1.vm09.stdout:6/686: stat d6/db/d8b/f73 0
2026-03-09T15:00:57.599 INFO:tasks.workunit.client.1.vm09.stdout:6/687: dread d6/df/f16 [0,4194304] 0
2026-03-09T15:00:57.602 INFO:tasks.workunit.client.1.vm09.stdout:6/688: mkdir d6/df/d23/de3 0
2026-03-09T15:00:57.603 INFO:tasks.workunit.client.1.vm09.stdout:3/772: dread d3/d100/f7d [0,4194304] 0
2026-03-09T15:00:57.603 INFO:tasks.workunit.client.1.vm09.stdout:6/689: rmdir d6/d20/d38/d56/d65 39
2026-03-09T15:00:57.605 INFO:tasks.workunit.client.1.vm09.stdout:3/773: mkdir d3/d100/d48/dc5/d10a 0
2026-03-09T15:00:57.606 INFO:tasks.workunit.client.1.vm09.stdout:6/690: creat d6/fe4 x:0 0 0
2026-03-09T15:00:57.607 INFO:tasks.workunit.client.1.vm09.stdout:3/774: symlink d3/d100/d48/dc5/l10b 0
2026-03-09T15:00:57.609 INFO:tasks.workunit.client.1.vm09.stdout:6/691: symlink d6/d20/d2a/dde/le5 0
2026-03-09T15:00:57.611 INFO:tasks.workunit.client.1.vm09.stdout:3/775: creat d3/d3a/d2b/d31/d9e/f10c x:0 0 0
2026-03-09T15:00:57.614 INFO:tasks.workunit.client.1.vm09.stdout:3/776: dread d3/d9a/fc2 [0,4194304] 0
2026-03-09T15:00:57.614 INFO:tasks.workunit.client.1.vm09.stdout:6/692: mknod d6/d20/d38/d56/d65/d68/d86/dc0/ddb/ce6 0
2026-03-09T15:00:57.617 INFO:tasks.workunit.client.1.vm09.stdout:3/777: symlink d3/d3a/d2b/d31/d9e/l10d 0
2026-03-09T15:00:57.617 INFO:tasks.workunit.client.1.vm09.stdout:6/693: creat d6/d20/d2a/dc4/dba/fe7 x:0 0 0
2026-03-09T15:00:57.617 INFO:tasks.workunit.client.1.vm09.stdout:8/740: truncate df/d5b/d65/d1d/fc8 805105 0
2026-03-09T15:00:57.626 INFO:tasks.workunit.client.1.vm09.stdout:2/716: dwrite df/d20/f6a [0,4194304] 0
2026-03-09T15:00:57.627 INFO:tasks.workunit.client.1.vm09.stdout:1/628: dread d8/d10/f67 [0,4194304] 0
2026-03-09T15:00:57.637 INFO:tasks.workunit.client.1.vm09.stdout:6/694: mkdir d6/db/d8b/de8 0
2026-03-09T15:00:57.640 INFO:tasks.workunit.client.1.vm09.stdout:2/717: dwrite df/d58/d67/f4e [4194304,4194304] 0
2026-03-09T15:00:57.644 INFO:tasks.workunit.client.1.vm09.stdout:2/718: write df/d20/d29/f31 [4024644,86657] 0
2026-03-09T15:00:57.644 INFO:tasks.workunit.client.1.vm09.stdout:1/629: sync
2026-03-09T15:00:57.645 INFO:tasks.workunit.client.1.vm09.stdout:1/630: chown d8/d10/d24/d48/d9b/d78/d8b/ca6 279 1
2026-03-09T15:00:57.649 INFO:tasks.workunit.client.1.vm09.stdout:6/695: read d6/d20/d24/f49 [2860017,55545] 0
2026-03-09T15:00:57.650 INFO:tasks.workunit.client.1.vm09.stdout:2/719: stat df/d1f/d47/d5d/d90/cbe 0
2026-03-09T15:00:57.652 INFO:tasks.workunit.client.1.vm09.stdout:1/631: fdatasync d8/f17 0
2026-03-09T15:00:57.657 INFO:tasks.workunit.client.1.vm09.stdout:6/696: rmdir d6/d20/d38/d4e 39
2026-03-09T15:00:57.659 INFO:tasks.workunit.client.1.vm09.stdout:6/697: dread - d6/d20/d38/d56/d65/d68/d86/dc0/ddb/fa9 zero size
2026-03-09T15:00:57.661 INFO:tasks.workunit.client.1.vm09.stdout:6/698: stat d6/d20/d44/f4a 0
2026-03-09T15:00:57.671 INFO:tasks.workunit.client.1.vm09.stdout:1/632: creat d8/d10/d24/d48/d9b/d68/fc4 x:0 0 0
2026-03-09T15:00:57.672 INFO:tasks.workunit.client.1.vm09.stdout:6/699: dwrite d6/db/fb3 [8388608,4194304] 0
2026-03-09T15:00:57.674 INFO:tasks.workunit.client.1.vm09.stdout:6/700: write d6/db/f42 [8545302,62255] 0
2026-03-09T15:00:57.674 INFO:tasks.workunit.client.1.vm09.stdout:3/778: dread d3/d100/f3c [0,4194304] 0
2026-03-09T15:00:57.689 INFO:tasks.workunit.client.1.vm09.stdout:3/779: fdatasync d3/d3a/d2b/d31/f40 0
2026-03-09T15:00:57.697 INFO:tasks.workunit.client.1.vm09.stdout:2/720: dread df/d1f/d6d/f7f [0,4194304] 0
2026-03-09T15:00:57.697 INFO:tasks.workunit.client.1.vm09.stdout:2/721: stat df/f9b 0
2026-03-09T15:00:57.699 INFO:tasks.workunit.client.1.vm09.stdout:2/722: unlink df/d1f/d6d/dd1/cb1 0
2026-03-09T15:00:57.700 INFO:tasks.workunit.client.1.vm09.stdout:2/723: stat df/d1f/d47/d84/db7/dc3 0
2026-03-09T15:00:57.700 INFO:tasks.workunit.client.1.vm09.stdout:2/724: read - df/da0/fc6 zero size
2026-03-09T15:00:57.700 INFO:tasks.workunit.client.1.vm09.stdout:2/725: chown df/d20/d2e/f48 0 1
2026-03-09T15:00:57.704 INFO:tasks.workunit.client.1.vm09.stdout:2/726: creat df/d1f/d6d/d8f/d5f/fe4 x:0 0 0
2026-03-09T15:00:57.707 INFO:tasks.workunit.client.1.vm09.stdout:2/727: creat df/d1f/fe5 x:0 0 0
2026-03-09T15:00:57.709 INFO:tasks.workunit.client.1.vm09.stdout:2/728: fdatasync df/d1f/d47/d84/fd2 0
2026-03-09T15:00:57.709 INFO:tasks.workunit.client.1.vm09.stdout:5/763: dwrite d2/d37/d3c/d36/d45/f6e [0,4194304] 0
2026-03-09T15:00:57.722 INFO:tasks.workunit.client.1.vm09.stdout:2/729: truncate df/d20/d2e/f54 653898 0 2026-03-09T15:00:57.732 INFO:tasks.workunit.client.1.vm09.stdout:5/764: unlink d2/d37/d3c/d36/d4c/d89/lee 0 2026-03-09T15:00:57.732 INFO:tasks.workunit.client.1.vm09.stdout:2/730: rename df/d1f/d47/d5d/d90/faa to df/d58/d67/fe6 0 2026-03-09T15:00:57.739 INFO:tasks.workunit.client.1.vm09.stdout:2/731: dread df/f23 [0,4194304] 0 2026-03-09T15:00:57.776 INFO:tasks.workunit.client.1.vm09.stdout:2/732: dread fb [0,4194304] 0 2026-03-09T15:00:57.776 INFO:tasks.workunit.client.1.vm09.stdout:2/733: rename df/d1f/d47/d84/fd2 to df/d1f/d47/d84/db7/dc3/dd9/fe7 0 2026-03-09T15:00:57.776 INFO:tasks.workunit.client.1.vm09.stdout:2/734: chown df/d1f/d47/c68 9 1 2026-03-09T15:00:57.776 INFO:tasks.workunit.client.1.vm09.stdout:2/735: write df/d1f/d47/d84/db7/dc3/fe0 [851064,45109] 0 2026-03-09T15:00:57.776 INFO:tasks.workunit.client.1.vm09.stdout:2/736: stat df/d93/da3/cd6 0 2026-03-09T15:00:57.776 INFO:tasks.workunit.client.1.vm09.stdout:2/737: write df/d20/d2e/fb5 [2076189,26408] 0 2026-03-09T15:00:57.776 INFO:tasks.workunit.client.1.vm09.stdout:2/738: dread - df/d1f/fe5 zero size 2026-03-09T15:00:57.776 INFO:tasks.workunit.client.1.vm09.stdout:2/739: fsync df/d20/d29/f31 0 2026-03-09T15:00:57.776 INFO:tasks.workunit.client.1.vm09.stdout:2/740: rename df/d1f/d47/f89 to df/d1f/d47/d84/db7/dc3/dc4/fe8 0 2026-03-09T15:00:57.777 INFO:tasks.workunit.client.1.vm09.stdout:2/741: write df/d1f/d47/d84/fd7 [951861,10155] 0 2026-03-09T15:00:57.777 INFO:tasks.workunit.client.1.vm09.stdout:2/742: rename df/f23 to df/d1f/d47/d84/fe9 0 2026-03-09T15:00:57.777 INFO:tasks.workunit.client.1.vm09.stdout:2/743: rmdir df/d93 39 2026-03-09T15:00:57.777 INFO:tasks.workunit.client.1.vm09.stdout:2/744: truncate df/d1f/d47/d5d/f6c 3283828 0 2026-03-09T15:00:57.777 INFO:tasks.workunit.client.1.vm09.stdout:2/745: write df/d1f/d47/d84/db7/dc3/fcc [9310731,74567] 0 2026-03-09T15:00:57.777 
INFO:tasks.workunit.client.1.vm09.stdout:2/746: creat df/d20/d2e/fea x:0 0 0 2026-03-09T15:00:57.920 INFO:tasks.workunit.client.1.vm09.stdout:5/765: sync 2026-03-09T15:00:57.951 INFO:tasks.workunit.client.1.vm09.stdout:5/766: dread d2/d37/d3c/d36/d4c/d51/d96/f16 [0,4194304] 0 2026-03-09T15:00:57.952 INFO:tasks.workunit.client.1.vm09.stdout:5/767: creat d2/d37/d53/d86/dad/f112 x:0 0 0 2026-03-09T15:00:57.954 INFO:tasks.workunit.client.1.vm09.stdout:5/768: write d2/d37/d3c/d36/d45/dae/dd3/fdf [255483,28560] 0 2026-03-09T15:00:57.957 INFO:tasks.workunit.client.1.vm09.stdout:5/769: link d2/d37/d53/d86/d88/dc9/f10c d2/d37/d53/d86/d88/dc9/f113 0 2026-03-09T15:00:57.981 INFO:tasks.workunit.client.1.vm09.stdout:0/779: truncate da/d80/f98 3964567 0 2026-03-09T15:00:57.984 INFO:tasks.workunit.client.1.vm09.stdout:7/720: truncate d3/db/f4d 1632789 0 2026-03-09T15:00:57.986 INFO:tasks.workunit.client.1.vm09.stdout:7/721: read d3/db/d15/f68 [2595526,78602] 0 2026-03-09T15:00:57.993 INFO:tasks.workunit.client.1.vm09.stdout:7/722: sync 2026-03-09T15:00:57.994 INFO:tasks.workunit.client.1.vm09.stdout:7/723: write d3/db/d46/f5b [2558733,12275] 0 2026-03-09T15:00:57.994 INFO:tasks.workunit.client.1.vm09.stdout:7/724: fsync d3/d1d/fb9 0 2026-03-09T15:00:57.995 INFO:tasks.workunit.client.1.vm09.stdout:7/725: chown d3/db/d15/d5f/d6e/d83/c8e 19930 1 2026-03-09T15:00:57.999 INFO:tasks.workunit.client.1.vm09.stdout:4/774: dwrite db/d19/d23/d71/f6c [0,4194304] 0 2026-03-09T15:00:58.000 INFO:tasks.workunit.client.1.vm09.stdout:7/726: unlink d3/db/d25/l70 0 2026-03-09T15:00:58.020 INFO:tasks.workunit.client.1.vm09.stdout:7/727: symlink d3/db/d25/db7/ld9 0 2026-03-09T15:00:58.028 INFO:tasks.workunit.client.1.vm09.stdout:9/667: rmdir d1/d7/d1e/d2b/d8d/dc8 39 2026-03-09T15:00:58.028 INFO:tasks.workunit.client.1.vm09.stdout:7/728: sync 2026-03-09T15:00:58.031 INFO:tasks.workunit.client.1.vm09.stdout:7/729: mknod d3/db/d46/dc9/cda 0 2026-03-09T15:00:58.032 
INFO:tasks.workunit.client.1.vm09.stdout:9/668: creat d1/d4f/fe0 x:0 0 0 2026-03-09T15:00:58.035 INFO:tasks.workunit.client.1.vm09.stdout:7/730: dwrite d3/db/d25/d7d/fc4 [0,4194304] 0 2026-03-09T15:00:58.044 INFO:tasks.workunit.client.1.vm09.stdout:9/669: mknod d1/d7/db8/ce1 0 2026-03-09T15:00:58.058 INFO:tasks.workunit.client.1.vm09.stdout:9/670: creat d1/d7/d1e/d2b/d40/fe2 x:0 0 0 2026-03-09T15:00:58.058 INFO:tasks.workunit.client.1.vm09.stdout:7/731: creat d3/db/d15/d5f/d6e/fdb x:0 0 0 2026-03-09T15:00:58.058 INFO:tasks.workunit.client.1.vm09.stdout:7/732: mknod d3/d1d/d65/cdc 0 2026-03-09T15:00:58.058 INFO:tasks.workunit.client.1.vm09.stdout:7/733: rename d3/db/d46/dc9/fbc to d3/db/d25/db7/fdd 0 2026-03-09T15:00:58.058 INFO:tasks.workunit.client.1.vm09.stdout:7/734: write d3/db/d15/d5f/d6e/fdb [479291,80189] 0 2026-03-09T15:00:58.058 INFO:tasks.workunit.client.1.vm09.stdout:7/735: write d3/db/d15/f68 [613723,34917] 0 2026-03-09T15:00:58.058 INFO:tasks.workunit.client.1.vm09.stdout:7/736: readlink d3/d3d/l64 0 2026-03-09T15:00:58.058 INFO:tasks.workunit.client.1.vm09.stdout:7/737: dread - d3/d1d/fb9 zero size 2026-03-09T15:00:58.058 INFO:tasks.workunit.client.1.vm09.stdout:7/738: stat d3/db/d25/d5c/d75/l9c 0 2026-03-09T15:00:58.058 INFO:tasks.workunit.client.1.vm09.stdout:9/671: dread d1/d7/d1e/f46 [0,4194304] 0 2026-03-09T15:00:58.059 INFO:tasks.workunit.client.1.vm09.stdout:7/739: rename d3/db/d46/dc9/l71 to d3/d1d/d65/da3/lde 0 2026-03-09T15:00:58.060 INFO:tasks.workunit.client.1.vm09.stdout:7/740: chown d3/db/d25/d7d/f8c 532453 1 2026-03-09T15:00:58.083 INFO:tasks.workunit.client.1.vm09.stdout:6/701: getdents d6/d20/d2a/dde 0 2026-03-09T15:00:58.088 INFO:tasks.workunit.client.1.vm09.stdout:8/741: write df/d24/d99/db6/f63 [4679057,72867] 0 2026-03-09T15:00:58.094 INFO:tasks.workunit.client.1.vm09.stdout:8/742: chown df/d5b/d65/d1d/f41 12095286 1 2026-03-09T15:00:58.094 INFO:tasks.workunit.client.1.vm09.stdout:8/743: chown df/d5b/d65/d1d/fcd 26 1 
2026-03-09T15:00:58.094 INFO:tasks.workunit.client.1.vm09.stdout:1/633: write d8/d50/d5b/f6f [896790,63111] 0 2026-03-09T15:00:58.095 INFO:tasks.workunit.client.1.vm09.stdout:8/744: stat df/d2d/d4f/fa1 0 2026-03-09T15:00:58.095 INFO:tasks.workunit.client.1.vm09.stdout:8/745: symlink df/d24/d95/ldb 0 2026-03-09T15:00:58.095 INFO:tasks.workunit.client.1.vm09.stdout:1/634: getdents d8/d10/d24/d45/d5f 0 2026-03-09T15:00:58.096 INFO:tasks.workunit.client.1.vm09.stdout:1/635: dread - d8/d50/d39/f65 zero size 2026-03-09T15:00:58.096 INFO:tasks.workunit.client.1.vm09.stdout:1/636: chown d8/d10/d24/d45/f92 118471 1 2026-03-09T15:00:58.097 INFO:tasks.workunit.client.1.vm09.stdout:8/746: dwrite df/f89 [0,4194304] 0 2026-03-09T15:00:58.100 INFO:tasks.workunit.client.1.vm09.stdout:8/747: rename df/d24/d56 to df/d2d/d46/d33/ddc 0 2026-03-09T15:00:58.100 INFO:tasks.workunit.client.1.vm09.stdout:1/637: getdents d8/d50/d39/d95/d72/d64 0 2026-03-09T15:00:58.111 INFO:tasks.workunit.client.1.vm09.stdout:8/748: mkdir df/d24/d99/db6/ddd 0 2026-03-09T15:00:58.111 INFO:tasks.workunit.client.1.vm09.stdout:1/638: creat d8/d10/d24/d45/d5f/fc5 x:0 0 0 2026-03-09T15:00:58.119 INFO:tasks.workunit.client.1.vm09.stdout:8/749: dread - df/d2d/d4f/fa1 zero size 2026-03-09T15:00:58.119 INFO:tasks.workunit.client.1.vm09.stdout:8/750: chown df/d2d/d90 639074 1 2026-03-09T15:00:58.120 INFO:tasks.workunit.client.1.vm09.stdout:1/639: dwrite d8/d10/d24/d48/d9b/d78/fb8 [0,4194304] 0 2026-03-09T15:00:58.122 INFO:tasks.workunit.client.1.vm09.stdout:1/640: chown d8/d50/d5b/l98 233888319 1 2026-03-09T15:00:58.125 INFO:tasks.workunit.client.1.vm09.stdout:1/641: rename d8/d10/d73/fa1 to d8/d10/d24/fc6 0 2026-03-09T15:00:58.127 INFO:tasks.workunit.client.1.vm09.stdout:8/751: rmdir df/d5c 39 2026-03-09T15:00:58.133 INFO:tasks.workunit.client.1.vm09.stdout:1/642: mkdir d8/d50/d39/d95/d56/dc7 0 2026-03-09T15:00:58.134 INFO:tasks.workunit.client.1.vm09.stdout:8/752: mknod df/d2d/d42/cde 0 2026-03-09T15:00:58.135 
INFO:tasks.workunit.client.1.vm09.stdout:8/753: dread - df/d2d/d46/d33/ddc/f9f zero size 2026-03-09T15:00:58.135 INFO:tasks.workunit.client.1.vm09.stdout:1/643: readlink d8/d10/d24/d45/d5f/l9c 0 2026-03-09T15:00:58.136 INFO:tasks.workunit.client.1.vm09.stdout:8/754: write df/d2d/d46/d33/ddc/fc3 [1686152,80350] 0 2026-03-09T15:00:58.136 INFO:tasks.workunit.client.1.vm09.stdout:1/644: chown d8/d10/d24/d45/l49 7394 1 2026-03-09T15:00:58.138 INFO:tasks.workunit.client.1.vm09.stdout:8/755: mkdir df/d5b/ddf 0 2026-03-09T15:00:58.140 INFO:tasks.workunit.client.1.vm09.stdout:8/756: readlink df/d24/d99/db1/lac 0 2026-03-09T15:00:58.140 INFO:tasks.workunit.client.1.vm09.stdout:1/645: dwrite d8/d50/d39/d95/d56/f9f [0,4194304] 0 2026-03-09T15:00:58.147 INFO:tasks.workunit.client.1.vm09.stdout:1/646: write d8/d10/f5c [430601,13938] 0 2026-03-09T15:00:58.154 INFO:tasks.workunit.client.1.vm09.stdout:8/757: dwrite df/d2d/f2f [0,4194304] 0 2026-03-09T15:00:58.163 INFO:tasks.workunit.client.1.vm09.stdout:8/758: creat df/d5b/d65/fe0 x:0 0 0 2026-03-09T15:00:58.164 INFO:tasks.workunit.client.1.vm09.stdout:8/759: mkdir df/d24/d95/de1 0 2026-03-09T15:00:58.177 INFO:tasks.workunit.client.1.vm09.stdout:1/647: sync 2026-03-09T15:00:58.178 INFO:tasks.workunit.client.1.vm09.stdout:1/648: chown d8/l9 8765670 1 2026-03-09T15:00:58.235 INFO:tasks.workunit.client.1.vm09.stdout:3/780: write d3/d3a/d2b/f65 [1105847,7470] 0 2026-03-09T15:00:58.239 INFO:tasks.workunit.client.1.vm09.stdout:3/781: dwrite d3/d3a/d2b/d31/d4a/fd2 [0,4194304] 0 2026-03-09T15:00:58.242 INFO:tasks.workunit.client.1.vm09.stdout:3/782: creat d3/d3a/d2b/d31/d4a/d62/f10e x:0 0 0 2026-03-09T15:00:58.253 INFO:tasks.workunit.client.1.vm09.stdout:3/783: mknod d3/d100/d48/c10f 0 2026-03-09T15:00:58.258 INFO:tasks.workunit.client.1.vm09.stdout:3/784: rename d3/d3a/d54/f58 to d3/d74/f110 0 2026-03-09T15:00:58.262 INFO:tasks.workunit.client.1.vm09.stdout:3/785: creat d3/d9a/de3/dc4/f111 x:0 0 0 2026-03-09T15:00:58.349 
INFO:tasks.workunit.client.1.vm09.stdout:2/747: rmdir df 39 2026-03-09T15:00:58.352 INFO:tasks.workunit.client.1.vm09.stdout:5/770: dwrite d2/d37/d3c/d36/f98 [0,4194304] 0 2026-03-09T15:00:58.357 INFO:tasks.workunit.client.1.vm09.stdout:0/780: write da/dc/d22/f53 [2085900,105527] 0 2026-03-09T15:00:58.358 INFO:tasks.workunit.client.1.vm09.stdout:5/771: symlink d2/d37/d3c/d36/d4c/d51/l114 0 2026-03-09T15:00:58.362 INFO:tasks.workunit.client.1.vm09.stdout:2/748: getdents df 0 2026-03-09T15:00:58.362 INFO:tasks.workunit.client.1.vm09.stdout:0/781: sync 2026-03-09T15:00:58.363 INFO:tasks.workunit.client.1.vm09.stdout:2/749: creat df/d1f/d6d/d8f/d5f/feb x:0 0 0 2026-03-09T15:00:58.364 INFO:tasks.workunit.client.1.vm09.stdout:2/750: read fb [2564352,13088] 0 2026-03-09T15:00:58.364 INFO:tasks.workunit.client.1.vm09.stdout:0/782: creat da/dc/dcb/f107 x:0 0 0 2026-03-09T15:00:58.365 INFO:tasks.workunit.client.1.vm09.stdout:2/751: mknod df/d93/da3/dcf/cec 0 2026-03-09T15:00:58.365 INFO:tasks.workunit.client.1.vm09.stdout:0/783: read da/f4c [390493,106890] 0 2026-03-09T15:00:58.367 INFO:tasks.workunit.client.1.vm09.stdout:0/784: mkdir da/dc/d1c/d46/d63/d86/d108 0 2026-03-09T15:00:58.368 INFO:tasks.workunit.client.1.vm09.stdout:0/785: truncate da/fdb 327410 0 2026-03-09T15:00:58.377 INFO:tasks.workunit.client.1.vm09.stdout:0/786: dread da/dc/d1c/d3c/f4f [0,4194304] 0 2026-03-09T15:00:58.389 INFO:tasks.workunit.client.1.vm09.stdout:5/772: dread d2/d37/d3c/f4b [0,4194304] 0 2026-03-09T15:00:58.390 INFO:tasks.workunit.client.1.vm09.stdout:0/787: dread da/dc/d1c/d46/d63/f7f [0,4194304] 0 2026-03-09T15:00:58.391 INFO:tasks.workunit.client.1.vm09.stdout:0/788: creat da/dc/dcb/dd4/f109 x:0 0 0 2026-03-09T15:00:58.393 INFO:tasks.workunit.client.1.vm09.stdout:5/773: dread d2/d37/d3c/d36/d45/dfd/f100 [0,4194304] 0 2026-03-09T15:00:58.393 INFO:tasks.workunit.client.1.vm09.stdout:5/774: write d2/d37/d53/f81 [674984,3003] 0 2026-03-09T15:00:58.394 
INFO:tasks.workunit.client.1.vm09.stdout:5/775: chown d2/d37/d3c/d36/d45/dae/dd3/fe4 10 1 2026-03-09T15:00:58.394 INFO:tasks.workunit.client.1.vm09.stdout:0/789: fdatasync da/dc/d1c/d3c/d78/d7a/dbb/ffc 0 2026-03-09T15:00:58.397 INFO:tasks.workunit.client.1.vm09.stdout:5/776: unlink d2/d37/d3c/d36/fb3 0 2026-03-09T15:00:58.398 INFO:tasks.workunit.client.1.vm09.stdout:5/777: dread d2/d37/d3c/d36/d4c/ff4 [0,4194304] 0 2026-03-09T15:00:58.399 INFO:tasks.workunit.client.1.vm09.stdout:5/778: chown d2/d37/d3c/d36/d4c/d51 73 1 2026-03-09T15:00:58.399 INFO:tasks.workunit.client.1.vm09.stdout:4/775: write db/d19/d23/d71/f4e [1283290,18129] 0 2026-03-09T15:00:58.400 INFO:tasks.workunit.client.1.vm09.stdout:5/779: chown d2/d37/d53/d86/d88/dd7 24 1 2026-03-09T15:00:58.401 INFO:tasks.workunit.client.1.vm09.stdout:4/776: chown db/d19/d81/la7 30 1 2026-03-09T15:00:58.404 INFO:tasks.workunit.client.1.vm09.stdout:5/780: read d2/d37/d3c/d36/d45/dae/dd3/fe4 [1689789,5704] 0 2026-03-09T15:00:58.404 INFO:tasks.workunit.client.1.vm09.stdout:0/790: dwrite da/dc/d22/f7c [0,4194304] 0 2026-03-09T15:00:58.417 INFO:tasks.workunit.client.1.vm09.stdout:0/791: getdents da/d30/d36 0 2026-03-09T15:00:58.419 INFO:tasks.workunit.client.1.vm09.stdout:4/777: dread db/d19/d23/d44/d7c/d7d/d97/da3/fad [0,4194304] 0 2026-03-09T15:00:58.428 INFO:tasks.workunit.client.1.vm09.stdout:0/792: rmdir da/dc/d1c/d3c/d78/d7a/d9c/dfd 0 2026-03-09T15:00:58.429 INFO:tasks.workunit.client.1.vm09.stdout:5/781: dread d2/d37/d3c/d36/d4c/f82 [0,4194304] 0 2026-03-09T15:00:58.429 INFO:tasks.workunit.client.1.vm09.stdout:4/778: creat db/d19/d23/d71/d53/dcf/dfb/ffd x:0 0 0 2026-03-09T15:00:58.429 INFO:tasks.workunit.client.1.vm09.stdout:0/793: symlink da/dc/d10/de5/l10a 0 2026-03-09T15:00:58.433 INFO:tasks.workunit.client.1.vm09.stdout:5/782: mkdir d2/d37/d3c/d36/d45/dae/dc3/d115 0 2026-03-09T15:00:58.437 INFO:tasks.workunit.client.1.vm09.stdout:4/779: unlink db/d12/d16/d5b/da5/ff9 0 2026-03-09T15:00:58.438 
INFO:tasks.workunit.client.1.vm09.stdout:0/794: symlink da/dc/d1c/d3c/d78/l10b 0 2026-03-09T15:00:58.442 INFO:tasks.workunit.client.1.vm09.stdout:4/780: dread db/d19/dcd/fde [0,4194304] 0 2026-03-09T15:00:58.444 INFO:tasks.workunit.client.1.vm09.stdout:5/783: rename d2/d37/d67/d95/db8/df5/le8 to d2/d37/d53/d86/d88/dd7/l116 0 2026-03-09T15:00:58.449 INFO:tasks.workunit.client.1.vm09.stdout:0/795: fsync da/dc/d1c/d46/d63/f7f 0 2026-03-09T15:00:58.452 INFO:tasks.workunit.client.1.vm09.stdout:7/741: truncate d3/d61/f90 33826 0 2026-03-09T15:00:58.453 INFO:tasks.workunit.client.1.vm09.stdout:9/672: dwrite d1/d7/f13 [0,4194304] 0 2026-03-09T15:00:58.456 INFO:tasks.workunit.client.1.vm09.stdout:9/673: read d1/d7/d1e/f9e [2324703,57381] 0 2026-03-09T15:00:58.457 INFO:tasks.workunit.client.1.vm09.stdout:5/784: mkdir d2/d37/d53/d86/d88/d117 0 2026-03-09T15:00:58.460 INFO:tasks.workunit.client.1.vm09.stdout:9/674: dwrite d1/d7/d1e/d2b/d40/f4d [0,4194304] 0 2026-03-09T15:00:58.465 INFO:tasks.workunit.client.1.vm09.stdout:0/796: unlink da/d30/d36/ff1 0 2026-03-09T15:00:58.467 INFO:tasks.workunit.client.1.vm09.stdout:9/675: dwrite d1/d7/d1e/d2b/d40/f4d [0,4194304] 0 2026-03-09T15:00:58.477 INFO:tasks.workunit.client.1.vm09.stdout:7/742: unlink d3/d3d/l64 0 2026-03-09T15:00:58.484 INFO:tasks.workunit.client.1.vm09.stdout:5/785: unlink d2/d37/d3c/d36/d4c/d51/fce 0 2026-03-09T15:00:58.489 INFO:tasks.workunit.client.1.vm09.stdout:0/797: truncate da/dc/d22/f47 3594540 0 2026-03-09T15:00:58.490 INFO:tasks.workunit.client.1.vm09.stdout:0/798: write da/dc/dcb/f107 [898701,80238] 0 2026-03-09T15:00:58.491 INFO:tasks.workunit.client.1.vm09.stdout:0/799: write da/dc/d1c/d3c/d78/d7a/dbb/ffc [377336,112803] 0 2026-03-09T15:00:58.492 INFO:tasks.workunit.client.1.vm09.stdout:0/800: truncate da/d57/f60 5069259 0 2026-03-09T15:00:58.493 INFO:tasks.workunit.client.1.vm09.stdout:7/743: readlink d3/db/d25/l56 0 2026-03-09T15:00:58.497 INFO:tasks.workunit.client.1.vm09.stdout:9/676: fsync 
d1/d7/d1e/d2b/d40/fd9 0 2026-03-09T15:00:58.497 INFO:tasks.workunit.client.1.vm09.stdout:0/801: symlink da/dc/d10/de5/l10c 0 2026-03-09T15:00:58.498 INFO:tasks.workunit.client.1.vm09.stdout:9/677: dread - d1/d7/d1e/d2b/d2e/d56/d6d/fbd zero size 2026-03-09T15:00:58.500 INFO:tasks.workunit.client.1.vm09.stdout:7/744: dwrite d3/db/d25/d7d/fc4 [0,4194304] 0 2026-03-09T15:00:58.509 INFO:tasks.workunit.client.1.vm09.stdout:7/745: chown d3/d1d/d65/c99 3 1 2026-03-09T15:00:58.510 INFO:tasks.workunit.client.1.vm09.stdout:0/802: dread da/dc/f28 [0,4194304] 0 2026-03-09T15:00:58.510 INFO:tasks.workunit.client.1.vm09.stdout:7/746: write d3/db/d25/d5c/f5e [7360125,101430] 0 2026-03-09T15:00:58.511 INFO:tasks.workunit.client.1.vm09.stdout:7/747: stat d3/db/d15/f68 0 2026-03-09T15:00:58.512 INFO:tasks.workunit.client.1.vm09.stdout:0/803: mknod da/dc/d1c/d46/d63/d86/c10d 0 2026-03-09T15:00:58.515 INFO:tasks.workunit.client.1.vm09.stdout:7/748: chown d3/d61/f90 27 1 2026-03-09T15:00:58.515 INFO:tasks.workunit.client.1.vm09.stdout:7/749: chown d3/db/d15/d5f/f36 0 1 2026-03-09T15:00:58.516 INFO:tasks.workunit.client.1.vm09.stdout:7/750: fsync d3/db/d15/d5f/d6e/f7b 0 2026-03-09T15:00:58.517 INFO:tasks.workunit.client.1.vm09.stdout:7/751: write d3/db/fc2 [632686,84403] 0 2026-03-09T15:00:58.522 INFO:tasks.workunit.client.1.vm09.stdout:7/752: rename d3/d1d/d65/d9a to d3/d61/ddf 0 2026-03-09T15:00:58.525 INFO:tasks.workunit.client.1.vm09.stdout:7/753: creat d3/db/d25/db7/dd4/fe0 x:0 0 0 2026-03-09T15:00:58.525 INFO:tasks.workunit.client.1.vm09.stdout:7/754: stat d3/d1d/d65/fb0 0 2026-03-09T15:00:58.543 INFO:tasks.workunit.client.1.vm09.stdout:6/702: write d6/d20/d38/d4e/f87 [379187,38075] 0 2026-03-09T15:00:58.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:58 vm05.local ceph-mon[50611]: pgmap v155: 65 pgs: 65 active+clean; 1.5 GiB data, 5.1 GiB used, 115 GiB / 120 GiB avail; 55 MiB/s rd, 139 MiB/s wr, 349 op/s 2026-03-09T15:00:58.563 
INFO:tasks.workunit.client.1.vm09.stdout:6/703: dread d6/d20/d24/da5/fc8 [4194304,4194304] 0 2026-03-09T15:00:58.565 INFO:tasks.workunit.client.1.vm09.stdout:6/704: rmdir d6/d20/d38/d56/d65/d68/d86/dc0/ddb 39 2026-03-09T15:00:58.569 INFO:tasks.workunit.client.1.vm09.stdout:6/705: unlink d6/d20/d2a/d3b/l97 0 2026-03-09T15:00:58.587 INFO:tasks.workunit.client.1.vm09.stdout:8/760: write df/f23 [780747,21139] 0 2026-03-09T15:00:58.587 INFO:tasks.workunit.client.1.vm09.stdout:1/649: write d8/f7e [192674,57296] 0 2026-03-09T15:00:58.587 INFO:tasks.workunit.client.1.vm09.stdout:3/786: truncate d3/d5b/d79/f89 598640 0 2026-03-09T15:00:58.588 INFO:tasks.workunit.client.1.vm09.stdout:8/761: dread - df/d2d/d42/f96 zero size 2026-03-09T15:00:58.589 INFO:tasks.workunit.client.1.vm09.stdout:8/762: chown df/d38/d64/fa7 192 1 2026-03-09T15:00:58.590 INFO:tasks.workunit.client.1.vm09.stdout:8/763: truncate df/d2d/d42/fbd 851631 0 2026-03-09T15:00:58.593 INFO:tasks.workunit.client.1.vm09.stdout:1/650: chown d8/d50/d39/la9 1798 1 2026-03-09T15:00:58.596 INFO:tasks.workunit.client.1.vm09.stdout:8/764: getdents df/d2d/d42/d70/dc0/dc9 0 2026-03-09T15:00:58.599 INFO:tasks.workunit.client.1.vm09.stdout:3/787: link d3/d5b/le7 d3/d3a/d2b/d7b/l112 0 2026-03-09T15:00:58.600 INFO:tasks.workunit.client.1.vm09.stdout:3/788: write d3/d3a/d2b/d7b/dd3/ffe [98310,93434] 0 2026-03-09T15:00:58.601 INFO:tasks.workunit.client.1.vm09.stdout:8/765: chown df/d38/f52 33 1 2026-03-09T15:00:58.606 INFO:tasks.workunit.client.1.vm09.stdout:2/752: fsync df/d58/d74/f88 0 2026-03-09T15:00:58.607 INFO:tasks.workunit.client.1.vm09.stdout:2/753: chown df/d20/d29/f31 82867 1 2026-03-09T15:00:58.612 INFO:tasks.workunit.client.1.vm09.stdout:8/766: creat df/d38/d64/fe2 x:0 0 0 2026-03-09T15:00:58.613 INFO:tasks.workunit.client.1.vm09.stdout:1/651: link d8/d90/f99 d8/d50/d39/d95/d56/db0/fc8 0 2026-03-09T15:00:58.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:58 vm09.local ceph-mon[59673]: pgmap v155: 65 pgs: 
65 active+clean; 1.5 GiB data, 5.1 GiB used, 115 GiB / 120 GiB avail; 55 MiB/s rd, 139 MiB/s wr, 349 op/s 2026-03-09T15:00:58.617 INFO:tasks.workunit.client.1.vm09.stdout:8/767: fsync df/d2d/d4f/fa1 0 2026-03-09T15:00:58.627 INFO:tasks.workunit.client.1.vm09.stdout:2/754: symlink df/d1f/d47/d5d/dbc/led 0 2026-03-09T15:00:58.628 INFO:tasks.workunit.client.1.vm09.stdout:4/781: dwrite db/d19/d23/d71/d5f/f87 [0,4194304] 0 2026-03-09T15:00:58.628 INFO:tasks.workunit.client.1.vm09.stdout:2/755: write df/d1f/d47/d84/fd7 [1246434,5489] 0 2026-03-09T15:00:58.632 INFO:tasks.workunit.client.1.vm09.stdout:2/756: write df/d58/d67/f4e [5428281,34237] 0 2026-03-09T15:00:58.641 INFO:tasks.workunit.client.1.vm09.stdout:9/678: dread d1/d7/d1e/d2b/f32 [0,4194304] 0 2026-03-09T15:00:58.643 INFO:tasks.workunit.client.1.vm09.stdout:8/768: rmdir df/d24/d99/db6/d60/db7 39 2026-03-09T15:00:58.643 INFO:tasks.workunit.client.1.vm09.stdout:8/769: stat df/l18 0 2026-03-09T15:00:58.644 INFO:tasks.workunit.client.1.vm09.stdout:8/770: fdatasync df/d5b/f85 0 2026-03-09T15:00:58.647 INFO:tasks.workunit.client.1.vm09.stdout:4/782: symlink db/d12/d16/d5b/da5/lfe 0 2026-03-09T15:00:58.652 INFO:tasks.workunit.client.1.vm09.stdout:4/783: stat db/d19/d23/d44/d7c/d7d/d97/da3/fcb 0 2026-03-09T15:00:58.652 INFO:tasks.workunit.client.1.vm09.stdout:4/784: chown db/d12/c86 5085 1 2026-03-09T15:00:58.652 INFO:tasks.workunit.client.1.vm09.stdout:2/757: mknod df/d1f/d6d/d8f/d5f/cee 0 2026-03-09T15:00:58.652 INFO:tasks.workunit.client.1.vm09.stdout:9/679: write d1/d58/f99 [1317320,59878] 0 2026-03-09T15:00:58.656 INFO:tasks.workunit.client.1.vm09.stdout:9/680: dread d1/d7/f13 [0,4194304] 0 2026-03-09T15:00:58.659 INFO:tasks.workunit.client.1.vm09.stdout:8/771: fsync f8 0 2026-03-09T15:00:58.677 INFO:tasks.workunit.client.1.vm09.stdout:4/785: symlink db/d19/d52/d76/d3b/dd3/lff 0 2026-03-09T15:00:58.677 INFO:tasks.workunit.client.1.vm09.stdout:9/681: write d1/d4f/fa3 [4960128,19161] 0 2026-03-09T15:00:58.677 
INFO:tasks.workunit.client.1.vm09.stdout:4/786: chown db/d19/d52/d76/d3b 14873055 1 2026-03-09T15:00:58.677 INFO:tasks.workunit.client.1.vm09.stdout:5/786: dwrite d2/d37/d3c/d36/d4c/d89/fc5 [0,4194304] 0 2026-03-09T15:00:58.677 INFO:tasks.workunit.client.1.vm09.stdout:1/652: getdents d8/d10/d24/d48/d9b/d68 0 2026-03-09T15:00:58.677 INFO:tasks.workunit.client.1.vm09.stdout:5/787: write d2/f4f [1390653,20693] 0 2026-03-09T15:00:58.677 INFO:tasks.workunit.client.1.vm09.stdout:5/788: dwrite d2/d37/d3c/d36/fbd [0,4194304] 0 2026-03-09T15:00:58.677 INFO:tasks.workunit.client.1.vm09.stdout:8/772: dwrite df/d5b/d65/d1d/fcd [0,4194304] 0 2026-03-09T15:00:58.678 INFO:tasks.workunit.client.1.vm09.stdout:8/773: creat df/d2d/d42/fe3 x:0 0 0 2026-03-09T15:00:58.680 INFO:tasks.workunit.client.1.vm09.stdout:5/789: creat d2/d37/d3c/d36/d45/dae/dc3/d115/f118 x:0 0 0 2026-03-09T15:00:58.696 INFO:tasks.workunit.client.1.vm09.stdout:5/790: dread d2/d37/d3c/d36/d45/d5c/f90 [0,4194304] 0 2026-03-09T15:00:58.698 INFO:tasks.workunit.client.1.vm09.stdout:8/774: link df/d5b/d65/d1d/la8 df/d5c/le4 0 2026-03-09T15:00:58.698 INFO:tasks.workunit.client.1.vm09.stdout:5/791: dread d2/d37/d3c/d36/d45/dfd/f100 [0,4194304] 0 2026-03-09T15:00:58.701 INFO:tasks.workunit.client.1.vm09.stdout:8/775: symlink df/d24/d95/le5 0 2026-03-09T15:00:58.701 INFO:tasks.workunit.client.1.vm09.stdout:5/792: fdatasync d2/d37/d53/f81 0 2026-03-09T15:00:58.711 INFO:tasks.workunit.client.1.vm09.stdout:8/776: dread df/d2d/d46/fa9 [0,4194304] 0 2026-03-09T15:00:58.716 INFO:tasks.workunit.client.1.vm09.stdout:8/777: creat df/d2d/d42/d70/dc0/dc9/fe6 x:0 0 0 2026-03-09T15:00:58.723 INFO:tasks.workunit.client.1.vm09.stdout:9/682: dread d1/d7/faf [0,4194304] 0 2026-03-09T15:00:58.726 INFO:tasks.workunit.client.1.vm09.stdout:9/683: dread - d1/d4f/d8f/d91/fa5 zero size 2026-03-09T15:00:58.729 INFO:tasks.workunit.client.1.vm09.stdout:9/684: dwrite d1/d4f/d8f/fc5 [0,4194304] 0 2026-03-09T15:00:58.746 
INFO:tasks.workunit.client.1.vm09.stdout:9/685: dwrite d1/d7/d1e/d2b/d2e/f95 [0,4194304] 0 2026-03-09T15:00:58.758 INFO:tasks.workunit.client.1.vm09.stdout:5/793: dread d2/f61 [0,4194304] 0 2026-03-09T15:00:58.782 INFO:tasks.workunit.client.1.vm09.stdout:5/794: sync 2026-03-09T15:00:58.807 INFO:tasks.workunit.client.1.vm09.stdout:0/804: write da/dc/d92/d9e/fa2 [755755,63812] 0 2026-03-09T15:00:58.807 INFO:tasks.workunit.client.1.vm09.stdout:0/805: chown da/dc/d1c/f6d 0 1 2026-03-09T15:00:58.814 INFO:tasks.workunit.client.1.vm09.stdout:0/806: creat da/dc/d61/f10e x:0 0 0 2026-03-09T15:00:58.866 INFO:tasks.workunit.client.1.vm09.stdout:7/755: dwrite d3/fd [0,4194304] 0 2026-03-09T15:00:58.872 INFO:tasks.workunit.client.1.vm09.stdout:7/756: unlink d3/d1d/c78 0 2026-03-09T15:00:58.873 INFO:tasks.workunit.client.1.vm09.stdout:7/757: chown d3/db/d25/d5c/f8a 1540 1 2026-03-09T15:00:58.875 INFO:tasks.workunit.client.1.vm09.stdout:7/758: symlink d3/db/d25/d7d/dc6/le1 0 2026-03-09T15:00:58.875 INFO:tasks.workunit.client.1.vm09.stdout:7/759: stat d3/db/d46/dc9 0 2026-03-09T15:00:58.875 INFO:tasks.workunit.client.1.vm09.stdout:7/760: chown d3/db/d25/d5c/f5e 9 1 2026-03-09T15:00:58.876 INFO:tasks.workunit.client.1.vm09.stdout:7/761: read - d3/db/d25/fbb zero size 2026-03-09T15:00:58.877 INFO:tasks.workunit.client.1.vm09.stdout:7/762: creat d3/d3d/d9b/fe2 x:0 0 0 2026-03-09T15:00:58.877 INFO:tasks.workunit.client.1.vm09.stdout:7/763: chown d3/f7a 242264 1 2026-03-09T15:00:58.879 INFO:tasks.workunit.client.1.vm09.stdout:7/764: dread d3/db/f4d [0,4194304] 0 2026-03-09T15:00:58.881 INFO:tasks.workunit.client.1.vm09.stdout:7/765: mknod d3/d1d/ce3 0 2026-03-09T15:00:58.884 INFO:tasks.workunit.client.1.vm09.stdout:7/766: dwrite d3/db/d25/d7d/f8c [4194304,4194304] 0 2026-03-09T15:00:58.890 INFO:tasks.workunit.client.1.vm09.stdout:7/767: rmdir d3/db/d46/db2 39 2026-03-09T15:00:58.895 INFO:tasks.workunit.client.1.vm09.stdout:6/706: dwrite d6/f17 [0,4194304] 0 2026-03-09T15:00:58.899 
INFO:tasks.workunit.client.1.vm09.stdout:6/707: mknod d6/d20/d2a/dc4/dba/dd8/ce9 0 2026-03-09T15:00:58.899 INFO:tasks.workunit.client.1.vm09.stdout:6/708: readlink d6/df/l30 0 2026-03-09T15:00:58.905 INFO:tasks.workunit.client.1.vm09.stdout:6/709: dwrite d6/d20/d2a/f5e [0,4194304] 0 2026-03-09T15:00:58.906 INFO:tasks.workunit.client.1.vm09.stdout:6/710: write d6/d20/d38/d56/fb1 [292078,47669] 0 2026-03-09T15:00:58.910 INFO:tasks.workunit.client.1.vm09.stdout:1/653: rename d8/d50/d39/d95/d56/db0 to d8/d10/dc9 0 2026-03-09T15:00:58.910 INFO:tasks.workunit.client.1.vm09.stdout:1/654: write d8/d10/f12 [2805436,28070] 0 2026-03-09T15:00:58.918 INFO:tasks.workunit.client.1.vm09.stdout:3/789: write d3/d9a/d80/fdd [258919,72896] 0 2026-03-09T15:00:58.919 INFO:tasks.workunit.client.1.vm09.stdout:3/790: chown d3/d3a/d2b/d31/l67 257311 1 2026-03-09T15:00:58.920 INFO:tasks.workunit.client.1.vm09.stdout:8/778: rename df/d5b/d65/fe0 to df/d24/d99/db1/dcc/fe7 0 2026-03-09T15:00:58.924 INFO:tasks.workunit.client.1.vm09.stdout:3/791: creat d3/d9a/de3/dc4/f113 x:0 0 0 2026-03-09T15:00:58.924 INFO:tasks.workunit.client.1.vm09.stdout:5/795: rename d2/f93 to d2/d37/d53/d86/d88/dc9/f119 0 2026-03-09T15:00:58.927 INFO:tasks.workunit.client.1.vm09.stdout:6/711: link d6/d20/d38/d4e/f87 d6/df/fea 0 2026-03-09T15:00:58.932 INFO:tasks.workunit.client.1.vm09.stdout:5/796: dwrite d2/d37/f75 [0,4194304] 0 2026-03-09T15:00:58.932 INFO:tasks.workunit.client.1.vm09.stdout:3/792: fsync d3/d3a/d2b/d31/f40 0 2026-03-09T15:00:58.932 INFO:tasks.workunit.client.1.vm09.stdout:1/655: rename d8/d50/d39/d95/d72/lc0 to d8/d10/d24/d48/d9b/lca 0 2026-03-09T15:00:58.933 INFO:tasks.workunit.client.1.vm09.stdout:5/797: write d2/da9/fb9 [3032682,106234] 0 2026-03-09T15:00:58.935 INFO:tasks.workunit.client.1.vm09.stdout:6/712: dwrite d6/d20/f6e [0,4194304] 0 2026-03-09T15:00:58.935 INFO:tasks.workunit.client.1.vm09.stdout:5/798: dread d2/d37/d3c/d36/d4c/d51/fc7 [0,4194304] 0 2026-03-09T15:00:58.939 
INFO:tasks.workunit.client.1.vm09.stdout:5/799: readlink d2/d37/d3c/d36/d45/dae/dc3/l80 0 2026-03-09T15:00:58.950 INFO:tasks.workunit.client.1.vm09.stdout:1/656: getdents d8/d10/d24/d48/d9b/d68 0 2026-03-09T15:00:58.950 INFO:tasks.workunit.client.1.vm09.stdout:3/793: creat d3/d9a/f114 x:0 0 0 2026-03-09T15:00:58.952 INFO:tasks.workunit.client.1.vm09.stdout:1/657: write d8/d10/f13 [1044991,60027] 0 2026-03-09T15:00:58.955 INFO:tasks.workunit.client.1.vm09.stdout:6/713: link d6/db/c1d d6/d20/d2a/dc4/dba/dd8/ceb 0 2026-03-09T15:00:58.965 INFO:tasks.workunit.client.1.vm09.stdout:6/714: mknod d6/df/d23/de3/cec 0 2026-03-09T15:00:58.965 INFO:tasks.workunit.client.1.vm09.stdout:6/715: read d6/d20/d38/d4e/f87 [311339,104998] 0 2026-03-09T15:00:59.029 INFO:tasks.workunit.client.1.vm09.stdout:2/758: write df/d2d/f41 [1779857,73440] 0 2026-03-09T15:00:59.034 INFO:tasks.workunit.client.1.vm09.stdout:2/759: mkdir df/d58/d74/def 0 2026-03-09T15:00:59.036 INFO:tasks.workunit.client.1.vm09.stdout:2/760: mknod df/d20/d29/cf0 0 2026-03-09T15:00:59.039 INFO:tasks.workunit.client.1.vm09.stdout:2/761: dwrite df/d58/d67/f4e [0,4194304] 0 2026-03-09T15:00:59.039 INFO:tasks.workunit.client.1.vm09.stdout:2/762: chown df/l11 24798520 1 2026-03-09T15:00:59.046 INFO:tasks.workunit.client.1.vm09.stdout:4/787: write db/d12/d16/f26 [802663,56694] 0 2026-03-09T15:00:59.050 INFO:tasks.workunit.client.1.vm09.stdout:9/686: dwrite d1/d7/d1e/d2b/d8d/f9d [0,4194304] 0 2026-03-09T15:00:59.055 INFO:tasks.workunit.client.1.vm09.stdout:4/788: dwrite db/d19/d23/d44/d7c/d7d/d97/da3/fcb [0,4194304] 0 2026-03-09T15:00:59.056 INFO:tasks.workunit.client.1.vm09.stdout:0/807: dwrite da/dc/f17 [0,4194304] 0 2026-03-09T15:00:59.071 INFO:tasks.workunit.client.1.vm09.stdout:9/687: truncate d1/f29 306300 0 2026-03-09T15:00:59.072 INFO:tasks.workunit.client.1.vm09.stdout:4/789: creat db/d19/d23/d44/d7c/d7d/d97/da3/f100 x:0 0 0 2026-03-09T15:00:59.073 INFO:tasks.workunit.client.1.vm09.stdout:9/688: creat 
d1/d4f/d8f/d91/fe3 x:0 0 0 2026-03-09T15:00:59.084 INFO:tasks.workunit.client.1.vm09.stdout:9/689: fsync d1/d6e/f9b 0 2026-03-09T15:00:59.084 INFO:tasks.workunit.client.1.vm09.stdout:4/790: dread db/d12/f6b [0,4194304] 0 2026-03-09T15:00:59.084 INFO:tasks.workunit.client.1.vm09.stdout:9/690: truncate d1/f1f 3243344 0 2026-03-09T15:00:59.084 INFO:tasks.workunit.client.1.vm09.stdout:0/808: link da/dc/d10/f16 da/d30/f10f 0 2026-03-09T15:00:59.084 INFO:tasks.workunit.client.1.vm09.stdout:9/691: stat d1/d7/d1e/d2b/d40/c55 0 2026-03-09T15:00:59.084 INFO:tasks.workunit.client.1.vm09.stdout:4/791: dread - db/d19/d52/d76/d3b/fd1 zero size 2026-03-09T15:00:59.084 INFO:tasks.workunit.client.1.vm09.stdout:9/692: write d1/d7/d1e/d2b/d40/f4d [2019377,18035] 0 2026-03-09T15:00:59.084 INFO:tasks.workunit.client.1.vm09.stdout:0/809: truncate da/dc/d1c/d46/d63/d86/dcd/d7b/fb9 655268 0 2026-03-09T15:00:59.084 INFO:tasks.workunit.client.1.vm09.stdout:7/768: dwrite d3/d28/f69 [4194304,4194304] 0 2026-03-09T15:00:59.086 INFO:tasks.workunit.client.1.vm09.stdout:7/769: chown d3/d3d/d9b 186760489 1 2026-03-09T15:00:59.087 INFO:tasks.workunit.client.1.vm09.stdout:4/792: creat db/d19/d23/d44/f101 x:0 0 0 2026-03-09T15:00:59.092 INFO:tasks.workunit.client.1.vm09.stdout:8/779: dwrite df/d5c/f8b [0,4194304] 0 2026-03-09T15:00:59.092 INFO:tasks.workunit.client.1.vm09.stdout:4/793: read - db/d19/d23/d71/d53/dcf/dfb/ffd zero size 2026-03-09T15:00:59.093 INFO:tasks.workunit.client.1.vm09.stdout:4/794: write db/d19/d23/d71/fb3 [987676,15021] 0 2026-03-09T15:00:59.095 INFO:tasks.workunit.client.1.vm09.stdout:4/795: chown db/d19/d23/d44/d7c/d7d/d97/ccc 115400100 1 2026-03-09T15:00:59.098 INFO:tasks.workunit.client.1.vm09.stdout:7/770: dread - d3/d1d/d2d/f8f zero size 2026-03-09T15:00:59.098 INFO:tasks.workunit.client.1.vm09.stdout:9/693: unlink d1/d7/d1e/d2b/d2e/d56/d5e/l76 0 2026-03-09T15:00:59.100 INFO:tasks.workunit.client.1.vm09.stdout:9/694: chown d1/d7/d1e/d2b/d40/c73 46409 1 
2026-03-09T15:00:59.108 INFO:tasks.workunit.client.1.vm09.stdout:7/771: unlink f1 0 2026-03-09T15:00:59.109 INFO:tasks.workunit.client.1.vm09.stdout:8/780: mknod df/d38/ce8 0 2026-03-09T15:00:59.113 INFO:tasks.workunit.client.1.vm09.stdout:8/781: chown df/d5b/c7d 2353 1 2026-03-09T15:00:59.116 INFO:tasks.workunit.client.1.vm09.stdout:7/772: mkdir d3/d1d/d65/da3/de4 0 2026-03-09T15:00:59.117 INFO:tasks.workunit.client.1.vm09.stdout:5/800: write d2/d37/d3c/d36/d45/dae/dc3/f68 [1034989,3410] 0 2026-03-09T15:00:59.118 INFO:tasks.workunit.client.1.vm09.stdout:9/695: dwrite d1/d7/d1e/d2b/d40/fe2 [0,4194304] 0 2026-03-09T15:00:59.118 INFO:tasks.workunit.client.1.vm09.stdout:4/796: dwrite db/d19/d23/d71/d53/fa9 [4194304,4194304] 0 2026-03-09T15:00:59.118 INFO:tasks.workunit.client.1.vm09.stdout:8/782: stat df/d5b/d65/c25 0 2026-03-09T15:00:59.123 INFO:tasks.workunit.client.1.vm09.stdout:9/696: write d1/d58/fdf [514809,6886] 0 2026-03-09T15:00:59.124 INFO:tasks.workunit.client.1.vm09.stdout:5/801: write d2/d37/d3c/d36/d45/dae/dc3/f57 [1445960,75279] 0 2026-03-09T15:00:59.129 INFO:tasks.workunit.client.1.vm09.stdout:8/783: dwrite df/d5b/f82 [0,4194304] 0 2026-03-09T15:00:59.148 INFO:tasks.workunit.client.1.vm09.stdout:7/773: mknod d3/db/d15/d5f/d6e/d83/ce5 0 2026-03-09T15:00:59.152 INFO:tasks.workunit.client.1.vm09.stdout:4/797: creat db/d19/d23/d71/ddf/f102 x:0 0 0 2026-03-09T15:00:59.158 INFO:tasks.workunit.client.1.vm09.stdout:5/802: rename d2/d37/d3c/d36/d4c/d51/fb0 to d2/d37/d53/d86/d88/d117/f11a 0 2026-03-09T15:00:59.158 INFO:tasks.workunit.client.1.vm09.stdout:7/774: creat d3/d1d/fe6 x:0 0 0 2026-03-09T15:00:59.158 INFO:tasks.workunit.client.1.vm09.stdout:5/803: mkdir d2/db1/db2/d11b 0 2026-03-09T15:00:59.158 INFO:tasks.workunit.client.1.vm09.stdout:7/775: mknod d3/db/d25/d7d/dc6/ce7 0 2026-03-09T15:00:59.161 INFO:tasks.workunit.client.1.vm09.stdout:5/804: unlink d2/d37/d3c/d36/d45/f66 0 2026-03-09T15:00:59.161 INFO:tasks.workunit.client.1.vm09.stdout:7/776: symlink 
d3/db/d25/db7/le8 0 2026-03-09T15:00:59.161 INFO:tasks.workunit.client.1.vm09.stdout:8/784: getdents df/d2d/d46/d33/ddc 0 2026-03-09T15:00:59.162 INFO:tasks.workunit.client.1.vm09.stdout:7/777: chown d3/d61/ddf 856 1 2026-03-09T15:00:59.164 INFO:tasks.workunit.client.1.vm09.stdout:9/697: read d1/d7/d1e/f34 [2493550,71663] 0 2026-03-09T15:00:59.171 INFO:tasks.workunit.client.1.vm09.stdout:5/805: chown d2/d37/d53/cd8 58881452 1 2026-03-09T15:00:59.171 INFO:tasks.workunit.client.1.vm09.stdout:4/798: dwrite db/f29 [4194304,4194304] 0 2026-03-09T15:00:59.171 INFO:tasks.workunit.client.1.vm09.stdout:5/806: read d2/d37/d53/f81 [1447458,59299] 0 2026-03-09T15:00:59.171 INFO:tasks.workunit.client.1.vm09.stdout:4/799: stat db/d19/d52/d76/d3b/f49 0 2026-03-09T15:00:59.177 INFO:tasks.workunit.client.1.vm09.stdout:4/800: mkdir db/d19/d23/d44/dd2/d103 0 2026-03-09T15:00:59.186 INFO:tasks.workunit.client.1.vm09.stdout:9/698: write d1/d7/d1e/d2b/d8d/dd5/fdb [1900514,125448] 0 2026-03-09T15:00:59.186 INFO:tasks.workunit.client.1.vm09.stdout:5/807: write d2/d37/d3c/d36/d4c/d51/d96/f16 [377357,122828] 0 2026-03-09T15:00:59.186 INFO:tasks.workunit.client.1.vm09.stdout:8/785: link df/d38/d64/d5f/l97 df/d2d/d42/d70/dc0/le9 0 2026-03-09T15:00:59.187 INFO:tasks.workunit.client.1.vm09.stdout:5/808: creat d2/d37/d3c/d36/d45/dae/dc3/d115/f11c x:0 0 0 2026-03-09T15:00:59.187 INFO:tasks.workunit.client.1.vm09.stdout:9/699: rename d1/d7/d1e/c3d to d1/d6e/ce4 0 2026-03-09T15:00:59.190 INFO:tasks.workunit.client.1.vm09.stdout:3/794: write d3/d3a/d2b/d31/d4a/d62/f16 [372975,2721] 0 2026-03-09T15:00:59.191 INFO:tasks.workunit.client.1.vm09.stdout:8/786: write df/d24/d99/db6/fad [699878,70299] 0 2026-03-09T15:00:59.191 INFO:tasks.workunit.client.1.vm09.stdout:1/658: write d8/f57 [1060392,76836] 0 2026-03-09T15:00:59.191 INFO:tasks.workunit.client.1.vm09.stdout:4/801: dwrite db/d19/d23/d44/d7c/d7d/d97/ffa [0,4194304] 0 2026-03-09T15:00:59.197 INFO:tasks.workunit.client.1.vm09.stdout:5/809: readlink 
d2/d37/d3c/d36/d45/d5c/ddc/le7 0 2026-03-09T15:00:59.219 INFO:tasks.workunit.client.1.vm09.stdout:3/795: readlink d3/d3a/d2b/lc1 0 2026-03-09T15:00:59.219 INFO:tasks.workunit.client.1.vm09.stdout:8/787: rename l5 to df/d38/d64/lea 0 2026-03-09T15:00:59.219 INFO:tasks.workunit.client.1.vm09.stdout:8/788: stat df/d2d/d90 0 2026-03-09T15:00:59.219 INFO:tasks.workunit.client.1.vm09.stdout:4/802: dread db/f21 [0,4194304] 0 2026-03-09T15:00:59.219 INFO:tasks.workunit.client.1.vm09.stdout:9/700: dwrite d1/d7/d1e/f2a [4194304,4194304] 0 2026-03-09T15:00:59.219 INFO:tasks.workunit.client.1.vm09.stdout:4/803: write db/d12/d16/d5b/d78/d7f/de2/ff8 [228480,95229] 0 2026-03-09T15:00:59.219 INFO:tasks.workunit.client.1.vm09.stdout:1/659: dwrite d8/d10/d24/d45/f6c [0,4194304] 0 2026-03-09T15:00:59.219 INFO:tasks.workunit.client.1.vm09.stdout:1/660: stat d8/d10/d24/d48 0 2026-03-09T15:00:59.223 INFO:tasks.workunit.client.1.vm09.stdout:8/789: rmdir df/d24/d95/de1 0 2026-03-09T15:00:59.223 INFO:tasks.workunit.client.1.vm09.stdout:9/701: truncate d1/d7/d1e/d2b/d2e/f8e 674485 0 2026-03-09T15:00:59.227 INFO:tasks.workunit.client.1.vm09.stdout:7/778: sync 2026-03-09T15:00:59.227 INFO:tasks.workunit.client.1.vm09.stdout:4/804: write db/d12/d16/d5b/d78/d7f/f9d [3610766,75831] 0 2026-03-09T15:00:59.227 INFO:tasks.workunit.client.1.vm09.stdout:7/779: fdatasync d3/d28/f95 0 2026-03-09T15:00:59.230 INFO:tasks.workunit.client.1.vm09.stdout:8/790: rmdir df/d24/d95 39 2026-03-09T15:00:59.234 INFO:tasks.workunit.client.1.vm09.stdout:8/791: read - df/d24/d99/db6/d60/fc6 zero size 2026-03-09T15:00:59.234 INFO:tasks.workunit.client.1.vm09.stdout:4/805: creat db/d19/d23/d71/d53/dcf/dfb/f104 x:0 0 0 2026-03-09T15:00:59.235 INFO:tasks.workunit.client.1.vm09.stdout:6/716: write d6/d20/d38/d4e/d55/f77 [2274400,47422] 0 2026-03-09T15:00:59.236 INFO:tasks.workunit.client.1.vm09.stdout:6/717: stat d6/d20/d24/da5/fbd 0 2026-03-09T15:00:59.237 INFO:tasks.workunit.client.1.vm09.stdout:4/806: chown db/d12/f50 
2119583423 1 2026-03-09T15:00:59.238 INFO:tasks.workunit.client.1.vm09.stdout:4/807: stat db/d19/d23/d71/f4e 0 2026-03-09T15:00:59.240 INFO:tasks.workunit.client.1.vm09.stdout:7/780: link d3/db/d25/d5c/d75/ca7 d3/db/d25/ce9 0 2026-03-09T15:00:59.240 INFO:tasks.workunit.client.1.vm09.stdout:9/702: getdents d1/d7/db8 0 2026-03-09T15:00:59.241 INFO:tasks.workunit.client.1.vm09.stdout:1/661: dwrite d8/d50/d39/d95/d56/f9f [0,4194304] 0 2026-03-09T15:00:59.242 INFO:tasks.workunit.client.1.vm09.stdout:4/808: fdatasync db/d19/d23/d44/f101 0 2026-03-09T15:00:59.242 INFO:tasks.workunit.client.1.vm09.stdout:1/662: readlink d8/l3f 0 2026-03-09T15:00:59.243 INFO:tasks.workunit.client.1.vm09.stdout:7/781: symlink d3/d1d/d65/da3/lea 0 2026-03-09T15:00:59.246 INFO:tasks.workunit.client.1.vm09.stdout:9/703: mkdir d1/d7/d1e/d2b/d2e/d56/d5e/de5 0 2026-03-09T15:00:59.248 INFO:tasks.workunit.client.1.vm09.stdout:6/718: dread d6/db/fb3 [4194304,4194304] 0 2026-03-09T15:00:59.256 INFO:tasks.workunit.client.1.vm09.stdout:4/809: mkdir db/d12/d16/d5b/d105 0 2026-03-09T15:00:59.257 INFO:tasks.workunit.client.1.vm09.stdout:6/719: truncate d6/d20/d2a/d3d/d46/f84 1435410 0 2026-03-09T15:00:59.258 INFO:tasks.workunit.client.1.vm09.stdout:8/792: rename df/d24/d99/db6/d60/db7 to df/deb 0 2026-03-09T15:00:59.265 INFO:tasks.workunit.client.1.vm09.stdout:6/720: creat d6/d20/d38/d56/d65/fed x:0 0 0 2026-03-09T15:00:59.269 INFO:tasks.workunit.client.1.vm09.stdout:8/793: dwrite df/d2d/d90/fd4 [0,4194304] 0 2026-03-09T15:00:59.275 INFO:tasks.workunit.client.1.vm09.stdout:8/794: chown df/d2d/d46/d33/ddc 79521838 1 2026-03-09T15:00:59.276 INFO:tasks.workunit.client.1.vm09.stdout:5/810: dread d2/d37/d67/fc0 [0,4194304] 0 2026-03-09T15:00:59.307 INFO:tasks.workunit.client.1.vm09.stdout:6/721: sync 2026-03-09T15:00:59.307 INFO:tasks.workunit.client.1.vm09.stdout:5/811: sync 2026-03-09T15:00:59.307 INFO:tasks.workunit.client.1.vm09.stdout:8/795: sync 2026-03-09T15:00:59.307 
INFO:tasks.workunit.client.1.vm09.stdout:6/722: chown d6/db/d8b/l5f 2 1 2026-03-09T15:00:59.308 INFO:tasks.workunit.client.1.vm09.stdout:5/812: chown d2/d37/d3c/d36/d45/dae/dc3/l80 555302593 1 2026-03-09T15:00:59.309 INFO:tasks.workunit.client.1.vm09.stdout:8/796: sync 2026-03-09T15:00:59.309 INFO:tasks.workunit.client.1.vm09.stdout:6/723: mknod d6/d20/d38/d56/d65/d68/d6f/cee 0 2026-03-09T15:00:59.311 INFO:tasks.workunit.client.1.vm09.stdout:5/813: mknod d2/d37/d3c/d36/d45/d5c/ddc/c11d 0 2026-03-09T15:00:59.312 INFO:tasks.workunit.client.1.vm09.stdout:6/724: unlink d6/d20/d2a/d3d/f43 0 2026-03-09T15:00:59.316 INFO:tasks.workunit.client.1.vm09.stdout:1/663: read d8/d10/d24/d48/d9b/d78/fa2 [1883438,9366] 0 2026-03-09T15:00:59.321 INFO:tasks.workunit.client.1.vm09.stdout:6/725: dwrite d6/d20/d38/d56/d65/fed [0,4194304] 0 2026-03-09T15:00:59.325 INFO:tasks.workunit.client.1.vm09.stdout:2/763: dwrite fb [0,4194304] 0 2026-03-09T15:00:59.326 INFO:tasks.workunit.client.1.vm09.stdout:1/664: sync 2026-03-09T15:00:59.328 INFO:tasks.workunit.client.1.vm09.stdout:1/665: readlink d8/d10/d24/d48/la4 0 2026-03-09T15:00:59.329 INFO:tasks.workunit.client.1.vm09.stdout:1/666: fsync d8/d10/d24/d45/d5f/d8d/fa0 0 2026-03-09T15:00:59.330 INFO:tasks.workunit.client.1.vm09.stdout:1/667: truncate d8/d50/d39/f65 126084 0 2026-03-09T15:00:59.331 INFO:tasks.workunit.client.1.vm09.stdout:1/668: truncate d8/d50/d39/d95/d56/fc3 702269 0 2026-03-09T15:00:59.335 INFO:tasks.workunit.client.1.vm09.stdout:6/726: symlink d6/d20/d38/d4e/d55/dd2/lef 0 2026-03-09T15:00:59.335 INFO:tasks.workunit.client.1.vm09.stdout:2/764: symlink df/d1f/d6d/d8f/lf1 0 2026-03-09T15:00:59.338 INFO:tasks.workunit.client.1.vm09.stdout:1/669: mknod d8/d10/d24/d45/d5f/ccb 0 2026-03-09T15:00:59.339 INFO:tasks.workunit.client.1.vm09.stdout:6/727: mkdir d6/d20/d24/da5/df0 0 2026-03-09T15:00:59.348 INFO:tasks.workunit.client.1.vm09.stdout:6/728: truncate d6/d20/d24/d7e/f9c 3494787 0 2026-03-09T15:00:59.349 
INFO:tasks.workunit.client.1.vm09.stdout:6/729: chown d6/d20/d38/d56/d65/d68/d6f/cbe 135 1 2026-03-09T15:00:59.350 INFO:tasks.workunit.client.1.vm09.stdout:1/670: getdents d8/d10/d24/d45 0 2026-03-09T15:00:59.354 INFO:tasks.workunit.client.1.vm09.stdout:5/814: dread d2/d37/d3c/d36/d45/dae/dc3/f103 [0,4194304] 0 2026-03-09T15:00:59.355 INFO:tasks.workunit.client.1.vm09.stdout:6/730: rename d6/df/d23/f2f to d6/d20/d38/d56/ff1 0 2026-03-09T15:00:59.355 INFO:tasks.workunit.client.1.vm09.stdout:6/731: dread - d6/d20/d24/d7e/fe0 zero size 2026-03-09T15:00:59.356 INFO:tasks.workunit.client.1.vm09.stdout:5/815: creat d2/db1/db2/f11e x:0 0 0 2026-03-09T15:00:59.356 INFO:tasks.workunit.client.1.vm09.stdout:6/732: chown d6/d20/d38/d4e/d55/f5c 956981805 1 2026-03-09T15:00:59.357 INFO:tasks.workunit.client.1.vm09.stdout:1/671: rename d8/l9 to d8/d10/d24/d48/d9b/d78/lcc 0 2026-03-09T15:00:59.361 INFO:tasks.workunit.client.1.vm09.stdout:1/672: symlink d8/d10/lcd 0 2026-03-09T15:00:59.363 INFO:tasks.workunit.client.1.vm09.stdout:5/816: rename d2/d37/d3c/d36/d4c/d51/d96/c3e to d2/d37/d3c/d36/d45/dae/dc3/d10a/c11f 0 2026-03-09T15:00:59.364 INFO:tasks.workunit.client.1.vm09.stdout:1/673: creat d8/d10/d24/d48/d9b/d78/d8b/fce x:0 0 0 2026-03-09T15:00:59.364 INFO:tasks.workunit.client.1.vm09.stdout:6/733: link d6/d20/d38/d56/d65/d68/d86/dc0/ddb/c62 d6/db/d10/d7a/cf2 0 2026-03-09T15:00:59.365 INFO:tasks.workunit.client.1.vm09.stdout:6/734: mknod d6/d20/d44/cf3 0 2026-03-09T15:00:59.367 INFO:tasks.workunit.client.1.vm09.stdout:5/817: read d2/f22 [3196226,118649] 0 2026-03-09T15:00:59.369 INFO:tasks.workunit.client.1.vm09.stdout:5/818: mknod d2/d37/d53/d86/d88/dd7/c120 0 2026-03-09T15:00:59.371 INFO:tasks.workunit.client.1.vm09.stdout:5/819: mkdir d2/d37/d3c/d36/d45/d5c/ddc/d121 0 2026-03-09T15:00:59.373 INFO:tasks.workunit.client.1.vm09.stdout:6/735: rename d6/d20/d38/d4e/d55/f8a to d6/d20/d38/d56/ff4 0 2026-03-09T15:00:59.374 INFO:tasks.workunit.client.1.vm09.stdout:6/736: write 
d6/d20/d38/d56/d65/f7b [1274662,64680] 0 2026-03-09T15:00:59.374 INFO:tasks.workunit.client.1.vm09.stdout:5/820: creat d2/d37/d53/d86/f122 x:0 0 0 2026-03-09T15:00:59.382 INFO:tasks.workunit.client.1.vm09.stdout:6/737: dread d6/db/fb3 [4194304,4194304] 0 2026-03-09T15:00:59.382 INFO:tasks.workunit.client.1.vm09.stdout:6/738: chown d6/f83 29 1 2026-03-09T15:00:59.384 INFO:tasks.workunit.client.1.vm09.stdout:5/821: dwrite d2/d37/d3c/d36/f98 [4194304,4194304] 0 2026-03-09T15:00:59.391 INFO:tasks.workunit.client.1.vm09.stdout:6/739: dwrite d6/d20/d24/d7e/fe0 [0,4194304] 0 2026-03-09T15:00:59.397 INFO:tasks.workunit.client.1.vm09.stdout:5/822: mknod d2/d37/d3c/d36/d45/dae/dc3/d10a/c123 0 2026-03-09T15:00:59.399 INFO:tasks.workunit.client.1.vm09.stdout:6/740: symlink d6/d20/d24/d7e/d88/lf5 0 2026-03-09T15:00:59.399 INFO:tasks.workunit.client.1.vm09.stdout:5/823: chown d2/d37/d3c/d36/d4c/d89/fc5 57172 1 2026-03-09T15:00:59.400 INFO:tasks.workunit.client.1.vm09.stdout:6/741: chown d6/d20/d24/da5/fbd 32 1 2026-03-09T15:00:59.404 INFO:tasks.workunit.client.1.vm09.stdout:5/824: link d2/d37/d3c/d36/d45/dae/dc3/l80 d2/d37/d3c/d36/d45/d5c/ddc/l124 0 2026-03-09T15:00:59.406 INFO:tasks.workunit.client.1.vm09.stdout:5/825: rename d2/da9 to d2/d37/d3c/dbf/d125 0 2026-03-09T15:00:59.406 INFO:tasks.workunit.client.1.vm09.stdout:5/826: chown d2/db1 2 1 2026-03-09T15:00:59.408 INFO:tasks.workunit.client.1.vm09.stdout:5/827: write d2/d37/d3c/d36/f97 [2897684,83007] 0 2026-03-09T15:00:59.410 INFO:tasks.workunit.client.1.vm09.stdout:5/828: rmdir d2/d37/d3c/d36/d45/dae/dd3 39 2026-03-09T15:00:59.418 INFO:tasks.workunit.client.1.vm09.stdout:0/810: truncate da/dc/d92/d9e/fa2 3928233 0 2026-03-09T15:00:59.424 INFO:tasks.workunit.client.1.vm09.stdout:5/829: creat d2/d37/d3c/d36/d45/dae/dd3/f126 x:0 0 0 2026-03-09T15:00:59.424 INFO:tasks.workunit.client.1.vm09.stdout:0/811: creat da/d57/f110 x:0 0 0 2026-03-09T15:00:59.427 INFO:tasks.workunit.client.1.vm09.stdout:6/742: dread d6/db/d8b/f73 
[0,4194304] 0 2026-03-09T15:00:59.433 INFO:tasks.workunit.client.1.vm09.stdout:0/812: dwrite da/d57/f60 [0,4194304] 0 2026-03-09T15:00:59.435 INFO:tasks.workunit.client.1.vm09.stdout:5/830: write d2/d37/d67/fa7 [3633710,28063] 0 2026-03-09T15:00:59.437 INFO:tasks.workunit.client.1.vm09.stdout:0/813: dread - da/d30/fec zero size 2026-03-09T15:00:59.440 INFO:tasks.workunit.client.1.vm09.stdout:6/743: dread d6/d20/f70 [0,4194304] 0 2026-03-09T15:00:59.444 INFO:tasks.workunit.client.1.vm09.stdout:5/831: read d2/d37/d53/f79 [819469,48200] 0 2026-03-09T15:00:59.444 INFO:tasks.workunit.client.1.vm09.stdout:6/744: mknod d6/db/d10/d4f/cf6 0 2026-03-09T15:00:59.445 INFO:tasks.workunit.client.1.vm09.stdout:0/814: creat da/d30/f111 x:0 0 0 2026-03-09T15:00:59.446 INFO:tasks.workunit.client.1.vm09.stdout:0/815: chown da/dc/d8c/ce2 321794919 1 2026-03-09T15:00:59.451 INFO:tasks.workunit.client.1.vm09.stdout:5/832: mknod d2/d37/d53/d86/d88/c127 0 2026-03-09T15:00:59.452 INFO:tasks.workunit.client.1.vm09.stdout:5/833: unlink d2/d37/d53/d86/dad/fd6 0 2026-03-09T15:00:59.452 INFO:tasks.workunit.client.1.vm09.stdout:5/834: chown d2/d37/d3c/d36/d45/dae/fe5 1293 1 2026-03-09T15:00:59.453 INFO:tasks.workunit.client.1.vm09.stdout:3/796: truncate d3/d9a/d80/fdd 3497897 0 2026-03-09T15:00:59.455 INFO:tasks.workunit.client.1.vm09.stdout:5/835: dread d2/f61 [0,4194304] 0 2026-03-09T15:00:59.456 INFO:tasks.workunit.client.1.vm09.stdout:5/836: fsync d2/d37/d67/d95/db8/fe2 0 2026-03-09T15:00:59.492 INFO:tasks.workunit.client.1.vm09.stdout:8/797: rmdir df/deb 39 2026-03-09T15:00:59.493 INFO:tasks.workunit.client.1.vm09.stdout:7/782: truncate d3/d3d/f51 1275204 0 2026-03-09T15:00:59.494 INFO:tasks.workunit.client.1.vm09.stdout:9/704: write d1/d7/f13 [2370341,64614] 0 2026-03-09T15:00:59.497 INFO:tasks.workunit.client.1.vm09.stdout:8/798: mkdir df/d24/d99/db1/dec 0 2026-03-09T15:00:59.506 INFO:tasks.workunit.client.1.vm09.stdout:9/705: symlink d1/d58/le6 0 2026-03-09T15:00:59.507 
INFO:tasks.workunit.client.1.vm09.stdout:4/810: truncate db/d19/d23/d71/fe6 787327 0 2026-03-09T15:00:59.507 INFO:tasks.workunit.client.1.vm09.stdout:8/799: fsync df/d2d/d46/fa9 0 2026-03-09T15:00:59.507 INFO:tasks.workunit.client.1.vm09.stdout:7/783: dread d3/db/f42 [0,4194304] 0 2026-03-09T15:00:59.508 INFO:tasks.workunit.client.1.vm09.stdout:8/800: chown cd 0 1 2026-03-09T15:00:59.509 INFO:tasks.workunit.client.1.vm09.stdout:4/811: write db/d12/d16/f26 [5118671,27043] 0 2026-03-09T15:00:59.513 INFO:tasks.workunit.client.1.vm09.stdout:9/706: dwrite d1/d6e/f9b [0,4194304] 0 2026-03-09T15:00:59.520 INFO:tasks.workunit.client.1.vm09.stdout:4/812: dwrite db/d12/d16/d5b/d78/d7f/de2/ff8 [0,4194304] 0 2026-03-09T15:00:59.524 INFO:tasks.workunit.client.1.vm09.stdout:7/784: mknod d3/db/d46/ceb 0 2026-03-09T15:00:59.528 INFO:tasks.workunit.client.1.vm09.stdout:2/765: dwrite df/d1f/d47/f73 [0,4194304] 0 2026-03-09T15:00:59.531 INFO:tasks.workunit.client.1.vm09.stdout:2/766: read - df/d1f/fe5 zero size 2026-03-09T15:00:59.533 INFO:tasks.workunit.client.1.vm09.stdout:2/767: write df/d1f/d6d/d8f/d5f/fe4 [189003,63968] 0 2026-03-09T15:00:59.533 INFO:tasks.workunit.client.1.vm09.stdout:8/801: getdents df/d38/d64/daf 0 2026-03-09T15:00:59.534 INFO:tasks.workunit.client.1.vm09.stdout:8/802: fsync df/d2d/d42/fd2 0 2026-03-09T15:00:59.535 INFO:tasks.workunit.client.1.vm09.stdout:4/813: symlink db/d12/d16/d5b/d78/de3/l106 0 2026-03-09T15:00:59.536 INFO:tasks.workunit.client.1.vm09.stdout:1/674: write d8/d10/d24/f2a [5140679,10734] 0 2026-03-09T15:00:59.539 INFO:tasks.workunit.client.1.vm09.stdout:6/745: rmdir d6/d20/d38/d56 39 2026-03-09T15:00:59.542 INFO:tasks.workunit.client.1.vm09.stdout:1/675: dwrite d8/d10/d24/d48/f76 [0,4194304] 0 2026-03-09T15:00:59.549 INFO:tasks.workunit.client.1.vm09.stdout:9/707: creat d1/d7/d1e/d2b/d8d/dc8/fe7 x:0 0 0 2026-03-09T15:00:59.552 INFO:tasks.workunit.client.1.vm09.stdout:2/768: creat df/d1f/d47/d5d/dbc/ff2 x:0 0 0 2026-03-09T15:00:59.556 
INFO:tasks.workunit.client.1.vm09.stdout:4/814: creat db/d19/d23/d71/d53/dcf/dfb/f107 x:0 0 0 2026-03-09T15:00:59.560 INFO:tasks.workunit.client.1.vm09.stdout:6/746: mknod d6/df/d23/de3/cf7 0 2026-03-09T15:00:59.573 INFO:tasks.workunit.client.1.vm09.stdout:0/816: dwrite da/dc/d61/f66 [0,4194304] 0 2026-03-09T15:00:59.578 INFO:tasks.workunit.client.1.vm09.stdout:9/708: read d1/d7/d1e/d2b/d40/f43 [536669,40096] 0 2026-03-09T15:00:59.590 INFO:tasks.workunit.client.1.vm09.stdout:0/817: stat da/dc/d61/l102 0 2026-03-09T15:00:59.591 INFO:tasks.workunit.client.1.vm09.stdout:4/815: truncate db/d12/d16/f54 215925 0 2026-03-09T15:00:59.594 INFO:tasks.workunit.client.1.vm09.stdout:9/709: creat d1/fe8 x:0 0 0 2026-03-09T15:00:59.595 INFO:tasks.workunit.client.1.vm09.stdout:9/710: chown d1/d7/d1e/f34 20 1 2026-03-09T15:00:59.596 INFO:tasks.workunit.client.1.vm09.stdout:9/711: unlink d1/d7/f13 0 2026-03-09T15:00:59.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:00:59 vm09.local ceph-mon[59673]: pgmap v156: 65 pgs: 65 active+clean; 1.5 GiB data, 5.3 GiB used, 115 GiB / 120 GiB avail; 53 MiB/s rd, 129 MiB/s wr, 338 op/s 2026-03-09T15:00:59.619 INFO:tasks.workunit.client.1.vm09.stdout:6/747: mknod d6/db/d8b/cf8 0 2026-03-09T15:00:59.620 INFO:tasks.workunit.client.1.vm09.stdout:3/797: link d3/f3e d3/d3a/d2b/d7b/db6/f115 0 2026-03-09T15:00:59.622 INFO:tasks.workunit.client.1.vm09.stdout:3/798: mkdir d3/d5b/d79/d9d/d116 0 2026-03-09T15:00:59.625 INFO:tasks.workunit.client.1.vm09.stdout:3/799: mknod d3/d3a/d2b/d31/d9e/c117 0 2026-03-09T15:00:59.654 INFO:tasks.workunit.client.1.vm09.stdout:3/800: dread d3/f3b [8388608,4194304] 0 2026-03-09T15:00:59.657 INFO:tasks.workunit.client.1.vm09.stdout:3/801: link d3/d9a/d80/ldf d3/d3a/d2b/d31/d9e/l118 0 2026-03-09T15:00:59.661 INFO:tasks.workunit.client.1.vm09.stdout:3/802: dwrite d3/d100/f84 [0,4194304] 0 2026-03-09T15:00:59.738 INFO:tasks.workunit.client.1.vm09.stdout:0/818: write da/dc/d10/f29 [5084488,85557] 0 
2026-03-09T15:00:59.742 INFO:tasks.workunit.client.1.vm09.stdout:4/816: dwrite db/d19/d52/d76/d3b/f48 [0,4194304] 0 2026-03-09T15:00:59.757 INFO:tasks.workunit.client.1.vm09.stdout:9/712: dwrite d1/d4f/d8f/d91/fa5 [0,4194304] 0 2026-03-09T15:00:59.758 INFO:tasks.workunit.client.1.vm09.stdout:9/713: write d1/d7/d1e/d2b/d40/fe2 [2567220,65434] 0 2026-03-09T15:00:59.762 INFO:tasks.workunit.client.1.vm09.stdout:9/714: truncate d1/fe8 686946 0 2026-03-09T15:00:59.763 INFO:tasks.workunit.client.1.vm09.stdout:9/715: creat d1/d4f/d8f/dc0/fe9 x:0 0 0 2026-03-09T15:00:59.763 INFO:tasks.workunit.client.1.vm09.stdout:9/716: stat d1/d58/c93 0 2026-03-09T15:00:59.778 INFO:tasks.workunit.client.1.vm09.stdout:9/717: dread d1/d7/d1e/d2b/f5f [0,4194304] 0 2026-03-09T15:00:59.784 INFO:tasks.workunit.client.1.vm09.stdout:9/718: creat d1/d7/d1e/d2b/d2e/fea x:0 0 0 2026-03-09T15:00:59.785 INFO:tasks.workunit.client.1.vm09.stdout:9/719: chown d1/d7/d1e/d2b/c66 202 1 2026-03-09T15:00:59.787 INFO:tasks.workunit.client.1.vm09.stdout:9/720: getdents d1/d58 0 2026-03-09T15:00:59.802 INFO:tasks.workunit.client.1.vm09.stdout:6/748: dwrite d6/d20/d38/fa3 [0,4194304] 0 2026-03-09T15:00:59.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:00:59 vm05.local ceph-mon[50611]: pgmap v156: 65 pgs: 65 active+clean; 1.5 GiB data, 5.3 GiB used, 115 GiB / 120 GiB avail; 53 MiB/s rd, 129 MiB/s wr, 338 op/s 2026-03-09T15:00:59.809 INFO:tasks.workunit.client.1.vm09.stdout:6/749: mknod d6/df/d23/de3/cf9 0 2026-03-09T15:00:59.833 INFO:tasks.workunit.client.1.vm09.stdout:7/785: creat d3/d3d/fec x:0 0 0 2026-03-09T15:00:59.834 INFO:tasks.workunit.client.1.vm09.stdout:7/786: chown d3/db/d15 2 1 2026-03-09T15:00:59.834 INFO:tasks.workunit.client.1.vm09.stdout:7/787: readlink d3/db/d25/l40 0 2026-03-09T15:00:59.837 INFO:tasks.workunit.client.1.vm09.stdout:7/788: write d3/db/d25/d5c/d75/db4/fc5 [4336513,67328] 0 2026-03-09T15:00:59.837 INFO:tasks.workunit.client.1.vm09.stdout:2/769: mknod df/d1f/d47/cf3 0 
2026-03-09T15:00:59.845 INFO:tasks.workunit.client.1.vm09.stdout:2/770: mkdir df/d58/df4 0 2026-03-09T15:00:59.845 INFO:tasks.workunit.client.1.vm09.stdout:2/771: stat df/f13 0 2026-03-09T15:00:59.850 INFO:tasks.workunit.client.1.vm09.stdout:3/803: truncate d3/d9a/de3/dc4/ff0 1751173 0 2026-03-09T15:00:59.851 INFO:tasks.workunit.client.1.vm09.stdout:3/804: chown d3/d5b/d79/d9d/faf 30947466 1 2026-03-09T15:00:59.852 INFO:tasks.workunit.client.1.vm09.stdout:2/772: link df/d1f/d6d/d8f/f99 df/d93/ff5 0 2026-03-09T15:00:59.856 INFO:tasks.workunit.client.1.vm09.stdout:2/773: truncate df/d1f/d47/d84/db7/dc3/dc4/fe8 1650237 0 2026-03-09T15:00:59.856 INFO:tasks.workunit.client.1.vm09.stdout:5/837: rename d2/d37/f43 to d2/d37/d3c/dbf/f128 0 2026-03-09T15:00:59.859 INFO:tasks.workunit.client.1.vm09.stdout:1/676: rename d8/d10/d24/d45/d5f/d8d to d8/d10/d24/d45/dcf 0 2026-03-09T15:00:59.860 INFO:tasks.workunit.client.1.vm09.stdout:5/838: mknod d2/db1/c129 0 2026-03-09T15:00:59.863 INFO:tasks.workunit.client.1.vm09.stdout:8/803: rename df/l17 to df/deb/led 0 2026-03-09T15:00:59.863 INFO:tasks.workunit.client.1.vm09.stdout:3/805: dread d3/d3a/d2b/d31/f3f [0,4194304] 0 2026-03-09T15:00:59.865 INFO:tasks.workunit.client.1.vm09.stdout:5/839: unlink d2/d37/d67/feb 0 2026-03-09T15:00:59.868 INFO:tasks.workunit.client.1.vm09.stdout:4/817: rename db/d19/d23/d71/ddf to db/d19/d23/d44/dd2/d108 0 2026-03-09T15:00:59.870 INFO:tasks.workunit.client.1.vm09.stdout:8/804: dwrite df/d24/f83 [0,4194304] 0 2026-03-09T15:00:59.874 INFO:tasks.workunit.client.1.vm09.stdout:5/840: symlink d2/d37/d3c/d36/d45/dfd/l12a 0 2026-03-09T15:00:59.881 INFO:tasks.workunit.client.1.vm09.stdout:8/805: dread df/d24/f83 [0,4194304] 0 2026-03-09T15:00:59.881 INFO:tasks.workunit.client.1.vm09.stdout:5/841: truncate d2/d37/d53/d86/d88/dc9/f10c 915799 0 2026-03-09T15:00:59.882 INFO:tasks.workunit.client.1.vm09.stdout:4/818: dread db/d19/d23/d44/f45 [0,4194304] 0 2026-03-09T15:00:59.882 
INFO:tasks.workunit.client.1.vm09.stdout:5/842: read d2/d37/f6c [1466816,129169] 0 2026-03-09T15:00:59.883 INFO:tasks.workunit.client.1.vm09.stdout:6/750: rename d6/db/ce to d6/d20/d2a/d3d/cfa 0 2026-03-09T15:00:59.885 INFO:tasks.workunit.client.1.vm09.stdout:8/806: mkdir df/d24/d99/db6/dee 0 2026-03-09T15:00:59.889 INFO:tasks.workunit.client.1.vm09.stdout:2/774: sync 2026-03-09T15:00:59.890 INFO:tasks.workunit.client.1.vm09.stdout:4/819: rename f3 to db/d12/d16/d5b/d105/f109 0 2026-03-09T15:00:59.891 INFO:tasks.workunit.client.1.vm09.stdout:2/775: write df/da0/fc6 [10537,40228] 0 2026-03-09T15:00:59.892 INFO:tasks.workunit.client.1.vm09.stdout:5/843: symlink d2/d37/d53/d86/dad/l12b 0 2026-03-09T15:00:59.894 INFO:tasks.workunit.client.1.vm09.stdout:4/820: write db/d12/d16/d5b/d78/d7f/f9d [469194,59598] 0 2026-03-09T15:00:59.895 INFO:tasks.workunit.client.1.vm09.stdout:2/776: dread df/d1f/f55 [0,4194304] 0 2026-03-09T15:00:59.898 INFO:tasks.workunit.client.1.vm09.stdout:4/821: dread - db/d19/d23/d44/d84/fd5 zero size 2026-03-09T15:00:59.898 INFO:tasks.workunit.client.1.vm09.stdout:6/751: dwrite d6/d20/f70 [0,4194304] 0 2026-03-09T15:00:59.903 INFO:tasks.workunit.client.1.vm09.stdout:5/844: write d2/d37/d53/d86/d88/dc9/f119 [2449730,96821] 0 2026-03-09T15:00:59.904 INFO:tasks.workunit.client.1.vm09.stdout:2/777: rmdir df/d1f/d47 39 2026-03-09T15:00:59.907 INFO:tasks.workunit.client.1.vm09.stdout:2/778: chown df/d1f/f9c 1 1 2026-03-09T15:00:59.910 INFO:tasks.workunit.client.1.vm09.stdout:4/822: dwrite db/d12/d16/d5b/d78/de3/ff7 [0,4194304] 0 2026-03-09T15:00:59.911 INFO:tasks.workunit.client.1.vm09.stdout:4/823: truncate db/d19/d81/d5d/f8a 6045397 0 2026-03-09T15:00:59.914 INFO:tasks.workunit.client.1.vm09.stdout:2/779: sync 2026-03-09T15:00:59.920 INFO:tasks.workunit.client.1.vm09.stdout:4/824: unlink db/d19/d23/d44/ca4 0 2026-03-09T15:00:59.922 INFO:tasks.workunit.client.1.vm09.stdout:4/825: dread - db/d19/d23/d71/d5f/ff6 zero size 2026-03-09T15:00:59.926 
INFO:tasks.workunit.client.1.vm09.stdout:2/780: dread df/f42 [0,4194304] 0 2026-03-09T15:00:59.930 INFO:tasks.workunit.client.1.vm09.stdout:4/826: mkdir db/d19/d23/d71/d53/d10a 0 2026-03-09T15:00:59.935 INFO:tasks.workunit.client.1.vm09.stdout:6/752: dread d6/f39 [0,4194304] 0 2026-03-09T15:00:59.936 INFO:tasks.workunit.client.1.vm09.stdout:4/827: symlink db/d12/d16/d5b/d105/l10b 0 2026-03-09T15:00:59.938 INFO:tasks.workunit.client.1.vm09.stdout:2/781: dread df/d20/f6a [0,4194304] 0 2026-03-09T15:00:59.939 INFO:tasks.workunit.client.1.vm09.stdout:6/753: chown d6/d20/d38/d4e/d55/l9e 402 1 2026-03-09T15:00:59.940 INFO:tasks.workunit.client.1.vm09.stdout:5/845: dread d2/d37/d3c/d36/d45/dae/dc3/f58 [0,4194304] 0 2026-03-09T15:00:59.940 INFO:tasks.workunit.client.1.vm09.stdout:4/828: unlink db/d19/dcd/fce 0 2026-03-09T15:00:59.943 INFO:tasks.workunit.client.1.vm09.stdout:4/829: fdatasync db/d19/d23/d71/d53/dcf/dfb/f104 0 2026-03-09T15:00:59.947 INFO:tasks.workunit.client.1.vm09.stdout:8/807: dread df/d24/d99/db6/f59 [4194304,4194304] 0 2026-03-09T15:00:59.952 INFO:tasks.workunit.client.1.vm09.stdout:2/782: rename df/d93/da3/dcf/cec to df/d1f/d47/d84/db7/dc3/cf6 0 2026-03-09T15:00:59.952 INFO:tasks.workunit.client.1.vm09.stdout:4/830: truncate db/d12/fb8 730943 0 2026-03-09T15:00:59.956 INFO:tasks.workunit.client.1.vm09.stdout:4/831: creat db/d19/d52/d76/d3b/f10c x:0 0 0 2026-03-09T15:00:59.957 INFO:tasks.workunit.client.1.vm09.stdout:4/832: chown db/d19/d23/d44/d7c/d7d/fb9 66433726 1 2026-03-09T15:00:59.958 INFO:tasks.workunit.client.1.vm09.stdout:2/783: mknod df/d58/df4/cf7 0 2026-03-09T15:00:59.959 INFO:tasks.workunit.client.1.vm09.stdout:2/784: dread df/d1f/f55 [0,4194304] 0 2026-03-09T15:00:59.959 INFO:tasks.workunit.client.1.vm09.stdout:2/785: chown df/d1f/d47/d5d 622 1 2026-03-09T15:00:59.960 INFO:tasks.workunit.client.1.vm09.stdout:2/786: chown df/d2d/l62 498947 1 2026-03-09T15:00:59.961 INFO:tasks.workunit.client.1.vm09.stdout:2/787: chown df/d1f/d47/d84/fd7 27 
1 2026-03-09T15:00:59.963 INFO:tasks.workunit.client.1.vm09.stdout:8/808: link df/d5b/fd0 df/d2d/d46/fef 0 2026-03-09T15:00:59.966 INFO:tasks.workunit.client.1.vm09.stdout:4/833: truncate db/d12/d16/f60 287336 0 2026-03-09T15:00:59.967 INFO:tasks.workunit.client.1.vm09.stdout:2/788: creat df/d1f/d47/d5d/dbc/ff8 x:0 0 0 2026-03-09T15:00:59.967 INFO:tasks.workunit.client.1.vm09.stdout:4/834: fdatasync db/d12/d16/d5b/d78/d7f/de2/ff8 0 2026-03-09T15:00:59.967 INFO:tasks.workunit.client.1.vm09.stdout:2/789: chown df/d58/d67/l79 511662629 1 2026-03-09T15:00:59.968 INFO:tasks.workunit.client.1.vm09.stdout:8/809: dread - df/d2d/d42/f8c zero size 2026-03-09T15:00:59.968 INFO:tasks.workunit.client.1.vm09.stdout:4/835: readlink db/d19/d23/d44/d7c/d7d/d97/da3/ld8 0 2026-03-09T15:00:59.969 INFO:tasks.workunit.client.1.vm09.stdout:2/790: write df/d20/fb8 [408940,15214] 0 2026-03-09T15:00:59.969 INFO:tasks.workunit.client.1.vm09.stdout:4/836: chown db/d19/d52/d76/d3b/f10c 3245 1 2026-03-09T15:00:59.972 INFO:tasks.workunit.client.1.vm09.stdout:8/810: symlink df/d2d/d4f/lf0 0 2026-03-09T15:00:59.973 INFO:tasks.workunit.client.1.vm09.stdout:8/811: readlink df/d2d/d4f/l8f 0 2026-03-09T15:00:59.974 INFO:tasks.workunit.client.1.vm09.stdout:4/837: rename db/d19/d23/d44/d7c/d7d/lc0 to db/d19/l10d 0 2026-03-09T15:00:59.976 INFO:tasks.workunit.client.1.vm09.stdout:2/791: rename df/l6b to df/d58/df4/lf9 0 2026-03-09T15:00:59.982 INFO:tasks.workunit.client.1.vm09.stdout:4/838: creat db/d12/f10e x:0 0 0 2026-03-09T15:00:59.983 INFO:tasks.workunit.client.1.vm09.stdout:4/839: chown db/d12/d9e/fe0 61 1 2026-03-09T15:00:59.987 INFO:tasks.workunit.client.1.vm09.stdout:0/819: dwrite da/dc/d1c/d3c/d78/f88 [0,4194304] 0 2026-03-09T15:00:59.988 INFO:tasks.workunit.client.1.vm09.stdout:4/840: creat db/d19/d52/d76/d3b/de7/f10f x:0 0 0 2026-03-09T15:00:59.992 INFO:tasks.workunit.client.1.vm09.stdout:0/820: getdents da/dc/d1c/d46/d63/de6 0 2026-03-09T15:00:59.998 
INFO:tasks.workunit.client.1.vm09.stdout:4/841: dwrite db/d12/d9e/fe0 [0,4194304] 0 2026-03-09T15:00:59.999 INFO:tasks.workunit.client.1.vm09.stdout:4/842: readlink db/d19/d23/d44/le4 0 2026-03-09T15:01:00.000 INFO:tasks.workunit.client.1.vm09.stdout:0/821: dwrite da/dc/d1c/d3c/d78/d7a/fed [0,4194304] 0 2026-03-09T15:01:00.008 INFO:tasks.workunit.client.1.vm09.stdout:4/843: fsync db/d19/f96 0 2026-03-09T15:01:00.010 INFO:tasks.workunit.client.1.vm09.stdout:0/822: link da/dc/d1c/d3c/d78/d7a/dbb/ffc da/dc/dc0/f112 0 2026-03-09T15:01:00.018 INFO:tasks.workunit.client.1.vm09.stdout:4/844: dread db/d19/f38 [0,4194304] 0 2026-03-09T15:01:00.021 INFO:tasks.workunit.client.1.vm09.stdout:0/823: dread da/dc/d1c/d46/d63/d86/dcd/f93 [0,4194304] 0 2026-03-09T15:01:00.021 INFO:tasks.workunit.client.1.vm09.stdout:4/845: chown db/d19/d23/d71/d53/dcf/dfb 116 1 2026-03-09T15:01:00.021 INFO:tasks.workunit.client.1.vm09.stdout:0/824: getdents da/dc/d1c/d3c/d78/d7a 0 2026-03-09T15:01:00.024 INFO:tasks.workunit.client.1.vm09.stdout:0/825: creat da/dc/d22/d64/f113 x:0 0 0 2026-03-09T15:01:00.027 INFO:tasks.workunit.client.1.vm09.stdout:0/826: dwrite da/dc/d22/d64/f113 [0,4194304] 0 2026-03-09T15:01:00.039 INFO:tasks.workunit.client.1.vm09.stdout:3/806: dread d3/d9a/f97 [0,4194304] 0 2026-03-09T15:01:00.039 INFO:tasks.workunit.client.1.vm09.stdout:3/807: stat d3/d100/d48/dc5/l109 0 2026-03-09T15:01:00.042 INFO:tasks.workunit.client.1.vm09.stdout:0/827: dwrite da/dc/dcb/dd9/ffb [0,4194304] 0 2026-03-09T15:01:00.051 INFO:tasks.workunit.client.1.vm09.stdout:0/828: truncate da/dc/d1c/d3c/f81 1175098 0 2026-03-09T15:01:00.051 INFO:tasks.workunit.client.1.vm09.stdout:0/829: readlink da/dc/d1c/d46/d63/l9b 0 2026-03-09T15:01:00.055 INFO:tasks.workunit.client.1.vm09.stdout:3/808: getdents d3/d5b/d79 0 2026-03-09T15:01:00.057 INFO:tasks.workunit.client.1.vm09.stdout:0/830: getdents da/dc/d22/d64/df3 0 2026-03-09T15:01:00.058 INFO:tasks.workunit.client.1.vm09.stdout:3/809: rename d3/d100/d48/f5f to 
d3/d100/d6a/dd5/f119 0 2026-03-09T15:01:00.062 INFO:tasks.workunit.client.1.vm09.stdout:0/831: dread da/dc/d1c/d3c/d78/d7a/dbb/ffc [0,4194304] 0 2026-03-09T15:01:00.069 INFO:tasks.workunit.client.1.vm09.stdout:3/810: getdents d3/d3a/d2b/d31/d9e 0 2026-03-09T15:01:00.072 INFO:tasks.workunit.client.1.vm09.stdout:0/832: rename da/f101 to da/d30/d36/f114 0 2026-03-09T15:01:00.072 INFO:tasks.workunit.client.1.vm09.stdout:0/833: chown da/dc/d22/c100 77 1 2026-03-09T15:01:00.074 INFO:tasks.workunit.client.1.vm09.stdout:3/811: fdatasync d3/d3a/d2b/d31/d4a/d62/f78 0 2026-03-09T15:01:00.074 INFO:tasks.workunit.client.1.vm09.stdout:0/834: dread - da/dc/d22/d76/fcf zero size 2026-03-09T15:01:00.089 INFO:tasks.workunit.client.1.vm09.stdout:0/835: sync 2026-03-09T15:01:00.090 INFO:tasks.workunit.client.1.vm09.stdout:0/836: chown da/dc/d1c/d3c/d44/lb4 0 1 2026-03-09T15:01:00.097 INFO:tasks.workunit.client.1.vm09.stdout:0/837: fdatasync da/dc/d1c/d3c/f81 0 2026-03-09T15:01:00.098 INFO:tasks.workunit.client.1.vm09.stdout:3/812: dread d3/d100/d6a/dd5/f119 [0,4194304] 0 2026-03-09T15:01:00.100 INFO:tasks.workunit.client.1.vm09.stdout:0/838: creat da/dc/d92/d9e/f115 x:0 0 0 2026-03-09T15:01:00.102 INFO:tasks.workunit.client.1.vm09.stdout:0/839: fsync da/dc/dcb/dd4/fe1 0 2026-03-09T15:01:00.104 INFO:tasks.workunit.client.1.vm09.stdout:0/840: truncate da/dc/d22/f9d 977782 0 2026-03-09T15:01:00.106 INFO:tasks.workunit.client.1.vm09.stdout:0/841: write da/d30/f6f [1403855,4541] 0 2026-03-09T15:01:00.107 INFO:tasks.workunit.client.1.vm09.stdout:0/842: write da/d57/f60 [2986068,14491] 0 2026-03-09T15:01:00.108 INFO:tasks.workunit.client.1.vm09.stdout:9/721: dwrite d1/d7/d1e/d2b/f32 [0,4194304] 0 2026-03-09T15:01:00.111 INFO:tasks.workunit.client.1.vm09.stdout:9/722: sync 2026-03-09T15:01:00.113 INFO:tasks.workunit.client.1.vm09.stdout:9/723: sync 2026-03-09T15:01:00.118 INFO:tasks.workunit.client.1.vm09.stdout:0/843: chown da/c8f 92739618 1 2026-03-09T15:01:00.124 
INFO:tasks.workunit.client.1.vm09.stdout:7/789: dwrite d3/f32 [0,4194304] 0 2026-03-09T15:01:00.152 INFO:tasks.workunit.client.1.vm09.stdout:0/844: creat da/dc/d1c/d3c/d78/f116 x:0 0 0 2026-03-09T15:01:00.153 INFO:tasks.workunit.client.1.vm09.stdout:7/790: mknod d3/d3d/d9b/ced 0 2026-03-09T15:01:00.160 INFO:tasks.workunit.client.1.vm09.stdout:9/724: rmdir d1/d7/d1e/d2b/d2e/d56/d5e/de5 0 2026-03-09T15:01:00.161 INFO:tasks.workunit.client.1.vm09.stdout:9/725: chown d1/d7/f83 78899853 1 2026-03-09T15:01:00.164 INFO:tasks.workunit.client.1.vm09.stdout:7/791: dread d3/db/d25/d7d/f8c [0,4194304] 0 2026-03-09T15:01:00.165 INFO:tasks.workunit.client.1.vm09.stdout:0/845: link da/dc/d22/d64/f113 da/dc/d10/f117 0 2026-03-09T15:01:00.171 INFO:tasks.workunit.client.1.vm09.stdout:9/726: write d1/d7/d1e/d2b/f5f [2065371,125785] 0 2026-03-09T15:01:00.174 INFO:tasks.workunit.client.1.vm09.stdout:9/727: dwrite d1/d58/f80 [0,4194304] 0 2026-03-09T15:01:00.176 INFO:tasks.workunit.client.1.vm09.stdout:9/728: fdatasync d1/d7/d1e/d2b/d2e/d56/d6d/fbd 0 2026-03-09T15:01:00.195 INFO:tasks.workunit.client.1.vm09.stdout:0/846: creat da/dc/d1c/f118 x:0 0 0 2026-03-09T15:01:00.197 INFO:tasks.workunit.client.1.vm09.stdout:0/847: symlink da/dc/d22/d64/df3/l119 0 2026-03-09T15:01:00.199 INFO:tasks.workunit.client.1.vm09.stdout:0/848: chown da/dc/d1c/d3c/c5e 8435374 1 2026-03-09T15:01:00.200 INFO:tasks.workunit.client.1.vm09.stdout:0/849: fdatasync da/dc/d1c/d3c/d78/f116 0 2026-03-09T15:01:00.202 INFO:tasks.workunit.client.1.vm09.stdout:0/850: mknod da/d57/c11a 0 2026-03-09T15:01:00.202 INFO:tasks.workunit.client.1.vm09.stdout:1/677: write d8/d50/d39/d95/f4c [724072,47159] 0 2026-03-09T15:01:00.207 INFO:tasks.workunit.client.1.vm09.stdout:1/678: creat d8/d90/fd0 x:0 0 0 2026-03-09T15:01:00.208 INFO:tasks.workunit.client.1.vm09.stdout:0/851: getdents da/dc/d22/df0 0 2026-03-09T15:01:00.210 INFO:tasks.workunit.client.1.vm09.stdout:1/679: dwrite d8/d10/f13 [0,4194304] 0 2026-03-09T15:01:00.213 
INFO:tasks.workunit.client.1.vm09.stdout:1/680: fdatasync d8/d10/f12 0 2026-03-09T15:01:00.216 INFO:tasks.workunit.client.1.vm09.stdout:0/852: write da/dc/d1c/d46/fd8 [77332,118364] 0 2026-03-09T15:01:00.222 INFO:tasks.workunit.client.1.vm09.stdout:0/853: creat da/dc/d1c/d46/d63/de6/f11b x:0 0 0 2026-03-09T15:01:00.227 INFO:tasks.workunit.client.1.vm09.stdout:1/681: dread d8/d10/d24/d45/d5f/f60 [0,4194304] 0 2026-03-09T15:01:00.233 INFO:tasks.workunit.client.1.vm09.stdout:0/854: rename da/dc/d22/d76 to da/dc/dcb/dd4/d11c 0 2026-03-09T15:01:00.238 INFO:tasks.workunit.client.1.vm09.stdout:0/855: mknod da/dc/d1c/d46/d63/d86/dcd/d7b/c11d 0 2026-03-09T15:01:00.238 INFO:tasks.workunit.client.1.vm09.stdout:1/682: mknod d8/d10/d24/d48/d9b/d78/db4/cd1 0 2026-03-09T15:01:00.242 INFO:tasks.workunit.client.1.vm09.stdout:0/856: dread da/dc/d1c/d3c/d78/d7a/dbb/ffc [0,4194304] 0 2026-03-09T15:01:00.243 INFO:tasks.workunit.client.1.vm09.stdout:1/683: symlink d8/d50/d39/d95/d72/d64/ld2 0 2026-03-09T15:01:00.244 INFO:tasks.workunit.client.1.vm09.stdout:0/857: symlink da/dc/d22/d64/df3/l11e 0 2026-03-09T15:01:00.244 INFO:tasks.workunit.client.1.vm09.stdout:1/684: read - d8/d90/fd0 zero size 2026-03-09T15:01:00.246 INFO:tasks.workunit.client.1.vm09.stdout:0/858: chown da/dc/d22/d64/cc3 17324 1 2026-03-09T15:01:00.246 INFO:tasks.workunit.client.1.vm09.stdout:1/685: dread - d8/d90/fd0 zero size 2026-03-09T15:01:00.247 INFO:tasks.workunit.client.1.vm09.stdout:1/686: fsync d8/ff 0 2026-03-09T15:01:00.248 INFO:tasks.workunit.client.1.vm09.stdout:0/859: fdatasync da/dc/f8b 0 2026-03-09T15:01:00.253 INFO:tasks.workunit.client.1.vm09.stdout:1/687: dwrite d8/d10/f13 [0,4194304] 0 2026-03-09T15:01:00.256 INFO:tasks.workunit.client.1.vm09.stdout:0/860: rename da/dc/d84/db8/ld2 to da/dc/d1c/d46/d63/d86/dcd/d7b/l11f 0 2026-03-09T15:01:00.258 INFO:tasks.workunit.client.1.vm09.stdout:1/688: unlink d8/d10/d24/d48/c7a 0 2026-03-09T15:01:00.259 INFO:tasks.workunit.client.1.vm09.stdout:0/861: creat 
da/f120 x:0 0 0 2026-03-09T15:01:00.263 INFO:tasks.workunit.client.1.vm09.stdout:1/689: creat d8/d50/fd3 x:0 0 0 2026-03-09T15:01:00.274 INFO:tasks.workunit.client.1.vm09.stdout:0/862: getdents da/dc/d1c/d46/d63 0 2026-03-09T15:01:00.275 INFO:tasks.workunit.client.1.vm09.stdout:1/690: rename d8/d10/d24/d48/d9b/d78/fb8 to d8/d10/d24/d48/fd4 0 2026-03-09T15:01:00.277 INFO:tasks.workunit.client.1.vm09.stdout:1/691: fsync d8/d10/d24/d45/f6c 0 2026-03-09T15:01:00.278 INFO:tasks.workunit.client.1.vm09.stdout:0/863: dwrite da/dc/d92/ff9 [0,4194304] 0 2026-03-09T15:01:00.283 INFO:tasks.workunit.client.1.vm09.stdout:1/692: mknod d8/d10/d24/d48/d9b/cd5 0 2026-03-09T15:01:00.283 INFO:tasks.workunit.client.1.vm09.stdout:1/693: fdatasync d8/d10/d24/d48/d9b/d78/d8b/fce 0 2026-03-09T15:01:00.339 INFO:tasks.workunit.client.1.vm09.stdout:1/694: sync 2026-03-09T15:01:00.345 INFO:tasks.workunit.client.1.vm09.stdout:1/695: creat d8/d50/d39/d95/d56/dc7/fd6 x:0 0 0 2026-03-09T15:01:00.345 INFO:tasks.workunit.client.1.vm09.stdout:1/696: write d8/d50/d5b/f6f [1579724,22167] 0 2026-03-09T15:01:00.445 INFO:tasks.workunit.client.1.vm09.stdout:6/754: dwrite d6/d20/d24/d7e/f9c [0,4194304] 0 2026-03-09T15:01:00.447 INFO:tasks.workunit.client.1.vm09.stdout:6/755: creat d6/d20/d2a/d3b/d91/ffb x:0 0 0 2026-03-09T15:01:00.449 INFO:tasks.workunit.client.1.vm09.stdout:6/756: chown d6/d20/d38/d56/d65/d68/d6f/cab 2520 1 2026-03-09T15:01:00.449 INFO:tasks.workunit.client.1.vm09.stdout:6/757: read - d6/df/fce zero size 2026-03-09T15:01:00.450 INFO:tasks.workunit.client.1.vm09.stdout:6/758: symlink d6/d20/d2a/d3d/d46/lfc 0 2026-03-09T15:01:00.584 INFO:tasks.workunit.client.1.vm09.stdout:6/759: sync 2026-03-09T15:01:00.608 INFO:tasks.workunit.client.1.vm09.stdout:5/846: dwrite d2/d37/d3c/dbf/d125/fd9 [0,4194304] 0 2026-03-09T15:01:00.611 INFO:tasks.workunit.client.1.vm09.stdout:5/847: symlink d2/d37/d3c/d36/d45/dae/dc3/d115/l12c 0 2026-03-09T15:01:00.683 INFO:tasks.workunit.client.1.vm09.stdout:6/760: sync 
2026-03-09T15:01:00.776 INFO:tasks.workunit.client.1.vm09.stdout:8/812: write df/d24/d99/db6/d60/fc6 [1010093,126800] 0 2026-03-09T15:01:00.779 INFO:tasks.workunit.client.1.vm09.stdout:8/813: creat df/d2d/d42/d70/dc0/dc9/ff1 x:0 0 0 2026-03-09T15:01:00.780 INFO:tasks.workunit.client.1.vm09.stdout:8/814: write df/d24/d99/db6/fad [492291,118900] 0 2026-03-09T15:01:00.786 INFO:tasks.workunit.client.1.vm09.stdout:2/792: mknod df/d1f/d6d/cfa 0 2026-03-09T15:01:00.787 INFO:tasks.workunit.client.1.vm09.stdout:2/793: dread - df/d1f/d47/d84/db7/dc3/da7/fa8 zero size 2026-03-09T15:01:00.799 INFO:tasks.workunit.client.1.vm09.stdout:8/815: dwrite df/d38/d64/fb2 [0,4194304] 0 2026-03-09T15:01:00.800 INFO:tasks.workunit.client.1.vm09.stdout:8/816: stat df/d5b/d65/c25 0 2026-03-09T15:01:00.800 INFO:tasks.workunit.client.1.vm09.stdout:4/846: write db/d12/da1/fc6 [595701,47817] 0 2026-03-09T15:01:00.801 INFO:tasks.workunit.client.1.vm09.stdout:8/817: fdatasync df/d24/d99/db1/f87 0 2026-03-09T15:01:00.802 INFO:tasks.workunit.client.1.vm09.stdout:2/794: read df/d20/f52 [3824535,52148] 0 2026-03-09T15:01:00.812 INFO:tasks.workunit.client.1.vm09.stdout:2/795: rmdir df/d58/df4 39 2026-03-09T15:01:00.812 INFO:tasks.workunit.client.1.vm09.stdout:8/818: dwrite df/f26 [0,4194304] 0 2026-03-09T15:01:00.814 INFO:tasks.workunit.client.1.vm09.stdout:8/819: rmdir df 39 2026-03-09T15:01:00.815 INFO:tasks.workunit.client.1.vm09.stdout:2/796: fsync df/d20/f6a 0 2026-03-09T15:01:00.825 INFO:tasks.workunit.client.1.vm09.stdout:8/820: creat df/d24/d99/db1/dec/ff2 x:0 0 0 2026-03-09T15:01:00.827 INFO:tasks.workunit.client.1.vm09.stdout:8/821: fdatasync f8 0 2026-03-09T15:01:00.830 INFO:tasks.workunit.client.1.vm09.stdout:2/797: truncate df/d58/d74/f88 1246061 0 2026-03-09T15:01:00.833 INFO:tasks.workunit.client.1.vm09.stdout:2/798: fsync df/d20/f52 0 2026-03-09T15:01:00.836 INFO:tasks.workunit.client.1.vm09.stdout:2/799: unlink df/d20/d2e/l69 0 2026-03-09T15:01:00.837 
INFO:tasks.workunit.client.1.vm09.stdout:2/800: mkdir df/d1f/d47/d5d/dfb 0 2026-03-09T15:01:00.838 INFO:tasks.workunit.client.1.vm09.stdout:2/801: chown df/d1f/d47/d84/db7/dc3/dc4 2816 1 2026-03-09T15:01:00.841 INFO:tasks.workunit.client.1.vm09.stdout:2/802: symlink df/d20/lfc 0 2026-03-09T15:01:00.842 INFO:tasks.workunit.client.1.vm09.stdout:2/803: symlink df/d1f/d47/d84/db7/dc3/dd9/lfd 0 2026-03-09T15:01:00.843 INFO:tasks.workunit.client.1.vm09.stdout:2/804: write df/d1f/d47/d84/db7/dc3/fe0 [1056961,14463] 0 2026-03-09T15:01:00.847 INFO:tasks.workunit.client.1.vm09.stdout:2/805: symlink df/d1f/d47/d84/db7/dc3/dd9/lfe 0 2026-03-09T15:01:01.021 INFO:tasks.workunit.client.1.vm09.stdout:3/813: truncate d3/d3a/d54/fbd 839265 0 2026-03-09T15:01:01.022 INFO:tasks.workunit.client.1.vm09.stdout:3/814: write d3/d74/f9b [4142658,130385] 0 2026-03-09T15:01:01.061 INFO:tasks.workunit.client.1.vm09.stdout:7/792: dwrite d3/db/f4d [0,4194304] 0 2026-03-09T15:01:01.061 INFO:tasks.workunit.client.1.vm09.stdout:9/729: dwrite d1/d7/d1e/f22 [4194304,4194304] 0 2026-03-09T15:01:01.066 INFO:tasks.workunit.client.1.vm09.stdout:9/730: write d1/d4f/fe0 [434373,24026] 0 2026-03-09T15:01:01.070 INFO:tasks.workunit.client.1.vm09.stdout:9/731: creat d1/d7/d1e/d2b/d2e/d56/feb x:0 0 0 2026-03-09T15:01:01.070 INFO:tasks.workunit.client.1.vm09.stdout:9/732: stat d1/d4f/d52/l5c 0 2026-03-09T15:01:01.071 INFO:tasks.workunit.client.1.vm09.stdout:7/793: dwrite d3/db/d15/f68 [4194304,4194304] 0 2026-03-09T15:01:01.075 INFO:tasks.workunit.client.1.vm09.stdout:9/733: creat d1/d4f/d52/fec x:0 0 0 2026-03-09T15:01:01.088 INFO:tasks.workunit.client.1.vm09.stdout:7/794: rmdir d3 39 2026-03-09T15:01:01.088 INFO:tasks.workunit.client.1.vm09.stdout:9/734: readlink d1/d58/l6f 0 2026-03-09T15:01:01.088 INFO:tasks.workunit.client.1.vm09.stdout:7/795: getdents d3/db 0 2026-03-09T15:01:01.088 INFO:tasks.workunit.client.1.vm09.stdout:9/735: rename d1/d7/d1e/d2b/d2e/d56/d5e/lcc to d1/d7/d1e/d2b/d2e/d56/led 0 
2026-03-09T15:01:01.088 INFO:tasks.workunit.client.1.vm09.stdout:7/796: read d3/db/d25/db7/fd2 [55085,84772] 0 2026-03-09T15:01:01.088 INFO:tasks.workunit.client.1.vm09.stdout:7/797: fsync d3/db/d25/d7d/dd5/fd7 0 2026-03-09T15:01:01.088 INFO:tasks.workunit.client.1.vm09.stdout:9/736: mknod d1/d7/d1e/d2b/d40/cee 0 2026-03-09T15:01:01.090 INFO:tasks.workunit.client.1.vm09.stdout:9/737: dwrite d1/d7/db8/fdd [0,4194304] 0 2026-03-09T15:01:01.176 INFO:tasks.workunit.client.1.vm09.stdout:0/864: fsync da/f120 0 2026-03-09T15:01:01.181 INFO:tasks.workunit.client.1.vm09.stdout:0/865: dwrite da/dc/d1c/d3c/d78/f88 [0,4194304] 0 2026-03-09T15:01:01.184 INFO:tasks.workunit.client.1.vm09.stdout:0/866: write da/dc/d1c/d46/d63/de6/f11b [324253,92529] 0 2026-03-09T15:01:01.185 INFO:tasks.workunit.client.1.vm09.stdout:0/867: chown da/dc/d92/d9e/f115 9 1 2026-03-09T15:01:01.189 INFO:tasks.workunit.client.1.vm09.stdout:0/868: dwrite da/d30/f6f [0,4194304] 0 2026-03-09T15:01:01.193 INFO:tasks.workunit.client.1.vm09.stdout:0/869: readlink da/dc/d1c/d3c/d44/lb4 0 2026-03-09T15:01:01.197 INFO:tasks.workunit.client.1.vm09.stdout:0/870: mkdir da/dc/d1c/d46/d5b/d9f/d121 0 2026-03-09T15:01:01.198 INFO:tasks.workunit.client.1.vm09.stdout:0/871: write da/dc/d1c/f118 [549323,66363] 0 2026-03-09T15:01:01.201 INFO:tasks.workunit.client.1.vm09.stdout:0/872: fdatasync f7 0 2026-03-09T15:01:01.246 INFO:tasks.workunit.client.1.vm09.stdout:1/697: truncate d8/d50/d5b/f6f 1157191 0 2026-03-09T15:01:01.249 INFO:tasks.workunit.client.1.vm09.stdout:1/698: unlink d8/d50/d39/f65 0 2026-03-09T15:01:01.254 INFO:tasks.workunit.client.1.vm09.stdout:1/699: write d8/d10/d73/f37 [506792,116243] 0 2026-03-09T15:01:01.255 INFO:tasks.workunit.client.1.vm09.stdout:1/700: write d8/d50/d39/d95/f4c [982094,23662] 0 2026-03-09T15:01:01.268 INFO:tasks.workunit.client.1.vm09.stdout:1/701: symlink d8/d10/d24/d45/d5f/ld7 0 2026-03-09T15:01:01.269 INFO:tasks.workunit.client.1.vm09.stdout:1/702: creat d8/d10/dc9/fd8 x:0 0 0 
2026-03-09T15:01:01.270 INFO:tasks.workunit.client.1.vm09.stdout:1/703: fdatasync d8/ff 0 2026-03-09T15:01:01.271 INFO:tasks.workunit.client.1.vm09.stdout:1/704: mkdir d8/d50/d39/d95/d56/dc7/dd9 0 2026-03-09T15:01:01.290 INFO:tasks.workunit.client.1.vm09.stdout:5/848: write d2/f2e [354596,9295] 0 2026-03-09T15:01:01.290 INFO:tasks.workunit.client.1.vm09.stdout:6/761: truncate d6/d20/d2a/d3d/d46/f84 339227 0 2026-03-09T15:01:01.291 INFO:tasks.workunit.client.1.vm09.stdout:6/762: write d6/df/fce [68465,39911] 0 2026-03-09T15:01:01.292 INFO:tasks.workunit.client.1.vm09.stdout:5/849: truncate d2/d37/d53/d86/dad/f112 225111 0 2026-03-09T15:01:01.297 INFO:tasks.workunit.client.1.vm09.stdout:5/850: creat d2/d37/d3c/d36/d4c/d51/d96/f12d x:0 0 0 2026-03-09T15:01:01.298 INFO:tasks.workunit.client.1.vm09.stdout:5/851: dread - d2/d37/d3c/d36/d45/dae/dc3/d115/f11c zero size 2026-03-09T15:01:01.299 INFO:tasks.workunit.client.1.vm09.stdout:6/763: creat d6/df/ffd x:0 0 0 2026-03-09T15:01:01.301 INFO:tasks.workunit.client.1.vm09.stdout:5/852: rename d2/f5e to d2/d37/d3c/dbf/d125/f12e 0 2026-03-09T15:01:01.302 INFO:tasks.workunit.client.1.vm09.stdout:4/847: write db/d19/d23/d44/d7c/d7d/d97/da3/fba [18707,97968] 0 2026-03-09T15:01:01.303 INFO:tasks.workunit.client.1.vm09.stdout:4/848: chown db/d19/d23/d71/fb3 715343411 1 2026-03-09T15:01:01.304 INFO:tasks.workunit.client.1.vm09.stdout:6/764: symlink d6/d20/d38/d56/d65/d68/d86/dc0/lfe 0 2026-03-09T15:01:01.307 INFO:tasks.workunit.client.1.vm09.stdout:6/765: fdatasync d6/db/d8b/f73 0 2026-03-09T15:01:01.310 INFO:tasks.workunit.client.1.vm09.stdout:4/849: dwrite db/d19/d81/d5d/f8a [4194304,4194304] 0 2026-03-09T15:01:01.314 INFO:tasks.workunit.client.1.vm09.stdout:2/806: dwrite df/d1f/d6d/fb3 [0,4194304] 0 2026-03-09T15:01:01.317 INFO:tasks.workunit.client.1.vm09.stdout:8/822: dwrite df/d38/d64/f50 [0,4194304] 0 2026-03-09T15:01:01.334 INFO:tasks.workunit.client.1.vm09.stdout:6/766: dread d6/d20/d38/d56/d65/d68/d6f/f85 [0,4194304] 0 
2026-03-09T15:01:01.336 INFO:tasks.workunit.client.1.vm09.stdout:4/850: truncate db/d19/f38 2159187 0 2026-03-09T15:01:01.339 INFO:tasks.workunit.client.1.vm09.stdout:6/767: dwrite d6/db/d10/f2c [0,4194304] 0 2026-03-09T15:01:01.341 INFO:tasks.workunit.client.1.vm09.stdout:5/853: dread d2/d37/d3c/d36/d4c/d89/fcf [0,4194304] 0 2026-03-09T15:01:01.349 INFO:tasks.workunit.client.1.vm09.stdout:2/807: unlink df/d1f/c21 0 2026-03-09T15:01:01.350 INFO:tasks.workunit.client.1.vm09.stdout:2/808: chown df/d1f/d6d/d8f/la4 1503470274 1 2026-03-09T15:01:01.351 INFO:tasks.workunit.client.1.vm09.stdout:2/809: chown df/d1f/d6d/d8f/d5f/feb 127 1 2026-03-09T15:01:01.354 INFO:tasks.workunit.client.1.vm09.stdout:4/851: rename db/f1c to db/d19/d23/d71/d53/ded/f110 0 2026-03-09T15:01:01.355 INFO:tasks.workunit.client.1.vm09.stdout:6/768: fsync d6/d20/d38/d56/d65/d68/d6f/f85 0 2026-03-09T15:01:01.361 INFO:tasks.workunit.client.1.vm09.stdout:2/810: mkdir df/d2d/dff 0 2026-03-09T15:01:01.369 INFO:tasks.workunit.client.1.vm09.stdout:3/815: dwrite d3/d3a/f1d [0,4194304] 0 2026-03-09T15:01:01.371 INFO:tasks.workunit.client.1.vm09.stdout:2/811: dwrite df/d1f/d47/d84/fd7 [0,4194304] 0 2026-03-09T15:01:01.379 INFO:tasks.workunit.client.1.vm09.stdout:6/769: dread d6/d20/d38/d4e/f5a [4194304,4194304] 0 2026-03-09T15:01:01.385 INFO:tasks.workunit.client.1.vm09.stdout:3/816: mknod d3/d100/d48/dc5/c11a 0 2026-03-09T15:01:01.390 INFO:tasks.workunit.client.1.vm09.stdout:2/812: creat df/d1f/d47/d84/db7/dc3/f100 x:0 0 0 2026-03-09T15:01:01.390 INFO:tasks.workunit.client.1.vm09.stdout:6/770: mknod d6/d20/d44/d8f/cff 0 2026-03-09T15:01:01.391 INFO:tasks.workunit.client.1.vm09.stdout:2/813: dread df/d1f/d47/f73 [0,4194304] 0 2026-03-09T15:01:01.395 INFO:tasks.workunit.client.1.vm09.stdout:3/817: symlink d3/d3a/d2b/d36/dac/l11b 0 2026-03-09T15:01:01.396 INFO:tasks.workunit.client.1.vm09.stdout:2/814: fdatasync df/f42 0 2026-03-09T15:01:01.396 INFO:tasks.workunit.client.1.vm09.stdout:6/771: creat 
d6/d20/d38/d56/d65/f100 x:0 0 0 2026-03-09T15:01:01.396 INFO:tasks.workunit.client.1.vm09.stdout:5/854: sync 2026-03-09T15:01:01.397 INFO:tasks.workunit.client.1.vm09.stdout:3/818: fsync d3/d9a/de3/dc4/fec 0 2026-03-09T15:01:01.397 INFO:tasks.workunit.client.1.vm09.stdout:6/772: unlink d6/la4 0 2026-03-09T15:01:01.399 INFO:tasks.workunit.client.1.vm09.stdout:5/855: rename d2/d37/d3c/d36/d45/dae/dc3/f57 to d2/d37/d53/d86/d88/dc9/f12f 0 2026-03-09T15:01:01.400 INFO:tasks.workunit.client.1.vm09.stdout:5/856: readlink d2/d37/d3c/d36/d45/d5c/la2 0 2026-03-09T15:01:01.400 INFO:tasks.workunit.client.1.vm09.stdout:3/819: fsync d3/d5b/d79/f89 0 2026-03-09T15:01:01.401 INFO:tasks.workunit.client.1.vm09.stdout:6/773: dread - d6/db/d10/d7a/f80 zero size 2026-03-09T15:01:01.402 INFO:tasks.workunit.client.1.vm09.stdout:5/857: creat d2/d37/d3c/d36/d45/dae/dc3/f130 x:0 0 0 2026-03-09T15:01:01.407 INFO:tasks.workunit.client.1.vm09.stdout:5/858: rename d2/d37/d3c/d36/d45/dae/dd3/fe4 to d2/d37/d3c/d36/d45/dae/dd3/f131 0 2026-03-09T15:01:01.417 INFO:tasks.workunit.client.1.vm09.stdout:5/859: creat d2/d37/d53/f132 x:0 0 0 2026-03-09T15:01:01.417 INFO:tasks.workunit.client.1.vm09.stdout:5/860: truncate d2/d37/d3c/d36/d45/dae/dc3/f92 1038400 0 2026-03-09T15:01:01.417 INFO:tasks.workunit.client.1.vm09.stdout:5/861: dread d2/d37/d3c/d36/d4c/d51/fc7 [0,4194304] 0 2026-03-09T15:01:01.420 INFO:tasks.workunit.client.1.vm09.stdout:5/862: link d2/d37/d3c/d36/d4c/d51/fd0 d2/d37/d3c/d36/d45/dae/f133 0 2026-03-09T15:01:01.426 INFO:tasks.workunit.client.1.vm09.stdout:5/863: getdents d2/d37/d3c/d36/d45/dae/dc3/d10a 0 2026-03-09T15:01:01.428 INFO:tasks.workunit.client.1.vm09.stdout:5/864: symlink d2/db1/db2/d11b/l134 0 2026-03-09T15:01:01.434 INFO:tasks.workunit.client.1.vm09.stdout:5/865: mknod d2/d37/d3c/d36/d45/dfd/c135 0 2026-03-09T15:01:01.444 INFO:tasks.workunit.client.1.vm09.stdout:5/866: sync 2026-03-09T15:01:01.452 INFO:tasks.workunit.client.1.vm09.stdout:5/867: mkdir 
d2/d37/d53/d86/d88/dc9/d136 0 2026-03-09T15:01:01.453 INFO:tasks.workunit.client.1.vm09.stdout:5/868: write d2/d37/d53/dc4/f108 [493359,66626] 0 2026-03-09T15:01:01.459 INFO:tasks.workunit.client.1.vm09.stdout:7/798: truncate d3/db/d25/db7/fd2 463426 0 2026-03-09T15:01:01.466 INFO:tasks.workunit.client.1.vm09.stdout:9/738: dwrite d1/f1f [0,4194304] 0 2026-03-09T15:01:01.477 INFO:tasks.workunit.client.1.vm09.stdout:9/739: stat d1/d58/da8/ld8 0 2026-03-09T15:01:01.500 INFO:tasks.workunit.client.1.vm09.stdout:0/873: dwrite da/dc/d8c/fe9 [0,4194304] 0 2026-03-09T15:01:01.502 INFO:tasks.workunit.client.1.vm09.stdout:0/874: stat da/dc/d84 0 2026-03-09T15:01:01.509 INFO:tasks.workunit.client.1.vm09.stdout:1/705: rmdir d8/d10/dc9 39 2026-03-09T15:01:01.510 INFO:tasks.workunit.client.1.vm09.stdout:1/706: write d8/d10/f12 [3365541,73697] 0 2026-03-09T15:01:01.516 INFO:tasks.workunit.client.1.vm09.stdout:0/875: dread da/dc/d10/f16 [0,4194304] 0 2026-03-09T15:01:01.536 INFO:tasks.workunit.client.1.vm09.stdout:1/707: rename d8/d10/d73/c34 to d8/d50/d39/d95/d72/cda 0 2026-03-09T15:01:01.538 INFO:tasks.workunit.client.1.vm09.stdout:8/823: write df/d5c/fba [1392746,58943] 0 2026-03-09T15:01:01.549 INFO:tasks.workunit.client.1.vm09.stdout:4/852: write db/d12/d16/d5b/d105/f109 [2940817,42101] 0 2026-03-09T15:01:01.550 INFO:tasks.workunit.client.1.vm09.stdout:8/824: symlink df/d24/d99/db6/ddd/lf3 0 2026-03-09T15:01:01.561 INFO:tasks.workunit.client.1.vm09.stdout:2/815: truncate df/d1f/d6d/fb3 357822 0 2026-03-09T15:01:01.562 INFO:tasks.workunit.client.1.vm09.stdout:2/816: chown df/d1f/d47/d84/db7/dc3/dc4 1198 1 2026-03-09T15:01:01.566 INFO:tasks.workunit.client.1.vm09.stdout:6/774: dwrite d6/d20/d2a/f98 [0,4194304] 0 2026-03-09T15:01:01.566 INFO:tasks.workunit.client.1.vm09.stdout:3/820: dwrite d3/d3a/d2b/d31/d4a/d62/f1b [0,4194304] 0 2026-03-09T15:01:01.566 INFO:tasks.workunit.client.1.vm09.stdout:6/775: stat d6/d20/c48 0 2026-03-09T15:01:01.568 
INFO:tasks.workunit.client.1.vm09.stdout:6/776: chown d6/db/f66 41876 1 2026-03-09T15:01:01.585 INFO:tasks.workunit.client.1.vm09.stdout:5/869: write d2/d37/d53/f79 [1453222,106836] 0 2026-03-09T15:01:01.585 INFO:tasks.workunit.client.1.vm09.stdout:3/821: dread - d3/d3a/d2b/d31/fd4 zero size 2026-03-09T15:01:01.587 INFO:tasks.workunit.client.1.vm09.stdout:5/870: dread d2/d37/d3c/d36/d4c/d51/fc7 [0,4194304] 0 2026-03-09T15:01:01.594 INFO:tasks.workunit.client.1.vm09.stdout:3/822: dread d3/d3a/d2b/d31/d4a/fd2 [0,4194304] 0 2026-03-09T15:01:01.596 INFO:tasks.workunit.client.1.vm09.stdout:5/871: symlink d2/d37/d3c/dbf/l137 0 2026-03-09T15:01:01.598 INFO:tasks.workunit.client.1.vm09.stdout:9/740: write d1/d7/f83 [2033142,123267] 0 2026-03-09T15:01:01.602 INFO:tasks.workunit.client.1.vm09.stdout:7/799: dwrite d3/db/d25/d5c/fbd [0,4194304] 0 2026-03-09T15:01:01.604 INFO:tasks.workunit.client.1.vm09.stdout:4/853: dread db/d19/d23/d71/f4e [0,4194304] 0 2026-03-09T15:01:01.604 INFO:tasks.workunit.client.1.vm09.stdout:5/872: dwrite d2/d37/d53/f79 [0,4194304] 0 2026-03-09T15:01:01.604 INFO:tasks.workunit.client.1.vm09.stdout:7/800: write d3/d28/fcd [118823,18508] 0 2026-03-09T15:01:01.609 INFO:tasks.workunit.client.1.vm09.stdout:9/741: fdatasync d1/d7/d1e/f34 0 2026-03-09T15:01:01.609 INFO:tasks.workunit.client.1.vm09.stdout:7/801: read - d3/d1d/fe6 zero size 2026-03-09T15:01:01.610 INFO:tasks.workunit.client.1.vm09.stdout:5/873: chown d2/d37/d3c/d36/d45/dae/dd3/l101 266314238 1 2026-03-09T15:01:01.612 INFO:tasks.workunit.client.1.vm09.stdout:4/854: chown db/d19/d23/l72 28 1 2026-03-09T15:01:01.619 INFO:tasks.workunit.client.1.vm09.stdout:3/823: link d3/d3a/d2b/d36/c4e d3/d9a/d80/c11c 0 2026-03-09T15:01:01.619 INFO:tasks.workunit.client.1.vm09.stdout:3/824: stat d3/d3a/d2b/d36/l99 0 2026-03-09T15:01:01.620 INFO:tasks.workunit.client.1.vm09.stdout:9/742: readlink d1/d7/l1c 0 2026-03-09T15:01:01.622 INFO:tasks.workunit.client.1.vm09.stdout:9/743: write d1/d7/d1e/d2b/d8d/f9d 
[4964565,95216] 0 2026-03-09T15:01:01.628 INFO:tasks.workunit.client.1.vm09.stdout:0/876: write da/dc/dcb/dd4/fe1 [274064,9402] 0 2026-03-09T15:01:01.628 INFO:tasks.workunit.client.1.vm09.stdout:7/802: chown d3/d61/cad 767216177 1 2026-03-09T15:01:01.629 INFO:tasks.workunit.client.1.vm09.stdout:9/744: write d1/d4f/d8f/fcb [55935,29302] 0 2026-03-09T15:01:01.629 INFO:tasks.workunit.client.1.vm09.stdout:9/745: write d1/d7/d1e/d2b/d2e/f95 [2364120,29798] 0 2026-03-09T15:01:01.630 INFO:tasks.workunit.client.1.vm09.stdout:1/708: dwrite d8/d50/fa7 [0,4194304] 0 2026-03-09T15:01:01.636 INFO:tasks.workunit.client.1.vm09.stdout:7/803: readlink d3/l43 0 2026-03-09T15:01:01.636 INFO:tasks.workunit.client.1.vm09.stdout:7/804: write d3/d3d/fec [795544,19513] 0 2026-03-09T15:01:01.650 INFO:tasks.workunit.client.1.vm09.stdout:8/825: dread df/d5c/fba [0,4194304] 0 2026-03-09T15:01:01.651 INFO:tasks.workunit.client.1.vm09.stdout:6/777: read d6/df/d23/f78 [1090749,112374] 0 2026-03-09T15:01:01.651 INFO:tasks.workunit.client.1.vm09.stdout:1/709: mknod d8/d10/d24/d48/d9b/d78/d8b/cdb 0 2026-03-09T15:01:01.656 INFO:tasks.workunit.client.1.vm09.stdout:4/855: sync 2026-03-09T15:01:01.656 INFO:tasks.workunit.client.1.vm09.stdout:0/877: sync 2026-03-09T15:01:01.662 INFO:tasks.workunit.client.1.vm09.stdout:4/856: dread db/d12/d9e/fd9 [0,4194304] 0 2026-03-09T15:01:01.670 INFO:tasks.workunit.client.1.vm09.stdout:4/857: truncate db/d19/d23/d71/d53/dcf/dfb/f104 245066 0 2026-03-09T15:01:01.671 INFO:tasks.workunit.client.1.vm09.stdout:2/817: mknod df/d1f/d47/d5d/d90/c101 0 2026-03-09T15:01:01.672 INFO:tasks.workunit.client.1.vm09.stdout:6/778: dread - d6/d20/d38/d56/d65/d68/f99 zero size 2026-03-09T15:01:01.675 INFO:tasks.workunit.client.1.vm09.stdout:0/878: chown da/dc/d1c/d46/d5b/d9f/lc2 0 1 2026-03-09T15:01:01.675 INFO:tasks.workunit.client.1.vm09.stdout:4/858: rmdir db/d19/d23/d44/d7c/d7d 39 2026-03-09T15:01:01.676 INFO:tasks.workunit.client.1.vm09.stdout:3/825: getdents d3/d9a/de3 0 
2026-03-09T15:01:01.681 INFO:tasks.workunit.client.1.vm09.stdout:6/779: mknod d6/d20/d38/d4e/c101 0 2026-03-09T15:01:01.689 INFO:tasks.workunit.client.1.vm09.stdout:3/826: mknod d3/d100/d6a/c11d 0 2026-03-09T15:01:01.690 INFO:tasks.workunit.client.1.vm09.stdout:3/827: write d3/f9 [3159155,111566] 0 2026-03-09T15:01:01.691 INFO:tasks.workunit.client.1.vm09.stdout:3/828: fsync d3/d3a/d2b/d7b/dd3/ffe 0 2026-03-09T15:01:01.695 INFO:tasks.workunit.client.1.vm09.stdout:0/879: dread f7 [4194304,4194304] 0 2026-03-09T15:01:01.696 INFO:tasks.workunit.client.1.vm09.stdout:0/880: write da/dc/d61/f10e [824193,19522] 0 2026-03-09T15:01:01.699 INFO:tasks.workunit.client.1.vm09.stdout:7/805: write d3/d1d/d65/f92 [954791,31902] 0 2026-03-09T15:01:01.699 INFO:tasks.workunit.client.1.vm09.stdout:9/746: write d1/d7/d1e/f46 [369838,69055] 0 2026-03-09T15:01:01.702 INFO:tasks.workunit.client.1.vm09.stdout:5/874: dwrite d2/d37/d3c/d36/d45/dae/f133 [4194304,4194304] 0 2026-03-09T15:01:01.702 INFO:tasks.workunit.client.1.vm09.stdout:7/806: chown d3/d1d/fe6 531 1 2026-03-09T15:01:01.705 INFO:tasks.workunit.client.1.vm09.stdout:7/807: chown d3/db/d25/d5c/d75/db4 0 1 2026-03-09T15:01:01.706 INFO:tasks.workunit.client.1.vm09.stdout:3/829: dread d3/d5b/f8b [0,4194304] 0 2026-03-09T15:01:01.708 INFO:tasks.workunit.client.1.vm09.stdout:0/881: dwrite da/dc/dcb/dd4/f109 [0,4194304] 0 2026-03-09T15:01:01.723 INFO:tasks.workunit.client.1.vm09.stdout:9/747: unlink d1/d7/d1e/d2b/d40/f4d 0 2026-03-09T15:01:01.725 INFO:tasks.workunit.client.1.vm09.stdout:1/710: write d8/d10/d24/d48/d9b/d68/fae [940062,76945] 0 2026-03-09T15:01:01.725 INFO:tasks.workunit.client.1.vm09.stdout:5/875: creat d2/d37/d3c/dbf/d125/f138 x:0 0 0 2026-03-09T15:01:01.725 INFO:tasks.workunit.client.1.vm09.stdout:3/830: readlink d3/l102 0 2026-03-09T15:01:01.728 INFO:tasks.workunit.client.1.vm09.stdout:8/826: dwrite df/d24/f32 [0,4194304] 0 2026-03-09T15:01:01.738 INFO:tasks.workunit.client.1.vm09.stdout:4/859: write 
db/d19/d23/d71/d53/fa0 [221154,26016] 0 2026-03-09T15:01:01.738 INFO:tasks.workunit.client.1.vm09.stdout:9/748: fsync d1/d7/d1e/f5a 0 2026-03-09T15:01:01.740 INFO:tasks.workunit.client.1.vm09.stdout:4/860: write db/d12/d16/d5b/d78/d7f/f9d [1079675,114976] 0 2026-03-09T15:01:01.742 INFO:tasks.workunit.client.1.vm09.stdout:2/818: dwrite df/f5b [0,4194304] 0 2026-03-09T15:01:01.743 INFO:tasks.workunit.client.1.vm09.stdout:3/831: chown d3/d3a/d2b/d31/d4a/l69 371404307 1 2026-03-09T15:01:01.743 INFO:tasks.workunit.client.1.vm09.stdout:3/832: fsync d3/fe6 0 2026-03-09T15:01:01.747 INFO:tasks.workunit.client.1.vm09.stdout:7/808: symlink d3/db/d25/d5c/d75/db4/lee 0 2026-03-09T15:01:01.751 INFO:tasks.workunit.client.1.vm09.stdout:4/861: unlink db/d12/f10e 0 2026-03-09T15:01:01.752 INFO:tasks.workunit.client.1.vm09.stdout:8/827: mkdir df/d2d/df4 0 2026-03-09T15:01:01.753 INFO:tasks.workunit.client.1.vm09.stdout:3/833: rename d3/d100/d6a/lc6 to d3/d5b/l11e 0 2026-03-09T15:01:01.754 INFO:tasks.workunit.client.1.vm09.stdout:5/876: creat d2/d37/d3c/d36/d45/f139 x:0 0 0 2026-03-09T15:01:01.757 INFO:tasks.workunit.client.1.vm09.stdout:4/862: rename db/d19/d23/d71/d53/d10a to db/d19/dcd/d111 0 2026-03-09T15:01:01.759 INFO:tasks.workunit.client.1.vm09.stdout:9/749: creat d1/d58/da8/fef x:0 0 0 2026-03-09T15:01:01.759 INFO:tasks.workunit.client.1.vm09.stdout:9/750: fsync d1/d7/d9f/fa4 0 2026-03-09T15:01:01.761 INFO:tasks.workunit.client.1.vm09.stdout:5/877: fdatasync d2/f38 0 2026-03-09T15:01:01.761 INFO:tasks.workunit.client.1.vm09.stdout:4/863: dwrite db/d19/d23/d71/d53/fa9 [0,4194304] 0 2026-03-09T15:01:01.763 INFO:tasks.workunit.client.1.vm09.stdout:4/864: dread - db/d19/d23/d44/f101 zero size 2026-03-09T15:01:01.763 INFO:tasks.workunit.client.1.vm09.stdout:2/819: creat df/f102 x:0 0 0 2026-03-09T15:01:01.769 INFO:tasks.workunit.client.1.vm09.stdout:5/878: rename d2/d37/d3c/d36/d4c/d51/d96/l41 to d2/d37/d67/d95/db8/l13a 0 2026-03-09T15:01:01.771 
INFO:tasks.workunit.client.1.vm09.stdout:7/809: sync
2026-03-09T15:01:01.771 INFO:tasks.workunit.client.1.vm09.stdout:3/834: sync
2026-03-09T15:01:01.771 INFO:tasks.workunit.client.1.vm09.stdout:5/879: readlink d2/d37/le3 0
2026-03-09T15:01:01.773 INFO:tasks.workunit.client.1.vm09.stdout:5/880: stat d2/d37/d3c/dbf/f128 0
2026-03-09T15:01:01.773 INFO:tasks.workunit.client.1.vm09.stdout:2/820: mkdir df/d1f/d47/d71/d103 0
2026-03-09T15:01:01.779 INFO:tasks.workunit.client.1.vm09.stdout:4/865: rename db/d19/d23/d44/l8c to db/d12/d9e/dd0/l112 0
2026-03-09T15:01:01.782 INFO:tasks.workunit.client.1.vm09.stdout:5/881: truncate d2/d37/d3c/d36/d4c/d51/d96/f23 4724679 0
2026-03-09T15:01:01.782 INFO:tasks.workunit.client.1.vm09.stdout:7/810: creat d3/d3d/d9b/da9/daa/fef x:0 0 0
2026-03-09T15:01:01.782 INFO:tasks.workunit.client.1.vm09.stdout:3/835: symlink d3/d100/d48/dc5/d10a/l11f 0
2026-03-09T15:01:01.785 INFO:tasks.workunit.client.1.vm09.stdout:0/882: write da/dc/dcb/dd4/d11c/f83 [623964,19453] 0
2026-03-09T15:01:01.792 INFO:tasks.workunit.client.1.vm09.stdout:6/780: dwrite d6/db/d10/fa0 [0,4194304] 0
2026-03-09T15:01:01.796 INFO:tasks.workunit.client.1.vm09.stdout:2/821: fsync df/d58/d67/fe6 0
2026-03-09T15:01:01.800 INFO:tasks.workunit.client.1.vm09.stdout:2/822: readlink ld 0
2026-03-09T15:01:01.800 INFO:tasks.workunit.client.1.vm09.stdout:2/823: chown df/d1f/d47/d71 309996601 1
2026-03-09T15:01:01.804 INFO:tasks.workunit.client.1.vm09.stdout:4/866: dread db/f21 [0,4194304] 0
2026-03-09T15:01:01.807 INFO:tasks.workunit.client.1.vm09.stdout:3/836: fsync d3/d74/f110 0
2026-03-09T15:01:01.811 INFO:tasks.workunit.client.1.vm09.stdout:0/883: symlink da/dc/d1c/d3c/d78/d7a/d9c/l122 0
2026-03-09T15:01:01.811 INFO:tasks.workunit.client.1.vm09.stdout:6/781: fdatasync d6/d20/d38/fd6 0
2026-03-09T15:01:01.811 INFO:tasks.workunit.client.1.vm09.stdout:0/884: stat da/dc/f90 0
2026-03-09T15:01:01.811 INFO:tasks.workunit.client.1.vm09.stdout:1/711: dwrite d8/d10/d24/d48/f9a [0,4194304] 0
2026-03-09T15:01:01.819 INFO:tasks.workunit.client.1.vm09.stdout:7/811: link d3/db/d25/db7/dd4/fe0 d3/db/d15/ff0 0
2026-03-09T15:01:01.821 INFO:tasks.workunit.client.1.vm09.stdout:7/812: chown d3/d1d/d65/f6f 5 1
2026-03-09T15:01:01.828 INFO:tasks.workunit.client.1.vm09.stdout:4/867: creat db/d19/d52/d76/d3b/f113 x:0 0 0
2026-03-09T15:01:01.831 INFO:tasks.workunit.client.1.vm09.stdout:0/885: rmdir da/dc/dcb/dd4 39
2026-03-09T15:01:01.832 INFO:tasks.workunit.client.1.vm09.stdout:5/882: dread d2/f47 [0,4194304] 0
2026-03-09T15:01:01.834 INFO:tasks.workunit.client.1.vm09.stdout:2/824: rename df/d1f/d47/d84/l98 to df/d58/d74/def/l104 0
2026-03-09T15:01:01.836 INFO:tasks.workunit.client.1.vm09.stdout:4/868: truncate db/d19/d23/d44/d7c/f88 864535 0
2026-03-09T15:01:01.837 INFO:tasks.workunit.client.1.vm09.stdout:6/782: mkdir d6/db/d10/d4f/ddd/d102 0
2026-03-09T15:01:01.838 INFO:tasks.workunit.client.1.vm09.stdout:0/886: creat da/dc/d10/f123 x:0 0 0
2026-03-09T15:01:01.840 INFO:tasks.workunit.client.1.vm09.stdout:3/837: dread d3/d5b/d79/f89 [0,4194304] 0
2026-03-09T15:01:01.840 INFO:tasks.workunit.client.1.vm09.stdout:8/828: dwrite df/d2d/f57 [4194304,4194304] 0
2026-03-09T15:01:01.849 INFO:tasks.workunit.client.1.vm09.stdout:9/751: dwrite d1/d7/f3e [0,4194304] 0
2026-03-09T15:01:01.850 INFO:tasks.workunit.client.1.vm09.stdout:9/752: write d1/d7/d1e/d2b/d8d/dc8/fe7 [160968,337] 0
2026-03-09T15:01:01.854 INFO:tasks.workunit.client.1.vm09.stdout:7/813: link d3/f32 d3/db/d46/dc9/ff1 0
2026-03-09T15:01:01.857 INFO:tasks.workunit.client.1.vm09.stdout:2/825: dread df/d58/d67/f61 [0,4194304] 0
2026-03-09T15:01:01.864 INFO:tasks.workunit.client.1.vm09.stdout:0/887: chown da/dc/d1c/c40 202237911 1
2026-03-09T15:01:01.866 INFO:tasks.workunit.client.1.vm09.stdout:9/753: creat d1/d7/d1e/d2b/d2e/d56/d6d/ff0 x:0 0 0
2026-03-09T15:01:01.867 INFO:tasks.workunit.client.1.vm09.stdout:9/754: readlink d1/d4f/l54 0
2026-03-09T15:01:01.868 INFO:tasks.workunit.client.1.vm09.stdout:9/755: readlink d1/d58/l6f 0
2026-03-09T15:01:01.873 INFO:tasks.workunit.client.1.vm09.stdout:2/826: mkdir df/d58/d74/def/d105 0
2026-03-09T15:01:01.873 INFO:tasks.workunit.client.1.vm09.stdout:2/827: dread - df/d1f/fe5 zero size
2026-03-09T15:01:01.883 INFO:tasks.workunit.client.1.vm09.stdout:6/783: rename d6/db/d10/d4f/ddd to d6/d20/d38/d56/d65/d68/d86/d103 0
2026-03-09T15:01:01.891 INFO:tasks.workunit.client.1.vm09.stdout:3/838: mkdir d3/d3a/d2b/d7b/db6/d120 0
2026-03-09T15:01:01.895 INFO:tasks.workunit.client.1.vm09.stdout:3/839: chown d3/d3a/d2b/d7b/dd3 1 1
2026-03-09T15:01:01.900 INFO:tasks.workunit.client.1.vm09.stdout:5/883: getdents d2/d37/d67/df6 0
2026-03-09T15:01:01.900 INFO:tasks.workunit.client.1.vm09.stdout:7/814: dread d3/d1d/f30 [0,4194304] 0
2026-03-09T15:01:01.901 INFO:tasks.workunit.client.1.vm09.stdout:5/884: write d2/d37/d67/d95/f10d [1016092,89960] 0
2026-03-09T15:01:01.901 INFO:tasks.workunit.client.1.vm09.stdout:7/815: write d3/f26 [3852417,83298] 0
2026-03-09T15:01:01.905 INFO:tasks.workunit.client.1.vm09.stdout:8/829: creat df/d24/d99/db6/d60/ff5 x:0 0 0
2026-03-09T15:01:01.905 INFO:tasks.workunit.client.1.vm09.stdout:7/816: write d3/db/d46/f5b [559595,9362] 0
2026-03-09T15:01:01.907 INFO:tasks.workunit.client.1.vm09.stdout:6/784: dwrite d6/db/fb3 [0,4194304] 0
2026-03-09T15:01:01.908 INFO:tasks.workunit.client.1.vm09.stdout:3/840: creat d3/d3a/d2b/d7b/f121 x:0 0 0
2026-03-09T15:01:01.909 INFO:tasks.workunit.client.1.vm09.stdout:7/817: dwrite d3/d1d/f33 [4194304,4194304] 0
2026-03-09T15:01:01.916 INFO:tasks.workunit.client.1.vm09.stdout:0/888: rename da/dc/dcb/dd4 to da/dc/d84/d124 0
2026-03-09T15:01:01.921 INFO:tasks.workunit.client.1.vm09.stdout:5/885: creat d2/db1/db2/d11b/f13b x:0 0 0
2026-03-09T15:01:01.923 INFO:tasks.workunit.client.1.vm09.stdout:5/886: truncate d2/d37/d3c/dbf/d125/f138 667954 0
2026-03-09T15:01:01.923 INFO:tasks.workunit.client.1.vm09.stdout:5/887: stat d2/d37/d3c/l35 0
2026-03-09T15:01:01.923 INFO:tasks.workunit.client.1.vm09.stdout:5/888: stat d2/db1/db2/d11b/f13b 0
2026-03-09T15:01:01.930 INFO:tasks.workunit.client.1.vm09.stdout:5/889: dwrite d2/d37/d3c/d36/d4c/d51/fd0 [8388608,4194304] 0
2026-03-09T15:01:01.931 INFO:tasks.workunit.client.1.vm09.stdout:5/890: chown d2/lf 60 1
2026-03-09T15:01:01.940 INFO:tasks.workunit.client.1.vm09.stdout:2/828: mkdir df/d20/d106 0
2026-03-09T15:01:01.940 INFO:tasks.workunit.client.1.vm09.stdout:4/869: link db/d19/d23/d44/d7c/d7d/lf3 db/d19/d23/l114 0
2026-03-09T15:01:01.947 INFO:tasks.workunit.client.1.vm09.stdout:4/870: write db/d19/d23/d71/d53/dcf/dfb/f104 [793801,116178] 0
2026-03-09T15:01:01.950 INFO:tasks.workunit.client.1.vm09.stdout:9/756: getdents d1/d6e 0
2026-03-09T15:01:01.950 INFO:tasks.workunit.client.1.vm09.stdout:1/712: dread d8/d10/d24/d45/f6c [0,4194304] 0
2026-03-09T15:01:01.950 INFO:tasks.workunit.client.1.vm09.stdout:5/891: unlink d2/d37/d67/d95/f10d 0
2026-03-09T15:01:01.950 INFO:tasks.workunit.client.1.vm09.stdout:7/818: rename d3/db/d15/d5f/lae to d3/d3d/d9b/da9/lf2 0
2026-03-09T15:01:01.951 INFO:tasks.workunit.client.1.vm09.stdout:4/871: truncate db/d12/d16/f46 3212068 0
2026-03-09T15:01:01.955 INFO:tasks.workunit.client.1.vm09.stdout:9/757: truncate d1/d7/d1e/d2b/d8d/dc8/fde 331678 0
2026-03-09T15:01:01.959 INFO:tasks.workunit.client.1.vm09.stdout:7/819: symlink d3/db/d15/d5f/d6e/lf3 0
2026-03-09T15:01:01.962 INFO:tasks.workunit.client.1.vm09.stdout:9/758: dread d1/d4f/d8f/fcb [0,4194304] 0
2026-03-09T15:01:01.963 INFO:tasks.workunit.client.1.vm09.stdout:9/759: fdatasync d1/d4f/d8f/dc0/fe9 0
2026-03-09T15:01:01.963 INFO:tasks.workunit.client.1.vm09.stdout:1/713: dwrite d8/d10/d24/d48/d9b/d78/f7c [0,4194304] 0
2026-03-09T15:01:01.964 INFO:tasks.workunit.client.1.vm09.stdout:0/889: read da/dc/d1c/d3c/d78/d7a/fb2 [822088,119095] 0
2026-03-09T15:01:01.965 INFO:tasks.workunit.client.1.vm09.stdout:0/890: chown da/d30 3242 1
2026-03-09T15:01:01.969 INFO:tasks.workunit.client.1.vm09.stdout:4/872: rename db/d19/d23/d44/d7c/d7d/d97/da3/cc7 to db/d19/d81/d5d/c115 0
2026-03-09T15:01:01.970 INFO:tasks.workunit.client.1.vm09.stdout:7/820: dwrite d3/d28/fcd [0,4194304] 0
2026-03-09T15:01:01.993 INFO:tasks.workunit.client.1.vm09.stdout:3/841: write d3/d5b/d79/d9d/faf [847036,90587] 0
2026-03-09T15:01:01.995 INFO:tasks.workunit.client.1.vm09.stdout:5/892: dread d2/f34 [0,4194304] 0
2026-03-09T15:01:01.997 INFO:tasks.workunit.client.1.vm09.stdout:5/893: truncate d2/d37/d53/d86/f122 645102 0
2026-03-09T15:01:01.998 INFO:tasks.workunit.client.1.vm09.stdout:0/891: creat da/dc/d22/f125 x:0 0 0
2026-03-09T15:01:01.999 INFO:tasks.workunit.client.1.vm09.stdout:4/873: mknod db/d19/d52/c116 0
2026-03-09T15:01:02.002 INFO:tasks.workunit.client.1.vm09.stdout:1/714: mkdir d8/d10/d24/d45/ddc 0
2026-03-09T15:01:02.002 INFO:tasks.workunit.client.1.vm09.stdout:8/830: write df/d38/f8a [3191896,36355] 0
2026-03-09T15:01:02.003 INFO:tasks.workunit.client.1.vm09.stdout:7/821: symlink d3/d3d/lf4 0
2026-03-09T15:01:02.004 INFO:tasks.workunit.client.1.vm09.stdout:6/785: dwrite d6/d20/d38/d56/d65/d68/d6f/f85 [0,4194304] 0
2026-03-09T15:01:02.007 INFO:tasks.workunit.client.1.vm09.stdout:6/786: dread - d6/db/d10/d7a/f80 zero size
2026-03-09T15:01:02.011 INFO:tasks.workunit.client.1.vm09.stdout:1/715: mknod d8/d50/d5b/cdd 0
2026-03-09T15:01:02.011 INFO:tasks.workunit.client.1.vm09.stdout:0/892: truncate da/dc/d22/df0/f104 357108 0
2026-03-09T15:01:02.011 INFO:tasks.workunit.client.1.vm09.stdout:0/893: write da/f120 [742557,71021] 0
2026-03-09T15:01:02.011 INFO:tasks.workunit.client.1.vm09.stdout:5/894: link d2/d37/d3c/d36/d45/d5c/f90 d2/d37/d53/d86/dad/f13c 0
2026-03-09T15:01:02.011 INFO:tasks.workunit.client.1.vm09.stdout:2/829: dwrite df/d58/d74/f95 [0,4194304] 0
2026-03-09T15:01:02.022 INFO:tasks.workunit.client.1.vm09.stdout:5/895: chown d2/d37/d53/d86/d88/dc9/d136 163911758 1
2026-03-09T15:01:02.027 INFO:tasks.workunit.client.1.vm09.stdout:8/831: dread df/d5b/f82 [0,4194304] 0
2026-03-09T15:01:02.027 INFO:tasks.workunit.client.1.vm09.stdout:2/830: read - df/d1f/d47/d5d/dbc/ff8 zero size
2026-03-09T15:01:02.029 INFO:tasks.workunit.client.1.vm09.stdout:7/822: rename d3/d1d/d2d to d3/db/d46/db2/df5 0
2026-03-09T15:01:02.030 INFO:tasks.workunit.client.1.vm09.stdout:6/787: dread - d6/d20/d38/d56/fd3 zero size
2026-03-09T15:01:02.030 INFO:tasks.workunit.client.1.vm09.stdout:4/874: rmdir db/d12 39
2026-03-09T15:01:02.034 INFO:tasks.workunit.client.1.vm09.stdout:9/760: write d1/d7/d1e/d2b/d40/f57 [1456163,72009] 0
2026-03-09T15:01:02.034 INFO:tasks.workunit.client.1.vm09.stdout:7/823: dread d3/d28/f95 [0,4194304] 0
2026-03-09T15:01:02.038 INFO:tasks.workunit.client.1.vm09.stdout:7/824: dwrite d3/db/d15/fcc [0,4194304] 0
2026-03-09T15:01:02.048 INFO:tasks.workunit.client.1.vm09.stdout:1/716: creat d8/d50/d39/d95/d56/fde x:0 0 0
2026-03-09T15:01:02.048 INFO:tasks.workunit.client.1.vm09.stdout:0/894: creat da/dc/d84/db8/f126 x:0 0 0
2026-03-09T15:01:02.048 INFO:tasks.workunit.client.1.vm09.stdout:3/842: getdents d3/d100/d6a 0
2026-03-09T15:01:02.051 INFO:tasks.workunit.client.1.vm09.stdout:5/896: creat d2/d37/d67/df6/f13d x:0 0 0
2026-03-09T15:01:02.052 INFO:tasks.workunit.client.1.vm09.stdout:1/717: write d8/d10/d24/d48/f9a [4475509,74322] 0
2026-03-09T15:01:02.082 INFO:tasks.workunit.client.1.vm09.stdout:0/895: rename da/dc/d61/c8d to da/dc/d92/c127 0
2026-03-09T15:01:02.083 INFO:tasks.workunit.client.1.vm09.stdout:5/897: mkdir d2/d37/d53/d86/d88/d13e 0
2026-03-09T15:01:02.084 INFO:tasks.workunit.client.1.vm09.stdout:6/788: symlink d6/d20/d38/d56/l104 0
2026-03-09T15:01:02.084 INFO:tasks.workunit.client.1.vm09.stdout:4/875: truncate db/d12/f1b 849002 0
2026-03-09T15:01:02.089 INFO:tasks.workunit.client.1.vm09.stdout:9/761: write d1/d58/f72 [1843147,75243] 0
2026-03-09T15:01:02.089 INFO:tasks.workunit.client.1.vm09.stdout:3/843: write d3/d3a/d2b/f92 [378915,65899] 0
2026-03-09T15:01:02.090 INFO:tasks.workunit.client.1.vm09.stdout:2/831: mkdir df/d1f/d47/d5d/d90/d107 0
2026-03-09T15:01:02.093 INFO:tasks.workunit.client.1.vm09.stdout:1/718: mknod d8/d10/d24/cdf 0
2026-03-09T15:01:02.099 INFO:tasks.workunit.client.1.vm09.stdout:6/789: dwrite d6/d20/f70 [0,4194304] 0
2026-03-09T15:01:02.099 INFO:tasks.workunit.client.1.vm09.stdout:0/896: symlink da/d80/l128 0
2026-03-09T15:01:02.110 INFO:tasks.workunit.client.1.vm09.stdout:1/719: dwrite d8/f42 [0,4194304] 0
2026-03-09T15:01:02.119 INFO:tasks.workunit.client.1.vm09.stdout:9/762: unlink d1/d7/d1e/d2b/d2e/d56/d6d/f87 0
2026-03-09T15:01:02.123 INFO:tasks.workunit.client.1.vm09.stdout:8/832: getdents df/d2d/d42/d70 0
2026-03-09T15:01:02.125 INFO:tasks.workunit.client.1.vm09.stdout:6/790: symlink d6/d20/d38/d4e/l105 0
2026-03-09T15:01:02.125 INFO:tasks.workunit.client.1.vm09.stdout:0/897: creat da/dc/d10/dd0/f129 x:0 0 0
2026-03-09T15:01:02.127 INFO:tasks.workunit.client.1.vm09.stdout:7/825: getdents d3/db/d46/dc9 0
2026-03-09T15:01:02.128 INFO:tasks.workunit.client.1.vm09.stdout:3/844: mknod d3/d5b/d79/d9d/df9/c122 0
2026-03-09T15:01:02.130 INFO:tasks.workunit.client.1.vm09.stdout:5/898: dread d2/d37/d3c/dbf/fd4 [0,4194304] 0
2026-03-09T15:01:02.137 INFO:tasks.workunit.client.1.vm09.stdout:3/845: dread d3/d3a/d2b/d31/f40 [0,4194304] 0
2026-03-09T15:01:02.146 INFO:tasks.workunit.client.1.vm09.stdout:7/826: creat d3/d1d/d65/da3/ff6 x:0 0 0
2026-03-09T15:01:02.164 INFO:tasks.workunit.client.1.vm09.stdout:6/791: dread d6/df/d23/fae [0,4194304] 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:2/832: rmdir df/d1f/d47/d71/d103 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:5/899: unlink d2/d37/d3c/d36/d45/dae/dc3/ce0 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:6/792: readlink d6/d20/d38/l41 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:7/827: symlink d3/db/d25/d5c/d75/db4/lf7 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:5/900: symlink d2/db1/db2/l13f 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:1/720: link d8/d10/d24/d45/l49 d8/d90/le0 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:3/846: dread d3/d5b/d79/f83 [0,4194304] 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:2/833: creat df/d2d/dff/f108 x:0 0 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:5/901: mkdir d2/d37/d3c/d36/d140 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:5/902: read d2/f22 [7782448,47405] 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:2/834: mkdir df/d58/d74/d109 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:5/903: symlink d2/d37/d67/d95/l141 0
2026-03-09T15:01:02.165 INFO:tasks.workunit.client.1.vm09.stdout:6/793: link d6/df/l35 d6/d20/l106 0
2026-03-09T15:01:02.167 INFO:tasks.workunit.client.1.vm09.stdout:2/835: unlink df/d93/da3/cb4 0
2026-03-09T15:01:02.169 INFO:tasks.workunit.client.1.vm09.stdout:6/794: symlink d6/d20/d2a/d3d/d46/l107 0
2026-03-09T15:01:02.169 INFO:tasks.workunit.client.1.vm09.stdout:7/828: dread d3/f16 [0,4194304] 0
2026-03-09T15:01:02.171 INFO:tasks.workunit.client.1.vm09.stdout:2/836: chown df/d1f/d6d 2 1
2026-03-09T15:01:02.177 INFO:tasks.workunit.client.1.vm09.stdout:2/837: dread - df/d1f/d6d/d8f/fdc zero size
2026-03-09T15:01:02.180 INFO:tasks.workunit.client.1.vm09.stdout:7/829: creat d3/db/ff8 x:0 0 0
2026-03-09T15:01:02.185 INFO:tasks.workunit.client.1.vm09.stdout:6/795: dwrite d6/d20/d38/d56/d65/f100 [0,4194304] 0
2026-03-09T15:01:02.187 INFO:tasks.workunit.client.1.vm09.stdout:4/876: dwrite db/d12/da1/fa6 [0,4194304] 0
2026-03-09T15:01:02.198 INFO:tasks.workunit.client.1.vm09.stdout:2/838: mkdir df/d58/d67/d10a 0
2026-03-09T15:01:02.199 INFO:tasks.workunit.client.1.vm09.stdout:7/830: dwrite d3/f16 [0,4194304] 0
2026-03-09T15:01:02.202 INFO:tasks.workunit.client.1.vm09.stdout:6/796: truncate d6/d20/d38/d56/d65/d68/f99 317751 0
2026-03-09T15:01:02.208 INFO:tasks.workunit.client.1.vm09.stdout:4/877: mkdir db/d19/d23/d44/d7c/d7d/d97/da8/d117 0
2026-03-09T15:01:02.211 INFO:tasks.workunit.client.1.vm09.stdout:4/878: chown l6 11084 1
2026-03-09T15:01:02.211 INFO:tasks.workunit.client.1.vm09.stdout:6/797: creat d6/df/d23/d89/f108 x:0 0 0
2026-03-09T15:01:02.221 INFO:tasks.workunit.client.1.vm09.stdout:4/879: fdatasync db/d19/dcd/fde 0
2026-03-09T15:01:02.223 INFO:tasks.workunit.client.1.vm09.stdout:6/798: write d6/d20/d24/d7e/d88/fda [362299,106391] 0
2026-03-09T15:01:02.225 INFO:tasks.workunit.client.1.vm09.stdout:2/839: link df/d58/fc2 df/d1f/d47/d5d/dfb/f10b 0
2026-03-09T15:01:02.227 INFO:tasks.workunit.client.1.vm09.stdout:4/880: dread db/d12/d16/f54 [0,4194304] 0
2026-03-09T15:01:02.232 INFO:tasks.workunit.client.1.vm09.stdout:9/763: write d1/d7/d1e/d2b/d2e/f16 [1784250,89882] 0
2026-03-09T15:01:02.232 INFO:tasks.workunit.client.1.vm09.stdout:1/721: rmdir d8/d90 39
2026-03-09T15:01:02.237 INFO:tasks.workunit.client.1.vm09.stdout:8/833: dwrite df/d2d/d42/f7c [0,4194304] 0
2026-03-09T15:01:02.240 INFO:tasks.workunit.client.1.vm09.stdout:0/898: truncate da/dc/d8c/fe9 2921643 0
2026-03-09T15:01:02.240 INFO:tasks.workunit.client.1.vm09.stdout:7/831: dread d3/f26 [0,4194304] 0
2026-03-09T15:01:02.241 INFO:tasks.workunit.client.1.vm09.stdout:0/899: chown da/dc/d84/fd5 1 1
2026-03-09T15:01:02.241 INFO:tasks.workunit.client.1.vm09.stdout:0/900: stat da/dc/fae 0
2026-03-09T15:01:02.244 INFO:tasks.workunit.client.1.vm09.stdout:0/901: truncate da/dc/d1c/d3c/d78/f116 153541 0
2026-03-09T15:01:02.248 INFO:tasks.workunit.client.1.vm09.stdout:8/834: dread df/f23 [0,4194304] 0
2026-03-09T15:01:02.251 INFO:tasks.workunit.client.1.vm09.stdout:3/847: truncate d3/d74/f9b 1667485 0
2026-03-09T15:01:02.254 INFO:tasks.workunit.client.1.vm09.stdout:5/904: write d2/f3d [4429458,91865] 0
2026-03-09T15:01:02.255 INFO:tasks.workunit.client.1.vm09.stdout:3/848: chown d3/d9a/de3/fa3 0 1
2026-03-09T15:01:02.258 INFO:tasks.workunit.client.1.vm09.stdout:1/722: dwrite d8/d90/fd0 [0,4194304] 0
2026-03-09T15:01:02.259 INFO:tasks.workunit.client.1.vm09.stdout:0/902: fdatasync da/ffa 0
2026-03-09T15:01:02.260 INFO:tasks.workunit.client.1.vm09.stdout:0/903: stat da/dc/d10 0
2026-03-09T15:01:02.268 INFO:tasks.workunit.client.1.vm09.stdout:8/835: rmdir df/d2d/d4f 39
2026-03-09T15:01:02.270 INFO:tasks.workunit.client.1.vm09.stdout:5/905: mknod d2/d37/d67/d95/c142 0
2026-03-09T15:01:02.271 INFO:tasks.workunit.client.1.vm09.stdout:3/849: rename d3/d5b/f6d to d3/d100/f123 0
2026-03-09T15:01:02.275 INFO:tasks.workunit.client.1.vm09.stdout:4/881: getdents db/d12 0
2026-03-09T15:01:02.278 INFO:tasks.workunit.client.1.vm09.stdout:2/840: getdents df/d1f/d47/d5d/dfb 0
2026-03-09T15:01:02.279 INFO:tasks.workunit.client.1.vm09.stdout:8/836: symlink df/d5c/lf6 0
2026-03-09T15:01:02.279 INFO:tasks.workunit.client.1.vm09.stdout:6/799: write d6/df/d23/d89/f8e [776971,103673] 0
2026-03-09T15:01:02.282 INFO:tasks.workunit.client.1.vm09.stdout:5/906: mknod d2/d37/d67/df6/c143 0
2026-03-09T15:01:02.286 INFO:tasks.workunit.client.1.vm09.stdout:3/850: mkdir d3/d5b/d79/d9d/d124 0
2026-03-09T15:01:02.287 INFO:tasks.workunit.client.1.vm09.stdout:4/882: chown db/d19/l10d 28104 1
2026-03-09T15:01:02.289 INFO:tasks.workunit.client.1.vm09.stdout:7/832: symlink d3/db/d15/d5f/d6e/d83/lf9 0
2026-03-09T15:01:02.296 INFO:tasks.workunit.client.1.vm09.stdout:4/883: dwrite db/d19/dcd/fde [0,4194304] 0
2026-03-09T15:01:02.298 INFO:tasks.workunit.client.1.vm09.stdout:6/800: mkdir d6/d20/d38/d4e/d55/d109 0
2026-03-09T15:01:02.298 INFO:tasks.workunit.client.1.vm09.stdout:9/764: write d1/d7/d1e/f9e [4407624,46100] 0
2026-03-09T15:01:02.299 INFO:tasks.workunit.client.1.vm09.stdout:8/837: truncate df/d38/d64/d5f/f62 5048673 0
2026-03-09T15:01:02.300 INFO:tasks.workunit.client.1.vm09.stdout:5/907: dread d2/d37/d3c/d36/d4c/d51/f62 [0,4194304] 0
2026-03-09T15:01:02.301 INFO:tasks.workunit.client.1.vm09.stdout:8/838: chown df/d2d/d42/fe3 0 1
2026-03-09T15:01:02.304 INFO:tasks.workunit.client.1.vm09.stdout:2/841: sync
2026-03-09T15:01:02.306 INFO:tasks.workunit.client.1.vm09.stdout:0/904: link da/dc/d22/c100 da/dc/d1c/d46/d63/c12a 0
2026-03-09T15:01:02.306 INFO:tasks.workunit.client.1.vm09.stdout:2/842: sync
2026-03-09T15:01:02.310 INFO:tasks.workunit.client.1.vm09.stdout:6/801: creat d6/d20/d2a/d3b/d91/f10a x:0 0 0
2026-03-09T15:01:02.316 INFO:tasks.workunit.client.1.vm09.stdout:6/802: dwrite d6/d20/d38/d56/d65/d68/d86/dc0/ddb/fa9 [0,4194304] 0
2026-03-09T15:01:02.318 INFO:tasks.workunit.client.1.vm09.stdout:8/839: creat df/d5c/ff7 x:0 0 0
2026-03-09T15:01:02.326 INFO:tasks.workunit.client.1.vm09.stdout:7/833: write d3/d28/f29 [396923,42589] 0
2026-03-09T15:01:02.326 INFO:tasks.workunit.client.1.vm09.stdout:3/851: write d3/d100/f7d [3039606,103590] 0
2026-03-09T15:01:02.326 INFO:tasks.workunit.client.1.vm09.stdout:1/723: getdents d8/d50/d5b 0
2026-03-09T15:01:02.328 INFO:tasks.workunit.client.1.vm09.stdout:5/908: rename d2/d37/d3c/d36/d45/dae/dd3/f131 to d2/d37/d53/f144 0
2026-03-09T15:01:02.329 INFO:tasks.workunit.client.1.vm09.stdout:0/905: readlink da/dc/l2a 0
2026-03-09T15:01:02.329 INFO:tasks.workunit.client.1.vm09.stdout:5/909: stat d2/d37/d3c/d36/f97 0
2026-03-09T15:01:02.330 INFO:tasks.workunit.client.1.vm09.stdout:2/843: truncate df/d1f/f7e 8833455 0
2026-03-09T15:01:02.334 INFO:tasks.workunit.client.1.vm09.stdout:8/840: symlink df/d5b/d65/d1d/lf8 0
2026-03-09T15:01:02.336 INFO:tasks.workunit.client.1.vm09.stdout:6/803: fsync d6/d20/d38/d56/d65/d68/d6f/fbb 0
2026-03-09T15:01:02.338 INFO:tasks.workunit.client.1.vm09.stdout:1/724: write d8/d10/d24/d48/d9b/d78/fa2 [2736339,21323] 0
2026-03-09T15:01:02.338 INFO:tasks.workunit.client.1.vm09.stdout:3/852: truncate d3/d3a/d2b/d31/f33 1384811 0
2026-03-09T15:01:02.339 INFO:tasks.workunit.client.1.vm09.stdout:0/906: rmdir da/dc/dc0 39
2026-03-09T15:01:02.341 INFO:tasks.workunit.client.1.vm09.stdout:5/910: read - d2/d37/d3c/d36/d45/dae/fe5 zero size
2026-03-09T15:01:02.344 INFO:tasks.workunit.client.1.vm09.stdout:9/765: creat d1/d7/ff1 x:0 0 0
2026-03-09T15:01:02.345 INFO:tasks.workunit.client.1.vm09.stdout:5/911: chown d2/d37/d3c/d36/d4c/d89/ff1 53 1
2026-03-09T15:01:02.345 INFO:tasks.workunit.client.1.vm09.stdout:6/804: creat d6/d20/d24/f10b x:0 0 0
2026-03-09T15:01:02.345 INFO:tasks.workunit.client.1.vm09.stdout:7/834: write d3/db/d25/d7d/f8c [822271,52210] 0
2026-03-09T15:01:02.347 INFO:tasks.workunit.client.1.vm09.stdout:5/912: write d2/d37/f75 [1831806,24169] 0
2026-03-09T15:01:02.349 INFO:tasks.workunit.client.1.vm09.stdout:8/841: sync
2026-03-09T15:01:02.349 INFO:tasks.workunit.client.1.vm09.stdout:1/725: dwrite d8/d10/f12 [0,4194304] 0
2026-03-09T15:01:02.352 INFO:tasks.workunit.client.1.vm09.stdout:1/726: fsync d8/ff 0
2026-03-09T15:01:02.354 INFO:tasks.workunit.client.1.vm09.stdout:3/853: creat d3/d100/d6a/f125 x:0 0 0
2026-03-09T15:01:02.356 INFO:tasks.workunit.client.1.vm09.stdout:0/907: creat da/dc/d22/f12b x:0 0 0
2026-03-09T15:01:02.356 INFO:tasks.workunit.client.1.vm09.stdout:0/908: fdatasync da/dc/d22/f7c 0
2026-03-09T15:01:02.366 INFO:tasks.workunit.client.1.vm09.stdout:6/805: rmdir d6/d20/d2a/d3b 39
2026-03-09T15:01:02.369 INFO:tasks.workunit.client.1.vm09.stdout:7/835: rename d3/db/d25/db7/fdd to d3/db/d25/d5c/d75/db4/ffa 0
2026-03-09T15:01:02.370 INFO:tasks.workunit.client.1.vm09.stdout:5/913: dread d2/d37/d3c/dbf/d125/fb9 [0,4194304] 0
2026-03-09T15:01:02.379 INFO:tasks.workunit.client.1.vm09.stdout:4/884: write db/d19/d52/fb5 [581382,103484] 0
2026-03-09T15:01:02.381 INFO:tasks.workunit.client.1.vm09.stdout:1/727: unlink d8/d50/d39/d95/d56/c71 0
2026-03-09T15:01:02.382 INFO:tasks.workunit.client.1.vm09.stdout:1/728: write d8/d10/f13 [1414773,97508] 0
2026-03-09T15:01:02.389 INFO:tasks.workunit.client.1.vm09.stdout:1/729: dwrite d8/d10/f13 [0,4194304] 0
2026-03-09T15:01:02.401 INFO:tasks.workunit.client.1.vm09.stdout:8/842: write df/deb/fbc [1529760,122937] 0
2026-03-09T15:01:02.405 INFO:tasks.workunit.client.1.vm09.stdout:2/844: dwrite df/d1f/f39 [0,4194304] 0
2026-03-09T15:01:02.407 INFO:tasks.workunit.client.1.vm09.stdout:3/854: symlink d3/d100/d48/l126 0
2026-03-09T15:01:02.408 INFO:tasks.workunit.client.1.vm09.stdout:5/914: creat d2/d37/d3c/d36/d45/dae/dc3/d115/f145 x:0 0 0
2026-03-09T15:01:02.408 INFO:tasks.workunit.client.1.vm09.stdout:9/766: creat d1/d7/d1e/d2b/d2e/ff2 x:0 0 0
2026-03-09T15:01:02.409 INFO:tasks.workunit.client.1.vm09.stdout:4/885: write db/d19/d23/d71/f4e [3117426,123944] 0
2026-03-09T15:01:02.418 INFO:tasks.workunit.client.1.vm09.stdout:0/909: dwrite da/dc/d1c/d3c/d78/fb1 [0,4194304] 0
2026-03-09T15:01:02.420 INFO:tasks.workunit.client.1.vm09.stdout:6/806: mknod d6/d20/d38/d56/c10c 0
2026-03-09T15:01:02.422 INFO:tasks.workunit.client.1.vm09.stdout:8/843: creat df/d2d/d42/d70/ff9 x:0 0 0
2026-03-09T15:01:02.423 INFO:tasks.workunit.client.1.vm09.stdout:2/845: fsync df/d58/d67/f46 0
2026-03-09T15:01:02.424 INFO:tasks.workunit.client.1.vm09.stdout:3/855: dread - d3/d3a/d2b/d7b/fe5 zero size
2026-03-09T15:01:02.424 INFO:tasks.workunit.client.1.vm09.stdout:3/856: dread - d3/d9a/f114 zero size
2026-03-09T15:01:02.427 INFO:tasks.workunit.client.1.vm09.stdout:5/915: truncate d2/d37/d3c/d36/d45/d5c/f9c 2896575 0
2026-03-09T15:01:02.437 INFO:tasks.workunit.client.1.vm09.stdout:1/730: write d8/d90/fb6 [620989,18708] 0
2026-03-09T15:01:02.437 INFO:tasks.workunit.client.1.vm09.stdout:7/836: write d3/db/d25/d5c/fd3 [358985,39212] 0
2026-03-09T15:01:02.438 INFO:tasks.workunit.client.1.vm09.stdout:9/767: write d1/d7/d1e/d2b/d2e/d56/d6d/fab [57095,84554] 0
2026-03-09T15:01:02.440 INFO:tasks.workunit.client.1.vm09.stdout:9/768: chown d1/d7/d1e/f9e 464815766 1
2026-03-09T15:01:02.446 INFO:tasks.workunit.client.1.vm09.stdout:3/857: fsync d3/d3a/d2b/d7b/fe5 0
2026-03-09T15:01:02.446 INFO:tasks.workunit.client.1.vm09.stdout:3/858: write d3/d100/f7d [4568366,127480] 0
2026-03-09T15:01:02.448 INFO:tasks.workunit.client.1.vm09.stdout:7/837: mkdir d3/d1d/d94/dfb 0
2026-03-09T15:01:02.450 INFO:tasks.workunit.client.1.vm09.stdout:1/731: symlink d8/d10/d24/d48/d9b/le1 0
2026-03-09T15:01:02.450 INFO:tasks.workunit.client.1.vm09.stdout:9/769: mknod d1/d6e/cf3 0
2026-03-09T15:01:02.453 INFO:tasks.workunit.client.1.vm09.stdout:2/846: rename df/d2d/dda to df/d1f/d47/d84/d10c 0
2026-03-09T15:01:02.454 INFO:tasks.workunit.client.1.vm09.stdout:5/916: creat d2/f146 x:0 0 0
2026-03-09T15:01:02.454 INFO:tasks.workunit.client.1.vm09.stdout:0/910: getdents da/dc/d10/de5 0
2026-03-09T15:01:02.458 INFO:tasks.workunit.client.1.vm09.stdout:9/770: rmdir d1/d7/d1e 39
2026-03-09T15:01:02.459 INFO:tasks.workunit.client.1.vm09.stdout:5/917: write d2/d37/d3c/d36/d45/dae/dd3/fdf [1280630,111665] 0
2026-03-09T15:01:02.460 INFO:tasks.workunit.client.1.vm09.stdout:1/732: symlink d8/d10/d24/d48/d9b/d78/d8b/le2 0
2026-03-09T15:01:02.467 INFO:tasks.workunit.client.1.vm09.stdout:6/807: write d6/d20/d38/fd6 [684317,43591] 0
2026-03-09T15:01:02.467 INFO:tasks.workunit.client.1.vm09.stdout:6/808: stat c4 0
2026-03-09T15:01:02.469 INFO:tasks.workunit.client.1.vm09.stdout:3/859: write d3/d3a/d2b/d7b/db0/fc7 [866934,95931] 0
2026-03-09T15:01:02.469 INFO:tasks.workunit.client.1.vm09.stdout:8/844: getdents df/d24/d99/db6/ddd 0
2026-03-09T15:01:02.475 INFO:tasks.workunit.client.1.vm09.stdout:1/733: dwrite d8/d10/d73/f37 [0,4194304] 0
2026-03-09T15:01:02.478 INFO:tasks.workunit.client.1.vm09.stdout:0/911: truncate da/dc/d1c/d46/d63/faa 802761 0
2026-03-09T15:01:02.478 INFO:tasks.workunit.client.1.vm09.stdout:9/771: creat d1/d7/db8/ff4 x:0 0 0
2026-03-09T15:01:02.479 INFO:tasks.workunit.client.1.vm09.stdout:4/886: dwrite db/d19/d23/d71/d5f/f66 [0,4194304] 0
2026-03-09T15:01:02.480 INFO:tasks.workunit.client.1.vm09.stdout:7/838: write d3/db/f42 [3927065,80039] 0
2026-03-09T15:01:02.489 INFO:tasks.workunit.client.1.vm09.stdout:5/918: creat d2/d37/d53/d86/d88/d117/f147 x:0 0 0
2026-03-09T15:01:02.490 INFO:tasks.workunit.client.1.vm09.stdout:2/847: rmdir df/d1f/d47/d5d/d90 39
2026-03-09T15:01:02.493 INFO:tasks.workunit.client.1.vm09.stdout:4/887: unlink db/d12/d16/d5b/d78/de3/l106 0
2026-03-09T15:01:02.493 INFO:tasks.workunit.client.1.vm09.stdout:0/912: dread - da/dc/d22/d64/fcc zero size
2026-03-09T15:01:02.500 INFO:tasks.workunit.client.1.vm09.stdout:6/809: dread d6/d20/d2a/dc4/fb5 [0,4194304] 0
2026-03-09T15:01:02.501 INFO:tasks.workunit.client.1.vm09.stdout:9/772: dwrite d1/d7/d1e/d2b/d8d/dc8/fde [0,4194304] 0
2026-03-09T15:01:02.503 INFO:tasks.workunit.client.1.vm09.stdout:6/810: write d6/d20/d38/d4e/d55/f77 [1749950,46565] 0
2026-03-09T15:01:02.503 INFO:tasks.workunit.client.1.vm09.stdout:8/845: creat df/d24/d99/db6/dca/ffa x:0 0 0
2026-03-09T15:01:02.505 INFO:tasks.workunit.client.1.vm09.stdout:3/860: creat d3/d5b/d79/d9d/d116/f127 x:0 0 0
2026-03-09T15:01:02.505 INFO:tasks.workunit.client.1.vm09.stdout:5/919: write d2/d37/d3c/d36/fcc [534852,42127] 0
2026-03-09T15:01:02.507 INFO:tasks.workunit.client.1.vm09.stdout:3/861: dread - d3/d9a/de3/dc4/f113 zero size
2026-03-09T15:01:02.515 INFO:tasks.workunit.client.1.vm09.stdout:6/811: dwrite d6/d20/d38/d56/d65/d68/d86/dc0/ddb/fa9 [0,4194304] 0
2026-03-09T15:01:02.515 INFO:tasks.workunit.client.1.vm09.stdout:0/913: mknod da/dc/d1c/d3c/d78/c12c 0
2026-03-09T15:01:02.516 INFO:tasks.workunit.client.1.vm09.stdout:2/848: mkdir df/d1f/d10d 0
2026-03-09T15:01:02.517 INFO:tasks.workunit.client.1.vm09.stdout:9/773: dread - d1/d7/fb0 zero size
2026-03-09T15:01:02.518 INFO:tasks.workunit.client.1.vm09.stdout:7/839: rmdir d3/d61/ddf 0
2026-03-09T15:01:02.518 INFO:tasks.workunit.client.1.vm09.stdout:6/812: stat d6/d20/d38/d56/d65/d68/d86/d103/d102 0
2026-03-09T15:01:02.519 INFO:tasks.workunit.client.1.vm09.stdout:5/920: symlink d2/d37/d3c/d36/d45/dae/dc3/l148 0
2026-03-09T15:01:02.519 INFO:tasks.workunit.client.1.vm09.stdout:1/734: getdents d8/d10/d24/d48/d9b/d78/db4 0
2026-03-09T15:01:02.520 INFO:tasks.workunit.client.1.vm09.stdout:6/813: fsync d6/d20/d44/f4a 0
2026-03-09T15:01:02.521 INFO:tasks.workunit.client.1.vm09.stdout:4/888: link db/d12/d16/d5b/d78/d7f/f9d db/d19/d23/d44/d7c/d7d/d97/da8/f118 0
2026-03-09T15:01:02.528 INFO:tasks.workunit.client.1.vm09.stdout:5/921: dread d2/d37/f6c [0,4194304] 0
2026-03-09T15:01:02.529 INFO:tasks.workunit.client.1.vm09.stdout:0/914: creat da/dc/d84/f12d x:0 0 0
2026-03-09T15:01:02.532 INFO:tasks.workunit.client.1.vm09.stdout:1/735: fdatasync d8/d10/dc9/fd8 0
2026-03-09T15:01:02.536 INFO:tasks.workunit.client.1.vm09.stdout:5/922: mkdir d2/d37/d3c/d36/d45/d5c/ddc/d149 0
2026-03-09T15:01:02.536 INFO:tasks.workunit.client.1.vm09.stdout:0/915: dread - da/dc/d84/d124/d11c/fcf zero size
2026-03-09T15:01:02.536 INFO:tasks.workunit.client.1.vm09.stdout:1/736: getdents d8/d10/d24 0
2026-03-09T15:01:02.540 INFO:tasks.workunit.client.1.vm09.stdout:1/737: rmdir d8/d10 39
2026-03-09T15:01:02.541 INFO:tasks.workunit.client.1.vm09.stdout:0/916: truncate da/dc/d22/f3b 1586344 0
2026-03-09T15:01:02.548 INFO:tasks.workunit.client.1.vm09.stdout:6/814: dread d6/d20/d38/d56/ff1 [0,4194304] 0
2026-03-09T15:01:02.549 INFO:tasks.workunit.client.1.vm09.stdout:6/815: stat d6/d20/d44/d45/c53 0
2026-03-09T15:01:02.552 INFO:tasks.workunit.client.1.vm09.stdout:1/738: creat d8/fe3 x:0 0 0
2026-03-09T15:01:02.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:02 vm05.local ceph-mon[50611]: pgmap v157: 65 pgs: 65 active+clean; 1.7 GiB data, 5.7 GiB used, 114 GiB / 120 GiB avail; 64 MiB/s rd, 155 MiB/s wr, 393 op/s
2026-03-09T15:01:02.554 INFO:tasks.workunit.client.1.vm09.stdout:8/846: write df/d38/d64/d5f/f62 [5210357,7413] 0
2026-03-09T15:01:02.554 INFO:tasks.workunit.client.1.vm09.stdout:3/862: write d3/d9a/d80/fdd [4091438,74273] 0
2026-03-09T15:01:02.554 INFO:tasks.workunit.client.1.vm09.stdout:9/774: write d1/d7/fba [3577345,106350] 0
2026-03-09T15:01:02.558 INFO:tasks.workunit.client.1.vm09.stdout:2/849: dwrite df/d58/d74/f94 [0,4194304] 0
2026-03-09T15:01:02.562 INFO:tasks.workunit.client.1.vm09.stdout:5/923: write d2/d37/d3c/d36/d4c/ff4 [634832,75642] 0
2026-03-09T15:01:02.563 INFO:tasks.workunit.client.1.vm09.stdout:4/889: write db/d12/f50 [964464,102266] 0
2026-03-09T15:01:02.565 INFO:tasks.workunit.client.1.vm09.stdout:0/917: write da/dc/d1c/d46/d63/f7f [3734549,127030] 0
2026-03-09T15:01:02.565 INFO:tasks.workunit.client.1.vm09.stdout:9/775: dread d1/fe8 [0,4194304] 0
2026-03-09T15:01:02.566 INFO:tasks.workunit.client.1.vm09.stdout:3/863: readlink d3/d5b/le7 0
2026-03-09T15:01:02.573 INFO:tasks.workunit.client.1.vm09.stdout:1/739: unlink d8/d50/d39/d95/d56/fc3 0
2026-03-09T15:01:02.573 INFO:tasks.workunit.client.1.vm09.stdout:5/924: mknod d2/d37/d3c/d36/d45/dfd/c14a 0
2026-03-09T15:01:02.574 INFO:tasks.workunit.client.1.vm09.stdout:8/847: write df/f51 [4513845,93753] 0
2026-03-09T15:01:02.576 INFO:tasks.workunit.client.1.vm09.stdout:8/848: write df/d5c/fba [6266133,123045] 0
2026-03-09T15:01:02.576 INFO:tasks.workunit.client.1.vm09.stdout:6/816: dwrite d6/db/d10/f1c [0,4194304] 0
2026-03-09T15:01:02.578 INFO:tasks.workunit.client.1.vm09.stdout:7/840: dwrite d3/db/d25/d5c/f7c [0,4194304] 0
2026-03-09T15:01:02.579 INFO:tasks.workunit.client.1.vm09.stdout:2/850: fsync df/f33 0
2026-03-09T15:01:02.579 INFO:tasks.workunit.client.1.vm09.stdout:8/849: write fe [627923,71272] 0
2026-03-09T15:01:02.586 INFO:tasks.workunit.client.1.vm09.stdout:3/864: unlink d3/d3a/d2b/d7b/f8c 0
2026-03-09T15:01:02.592 INFO:tasks.workunit.client.1.vm09.stdout:8/850: chown df/d5b/f82 13958 1
2026-03-09T15:01:02.600 INFO:tasks.workunit.client.1.vm09.stdout:4/890: dread db/d19/d81/d5d/f8a [0,4194304] 0
2026-03-09T15:01:02.605 INFO:tasks.workunit.client.1.vm09.stdout:6/817: unlink c4 0
2026-03-09T15:01:02.605 INFO:tasks.workunit.client.1.vm09.stdout:6/818: read d6/d20/d2a/dc4/fb5 [2027661,80244] 0
2026-03-09T15:01:02.606 INFO:tasks.workunit.client.1.vm09.stdout:3/865: mknod d3/d100/d48/dc5/d10a/c128 0
2026-03-09T15:01:02.606 INFO:tasks.workunit.client.1.vm09.stdout:1/740: creat d8/d10/d24/d48/d9b/d78/d8b/fe4 x:0 0 0
2026-03-09T15:01:02.609 INFO:tasks.workunit.client.1.vm09.stdout:2/851: truncate df/d20/d2e/fbb 1946442 0
2026-03-09T15:01:02.609 INFO:tasks.workunit.client.1.vm09.stdout:9/776: link d1/d7/d1e/d2b/d2e/c49 d1/d4f/d8f/d91/cf5 0
2026-03-09T15:01:02.609 INFO:tasks.workunit.client.1.vm09.stdout:9/777: fsync d1/d7/f83 0
2026-03-09T15:01:02.614 INFO:tasks.workunit.client.1.vm09.stdout:7/841: creat d3/db/d25/d5c/ffc x:0 0 0
2026-03-09T15:01:02.614 INFO:tasks.workunit.client.1.vm09.stdout:6/819: symlink d6/d20/d38/d56/d65/d68/l10d 0
2026-03-09T15:01:02.616 INFO:tasks.workunit.client.1.vm09.stdout:1/741: rename d8/d90/fb6 to d8/d10/d24/d45/ddc/fe5 0
2026-03-09T15:01:02.616 INFO:tasks.workunit.client.1.vm09.stdout:4/891: link db/d12/c86 db/d12/d9e/c119 0
2026-03-09T15:01:02.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:02 vm09.local ceph-mon[59673]: pgmap v157: 65 pgs: 65 active+clean; 1.7 GiB data, 5.7 GiB used, 114 GiB / 120 GiB avail; 64 MiB/s rd, 155 MiB/s wr, 393 op/s
2026-03-09T15:01:02.621 INFO:tasks.workunit.client.1.vm09.stdout:3/866: dwrite d3/d100/d6a/dd5/f108 [0,4194304] 0
2026-03-09T15:01:02.622 INFO:tasks.workunit.client.1.vm09.stdout:7/842: unlink d3/db/d46/db2/df5/fb8 0
2026-03-09T15:01:02.622 INFO:tasks.workunit.client.1.vm09.stdout:8/851: sync
2026-03-09T15:01:02.622 INFO:tasks.workunit.client.1.vm09.stdout:1/742: chown d8/d50/d39/d95/d72/cda 1846 1
2026-03-09T15:01:02.624 INFO:tasks.workunit.client.1.vm09.stdout:6/820: truncate d6/d20/d38/d56/d65/d68/d6f/fa7 315684 0
2026-03-09T15:01:02.628 INFO:tasks.workunit.client.1.vm09.stdout:6/821: chown d6/db/d8b/cf8 14595833 1
2026-03-09T15:01:02.630 INFO:tasks.workunit.client.1.vm09.stdout:2/852: dread df/d58/d67/f46 [0,4194304] 0
2026-03-09T15:01:02.630 INFO:tasks.workunit.client.1.vm09.stdout:0/918: dread da/dc/f28 [0,4194304] 0
2026-03-09T15:01:02.634 INFO:tasks.workunit.client.1.vm09.stdout:8/852: mkdir df/d2d/d42/dfb 0
2026-03-09T15:01:02.636 INFO:tasks.workunit.client.1.vm09.stdout:3/867: creat d3/d5b/d79/d9d/df9/f129 x:0 0 0
2026-03-09T15:01:02.640 INFO:tasks.workunit.client.1.vm09.stdout:7/843: link d3/db/d25/d5c/d75/db4/ffa d3/d1d/d65/da3/de4/ffd 0
2026-03-09T15:01:02.643 INFO:tasks.workunit.client.1.vm09.stdout:1/743: truncate d8/d90/f99 392390 0
2026-03-09T15:01:02.644 INFO:tasks.workunit.client.1.vm09.stdout:5/925: write d2/d37/d3c/dbf/d125/f12e [1220897,25672] 0
2026-03-09T15:01:02.646 INFO:tasks.workunit.client.1.vm09.stdout:5/926: write d2/d37/d53/d86/dad/f112 [1183917,105977] 0
2026-03-09T15:01:02.647 INFO:tasks.workunit.client.1.vm09.stdout:5/927: write d2/d37/d67/d95/db8/fe2 [2299048,64040] 0
2026-03-09T15:01:02.649 INFO:tasks.workunit.client.1.vm09.stdout:4/892: write db/d12/d9e/fd9 [158290,93016] 0
2026-03-09T15:01:02.652 INFO:tasks.workunit.client.1.vm09.stdout:8/853: mkdir df/d24/d99/dfc 0
2026-03-09T15:01:02.652 INFO:tasks.workunit.client.1.vm09.stdout:3/868: mkdir d3/d60/d12a 0
2026-03-09T15:01:02.653 INFO:tasks.workunit.client.1.vm09.stdout:8/854: readlink df/d5b/l4c 0
2026-03-09T15:01:02.656 INFO:tasks.workunit.client.1.vm09.stdout:9/778: dwrite d1/d7/d1e/d2b/d40/f43 [0,4194304] 0
2026-03-09T15:01:02.658 INFO:tasks.workunit.client.1.vm09.stdout:1/744: creat d8/d10/d24/d48/d9b/d78/db4/fe6 x:0 0 0
2026-03-09T15:01:02.662 INFO:tasks.workunit.client.1.vm09.stdout:3/869: symlink d3/d9a/de3/dc4/l12b 0
2026-03-09T15:01:02.665 INFO:tasks.workunit.client.1.vm09.stdout:2/853: rename df/d1f/d47/d84/fe9 to df/d1f/f10e 0
2026-03-09T15:01:02.665 INFO:tasks.workunit.client.1.vm09.stdout:2/854: dread - df/d1f/d6d/d8f/d5f/f72 zero size
2026-03-09T15:01:02.665 INFO:tasks.workunit.client.1.vm09.stdout:4/893: sync
2026-03-09T15:01:02.665 INFO:tasks.workunit.client.1.vm09.stdout:3/870: chown d3/d5b/d79/d9d/faf 4559962 1
2026-03-09T15:01:02.666 INFO:tasks.workunit.client.1.vm09.stdout:4/894: write db/d12/d16/d5b/d78/d7f/de2/ff8 [2466101,79335] 0
2026-03-09T15:01:02.674 INFO:tasks.workunit.client.1.vm09.stdout:8/855: truncate df/d38/d64/fa7 1060970 0
2026-03-09T15:01:02.674 INFO:tasks.workunit.client.1.vm09.stdout:7/844: mknod d3/db/d25/d5c/d75/cfe 0
2026-03-09T15:01:02.676 INFO:tasks.workunit.client.1.vm09.stdout:4/895: truncate db/d19/d23/d71/d53/dcf/dfb/ffd 861829 0
2026-03-09T15:01:02.677 INFO:tasks.workunit.client.1.vm09.stdout:1/745: dread d8/d10/d24/d48/d9b/d78/f7c [0,4194304] 0
2026-03-09T15:01:02.680 INFO:tasks.workunit.client.1.vm09.stdout:3/871: dwrite d3/d9a/de3/dc4/f111 [0,4194304] 0
2026-03-09T15:01:02.684 INFO:tasks.workunit.client.1.vm09.stdout:1/746: chown d8/d50/d39/d95/f6e 144 1
2026-03-09T15:01:02.687 INFO:tasks.workunit.client.1.vm09.stdout:9/779: symlink d1/d4f/d8f/lf6 0
2026-03-09T15:01:02.688 INFO:tasks.workunit.client.1.vm09.stdout:1/747: readlink d8/d10/d24/d45/d5f/l9c 0
2026-03-09T15:01:02.688 INFO:tasks.workunit.client.1.vm09.stdout:9/780: dread - d1/d7/d1e/d2b/d2e/d56/d6d/ff0 zero size
2026-03-09T15:01:02.689 INFO:tasks.workunit.client.1.vm09.stdout:6/822: dread d6/df/f16 [0,4194304] 0
2026-03-09T15:01:02.692 INFO:tasks.workunit.client.1.vm09.stdout:3/872: truncate d3/d3a/d2b/d7b/f121 320374 0
2026-03-09T15:01:02.697 INFO:tasks.workunit.client.1.vm09.stdout:7/845: creat d3/d1d/d65/fff x:0 0 0
2026-03-09T15:01:02.697 INFO:tasks.workunit.client.1.vm09.stdout:8/856: stat df/d2d/d4f 0
2026-03-09T15:01:02.697 INFO:tasks.workunit.client.1.vm09.stdout:4/896: creat db/d19/d81/f11a x:0 0 0
2026-03-09T15:01:02.697 INFO:tasks.workunit.client.1.vm09.stdout:4/897: write db/d19/dcd/fde [2860704,22780] 0
2026-03-09T15:01:02.698 INFO:tasks.workunit.client.1.vm09.stdout:1/748: symlink d8/d50/d5b/le7 0
2026-03-09T15:01:02.699 INFO:tasks.workunit.client.1.vm09.stdout:1/749: fdatasync d8/d90/fd0 0
2026-03-09T15:01:02.699 INFO:tasks.workunit.client.1.vm09.stdout:9/781: rename d1/d4f/f64 to d1/d7/d1e/d2b/d40/ff7 0
2026-03-09T15:01:02.703 INFO:tasks.workunit.client.1.vm09.stdout:0/919: truncate da/dc/d1c/d46/d63/de6/f11b 175335 0
2026-03-09T15:01:02.705 INFO:tasks.workunit.client.1.vm09.stdout:4/898: creat db/d12/d16/d5b/d78/d7f/f11b x:0 0 0
2026-03-09T15:01:02.705 INFO:tasks.workunit.client.1.vm09.stdout:7/846: symlink d3/db/d25/db7/l100 0
2026-03-09T15:01:02.708 INFO:tasks.workunit.client.1.vm09.stdout:3/873: truncate d3/d3a/d2b/d31/f33 274373 0
2026-03-09T15:01:02.727 INFO:tasks.workunit.client.1.vm09.stdout:2/855: dwrite df/d20/f24 [0,4194304] 0
2026-03-09T15:01:02.729 INFO:tasks.workunit.client.1.vm09.stdout:2/856: stat df/d1f/d47/f73 0
2026-03-09T15:01:02.729 INFO:tasks.workunit.client.1.vm09.stdout:9/782: rename d1/d7/d1e/d2b/d2e/d56/d6d/ld0 to d1/d7/d9f/daa/lf8 0
2026-03-09T15:01:02.730 INFO:tasks.workunit.client.1.vm09.stdout:9/783: readlink d1/l48 0
2026-03-09T15:01:02.731 INFO:tasks.workunit.client.1.vm09.stdout:5/928: dwrite d2/d37/d3c/dbf/f109 [4194304,4194304] 0
2026-03-09T15:01:02.732 INFO:tasks.workunit.client.1.vm09.stdout:5/929: write d2/f3d [5050125,61888] 0
2026-03-09T15:01:02.733 INFO:tasks.workunit.client.1.vm09.stdout:0/920: readlink da/dc/d1c/d46/d63/d86/dcd/d7b/l11f 0
2026-03-09T15:01:02.733 INFO:tasks.workunit.client.1.vm09.stdout:0/921: chown da/dc/dcb 282 1
2026-03-09T15:01:02.744 INFO:tasks.workunit.client.1.vm09.stdout:7/847: dread d3/db/d46/f66 [0,4194304] 0
2026-03-09T15:01:02.745 INFO:tasks.workunit.client.1.vm09.stdout:7/848: readlink d3/l43 0
2026-03-09T15:01:02.745 INFO:tasks.workunit.client.1.vm09.stdout:5/930: readlink d2/d37/d3c/d36/d4c/d51/lc8 0
2026-03-09T15:01:02.750 INFO:tasks.workunit.client.1.vm09.stdout:3/874: mkdir d3/d3a/d2b/d7b/db6/d12c 0
2026-03-09T15:01:02.750 INFO:tasks.workunit.client.1.vm09.stdout:3/875: stat d3/d9a/d80/cd7 0
2026-03-09T15:01:02.750 INFO:tasks.workunit.client.1.vm09.stdout:8/857: getdents df/d24/d99/db1/dec 0
2026-03-09T15:01:02.752 INFO:tasks.workunit.client.1.vm09.stdout:6/823: link d6/d20/d38/d56/d65/d68/d86/dc0/ddb/c62
d6/d20/d38/d56/d65/d68/d86/c10e 0 2026-03-09T15:01:02.752 INFO:tasks.workunit.client.1.vm09.stdout:2/857: mkdir df/d1f/d47/d5d/d10f 0 2026-03-09T15:01:02.760 INFO:tasks.workunit.client.1.vm09.stdout:4/899: rename db/d19/d52/d76/c47 to db/d19/d23/d44/d7c/d7d/d97/df0/df1/c11c 0 2026-03-09T15:01:02.764 INFO:tasks.workunit.client.1.vm09.stdout:5/931: mknod d2/d37/d53/d86/d88/c14b 0 2026-03-09T15:01:02.764 INFO:tasks.workunit.client.1.vm09.stdout:5/932: write d2/d37/d67/d95/db8/fe2 [2093845,108731] 0 2026-03-09T15:01:02.765 INFO:tasks.workunit.client.1.vm09.stdout:5/933: fdatasync d2/d37/d3c/d36/d4c/d51/f62 0 2026-03-09T15:01:02.771 INFO:tasks.workunit.client.1.vm09.stdout:6/824: symlink d6/d20/d38/d56/d65/l10f 0 2026-03-09T15:01:02.780 INFO:tasks.workunit.client.1.vm09.stdout:3/876: symlink d3/d5b/d79/d9d/d124/l12d 0 2026-03-09T15:01:02.782 INFO:tasks.workunit.client.1.vm09.stdout:1/750: truncate d8/d10/f2f 1569159 0 2026-03-09T15:01:02.782 INFO:tasks.workunit.client.1.vm09.stdout:1/751: stat d8/f59 0 2026-03-09T15:01:02.784 INFO:tasks.workunit.client.1.vm09.stdout:0/922: write da/dc/f90 [12352,86293] 0 2026-03-09T15:01:02.785 INFO:tasks.workunit.client.1.vm09.stdout:0/923: stat da/dc/d22/f73 0 2026-03-09T15:01:02.785 INFO:tasks.workunit.client.1.vm09.stdout:0/924: dread - da/dc/d92/d9e/f115 zero size 2026-03-09T15:01:02.789 INFO:tasks.workunit.client.1.vm09.stdout:0/925: dread da/f120 [0,4194304] 0 2026-03-09T15:01:02.789 INFO:tasks.workunit.client.1.vm09.stdout:6/825: mkdir d6/db/d10/d4f/d110 0 2026-03-09T15:01:02.789 INFO:tasks.workunit.client.1.vm09.stdout:6/826: stat d6/d20/d38/d56/fb1 0 2026-03-09T15:01:02.794 INFO:tasks.workunit.client.1.vm09.stdout:9/784: getdents d1/d6e 0 2026-03-09T15:01:02.798 INFO:tasks.workunit.client.1.vm09.stdout:8/858: write df/d24/d99/db6/fa6 [403997,60821] 0 2026-03-09T15:01:02.801 INFO:tasks.workunit.client.1.vm09.stdout:5/934: fsync d2/d37/d3c/dbf/f109 0 2026-03-09T15:01:02.802 INFO:tasks.workunit.client.1.vm09.stdout:3/877: mknod 
d3/d3a/d2b/d31/d4a/d62/c12e 0 2026-03-09T15:01:02.802 INFO:tasks.workunit.client.1.vm09.stdout:5/935: write d2/d37/d3c/f4e [4378887,96068] 0 2026-03-09T15:01:02.806 INFO:tasks.workunit.client.1.vm09.stdout:1/752: symlink d8/d10/d24/d48/d9b/le8 0 2026-03-09T15:01:02.807 INFO:tasks.workunit.client.1.vm09.stdout:3/878: dwrite d3/d100/d6a/f125 [0,4194304] 0 2026-03-09T15:01:02.811 INFO:tasks.workunit.client.1.vm09.stdout:6/827: symlink d6/df/d23/de3/l111 0 2026-03-09T15:01:02.814 INFO:tasks.workunit.client.1.vm09.stdout:2/858: creat df/d58/f110 x:0 0 0 2026-03-09T15:01:02.814 INFO:tasks.workunit.client.1.vm09.stdout:0/926: creat da/dc/d22/f12e x:0 0 0 2026-03-09T15:01:02.822 INFO:tasks.workunit.client.1.vm09.stdout:1/753: dwrite d8/f17 [4194304,4194304] 0 2026-03-09T15:01:02.839 INFO:tasks.workunit.client.1.vm09.stdout:4/900: truncate db/d19/d23/d71/d5f/f87 2005133 0 2026-03-09T15:01:02.847 INFO:tasks.workunit.client.1.vm09.stdout:4/901: fdatasync db/d12/d16/f4f 0 2026-03-09T15:01:02.849 INFO:tasks.workunit.client.1.vm09.stdout:3/879: dread d3/d9a/de3/fa3 [0,4194304] 0 2026-03-09T15:01:02.849 INFO:tasks.workunit.client.1.vm09.stdout:0/927: symlink da/dc/d1c/d3c/l12f 0 2026-03-09T15:01:02.851 INFO:tasks.workunit.client.1.vm09.stdout:2/859: creat df/d1f/d6d/d8f/d5f/ddf/f111 x:0 0 0 2026-03-09T15:01:02.852 INFO:tasks.workunit.client.1.vm09.stdout:2/860: write df/d1f/d47/d84/db7/dc3/fe0 [1169030,70975] 0 2026-03-09T15:01:02.856 INFO:tasks.workunit.client.1.vm09.stdout:5/936: dwrite d2/d37/d3c/d36/d4c/d51/fc7 [0,4194304] 0 2026-03-09T15:01:02.861 INFO:tasks.workunit.client.1.vm09.stdout:3/880: fsync d3/d3a/d2b/d31/d4a/ff2 0 2026-03-09T15:01:02.864 INFO:tasks.workunit.client.1.vm09.stdout:1/754: write d8/d10/fbf [569430,89390] 0 2026-03-09T15:01:02.866 INFO:tasks.workunit.client.1.vm09.stdout:9/785: rename d1/d7/d1e/l60 to d1/d7/d1e/d2b/d2e/lf9 0 2026-03-09T15:01:02.867 INFO:tasks.workunit.client.1.vm09.stdout:6/828: dwrite d6/d20/d38/d56/d65/fb2 [0,4194304] 0 
2026-03-09T15:01:02.869 INFO:tasks.workunit.client.1.vm09.stdout:1/755: write d8/d10/d24/d45/f6c [2147316,120165] 0 2026-03-09T15:01:02.869 INFO:tasks.workunit.client.1.vm09.stdout:2/861: mknod df/d2d/c112 0 2026-03-09T15:01:02.876 INFO:tasks.workunit.client.1.vm09.stdout:1/756: write d8/d90/fd0 [584133,4679] 0 2026-03-09T15:01:02.877 INFO:tasks.workunit.client.1.vm09.stdout:0/928: chown da/dc/d1c/d3c/d44/dba/cd1 3029667 1 2026-03-09T15:01:02.888 INFO:tasks.workunit.client.1.vm09.stdout:1/757: rmdir d8/d50/d39/d95/d72/d64 39 2026-03-09T15:01:02.890 INFO:tasks.workunit.client.1.vm09.stdout:8/859: rename df/d24/d99/db6/f59 to df/d38/ffd 0 2026-03-09T15:01:02.890 INFO:tasks.workunit.client.1.vm09.stdout:4/902: getdents db/d19/d23/d71 0 2026-03-09T15:01:02.893 INFO:tasks.workunit.client.1.vm09.stdout:2/862: creat df/d1f/d47/d5d/f113 x:0 0 0 2026-03-09T15:01:02.894 INFO:tasks.workunit.client.1.vm09.stdout:8/860: rename df/d24/d99/db1 to df/d2d/d46/d33/ddc/dfe 0 2026-03-09T15:01:02.895 INFO:tasks.workunit.client.1.vm09.stdout:2/863: stat df/le3 0 2026-03-09T15:01:02.896 INFO:tasks.workunit.client.1.vm09.stdout:9/786: dread d1/d4f/d8f/d91/fa5 [0,4194304] 0 2026-03-09T15:01:02.897 INFO:tasks.workunit.client.1.vm09.stdout:9/787: write d1/d58/f99 [1697317,63737] 0 2026-03-09T15:01:02.899 INFO:tasks.workunit.client.1.vm09.stdout:7/849: dread d3/f8 [0,4194304] 0 2026-03-09T15:01:02.905 INFO:tasks.workunit.client.1.vm09.stdout:8/861: rename df/d2d/d42/d79 to df/d2d/dff 0 2026-03-09T15:01:02.912 INFO:tasks.workunit.client.1.vm09.stdout:3/881: dwrite d3/d3a/d2b/d31/d4a/d62/f78 [0,4194304] 0 2026-03-09T15:01:02.915 INFO:tasks.workunit.client.1.vm09.stdout:5/937: truncate d2/d37/d3c/d36/d4c/d51/d96/f16 983852 0 2026-03-09T15:01:02.916 INFO:tasks.workunit.client.1.vm09.stdout:5/938: fsync d2/d37/d3c/d36/f97 0 2026-03-09T15:01:02.917 INFO:tasks.workunit.client.1.vm09.stdout:5/939: truncate d2/d37/d3c/d36/d45/dae/dd3/f126 183571 0 2026-03-09T15:01:02.923 
INFO:tasks.workunit.client.1.vm09.stdout:2/864: link df/l18 df/d2d/l114 0 2026-03-09T15:01:02.924 INFO:tasks.workunit.client.1.vm09.stdout:3/882: mkdir d3/d3a/d2b/d12f 0 2026-03-09T15:01:02.928 INFO:tasks.workunit.client.1.vm09.stdout:5/940: rename d2/d37/d3c/d36/d4c/d51/f62 to d2/d37/d53/dc4/f14c 0 2026-03-09T15:01:02.928 INFO:tasks.workunit.client.1.vm09.stdout:6/829: dread d6/d20/d38/d56/fb1 [0,4194304] 0 2026-03-09T15:01:02.930 INFO:tasks.workunit.client.1.vm09.stdout:4/903: write db/d12/d16/f83 [1141181,85875] 0 2026-03-09T15:01:02.930 INFO:tasks.workunit.client.1.vm09.stdout:3/883: readlink d3/d100/d48/le8 0 2026-03-09T15:01:02.931 INFO:tasks.workunit.client.1.vm09.stdout:5/941: truncate d2/d37/d3c/d36/d45/d5c/f90 2130850 0 2026-03-09T15:01:02.935 INFO:tasks.workunit.client.1.vm09.stdout:7/850: write d3/d61/f86 [104609,64839] 0 2026-03-09T15:01:02.939 INFO:tasks.workunit.client.1.vm09.stdout:0/929: dwrite da/dc/d84/d124/d11c/fcf [0,4194304] 0 2026-03-09T15:01:02.956 INFO:tasks.workunit.client.1.vm09.stdout:8/862: dwrite df/d2d/d46/d33/f8e [0,4194304] 0 2026-03-09T15:01:02.957 INFO:tasks.workunit.client.1.vm09.stdout:9/788: dwrite d1/d4f/d8f/fcb [0,4194304] 0 2026-03-09T15:01:02.957 INFO:tasks.workunit.client.1.vm09.stdout:2/865: symlink df/d1f/d47/d5d/d90/d107/l115 0 2026-03-09T15:01:02.957 INFO:tasks.workunit.client.1.vm09.stdout:4/904: symlink db/d19/d23/d71/d53/dcf/l11d 0 2026-03-09T15:01:02.958 INFO:tasks.workunit.client.1.vm09.stdout:8/863: stat df/d2d/d42 0 2026-03-09T15:01:02.958 INFO:tasks.workunit.client.1.vm09.stdout:6/830: mknod d6/d20/d38/d56/d65/d68/d6f/db6/c112 0 2026-03-09T15:01:02.961 INFO:tasks.workunit.client.1.vm09.stdout:5/942: creat d2/d37/d67/d95/f14d x:0 0 0 2026-03-09T15:01:02.962 INFO:tasks.workunit.client.1.vm09.stdout:3/884: symlink d3/d100/d48/l130 0 2026-03-09T15:01:02.965 INFO:tasks.workunit.client.1.vm09.stdout:8/864: dread - df/d2d/d42/d70/dc0/dc9/ff1 zero size 2026-03-09T15:01:02.965 
INFO:tasks.workunit.client.1.vm09.stdout:4/905: dread - db/d12/d16/d5b/fc3 zero size 2026-03-09T15:01:02.967 INFO:tasks.workunit.client.1.vm09.stdout:0/930: dwrite da/dc/d22/f12b [0,4194304] 0 2026-03-09T15:01:02.971 INFO:tasks.workunit.client.1.vm09.stdout:2/866: mknod df/d58/d67/c116 0 2026-03-09T15:01:02.971 INFO:tasks.workunit.client.1.vm09.stdout:7/851: rename d3/db/c93 to d3/d1d/d65/c101 0 2026-03-09T15:01:02.974 INFO:tasks.workunit.client.1.vm09.stdout:6/831: dwrite d6/d20/d2a/d3b/fdc [0,4194304] 0 2026-03-09T15:01:02.975 INFO:tasks.workunit.client.1.vm09.stdout:9/789: dwrite d1/d7/d1e/d2b/d40/f57 [0,4194304] 0 2026-03-09T15:01:02.982 INFO:tasks.workunit.client.1.vm09.stdout:3/885: mkdir d3/d5b/d79/d9d/d124/d131 0 2026-03-09T15:01:02.982 INFO:tasks.workunit.client.1.vm09.stdout:6/832: dread d6/d20/d38/d56/ff1 [0,4194304] 0 2026-03-09T15:01:02.992 INFO:tasks.workunit.client.1.vm09.stdout:4/906: mkdir db/d12/d16/d5b/d78/d11e 0 2026-03-09T15:01:02.992 INFO:tasks.workunit.client.1.vm09.stdout:5/943: symlink d2/d37/d3c/d36/d140/l14e 0 2026-03-09T15:01:02.992 INFO:tasks.workunit.client.1.vm09.stdout:2/867: unlink df/d2d/ld4 0 2026-03-09T15:01:02.996 INFO:tasks.workunit.client.1.vm09.stdout:0/931: rename da/l15 to da/dc/d22/d64/df3/l130 0 2026-03-09T15:01:03.000 INFO:tasks.workunit.client.1.vm09.stdout:2/868: chown df/d1f/d47/d84/db7/dc3/da7/lb6 3235 1 2026-03-09T15:01:03.000 INFO:tasks.workunit.client.1.vm09.stdout:6/833: symlink d6/d20/d2a/l113 0 2026-03-09T15:01:03.000 INFO:tasks.workunit.client.1.vm09.stdout:0/932: fdatasync da/dc/d1c/d3c/f4f 0 2026-03-09T15:01:03.008 INFO:tasks.workunit.client.1.vm09.stdout:1/758: dread d8/d10/f67 [0,4194304] 0 2026-03-09T15:01:03.008 INFO:tasks.workunit.client.1.vm09.stdout:7/852: rename d3/d1d/d65/f6f to d3/d61/f102 0 2026-03-09T15:01:03.015 INFO:tasks.workunit.client.1.vm09.stdout:0/933: mknod da/dc/d1c/d46/d63/d86/dcd/d7b/c131 0 2026-03-09T15:01:03.016 INFO:tasks.workunit.client.1.vm09.stdout:5/944: rename 
d2/d37/d53/d86/dad/l12b to d2/d37/d53/l14f 0 2026-03-09T15:01:03.017 INFO:tasks.workunit.client.1.vm09.stdout:0/934: truncate da/dc/d61/fea 1006650 0 2026-03-09T15:01:03.017 INFO:tasks.workunit.client.1.vm09.stdout:5/945: write d2/d37/d53/d86/dad/ff2 [901689,102571] 0 2026-03-09T15:01:03.026 INFO:tasks.workunit.client.1.vm09.stdout:0/935: rename da/dc/d1c/f118 to da/dc/dcb/dd9/f132 0 2026-03-09T15:01:03.028 INFO:tasks.workunit.client.1.vm09.stdout:1/759: dwrite d8/d10/dc9/fd8 [0,4194304] 0 2026-03-09T15:01:03.028 INFO:tasks.workunit.client.1.vm09.stdout:1/760: chown d8/d10/dc9 37650 1 2026-03-09T15:01:03.028 INFO:tasks.workunit.client.1.vm09.stdout:1/761: readlink d8/d10/d24/d48/l74 0 2026-03-09T15:01:03.029 INFO:tasks.workunit.client.1.vm09.stdout:5/946: chown d2/d37/d53/d86/lf0 7 1 2026-03-09T15:01:03.030 INFO:tasks.workunit.client.1.vm09.stdout:2/869: link df/d1f/d47/f60 df/d93/dd3/f117 0 2026-03-09T15:01:03.031 INFO:tasks.workunit.client.1.vm09.stdout:0/936: creat da/dc/d8c/f133 x:0 0 0 2026-03-09T15:01:03.032 INFO:tasks.workunit.client.1.vm09.stdout:0/937: stat da/dc/d8c 0 2026-03-09T15:01:03.033 INFO:tasks.workunit.client.1.vm09.stdout:2/870: chown df/d58/d74/fcd 7124289 1 2026-03-09T15:01:03.033 INFO:tasks.workunit.client.1.vm09.stdout:0/938: dread - da/d30/f111 zero size 2026-03-09T15:01:03.034 INFO:tasks.workunit.client.1.vm09.stdout:1/762: creat d8/d10/d24/d45/ddc/fe9 x:0 0 0 2026-03-09T15:01:03.034 INFO:tasks.workunit.client.1.vm09.stdout:1/763: chown d8/d10/d24/l5e 373715180 1 2026-03-09T15:01:03.035 INFO:tasks.workunit.client.1.vm09.stdout:0/939: write da/dc/d84/f12d [930101,39525] 0 2026-03-09T15:01:03.038 INFO:tasks.workunit.client.1.vm09.stdout:1/764: chown d8/d10/d24/d45/d5f/c6d 182328483 1 2026-03-09T15:01:03.044 INFO:tasks.workunit.client.1.vm09.stdout:2/871: fsync df/d1f/d6d/d8f/fdc 0 2026-03-09T15:01:03.046 INFO:tasks.workunit.client.1.vm09.stdout:2/872: write df/d1f/d47/d84/db7/dc3/fcc [7042838,65682] 0 2026-03-09T15:01:03.047 
INFO:tasks.workunit.client.1.vm09.stdout:9/790: dread d1/d4f/f89 [0,4194304] 0 2026-03-09T15:01:03.048 INFO:tasks.workunit.client.1.vm09.stdout:1/765: symlink d8/d50/d39/d95/d56/lea 0 2026-03-09T15:01:03.051 INFO:tasks.workunit.client.1.vm09.stdout:8/865: write df/d38/d64/fbf [739805,101268] 0 2026-03-09T15:01:03.061 INFO:tasks.workunit.client.1.vm09.stdout:9/791: readlink d1/d7/d1e/d2b/d2e/d56/d5e/lbc 0 2026-03-09T15:01:03.062 INFO:tasks.workunit.client.1.vm09.stdout:3/886: dwrite d3/ff [0,4194304] 0 2026-03-09T15:01:03.062 INFO:tasks.workunit.client.1.vm09.stdout:6/834: write d6/d20/d24/f60 [4400103,4376] 0 2026-03-09T15:01:03.062 INFO:tasks.workunit.client.1.vm09.stdout:2/873: creat df/d1f/d6d/d8f/d5f/ddf/f118 x:0 0 0 2026-03-09T15:01:03.062 INFO:tasks.workunit.client.1.vm09.stdout:4/907: dwrite db/f21 [0,4194304] 0 2026-03-09T15:01:03.063 INFO:tasks.workunit.client.1.vm09.stdout:5/947: rmdir d2/d37/d53/d86 39 2026-03-09T15:01:03.064 INFO:tasks.workunit.client.1.vm09.stdout:9/792: truncate d1/d4f/f89 4079037 0 2026-03-09T15:01:03.064 INFO:tasks.workunit.client.1.vm09.stdout:8/866: stat df/deb/led 0 2026-03-09T15:01:03.066 INFO:tasks.workunit.client.1.vm09.stdout:3/887: fdatasync d3/d3a/d2b/d31/d4a/fd2 0 2026-03-09T15:01:03.071 INFO:tasks.workunit.client.1.vm09.stdout:6/835: fsync d6/d20/d2a/f61 0 2026-03-09T15:01:03.072 INFO:tasks.workunit.client.1.vm09.stdout:5/948: read d2/d37/d3c/d36/d45/d5c/f90 [466292,105304] 0 2026-03-09T15:01:03.075 INFO:tasks.workunit.client.1.vm09.stdout:5/949: truncate d2/d37/d3c/d36/fcc 1123529 0 2026-03-09T15:01:03.076 INFO:tasks.workunit.client.1.vm09.stdout:2/874: rmdir df/d2d 39 2026-03-09T15:01:03.080 INFO:tasks.workunit.client.1.vm09.stdout:8/867: symlink df/d2d/d4f/l100 0 2026-03-09T15:01:03.083 INFO:tasks.workunit.client.1.vm09.stdout:6/836: dread d6/d20/d38/d56/d65/d68/d86/dc0/ddb/fa9 [0,4194304] 0 2026-03-09T15:01:03.084 INFO:tasks.workunit.client.1.vm09.stdout:3/888: mknod d3/d100/d48/dc5/c132 0 2026-03-09T15:01:03.088 
INFO:tasks.workunit.client.1.vm09.stdout:8/868: dwrite fe [0,4194304] 0 2026-03-09T15:01:03.091 INFO:tasks.workunit.client.1.vm09.stdout:1/766: link d8/d50/d39/d95/fba d8/d10/d73/feb 0 2026-03-09T15:01:03.091 INFO:tasks.workunit.client.1.vm09.stdout:0/940: write da/dc/d22/f9d [441122,36113] 0 2026-03-09T15:01:03.092 INFO:tasks.workunit.client.1.vm09.stdout:9/793: unlink d1/d7/d1e/d2b/d2e/d56/c86 0 2026-03-09T15:01:03.092 INFO:tasks.workunit.client.1.vm09.stdout:4/908: mknod db/d19/d23/d71/d53/c11f 0 2026-03-09T15:01:03.093 INFO:tasks.workunit.client.1.vm09.stdout:4/909: write db/d19/d23/d44/d7c/d7d/d97/da3/f100 [886397,86522] 0 2026-03-09T15:01:03.094 INFO:tasks.workunit.client.1.vm09.stdout:4/910: fdatasync db/d19/d23/d71/fb3 0 2026-03-09T15:01:03.095 INFO:tasks.workunit.client.1.vm09.stdout:4/911: readlink db/d19/d23/d44/d7c/d7d/d97/da3/ld8 0 2026-03-09T15:01:03.100 INFO:tasks.workunit.client.1.vm09.stdout:1/767: readlink d8/d50/d5b/l9d 0 2026-03-09T15:01:03.106 INFO:tasks.workunit.client.1.vm09.stdout:6/837: mknod d6/d20/d24/d7e/c114 0 2026-03-09T15:01:03.107 INFO:tasks.workunit.client.1.vm09.stdout:3/889: rename d3/d3a/d2b/d7b/dd3/led to d3/d3a/d2b/d7b/db0/l133 0 2026-03-09T15:01:03.107 INFO:tasks.workunit.client.1.vm09.stdout:5/950: mkdir d2/d37/d3c/d36/d4c/d51/d150 0 2026-03-09T15:01:03.107 INFO:tasks.workunit.client.1.vm09.stdout:6/838: write d6/df/d23/d89/f108 [10513,50861] 0 2026-03-09T15:01:03.110 INFO:tasks.workunit.client.1.vm09.stdout:5/951: stat d2/d37/d3c/d36/d4c/d51/fc7 0 2026-03-09T15:01:03.120 INFO:tasks.workunit.client.1.vm09.stdout:2/875: readlink df/d2d/l114 0 2026-03-09T15:01:03.120 INFO:tasks.workunit.client.1.vm09.stdout:2/876: chown df/d58/d67/fe6 715279 1 2026-03-09T15:01:03.120 INFO:tasks.workunit.client.1.vm09.stdout:0/941: readlink da/dc/d1c/l9a 0 2026-03-09T15:01:03.120 INFO:tasks.workunit.client.1.vm09.stdout:4/912: creat db/d12/d9e/dd0/f120 x:0 0 0 2026-03-09T15:01:03.120 INFO:tasks.workunit.client.1.vm09.stdout:9/794: creat 
d1/d4f/d52/ffa x:0 0 0 2026-03-09T15:01:03.120 INFO:tasks.workunit.client.1.vm09.stdout:4/913: chown db/d19/d23/d71/d5f/f66 247 1 2026-03-09T15:01:03.120 INFO:tasks.workunit.client.1.vm09.stdout:9/795: write d1/d4f/fe0 [774476,68199] 0 2026-03-09T15:01:03.121 INFO:tasks.workunit.client.1.vm09.stdout:1/768: rmdir d8/d10/d24/d48/d9b/d78 39 2026-03-09T15:01:03.121 INFO:tasks.workunit.client.1.vm09.stdout:9/796: write d1/d4f/d52/f94 [1524689,78200] 0 2026-03-09T15:01:03.126 INFO:tasks.workunit.client.1.vm09.stdout:6/839: mkdir d6/d20/d2a/dde/d115 0 2026-03-09T15:01:03.128 INFO:tasks.workunit.client.1.vm09.stdout:5/952: rmdir d2/db1/db2/d11b 39 2026-03-09T15:01:03.128 INFO:tasks.workunit.client.1.vm09.stdout:2/877: symlink df/d2d/l119 0 2026-03-09T15:01:03.128 INFO:tasks.workunit.client.1.vm09.stdout:0/942: mkdir da/dc/d84/d124/d11c/d134 0 2026-03-09T15:01:03.129 INFO:tasks.workunit.client.1.vm09.stdout:0/943: read da/dc/d84/d124/fe1 [139233,27627] 0 2026-03-09T15:01:03.130 INFO:tasks.workunit.client.1.vm09.stdout:4/914: read db/d19/d23/d71/d5f/f87 [630423,25767] 0 2026-03-09T15:01:03.131 INFO:tasks.workunit.client.1.vm09.stdout:7/853: truncate d3/db/d46/f5b 3552792 0 2026-03-09T15:01:03.135 INFO:tasks.workunit.client.1.vm09.stdout:5/953: dwrite d2/d37/d53/dc4/f108 [0,4194304] 0 2026-03-09T15:01:03.150 INFO:tasks.workunit.client.1.vm09.stdout:2/878: symlink df/d1f/d47/d84/db7/l11a 0 2026-03-09T15:01:03.156 INFO:tasks.workunit.client.1.vm09.stdout:4/915: creat db/d19/d23/d71/d53/dcf/dfb/f121 x:0 0 0 2026-03-09T15:01:03.157 INFO:tasks.workunit.client.1.vm09.stdout:5/954: symlink d2/d37/d67/df6/l151 0 2026-03-09T15:01:03.157 INFO:tasks.workunit.client.1.vm09.stdout:9/797: link d1/d4f/d8f/d91/fa5 d1/d4f/d8f/dc0/ffb 0 2026-03-09T15:01:03.158 INFO:tasks.workunit.client.1.vm09.stdout:9/798: chown d1/l5 0 1 2026-03-09T15:01:03.161 INFO:tasks.workunit.client.1.vm09.stdout:7/854: creat d3/db/d15/f103 x:0 0 0 2026-03-09T15:01:03.163 INFO:tasks.workunit.client.1.vm09.stdout:9/799: 
symlink d1/d7/d1e/d2b/d2e/d56/d5e/lfc 0 2026-03-09T15:01:03.164 INFO:tasks.workunit.client.1.vm09.stdout:1/769: rename d8/d10/d24/d45/l58 to d8/d10/d24/d48/lec 0 2026-03-09T15:01:03.165 INFO:tasks.workunit.client.1.vm09.stdout:9/800: dread - d1/d7/d1e/d2b/d2e/d56/d6d/ff0 zero size 2026-03-09T15:01:03.166 INFO:tasks.workunit.client.1.vm09.stdout:1/770: write d8/d10/f5c [929331,88378] 0 2026-03-09T15:01:03.167 INFO:tasks.workunit.client.1.vm09.stdout:7/855: dwrite d3/db/ff8 [0,4194304] 0 2026-03-09T15:01:03.168 INFO:tasks.workunit.client.1.vm09.stdout:1/771: dread - d8/d10/d24/d45/ddc/fe9 zero size 2026-03-09T15:01:03.169 INFO:tasks.workunit.client.1.vm09.stdout:6/840: rename d6/d20/d24/d7e/d88 to d6/db/d8b/de8/d116 0 2026-03-09T15:01:03.170 INFO:tasks.workunit.client.1.vm09.stdout:5/955: creat d2/db1/db2/f152 x:0 0 0 2026-03-09T15:01:03.172 INFO:tasks.workunit.client.1.vm09.stdout:9/801: creat d1/d4f/d52/ffd x:0 0 0 2026-03-09T15:01:03.177 INFO:tasks.workunit.client.1.vm09.stdout:4/916: sync 2026-03-09T15:01:03.179 INFO:tasks.workunit.client.1.vm09.stdout:3/890: sync 2026-03-09T15:01:03.181 INFO:tasks.workunit.client.1.vm09.stdout:5/956: creat d2/d37/d3c/d36/f153 x:0 0 0 2026-03-09T15:01:03.182 INFO:tasks.workunit.client.1.vm09.stdout:4/917: truncate db/d19/d23/d71/d53/dcf/dfb/ffd 1171369 0 2026-03-09T15:01:03.197 INFO:tasks.workunit.client.1.vm09.stdout:8/869: fsync df/d38/d64/fbf 0 2026-03-09T15:01:03.200 INFO:tasks.workunit.client.1.vm09.stdout:5/957: symlink d2/d37/d53/d86/d88/l154 0 2026-03-09T15:01:03.202 INFO:tasks.workunit.client.1.vm09.stdout:4/918: unlink db/d12/d16/f60 0 2026-03-09T15:01:03.202 INFO:tasks.workunit.client.1.vm09.stdout:5/958: chown d2/d37/d3c/d36/d45/dae/dc3/d115/f145 0 1 2026-03-09T15:01:03.209 INFO:tasks.workunit.client.1.vm09.stdout:8/870: mkdir df/d2d/dff/d9a/d101 0 2026-03-09T15:01:03.210 INFO:tasks.workunit.client.1.vm09.stdout:2/879: truncate df/d1f/f10e 1138533 0 2026-03-09T15:01:03.211 
INFO:tasks.workunit.client.1.vm09.stdout:8/871: write df/d5c/fba [5687417,42213] 0 2026-03-09T15:01:03.215 INFO:tasks.workunit.client.1.vm09.stdout:4/919: unlink db/d19/d23/d71/d53/dcf/dfb/f121 0 2026-03-09T15:01:03.220 INFO:tasks.workunit.client.1.vm09.stdout:0/944: dwrite f7 [8388608,4194304] 0 2026-03-09T15:01:03.232 INFO:tasks.workunit.client.1.vm09.stdout:8/872: symlink df/d2d/d42/l102 0 2026-03-09T15:01:03.232 INFO:tasks.workunit.client.1.vm09.stdout:1/772: write d8/d10/f29 [704671,82352] 0 2026-03-09T15:01:03.232 INFO:tasks.workunit.client.1.vm09.stdout:7/856: write d3/f32 [3476458,106808] 0 2026-03-09T15:01:03.233 INFO:tasks.workunit.client.1.vm09.stdout:2/880: rename df/d1f/d6d/d8f/c70 to df/d93/da3/c11b 0 2026-03-09T15:01:03.233 INFO:tasks.workunit.client.1.vm09.stdout:5/959: dread d2/d37/d3c/d36/d4c/d51/d96/f16 [0,4194304] 0 2026-03-09T15:01:03.236 INFO:tasks.workunit.client.1.vm09.stdout:6/841: truncate d6/d20/d2a/f98 1529660 0 2026-03-09T15:01:03.237 INFO:tasks.workunit.client.1.vm09.stdout:4/920: truncate db/d19/d23/d71/fe6 783943 0 2026-03-09T15:01:03.237 INFO:tasks.workunit.client.1.vm09.stdout:9/802: write d1/d7/d1e/fd2 [723476,90058] 0 2026-03-09T15:01:03.240 INFO:tasks.workunit.client.1.vm09.stdout:4/921: readlink db/d12/d16/d5b/d78/d7f/l8f 0 2026-03-09T15:01:03.241 INFO:tasks.workunit.client.1.vm09.stdout:2/881: dwrite df/d1f/d47/d5d/dbc/ff8 [0,4194304] 0 2026-03-09T15:01:03.243 INFO:tasks.workunit.client.1.vm09.stdout:3/891: write d3/d100/f123 [120066,77603] 0 2026-03-09T15:01:03.243 INFO:tasks.workunit.client.1.vm09.stdout:8/873: mkdir df/d2d/d46/d33/d103 0 2026-03-09T15:01:03.243 INFO:tasks.workunit.client.1.vm09.stdout:4/922: dread db/d19/d23/d71/d53/fa0 [0,4194304] 0 2026-03-09T15:01:03.247 INFO:tasks.workunit.client.1.vm09.stdout:1/773: fdatasync d8/d10/d24/d45/d5f/fc5 0 2026-03-09T15:01:03.248 INFO:tasks.workunit.client.1.vm09.stdout:5/960: creat d2/d37/d53/d86/d88/dd7/f155 x:0 0 0 2026-03-09T15:01:03.254 
INFO:tasks.workunit.client.1.vm09.stdout:5/961: read d2/f34 [6044560,58398] 0 2026-03-09T15:01:03.254 INFO:tasks.workunit.client.1.vm09.stdout:4/923: write db/d19/d23/d71/d53/fa0 [1165983,61311] 0 2026-03-09T15:01:03.257 INFO:tasks.workunit.client.1.vm09.stdout:3/892: dread d3/ff [0,4194304] 0 2026-03-09T15:01:03.261 INFO:tasks.workunit.client.1.vm09.stdout:0/945: write da/dc/d84/d124/fe1 [481195,126177] 0 2026-03-09T15:01:03.263 INFO:tasks.workunit.client.1.vm09.stdout:0/946: fsync da/dc/d92/ff9 0 2026-03-09T15:01:03.264 INFO:tasks.workunit.client.1.vm09.stdout:0/947: readlink da/dc/d22/d64/df3/l119 0 2026-03-09T15:01:03.265 INFO:tasks.workunit.client.1.vm09.stdout:9/803: fdatasync d1/d4f/d8f/d91/fa5 0 2026-03-09T15:01:03.273 INFO:tasks.workunit.client.1.vm09.stdout:6/842: dwrite d6/f83 [0,4194304] 0 2026-03-09T15:01:03.274 INFO:tasks.workunit.client.1.vm09.stdout:6/843: stat d6/d20/d38/d56/d65/fb2 0 2026-03-09T15:01:03.275 INFO:tasks.workunit.client.1.vm09.stdout:6/844: chown d6/f17 1032386 1 2026-03-09T15:01:03.277 INFO:tasks.workunit.client.1.vm09.stdout:9/804: sync 2026-03-09T15:01:03.280 INFO:tasks.workunit.client.1.vm09.stdout:8/874: rmdir df/d2d/dff/d9a 39 2026-03-09T15:01:03.281 INFO:tasks.workunit.client.1.vm09.stdout:8/875: dread - df/d5c/ff7 zero size 2026-03-09T15:01:03.282 INFO:tasks.workunit.client.1.vm09.stdout:4/924: mkdir db/d19/d23/d71/d53/dcf/d122 0 2026-03-09T15:01:03.287 INFO:tasks.workunit.client.1.vm09.stdout:0/948: creat da/dc/d10/dd0/f135 x:0 0 0 2026-03-09T15:01:03.291 INFO:tasks.workunit.client.1.vm09.stdout:2/882: dread df/d1f/d47/d84/fe1 [4194304,4194304] 0 2026-03-09T15:01:03.291 INFO:tasks.workunit.client.1.vm09.stdout:0/949: write da/dc/d1c/d46/fd8 [5006164,129117] 0 2026-03-09T15:01:03.292 INFO:tasks.workunit.client.1.vm09.stdout:7/857: mknod d3/db/d25/d5c/c104 0 2026-03-09T15:01:03.300 INFO:tasks.workunit.client.1.vm09.stdout:5/962: mknod d2/d37/d53/d86/d88/dc9/d136/c156 0 2026-03-09T15:01:03.300 
INFO:tasks.workunit.client.1.vm09.stdout:5/963: dread - d2/d37/d3c/d36/d45/f139 zero size 2026-03-09T15:01:03.301 INFO:tasks.workunit.client.1.vm09.stdout:1/774: symlink d8/d10/d73/led 0 2026-03-09T15:01:03.303 INFO:tasks.workunit.client.1.vm09.stdout:8/876: dread df/d2d/f57 [4194304,4194304] 0 2026-03-09T15:01:03.303 INFO:tasks.workunit.client.1.vm09.stdout:8/877: readlink df/l1b 0 2026-03-09T15:01:03.303 INFO:tasks.workunit.client.1.vm09.stdout:4/925: mknod db/d19/d52/d76/d3b/dd3/c123 0 2026-03-09T15:01:03.304 INFO:tasks.workunit.client.1.vm09.stdout:4/926: write db/d19/d23/d71/d5f/f66 [5209317,113895] 0 2026-03-09T15:01:03.307 INFO:tasks.workunit.client.1.vm09.stdout:5/964: dread d2/d37/d3c/d36/d4c/ff4 [0,4194304] 0 2026-03-09T15:01:03.308 INFO:tasks.workunit.client.1.vm09.stdout:0/950: creat da/dc/dcb/dd9/f136 x:0 0 0 2026-03-09T15:01:03.310 INFO:tasks.workunit.client.1.vm09.stdout:0/951: chown da/dc/d1c/d3c/d78 315964 1 2026-03-09T15:01:03.312 INFO:tasks.workunit.client.1.vm09.stdout:8/878: dwrite df/d24/d99/db6/fad [0,4194304] 0 2026-03-09T15:01:03.316 INFO:tasks.workunit.client.1.vm09.stdout:8/879: read - df/d2d/d46/d33/ddc/dfe/dcc/fe7 zero size 2026-03-09T15:01:03.317 INFO:tasks.workunit.client.1.vm09.stdout:7/858: fsync d3/d1d/d94/fa4 0 2026-03-09T15:01:03.317 INFO:tasks.workunit.client.1.vm09.stdout:9/805: symlink d1/d58/da8/lfe 0 2026-03-09T15:01:03.321 INFO:tasks.workunit.client.1.vm09.stdout:1/775: unlink d8/d50/d39/d95/d72/c4a 0 2026-03-09T15:01:03.332 INFO:tasks.workunit.client.1.vm09.stdout:4/927: symlink db/d12/d16/d5b/d78/de3/l124 0 2026-03-09T15:01:03.333 INFO:tasks.workunit.client.1.vm09.stdout:5/965: creat d2/d37/d3c/d36/d4c/d51/d96/f157 x:0 0 0 2026-03-09T15:01:03.339 INFO:tasks.workunit.client.1.vm09.stdout:7/859: mknod d3/db/d25/db7/dd4/c105 0 2026-03-09T15:01:03.340 INFO:tasks.workunit.client.1.vm09.stdout:4/928: dwrite db/d19/d81/f11a [0,4194304] 0 2026-03-09T15:01:03.343 INFO:tasks.workunit.client.1.vm09.stdout:5/966: dread 
d2/d37/d53/dc4/f108 [0,4194304] 0 2026-03-09T15:01:03.349 INFO:tasks.workunit.client.1.vm09.stdout:5/967: stat d2/d37/d3c/d36/d4c/d51/d96/c50 0 2026-03-09T15:01:03.352 INFO:tasks.workunit.client.1.vm09.stdout:1/776: write d8/d50/d39/d95/f61 [5617487,10576] 0 2026-03-09T15:01:03.354 INFO:tasks.workunit.client.1.vm09.stdout:7/860: dread d3/d61/f86 [0,4194304] 0 2026-03-09T15:01:03.365 INFO:tasks.workunit.client.1.vm09.stdout:3/893: rename d3/d3a/d2b/d31/d4a/l5e to d3/d9a/l134 0 2026-03-09T15:01:03.365 INFO:tasks.workunit.client.1.vm09.stdout:2/883: dwrite df/d58/d74/f88 [0,4194304] 0 2026-03-09T15:01:03.365 INFO:tasks.workunit.client.1.vm09.stdout:0/952: mknod da/d30/c137 0 2026-03-09T15:01:03.370 INFO:tasks.workunit.client.1.vm09.stdout:8/880: truncate df/d2d/dff/d9a/fb5 33094 0 2026-03-09T15:01:03.375 INFO:tasks.workunit.client.1.vm09.stdout:4/929: mknod db/d12/d9e/c125 0 2026-03-09T15:01:03.375 INFO:tasks.workunit.client.1.vm09.stdout:4/930: chown db/d19/d52/d76/d3b/fd1 31009337 1 2026-03-09T15:01:03.376 INFO:tasks.workunit.client.1.vm09.stdout:5/968: mknod d2/d37/d53/d86/d88/dc9/c158 0 2026-03-09T15:01:03.381 INFO:tasks.workunit.client.1.vm09.stdout:3/894: mkdir d3/d9a/d80/d135 0 2026-03-09T15:01:03.387 INFO:tasks.workunit.client.1.vm09.stdout:6/845: rename d6/db/d10/fa0 to d6/d20/d2a/dde/d115/f117 0 2026-03-09T15:01:03.388 INFO:tasks.workunit.client.1.vm09.stdout:2/884: dread - df/d58/fc2 zero size 2026-03-09T15:01:03.389 INFO:tasks.workunit.client.1.vm09.stdout:2/885: write df/d1f/d6d/d8f/d5f/ddf/f118 [789469,11761] 0 2026-03-09T15:01:03.399 INFO:tasks.workunit.client.1.vm09.stdout:0/953: creat da/dc/d10/de5/f138 x:0 0 0 2026-03-09T15:01:03.399 INFO:tasks.workunit.client.1.vm09.stdout:7/861: dread d3/db/d25/d5c/f8a [0,4194304] 0 2026-03-09T15:01:03.400 INFO:tasks.workunit.client.1.vm09.stdout:8/881: mkdir df/d2d/d46/d33/dc5/d104 0 2026-03-09T15:01:03.406 INFO:tasks.workunit.client.1.vm09.stdout:4/931: mknod db/d12/d16/d5b/da5/c126 0 2026-03-09T15:01:03.407 
INFO:tasks.workunit.client.1.vm09.stdout:3/895: mkdir d3/d100/d48/da0/d136 0 2026-03-09T15:01:03.408 INFO:tasks.workunit.client.1.vm09.stdout:2/886: symlink df/d1f/d47/d84/db7/l11c 0 2026-03-09T15:01:03.408 INFO:tasks.workunit.client.1.vm09.stdout:3/896: dread - d3/d9a/d80/fe1 zero size 2026-03-09T15:01:03.409 INFO:tasks.workunit.client.1.vm09.stdout:0/954: mkdir da/dc/d92/d139 0 2026-03-09T15:01:03.410 INFO:tasks.workunit.client.1.vm09.stdout:4/932: chown db/d12/d16/d5b/l9b 13133676 1 2026-03-09T15:01:03.411 INFO:tasks.workunit.client.1.vm09.stdout:3/897: write d3/d3a/d2b/d7b/f121 [202687,29129] 0 2026-03-09T15:01:03.415 INFO:tasks.workunit.client.1.vm09.stdout:1/777: dwrite d8/d10/f69 [0,4194304] 0 2026-03-09T15:01:03.426 INFO:tasks.workunit.client.1.vm09.stdout:7/862: unlink d3/db/d25/d5c/d75/ca7 0 2026-03-09T15:01:03.432 INFO:tasks.workunit.client.1.vm09.stdout:2/887: creat df/d58/d67/f11d x:0 0 0 2026-03-09T15:01:03.435 INFO:tasks.workunit.client.1.vm09.stdout:4/933: mkdir db/d19/d23/d44/d7c/d7d/d127 0 2026-03-09T15:01:03.439 INFO:tasks.workunit.client.1.vm09.stdout:9/806: rename d1/l6b to d1/d7/d1e/d2b/d8d/lff 0 2026-03-09T15:01:03.444 INFO:tasks.workunit.client.1.vm09.stdout:1/778: fsync d8/d10/d24/d48/fc2 0 2026-03-09T15:01:03.450 INFO:tasks.workunit.client.1.vm09.stdout:8/882: truncate df/d24/d99/db6/d60/fc6 59712 0 2026-03-09T15:01:03.454 INFO:tasks.workunit.client.1.vm09.stdout:0/955: truncate da/dc/d1c/f6d 52895 0 2026-03-09T15:01:03.459 INFO:tasks.workunit.client.1.vm09.stdout:6/846: getdents d6/d20/d2a/d3b/d91 0 2026-03-09T15:01:03.461 INFO:tasks.workunit.client.1.vm09.stdout:4/934: symlink db/d12/d16/d5b/d78/de3/l128 0 2026-03-09T15:01:03.462 INFO:tasks.workunit.client.1.vm09.stdout:9/807: mkdir d1/d7/d1e/d2b/d8d/dd5/d100 0 2026-03-09T15:01:03.462 INFO:tasks.workunit.client.1.vm09.stdout:0/956: sync 2026-03-09T15:01:03.464 INFO:tasks.workunit.client.1.vm09.stdout:3/898: link d3/d3a/d2b/f65 d3/d9a/de3/f137 0 2026-03-09T15:01:03.464 
INFO:tasks.workunit.client.1.vm09.stdout:3/899: stat d3/d3a/d2b/d31/d9e 0 2026-03-09T15:01:03.467 INFO:tasks.workunit.client.1.vm09.stdout:7/863: fsync d3/db/d25/db7/fd2 0 2026-03-09T15:01:03.472 INFO:tasks.workunit.client.1.vm09.stdout:5/969: rename d2/c39 to d2/d37/d3c/dbf/d125/c159 0 2026-03-09T15:01:03.474 INFO:tasks.workunit.client.1.vm09.stdout:4/935: creat db/d19/f129 x:0 0 0 2026-03-09T15:01:03.475 INFO:tasks.workunit.client.1.vm09.stdout:5/970: stat d2/d37/d3c/d36/d45/dae/dd3 0 2026-03-09T15:01:03.475 INFO:tasks.workunit.client.1.vm09.stdout:4/936: read - db/d19/d23/d71/d5f/ff6 zero size 2026-03-09T15:01:03.489 INFO:tasks.workunit.client.1.vm09.stdout:6/847: write d6/d20/d24/f67 [3220816,70063] 0 2026-03-09T15:01:03.491 INFO:tasks.workunit.client.1.vm09.stdout:8/883: symlink df/d2d/dff/l105 0 2026-03-09T15:01:03.491 INFO:tasks.workunit.client.1.vm09.stdout:0/957: write da/dc/d10/f117 [3628218,125953] 0 2026-03-09T15:01:03.492 INFO:tasks.workunit.client.1.vm09.stdout:7/864: rmdir d3/d3d/d9b/da9/daa 39 2026-03-09T15:01:03.498 INFO:tasks.workunit.client.1.vm09.stdout:4/937: symlink db/d12/d16/d5b/d105/l12a 0 2026-03-09T15:01:03.500 INFO:tasks.workunit.client.1.vm09.stdout:3/900: mknod d3/d3a/d2b/d7b/db6/d12c/c138 0 2026-03-09T15:01:03.500 INFO:tasks.workunit.client.1.vm09.stdout:3/901: chown d3/d3a/c96 106894307 1 2026-03-09T15:01:03.501 INFO:tasks.workunit.client.1.vm09.stdout:1/779: creat d8/d10/d24/d45/fee x:0 0 0 2026-03-09T15:01:03.501 INFO:tasks.workunit.client.1.vm09.stdout:7/865: creat d3/db/d46/db2/f106 x:0 0 0 2026-03-09T15:01:03.503 INFO:tasks.workunit.client.1.vm09.stdout:2/888: getdents df/d93/dd3 0 2026-03-09T15:01:03.503 INFO:tasks.workunit.client.1.vm09.stdout:0/958: dwrite da/dc/dcb/dd9/ffb [0,4194304] 0 2026-03-09T15:01:03.514 INFO:tasks.workunit.client.1.vm09.stdout:1/780: dwrite d8/d90/fc1 [4194304,4194304] 0 2026-03-09T15:01:03.517 INFO:tasks.workunit.client.1.vm09.stdout:9/808: rename d1/d7/d1e/d2b/d2e/d56/led to d1/d4f/d8f/d91/l101 0 
2026-03-09T15:01:03.517 INFO:tasks.workunit.client.1.vm09.stdout:3/902: dwrite d3/d3a/d2b/d7b/dd3/ffe [0,4194304] 0 2026-03-09T15:01:03.534 INFO:tasks.workunit.client.1.vm09.stdout:4/938: symlink db/d12/d16/d5b/d78/d7f/de2/l12b 0 2026-03-09T15:01:03.535 INFO:tasks.workunit.client.1.vm09.stdout:8/884: creat df/d38/d64/daf/f106 x:0 0 0 2026-03-09T15:01:03.543 INFO:tasks.workunit.client.1.vm09.stdout:3/903: dread d3/d9a/d80/fdd [0,4194304] 0 2026-03-09T15:01:03.543 INFO:tasks.workunit.client.1.vm09.stdout:1/781: unlink d8/d50/d5b/le7 0 2026-03-09T15:01:03.543 INFO:tasks.workunit.client.1.vm09.stdout:9/809: rmdir d1/d7/d1e 39 2026-03-09T15:01:03.543 INFO:tasks.workunit.client.1.vm09.stdout:3/904: readlink d3/d3a/d2b/d36/l5a 0 2026-03-09T15:01:03.545 INFO:tasks.workunit.client.1.vm09.stdout:5/971: rename d2/d37/d3c/d36/d4c/d51/d96/c2d to d2/d37/d53/dc4/c15a 0 2026-03-09T15:01:03.545 INFO:tasks.workunit.client.1.vm09.stdout:6/848: rename d6 to d6/db/d10/d7a/d118 22 2026-03-09T15:01:03.546 INFO:tasks.workunit.client.1.vm09.stdout:6/849: chown d6/d20/d2a/dde/d115 1030 1 2026-03-09T15:01:03.546 INFO:tasks.workunit.client.1.vm09.stdout:8/885: symlink df/d24/d99/l107 0 2026-03-09T15:01:03.554 INFO:tasks.workunit.client.1.vm09.stdout:2/889: fdatasync df/d20/d2e/fbb 0 2026-03-09T15:01:03.554 INFO:tasks.workunit.client.1.vm09.stdout:7/866: dwrite d3/db/d25/fbb [0,4194304] 0 2026-03-09T15:01:03.554 INFO:tasks.workunit.client.1.vm09.stdout:1/782: write d8/d10/d24/d48/d9b/d78/fa2 [3451039,18932] 0 2026-03-09T15:01:03.556 INFO:tasks.workunit.client.1.vm09.stdout:3/905: read d3/d3a/d2b/d7b/db0/fc7 [933683,114344] 0 2026-03-09T15:01:03.562 INFO:tasks.workunit.client.1.vm09.stdout:3/906: chown d3/d3a/d2b/d31/d4a/d62/f16 3 1 2026-03-09T15:01:03.564 INFO:tasks.workunit.client.1.vm09.stdout:7/867: dread d3/d3d/fec [0,4194304] 0 2026-03-09T15:01:03.569 INFO:tasks.workunit.client.1.vm09.stdout:4/939: rename db/d19/d52/d76/d3b/lbc to db/d19/d23/d44/d7c/d7d/d97/da3/l12c 0 
2026-03-09T15:01:03.569 INFO:tasks.workunit.client.1.vm09.stdout:4/940: chown db/d19/d23/d71/d5f 40 1 2026-03-09T15:01:03.576 INFO:tasks.workunit.client.1.vm09.stdout:2/890: sync 2026-03-09T15:01:03.579 INFO:tasks.workunit.client.1.vm09.stdout:3/907: mknod d3/d3a/d2b/d31/d4a/c139 0 2026-03-09T15:01:03.582 INFO:tasks.workunit.client.1.vm09.stdout:1/783: creat d8/d10/d73/fef x:0 0 0 2026-03-09T15:01:03.587 INFO:tasks.workunit.client.1.vm09.stdout:2/891: creat df/d20/f11e x:0 0 0 2026-03-09T15:01:03.588 INFO:tasks.workunit.client.1.vm09.stdout:4/941: getdents db/d19/d23/d44/d7c/d7d/db7/de1 0 2026-03-09T15:01:03.590 INFO:tasks.workunit.client.1.vm09.stdout:1/784: dread d8/d10/d24/d45/f92 [0,4194304] 0 2026-03-09T15:01:03.591 INFO:tasks.workunit.client.1.vm09.stdout:7/868: dread d3/db/d15/d5f/f36 [0,4194304] 0 2026-03-09T15:01:03.599 INFO:tasks.workunit.client.1.vm09.stdout:4/942: write db/d19/d23/d71/d53/fa0 [1572472,81657] 0 2026-03-09T15:01:03.602 INFO:tasks.workunit.client.1.vm09.stdout:1/785: fdatasync d8/d10/d24/d45/f92 0 2026-03-09T15:01:03.604 INFO:tasks.workunit.client.1.vm09.stdout:7/869: chown d3/db/cb5 119094434 1 2026-03-09T15:01:03.606 INFO:tasks.workunit.client.1.vm09.stdout:7/870: dread - d3/db/d46/db2/df5/f8f zero size 2026-03-09T15:01:03.607 INFO:tasks.workunit.client.1.vm09.stdout:0/959: dwrite da/dc/d1c/d3c/d44/f51 [0,4194304] 0 2026-03-09T15:01:03.616 INFO:tasks.workunit.client.1.vm09.stdout:1/786: symlink d8/d10/d24/d45/d5f/lf0 0 2026-03-09T15:01:03.622 INFO:tasks.workunit.client.1.vm09.stdout:7/871: rename d3/db/d25 to d3/d1d/d94/d107 0 2026-03-09T15:01:03.627 INFO:tasks.workunit.client.1.vm09.stdout:4/943: read db/d19/d23/d44/d7c/d7d/fb9 [717072,33447] 0 2026-03-09T15:01:03.627 INFO:tasks.workunit.client.1.vm09.stdout:5/972: write d2/d37/d3c/f4b [240093,62765] 0 2026-03-09T15:01:03.627 INFO:tasks.workunit.client.1.vm09.stdout:8/886: write df/d5c/f72 [2455531,57614] 0 2026-03-09T15:01:03.627 INFO:tasks.workunit.client.1.vm09.stdout:6/850: write 
d6/d20/d38/d56/d65/d68/d86/dc0/ddb/fa9 [3201503,115532] 0 2026-03-09T15:01:03.627 INFO:tasks.workunit.client.1.vm09.stdout:6/851: readlink d6/db/d8b/de8/d116/l94 0 2026-03-09T15:01:03.633 INFO:tasks.workunit.client.1.vm09.stdout:1/787: getdents d8 0 2026-03-09T15:01:03.634 INFO:tasks.workunit.client.1.vm09.stdout:1/788: chown d8/d10/d24/d48/d9b 1734902774 1 2026-03-09T15:01:03.635 INFO:tasks.workunit.client.1.vm09.stdout:9/810: dwrite d1/d7/d1e/d2b/d2e/f12 [0,4194304] 0 2026-03-09T15:01:03.636 INFO:tasks.workunit.client.1.vm09.stdout:4/944: sync 2026-03-09T15:01:03.636 INFO:tasks.workunit.client.1.vm09.stdout:3/908: write d3/d3a/d2b/d31/f33 [68593,80577] 0 2026-03-09T15:01:03.642 INFO:tasks.workunit.client.1.vm09.stdout:0/960: rename da/dc/d1c/d3c/d44/f71 to da/dc/d1c/d3c/f13a 0 2026-03-09T15:01:03.643 INFO:tasks.workunit.client.1.vm09.stdout:0/961: write da/dc/d61/f10e [1153258,50898] 0 2026-03-09T15:01:03.651 INFO:tasks.workunit.client.1.vm09.stdout:9/811: chown d1/d7/d1e/d2b/d8d/dd5/fdb 1 1 2026-03-09T15:01:03.655 INFO:tasks.workunit.client.1.vm09.stdout:3/909: dread - d3/d5b/d79/d9d/d116/f127 zero size 2026-03-09T15:01:03.659 INFO:tasks.workunit.client.1.vm09.stdout:6/852: dwrite d6/d20/d24/da5/fc8 [0,4194304] 0 2026-03-09T15:01:03.660 INFO:tasks.workunit.client.1.vm09.stdout:8/887: truncate df/d5c/fd6 2525986 0 2026-03-09T15:01:03.662 INFO:tasks.workunit.client.1.vm09.stdout:0/962: dwrite da/dc/f90 [0,4194304] 0 2026-03-09T15:01:03.664 INFO:tasks.workunit.client.1.vm09.stdout:5/973: dread d2/d37/d67/df6/fb6 [0,4194304] 0 2026-03-09T15:01:03.664 INFO:tasks.workunit.client.1.vm09.stdout:7/872: chown d3/db/d15/l1c 83 1 2026-03-09T15:01:03.669 INFO:tasks.workunit.client.1.vm09.stdout:2/892: dwrite df/d58/d74/fcd [0,4194304] 0 2026-03-09T15:01:03.680 INFO:tasks.workunit.client.1.vm09.stdout:4/945: truncate db/d19/d23/d44/dd2/d108/f102 748506 0 2026-03-09T15:01:03.684 INFO:tasks.workunit.client.1.vm09.stdout:1/789: dread d8/d10/d24/d48/d9b/d78/f7c [0,4194304] 0 
2026-03-09T15:01:03.685 INFO:tasks.workunit.client.1.vm09.stdout:6/853: creat d6/db/d8b/de8/d116/f119 x:0 0 0 2026-03-09T15:01:03.689 INFO:tasks.workunit.client.1.vm09.stdout:0/963: dread da/dc/d1c/f6d [0,4194304] 0 2026-03-09T15:01:03.689 INFO:tasks.workunit.client.1.vm09.stdout:4/946: mkdir db/d19/d23/d44/d7c/d7d/d97/da8/d12d 0 2026-03-09T15:01:03.690 INFO:tasks.workunit.client.1.vm09.stdout:7/873: unlink d3/db/f4d 0 2026-03-09T15:01:03.691 INFO:tasks.workunit.client.1.vm09.stdout:7/874: fdatasync d3/f16 0 2026-03-09T15:01:03.693 INFO:tasks.workunit.client.1.vm09.stdout:1/790: truncate d8/d10/f5c 1433787 0 2026-03-09T15:01:03.693 INFO:tasks.workunit.client.1.vm09.stdout:5/974: read d2/d37/d3c/d36/d45/dae/dc3/f103 [3992657,101800] 0 2026-03-09T15:01:03.694 INFO:tasks.workunit.client.1.vm09.stdout:8/888: dwrite df/d38/d64/fe2 [0,4194304] 0 2026-03-09T15:01:03.698 INFO:tasks.workunit.client.1.vm09.stdout:9/812: getdents d1/d4f/d52 0 2026-03-09T15:01:03.703 INFO:tasks.workunit.client.1.vm09.stdout:3/910: dread d3/d74/f88 [0,4194304] 0 2026-03-09T15:01:03.712 INFO:tasks.workunit.client.1.vm09.stdout:4/947: symlink db/d19/d23/d44/d7c/d7d/db7/l12e 0 2026-03-09T15:01:03.712 INFO:tasks.workunit.client.1.vm09.stdout:4/948: truncate db/d19/d81/f11a 4687010 0 2026-03-09T15:01:03.712 INFO:tasks.workunit.client.1.vm09.stdout:6/854: truncate d6/d20/d44/d8f/f9d 1333712 0 2026-03-09T15:01:03.712 INFO:tasks.workunit.client.1.vm09.stdout:4/949: fdatasync db/d19/d23/d44/d7c/d7d/d97/da3/fba 0 2026-03-09T15:01:03.712 INFO:tasks.workunit.client.1.vm09.stdout:4/950: stat db 0 2026-03-09T15:01:03.712 INFO:tasks.workunit.client.1.vm09.stdout:2/893: link df/d1f/f55 df/d93/da3/dcf/f11f 0 2026-03-09T15:01:03.712 INFO:tasks.workunit.client.1.vm09.stdout:4/951: dread - db/d12/d9e/dd0/f120 zero size 2026-03-09T15:01:03.712 INFO:tasks.workunit.client.1.vm09.stdout:0/964: truncate da/dc/d1c/d46/d63/d86/dcd/d7b/fa8 1358986 0 2026-03-09T15:01:03.712 INFO:tasks.workunit.client.1.vm09.stdout:2/894: 
write df/d1f/d6d/d8f/d5f/ddf/f111 [931909,46983] 0 2026-03-09T15:01:03.713 INFO:tasks.workunit.client.1.vm09.stdout:6/855: rmdir d6/d20/d2a 39 2026-03-09T15:01:03.713 INFO:tasks.workunit.client.1.vm09.stdout:1/791: mknod d8/d10/d24/d48/d9b/cf1 0 2026-03-09T15:01:03.713 INFO:tasks.workunit.client.1.vm09.stdout:0/965: readlink da/dc/d1c/d46/l42 0 2026-03-09T15:01:03.714 INFO:tasks.workunit.client.1.vm09.stdout:5/975: dread d2/d37/f6d [0,4194304] 0 2026-03-09T15:01:03.715 INFO:tasks.workunit.client.1.vm09.stdout:8/889: dwrite df/d5b/f82 [0,4194304] 0 2026-03-09T15:01:03.715 INFO:tasks.workunit.client.1.vm09.stdout:9/813: creat d1/d7/d1e/d2b/d2e/d56/f102 x:0 0 0 2026-03-09T15:01:03.717 INFO:tasks.workunit.client.1.vm09.stdout:6/856: fdatasync d6/df/ffd 0 2026-03-09T15:01:03.728 INFO:tasks.workunit.client.1.vm09.stdout:7/875: link d3/db/d46/l67 d3/d1d/d94/d107/d5c/d75/db4/l108 0 2026-03-09T15:01:03.728 INFO:tasks.workunit.client.1.vm09.stdout:4/952: truncate db/d19/d23/d44/d7c/d7d/d97/da8/f118 1798892 0 2026-03-09T15:01:03.731 INFO:tasks.workunit.client.1.vm09.stdout:2/895: mknod df/d1f/d47/d84/db7/dc3/dd9/c120 0 2026-03-09T15:01:03.737 INFO:tasks.workunit.client.1.vm09.stdout:8/890: rename df/d5b/f35 to df/d24/d95/da4/f108 0 2026-03-09T15:01:03.737 INFO:tasks.workunit.client.1.vm09.stdout:8/891: chown df/d2d/d42/d70/dc0/dc9/ff1 6 1 2026-03-09T15:01:03.742 INFO:tasks.workunit.client.1.vm09.stdout:4/953: rename db/d19/d23/d71/ldd to db/d19/d52/d76/d3b/l12f 0 2026-03-09T15:01:03.747 INFO:tasks.workunit.client.1.vm09.stdout:9/814: creat d1/d7/d1e/d2b/d2e/d56/f103 x:0 0 0 2026-03-09T15:01:03.747 INFO:tasks.workunit.client.1.vm09.stdout:4/954: truncate db/d12/d16/f54 503688 0 2026-03-09T15:01:03.747 INFO:tasks.workunit.client.1.vm09.stdout:8/892: getdents df/d2d/d42/d70/dc0 0 2026-03-09T15:01:03.747 INFO:tasks.workunit.client.1.vm09.stdout:4/955: write db/d19/d23/d44/d84/fd5 [836593,28276] 0 2026-03-09T15:01:03.747 INFO:tasks.workunit.client.1.vm09.stdout:8/893: fsync 
df/d24/d99/db6/fad 0 2026-03-09T15:01:03.751 INFO:tasks.workunit.client.1.vm09.stdout:0/966: dread da/d30/f3d [0,4194304] 0 2026-03-09T15:01:03.752 INFO:tasks.workunit.client.1.vm09.stdout:0/967: write da/dc/d10/f29 [3869940,34258] 0 2026-03-09T15:01:03.756 INFO:tasks.workunit.client.1.vm09.stdout:0/968: rename da/dc/d92/c127 to da/dc/d22/d64/c13b 0 2026-03-09T15:01:03.761 INFO:tasks.workunit.client.1.vm09.stdout:0/969: getdents da/dc/dcb 0 2026-03-09T15:01:03.761 INFO:tasks.workunit.client.1.vm09.stdout:0/970: chown da/cf2 14 1 2026-03-09T15:01:03.763 INFO:tasks.workunit.client.1.vm09.stdout:0/971: creat da/dc/d84/db8/f13c x:0 0 0 2026-03-09T15:01:03.763 INFO:tasks.workunit.client.1.vm09.stdout:4/956: dread db/d12/f27 [0,4194304] 0 2026-03-09T15:01:03.764 INFO:tasks.workunit.client.1.vm09.stdout:0/972: truncate da/dc/d1c/d3c/d78/f116 628376 0 2026-03-09T15:01:03.764 INFO:tasks.workunit.client.1.vm09.stdout:4/957: chown db/d12/d16/f63 14356 1 2026-03-09T15:01:03.768 INFO:tasks.workunit.client.1.vm09.stdout:4/958: write db/f29 [2177065,74464] 0 2026-03-09T15:01:03.777 INFO:tasks.workunit.client.1.vm09.stdout:7/876: getdents d3/db 0 2026-03-09T15:01:03.782 INFO:tasks.workunit.client.1.vm09.stdout:3/911: truncate d3/d3a/d2b/d31/d4a/d62/f1b 5965053 0 2026-03-09T15:01:03.782 INFO:tasks.workunit.client.1.vm09.stdout:3/912: write d3/d3a/d2b/d31/f33 [715504,25013] 0 2026-03-09T15:01:03.790 INFO:tasks.workunit.client.1.vm09.stdout:0/973: mknod da/dc/d1c/d46/d63/d86/c13d 0 2026-03-09T15:01:03.791 INFO:tasks.workunit.client.1.vm09.stdout:0/974: chown da/dc/d1c/d46 137023080 1 2026-03-09T15:01:03.792 INFO:tasks.workunit.client.1.vm09.stdout:5/976: truncate d2/d37/d67/d95/db8/fe2 3181546 0 2026-03-09T15:01:03.792 INFO:tasks.workunit.client.1.vm09.stdout:2/896: write df/d20/d29/fc0 [9208596,38964] 0 2026-03-09T15:01:03.796 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:03 vm09.local ceph-mon[59673]: pgmap v158: 65 pgs: 65 active+clean; 1.7 GiB data, 5.7 GiB used, 114 GiB 
/ 120 GiB avail; 48 MiB/s rd, 108 MiB/s wr, 280 op/s 2026-03-09T15:01:03.799 INFO:tasks.workunit.client.1.vm09.stdout:3/913: unlink d3/d3a/d2b/d31/fd4 0 2026-03-09T15:01:03.799 INFO:tasks.workunit.client.1.vm09.stdout:6/857: dwrite d6/d20/d38/d56/d65/d68/f99 [0,4194304] 0 2026-03-09T15:01:03.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:03 vm05.local ceph-mon[50611]: pgmap v158: 65 pgs: 65 active+clean; 1.7 GiB data, 5.7 GiB used, 114 GiB / 120 GiB avail; 48 MiB/s rd, 108 MiB/s wr, 280 op/s 2026-03-09T15:01:03.806 INFO:tasks.workunit.client.1.vm09.stdout:5/977: chown d2/d37/d3c/dbf/d125/f12e 1 1 2026-03-09T15:01:03.806 INFO:tasks.workunit.client.1.vm09.stdout:7/877: symlink d3/l109 0 2026-03-09T15:01:03.808 INFO:tasks.workunit.client.1.vm09.stdout:6/858: creat d6/d20/d38/d4e/d55/dd2/f11a x:0 0 0 2026-03-09T15:01:03.808 INFO:tasks.workunit.client.1.vm09.stdout:1/792: write d8/d10/d24/d48/f7f [1982248,90913] 0 2026-03-09T15:01:03.811 INFO:tasks.workunit.client.1.vm09.stdout:5/978: chown d2/l70 12982 1 2026-03-09T15:01:03.813 INFO:tasks.workunit.client.1.vm09.stdout:9/815: write d1/fbf [52453,85376] 0 2026-03-09T15:01:03.813 INFO:tasks.workunit.client.1.vm09.stdout:4/959: link db/d12/f2b db/d19/d23/d44/d7c/d7d/d127/f130 0 2026-03-09T15:01:03.813 INFO:tasks.workunit.client.1.vm09.stdout:3/914: symlink d3/d3a/d2b/d7b/db6/d120/l13a 0 2026-03-09T15:01:03.815 INFO:tasks.workunit.client.1.vm09.stdout:2/897: chown df/f13 4232 1 2026-03-09T15:01:03.820 INFO:tasks.workunit.client.1.vm09.stdout:7/878: creat d3/d1d/d94/d107/d5c/d75/db4/f10a x:0 0 0 2026-03-09T15:01:03.821 INFO:tasks.workunit.client.1.vm09.stdout:7/879: chown d3/d1d/d65/da3/de4 0 1 2026-03-09T15:01:03.823 INFO:tasks.workunit.client.1.vm09.stdout:8/894: dread df/d38/d64/f50 [0,4194304] 0 2026-03-09T15:01:03.823 INFO:tasks.workunit.client.1.vm09.stdout:3/915: unlink d3/d5b/d79/l87 0 2026-03-09T15:01:03.823 INFO:tasks.workunit.client.1.vm09.stdout:5/979: chown d2/d37/d67/d95/l141 20 1 
2026-03-09T15:01:03.826 INFO:tasks.workunit.client.1.vm09.stdout:1/793: symlink d8/d50/d39/d95/d56/dc7/dd9/lf2 0 2026-03-09T15:01:03.827 INFO:tasks.workunit.client.1.vm09.stdout:8/895: rmdir df/d2d/dff 39 2026-03-09T15:01:03.833 INFO:tasks.workunit.client.1.vm09.stdout:5/980: fdatasync d2/d37/d67/df6/fb6 0 2026-03-09T15:01:03.833 INFO:tasks.workunit.client.1.vm09.stdout:6/859: dread d6/d20/f52 [0,4194304] 0 2026-03-09T15:01:03.844 INFO:tasks.workunit.client.1.vm09.stdout:0/975: dwrite da/dc/d1c/d3c/d78/d7a/dbb/ffc [0,4194304] 0 2026-03-09T15:01:03.847 INFO:tasks.workunit.client.1.vm09.stdout:4/960: write db/d12/fb8 [1239305,62186] 0 2026-03-09T15:01:03.849 INFO:tasks.workunit.client.1.vm09.stdout:7/880: mknod d3/d1d/d94/d107/d7d/c10b 0 2026-03-09T15:01:03.850 INFO:tasks.workunit.client.1.vm09.stdout:2/898: write df/d1f/f55 [354383,111377] 0 2026-03-09T15:01:03.851 INFO:tasks.workunit.client.1.vm09.stdout:1/794: read - d8/d10/d24/d48/d9b/d68/fc4 zero size 2026-03-09T15:01:03.856 INFO:tasks.workunit.client.1.vm09.stdout:3/916: truncate d3/d3a/d2b/d7b/db0/fc7 1362228 0 2026-03-09T15:01:03.858 INFO:tasks.workunit.client.1.vm09.stdout:4/961: dwrite db/d19/d23/d71/fb3 [0,4194304] 0 2026-03-09T15:01:03.860 INFO:tasks.workunit.client.1.vm09.stdout:5/981: rename d2/d37/d53/d86/dad to d2/d37/d3c/dbf/d15b 0 2026-03-09T15:01:03.862 INFO:tasks.workunit.client.1.vm09.stdout:1/795: dwrite d8/d10/dc9/fd8 [4194304,4194304] 0 2026-03-09T15:01:03.864 INFO:tasks.workunit.client.1.vm09.stdout:5/982: write d2/d37/d3c/d36/d45/f139 [244518,108945] 0 2026-03-09T15:01:03.865 INFO:tasks.workunit.client.1.vm09.stdout:1/796: truncate d8/f7e 1285319 0 2026-03-09T15:01:03.865 INFO:tasks.workunit.client.1.vm09.stdout:5/983: fdatasync d2/d37/d67/df6/f13d 0 2026-03-09T15:01:03.873 INFO:tasks.workunit.client.1.vm09.stdout:9/816: getdents d1/d7 0 2026-03-09T15:01:03.877 INFO:tasks.workunit.client.1.vm09.stdout:2/899: creat df/d1f/d47/d5d/d90/d107/f121 x:0 0 0 2026-03-09T15:01:03.880 
INFO:tasks.workunit.client.1.vm09.stdout:8/896: creat df/d24/d99/dfc/f109 x:0 0 0 2026-03-09T15:01:03.880 INFO:tasks.workunit.client.1.vm09.stdout:4/962: creat db/d19/d23/d71/f131 x:0 0 0 2026-03-09T15:01:03.881 INFO:tasks.workunit.client.1.vm09.stdout:8/897: chown df/d38/ffd 9 1 2026-03-09T15:01:03.884 INFO:tasks.workunit.client.1.vm09.stdout:1/797: mknod d8/d10/d24/d48/d9b/d78/d8b/cf3 0 2026-03-09T15:01:03.891 INFO:tasks.workunit.client.1.vm09.stdout:9/817: dread - d1/d7/d9f/daa/fae zero size 2026-03-09T15:01:03.891 INFO:tasks.workunit.client.1.vm09.stdout:2/900: rename df/d1f/d47/d5d/c81 to df/d58/d74/d109/c122 0 2026-03-09T15:01:03.892 INFO:tasks.workunit.client.1.vm09.stdout:7/881: creat d3/d28/f10c x:0 0 0 2026-03-09T15:01:03.892 INFO:tasks.workunit.client.1.vm09.stdout:7/882: write d3/d28/f29 [857627,112664] 0 2026-03-09T15:01:03.894 INFO:tasks.workunit.client.1.vm09.stdout:8/898: link df/d38/d64/f50 df/d2d/d46/d33/f10a 0 2026-03-09T15:01:03.894 INFO:tasks.workunit.client.1.vm09.stdout:4/963: dwrite db/d19/d23/d44/d7c/d7d/d97/da3/fcb [0,4194304] 0 2026-03-09T15:01:03.895 INFO:tasks.workunit.client.1.vm09.stdout:2/901: chown df/l1e 6 1 2026-03-09T15:01:03.901 INFO:tasks.workunit.client.1.vm09.stdout:0/976: dread da/dc/d1c/d46/d63/d86/dcd/f93 [0,4194304] 0 2026-03-09T15:01:03.907 INFO:tasks.workunit.client.1.vm09.stdout:1/798: rename d8/d10/d24/d45/f6c to d8/ff4 0 2026-03-09T15:01:03.918 INFO:tasks.workunit.client.1.vm09.stdout:0/977: creat da/d80/f13e x:0 0 0 2026-03-09T15:01:03.918 INFO:tasks.workunit.client.1.vm09.stdout:7/883: dread d3/db/d15/d5f/d6e/fdb [0,4194304] 0 2026-03-09T15:01:03.918 INFO:tasks.workunit.client.1.vm09.stdout:8/899: stat df/d24/d99/db6/lb0 0 2026-03-09T15:01:03.918 INFO:tasks.workunit.client.1.vm09.stdout:4/964: mknod db/d12/d16/d5b/d78/d7f/c132 0 2026-03-09T15:01:03.918 INFO:tasks.workunit.client.1.vm09.stdout:0/978: symlink da/dc/d22/l13f 0 2026-03-09T15:01:03.918 INFO:tasks.workunit.client.1.vm09.stdout:7/884: write d3/f26 
[2922044,93762] 0 2026-03-09T15:01:03.921 INFO:tasks.workunit.client.1.vm09.stdout:9/818: unlink d1/d7/db8/lc2 0 2026-03-09T15:01:03.922 INFO:tasks.workunit.client.1.vm09.stdout:9/819: write d1/fbf [762099,59605] 0 2026-03-09T15:01:03.922 INFO:tasks.workunit.client.1.vm09.stdout:8/900: rmdir df/d2d/d46/d33/dc5 39 2026-03-09T15:01:03.926 INFO:tasks.workunit.client.1.vm09.stdout:4/965: truncate db/d19/d23/d44/d7c/d7d/d97/fbf 1802900 0 2026-03-09T15:01:03.927 INFO:tasks.workunit.client.1.vm09.stdout:4/966: chown db/d19/d23/d44/d7c/d7d/db7/l12e 1238 1 2026-03-09T15:01:03.927 INFO:tasks.workunit.client.1.vm09.stdout:0/979: mknod da/dc/d1c/d3c/d78/d7a/dbb/c140 0 2026-03-09T15:01:03.928 INFO:tasks.workunit.client.1.vm09.stdout:3/917: read d3/d3a/d2b/d7b/db0/fc7 [814075,118038] 0 2026-03-09T15:01:03.930 INFO:tasks.workunit.client.1.vm09.stdout:9/820: rename d1/d4f/d8f to d1/d7/d1e/d2b/d2e/d56/d6d/d104 0 2026-03-09T15:01:03.930 INFO:tasks.workunit.client.1.vm09.stdout:9/821: dread - d1/d7/d9f/daa/fae zero size 2026-03-09T15:01:03.935 INFO:tasks.workunit.client.1.vm09.stdout:6/860: write d6/d20/d38/d56/fbf [199923,990] 0 2026-03-09T15:01:03.935 INFO:tasks.workunit.client.1.vm09.stdout:4/967: creat db/d19/d23/d71/d53/ded/f133 x:0 0 0 2026-03-09T15:01:03.935 INFO:tasks.workunit.client.1.vm09.stdout:9/822: stat d1/d7/l1c 0 2026-03-09T15:01:03.935 INFO:tasks.workunit.client.1.vm09.stdout:0/980: symlink da/dc/d1c/d3c/d44/dba/l141 0 2026-03-09T15:01:03.935 INFO:tasks.workunit.client.1.vm09.stdout:6/861: chown d6/df/ffd 328 1 2026-03-09T15:01:03.935 INFO:tasks.workunit.client.1.vm09.stdout:9/823: mknod d1/d58/c105 0 2026-03-09T15:01:03.935 INFO:tasks.workunit.client.1.vm09.stdout:0/981: creat da/d80/f142 x:0 0 0 2026-03-09T15:01:03.935 INFO:tasks.workunit.client.1.vm09.stdout:4/968: symlink db/l134 0 2026-03-09T15:01:03.936 INFO:tasks.workunit.client.1.vm09.stdout:2/902: sync 2026-03-09T15:01:03.937 INFO:tasks.workunit.client.1.vm09.stdout:6/862: truncate d6/d20/d24/d7e/fb9 718892 
0 2026-03-09T15:01:03.937 INFO:tasks.workunit.client.1.vm09.stdout:3/918: link d3/d5b/d79/d9d/ce0 d3/d100/d48/dc5/d10a/c13b 0 2026-03-09T15:01:03.938 INFO:tasks.workunit.client.1.vm09.stdout:0/982: symlink da/dc/d84/d124/l143 0 2026-03-09T15:01:03.940 INFO:tasks.workunit.client.1.vm09.stdout:4/969: unlink db/d19/d23/d44/dd2/ldc 0 2026-03-09T15:01:03.942 INFO:tasks.workunit.client.1.vm09.stdout:6/863: mkdir d6/d20/d2a/d3b/d91/d11b 0 2026-03-09T15:01:03.942 INFO:tasks.workunit.client.1.vm09.stdout:4/970: dread - db/d19/d23/d44/f101 zero size 2026-03-09T15:01:03.944 INFO:tasks.workunit.client.1.vm09.stdout:6/864: fsync d6/d20/d2a/d3b/d91/f10a 0 2026-03-09T15:01:03.947 INFO:tasks.workunit.client.1.vm09.stdout:9/824: dread d1/d6e/f9b [0,4194304] 0 2026-03-09T15:01:03.949 INFO:tasks.workunit.client.1.vm09.stdout:2/903: getdents df/d1f/d47/d71 0 2026-03-09T15:01:03.949 INFO:tasks.workunit.client.1.vm09.stdout:9/825: write d1/fb6 [1459823,31905] 0 2026-03-09T15:01:03.951 INFO:tasks.workunit.client.1.vm09.stdout:0/983: dwrite da/d30/d36/f114 [0,4194304] 0 2026-03-09T15:01:03.964 INFO:tasks.workunit.client.1.vm09.stdout:5/984: write d2/f34 [8271116,74422] 0 2026-03-09T15:01:03.968 INFO:tasks.workunit.client.1.vm09.stdout:4/971: symlink db/d19/d23/d44/d7c/d7d/d97/da8/d12d/l135 0 2026-03-09T15:01:03.969 INFO:tasks.workunit.client.1.vm09.stdout:4/972: stat db/d19/d52/d76/d3b/dd3/lff 0 2026-03-09T15:01:03.978 INFO:tasks.workunit.client.1.vm09.stdout:2/904: rename df/d1f/d6d/c7d to df/d93/c123 0 2026-03-09T15:01:03.979 INFO:tasks.workunit.client.1.vm09.stdout:8/901: rmdir df/d38/d64 39 2026-03-09T15:01:03.979 INFO:tasks.workunit.client.1.vm09.stdout:0/984: dread da/dc/d22/f53 [0,4194304] 0 2026-03-09T15:01:03.980 INFO:tasks.workunit.client.1.vm09.stdout:9/826: mknod d1/d7/d1e/d2b/d8d/dc8/c106 0 2026-03-09T15:01:03.981 INFO:tasks.workunit.client.1.vm09.stdout:5/985: sync 2026-03-09T15:01:03.986 INFO:tasks.workunit.client.1.vm09.stdout:8/902: dwrite df/d24/faa [0,4194304] 0 
2026-03-09T15:01:04.000 INFO:tasks.workunit.client.1.vm09.stdout:2/905: mkdir df/d1f/d47/d71/d124 0 2026-03-09T15:01:04.004 INFO:tasks.workunit.client.1.vm09.stdout:6/865: getdents d6 0 2026-03-09T15:01:04.005 INFO:tasks.workunit.client.1.vm09.stdout:6/866: truncate d6/d20/f70 4773926 0 2026-03-09T15:01:04.007 INFO:tasks.workunit.client.1.vm09.stdout:5/986: symlink d2/d37/d3c/dbf/l15c 0 2026-03-09T15:01:04.008 INFO:tasks.workunit.client.1.vm09.stdout:7/885: write d3/d61/f90 [177938,11084] 0 2026-03-09T15:01:04.009 INFO:tasks.workunit.client.1.vm09.stdout:0/985: mknod da/dc/d92/c144 0 2026-03-09T15:01:04.009 INFO:tasks.workunit.client.1.vm09.stdout:7/886: chown d3/db/d46/f66 4 1 2026-03-09T15:01:04.010 INFO:tasks.workunit.client.1.vm09.stdout:0/986: chown da/dc/d84/d124/d11c/f83 21979 1 2026-03-09T15:01:04.020 INFO:tasks.workunit.client.1.vm09.stdout:6/867: dread d6/d20/d38/d56/d65/f100 [0,4194304] 0 2026-03-09T15:01:04.023 INFO:tasks.workunit.client.1.vm09.stdout:3/919: write d3/d3a/d54/fbd [389668,43777] 0 2026-03-09T15:01:04.023 INFO:tasks.workunit.client.1.vm09.stdout:0/987: symlink da/dc/d1c/d46/d5b/d9f/l145 0 2026-03-09T15:01:04.023 INFO:tasks.workunit.client.1.vm09.stdout:7/887: sync 2026-03-09T15:01:04.032 INFO:tasks.workunit.client.1.vm09.stdout:9/827: creat d1/d4f/f107 x:0 0 0 2026-03-09T15:01:04.033 INFO:tasks.workunit.client.1.vm09.stdout:9/828: write d1/d7/d1e/d2b/d2e/d56/d6d/d104/dc0/fe9 [635291,130799] 0 2026-03-09T15:01:04.034 INFO:tasks.workunit.client.1.vm09.stdout:1/799: mkdir d8/d10/d24/d48/d9b/df5 0 2026-03-09T15:01:04.037 INFO:tasks.workunit.client.1.vm09.stdout:3/920: mkdir d3/d3a/d2b/d7b/dd3/d13c 0 2026-03-09T15:01:04.038 INFO:tasks.workunit.client.1.vm09.stdout:4/973: rename db/d19/d23/d44/d7c/d7d/d97/da8 to db/d19/d23/d44/d136 0 2026-03-09T15:01:04.039 INFO:tasks.workunit.client.1.vm09.stdout:0/988: rmdir da/dc/d22/d64/df3 39 2026-03-09T15:01:04.040 INFO:tasks.workunit.client.1.vm09.stdout:7/888: dread d3/f97 [0,4194304] 0 
2026-03-09T15:01:04.041 INFO:tasks.workunit.client.1.vm09.stdout:6/868: truncate d6/d20/d24/d7e/fb9 77908 0 2026-03-09T15:01:04.049 INFO:tasks.workunit.client.1.vm09.stdout:9/829: symlink d1/l108 0 2026-03-09T15:01:04.054 INFO:tasks.workunit.client.1.vm09.stdout:1/800: mkdir d8/d50/df6 0 2026-03-09T15:01:04.054 INFO:tasks.workunit.client.1.vm09.stdout:6/869: dread d6/db/d10/f1c [0,4194304] 0 2026-03-09T15:01:04.055 INFO:tasks.workunit.client.1.vm09.stdout:5/987: write d2/d37/d53/d86/d88/dc9/f10c [1247509,91891] 0 2026-03-09T15:01:04.057 INFO:tasks.workunit.client.1.vm09.stdout:2/906: rename df/d1f/d47/d5d/d90/lad to df/d58/d74/l125 0 2026-03-09T15:01:04.057 INFO:tasks.workunit.client.1.vm09.stdout:8/903: write df/d2d/f2f [1822423,103369] 0 2026-03-09T15:01:04.059 INFO:tasks.workunit.client.1.vm09.stdout:9/830: fdatasync d1/d7/d1e/f5d 0 2026-03-09T15:01:04.067 INFO:tasks.workunit.client.1.vm09.stdout:7/889: fdatasync d3/d61/f102 0 2026-03-09T15:01:04.077 INFO:tasks.workunit.client.1.vm09.stdout:6/870: creat d6/db/d10/d4f/f11c x:0 0 0 2026-03-09T15:01:04.081 INFO:tasks.workunit.client.1.vm09.stdout:6/871: dwrite d6/d20/f70 [4194304,4194304] 0 2026-03-09T15:01:04.082 INFO:tasks.workunit.client.1.vm09.stdout:4/974: write db/d19/f8e [4514972,126056] 0 2026-03-09T15:01:04.084 INFO:tasks.workunit.client.1.vm09.stdout:3/921: dwrite d3/d3a/f6b [0,4194304] 0 2026-03-09T15:01:04.086 INFO:tasks.workunit.client.1.vm09.stdout:5/988: stat d2/d37/d3c/c46 0 2026-03-09T15:01:04.103 INFO:tasks.workunit.client.1.vm09.stdout:0/989: creat da/d30/d36/f146 x:0 0 0 2026-03-09T15:01:04.108 INFO:tasks.workunit.client.1.vm09.stdout:6/872: rename d6/d20/d38/d4e/l105 to d6/d20/d44/d8f/l11d 0 2026-03-09T15:01:04.110 INFO:tasks.workunit.client.1.vm09.stdout:3/922: symlink d3/d9a/de3/l13d 0 2026-03-09T15:01:04.110 INFO:tasks.workunit.client.1.vm09.stdout:5/989: symlink d2/d37/d3c/d36/l15d 0 2026-03-09T15:01:04.114 INFO:tasks.workunit.client.1.vm09.stdout:3/923: fdatasync d3/d3a/d2b/d31/f34 0 
2026-03-09T15:01:04.114 INFO:tasks.workunit.client.1.vm09.stdout:8/904: mknod df/d38/d64/c10b 0 2026-03-09T15:01:04.114 INFO:tasks.workunit.client.1.vm09.stdout:8/905: stat df/d2d/d46/d33/d103 0 2026-03-09T15:01:04.114 INFO:tasks.workunit.client.1.vm09.stdout:2/907: dwrite df/d1f/f9c [0,4194304] 0 2026-03-09T15:01:04.114 INFO:tasks.workunit.client.1.vm09.stdout:2/908: readlink df/d1f/d47/d5d/dbc/led 0 2026-03-09T15:01:04.117 INFO:tasks.workunit.client.1.vm09.stdout:0/990: symlink da/dc/d1c/d46/d5b/l147 0 2026-03-09T15:01:04.117 INFO:tasks.workunit.client.1.vm09.stdout:6/873: mkdir d6/d20/d2a/dc4/dba/dd8/d11e 0 2026-03-09T15:01:04.117 INFO:tasks.workunit.client.1.vm09.stdout:7/890: stat d3/d3d/f51 0 2026-03-09T15:01:04.118 INFO:tasks.workunit.client.1.vm09.stdout:8/906: write df/d5b/f85 [5188908,34446] 0 2026-03-09T15:01:04.120 INFO:tasks.workunit.client.1.vm09.stdout:4/975: creat db/d12/d16/d5b/d78/d11e/f137 x:0 0 0 2026-03-09T15:01:04.123 INFO:tasks.workunit.client.1.vm09.stdout:0/991: read da/dc/d10/f29 [2716719,61454] 0 2026-03-09T15:01:04.124 INFO:tasks.workunit.client.1.vm09.stdout:0/992: stat f7 0 2026-03-09T15:01:04.125 INFO:tasks.workunit.client.1.vm09.stdout:0/993: truncate da/dc/d61/f10e 1850053 0 2026-03-09T15:01:04.130 INFO:tasks.workunit.client.1.vm09.stdout:3/924: rename d3/d9a/de3/fa7 to d3/d60/f13e 0 2026-03-09T15:01:04.131 INFO:tasks.workunit.client.1.vm09.stdout:1/801: truncate d8/d10/d73/f37 2326727 0 2026-03-09T15:01:04.132 INFO:tasks.workunit.client.1.vm09.stdout:1/802: dread - d8/d10/d24/fc6 zero size 2026-03-09T15:01:04.132 INFO:tasks.workunit.client.1.vm09.stdout:6/874: read d6/db/f66 [2492553,21324] 0 2026-03-09T15:01:04.133 INFO:tasks.workunit.client.1.vm09.stdout:1/803: chown d8/d10/d24/d48/d9b/d78 6 1 2026-03-09T15:01:04.136 INFO:tasks.workunit.client.1.vm09.stdout:9/831: truncate d1/d7/d1e/fd2 34728 0 2026-03-09T15:01:04.155 INFO:tasks.workunit.client.1.vm09.stdout:2/909: mknod df/d1f/d47/d71/d124/c126 0 2026-03-09T15:01:04.156 
INFO:tasks.workunit.client.1.vm09.stdout:5/990: rename d2/d37/d53/d86/d88/d117/f147 to d2/d37/d3c/d36/d45/dae/dd3/f15e 0 2026-03-09T15:01:04.158 INFO:tasks.workunit.client.1.vm09.stdout:0/994: symlink da/dc/d61/l148 0 2026-03-09T15:01:04.159 INFO:tasks.workunit.client.1.vm09.stdout:3/925: truncate d3/d3a/d2b/d31/d4a/ff2 187628 0 2026-03-09T15:01:04.160 INFO:tasks.workunit.client.1.vm09.stdout:0/995: dread - da/dc/d22/f12e zero size 2026-03-09T15:01:04.162 INFO:tasks.workunit.client.1.vm09.stdout:6/875: mknod d6/d20/d44/c11f 0 2026-03-09T15:01:04.162 INFO:tasks.workunit.client.1.vm09.stdout:0/996: write da/dc/d84/d124/d11c/f83 [1502473,9570] 0 2026-03-09T15:01:04.169 INFO:tasks.workunit.client.1.vm09.stdout:7/891: mkdir d3/db/d15/d5f/d6e/d10d 0 2026-03-09T15:01:04.171 INFO:tasks.workunit.client.1.vm09.stdout:5/991: sync 2026-03-09T15:01:04.173 INFO:tasks.workunit.client.1.vm09.stdout:8/907: mknod df/d5b/d65/dae/c10c 0 2026-03-09T15:01:04.182 INFO:tasks.workunit.client.1.vm09.stdout:2/910: unlink df/d1f/d47/d84/db7/dc3/dd9/fe7 0 2026-03-09T15:01:04.206 INFO:tasks.workunit.client.1.vm09.stdout:8/908: creat df/d2d/d90/f10d x:0 0 0 2026-03-09T15:01:04.211 INFO:tasks.workunit.client.1.vm09.stdout:8/909: dwrite df/d2d/d42/fd2 [0,4194304] 0 2026-03-09T15:01:04.212 INFO:tasks.workunit.client.1.vm09.stdout:8/910: stat df/d38/d64/fb2 0 2026-03-09T15:01:04.222 INFO:tasks.workunit.client.1.vm09.stdout:6/876: mknod d6/d20/d2a/d3b/d91/d11b/c120 0 2026-03-09T15:01:04.222 INFO:tasks.workunit.client.1.vm09.stdout:6/877: chown d6/d20/d38/d4e 12 1 2026-03-09T15:01:04.223 INFO:tasks.workunit.client.1.vm09.stdout:6/878: readlink d6/d20/d24/l51 0 2026-03-09T15:01:04.225 INFO:tasks.workunit.client.1.vm09.stdout:3/926: write d3/d9a/de3/dc4/fec [572391,59822] 0 2026-03-09T15:01:04.227 INFO:tasks.workunit.client.1.vm09.stdout:7/892: write d3/d1d/f30 [3022327,17337] 0 2026-03-09T15:01:04.227 INFO:tasks.workunit.client.1.vm09.stdout:3/927: fdatasync d3/d3a/f1d 0 2026-03-09T15:01:04.227 
INFO:tasks.workunit.client.1.vm09.stdout:4/976: dwrite db/d19/d52/f6a [0,4194304] 0 2026-03-09T15:01:04.239 INFO:tasks.workunit.client.1.vm09.stdout:5/992: dwrite d2/d37/d3c/d36/d4c/d51/d96/f16 [0,4194304] 0 2026-03-09T15:01:04.241 INFO:tasks.workunit.client.1.vm09.stdout:9/832: link d1/d7/d1e/d2b/d2e/d56/d6d/d104/d91/cf5 d1/d4f/d52/c109 0 2026-03-09T15:01:04.242 INFO:tasks.workunit.client.1.vm09.stdout:9/833: stat d1/d7/d1e/d2b/d8d/dd5/d100 0 2026-03-09T15:01:04.246 INFO:tasks.workunit.client.1.vm09.stdout:9/834: dwrite d1/fb6 [0,4194304] 0 2026-03-09T15:01:04.248 INFO:tasks.workunit.client.1.vm09.stdout:1/804: rename d8/d10/d24/d45/ca8 to d8/d10/d24/d48/d9b/cf7 0 2026-03-09T15:01:04.254 INFO:tasks.workunit.client.1.vm09.stdout:1/805: stat d8/d10/d24/d48/d9b/d78/d8b/cb1 0 2026-03-09T15:01:04.259 INFO:tasks.workunit.client.1.vm09.stdout:9/835: dwrite d1/f29 [0,4194304] 0 2026-03-09T15:01:04.262 INFO:tasks.workunit.client.1.vm09.stdout:7/893: truncate d3/f97 73356 0 2026-03-09T15:01:04.265 INFO:tasks.workunit.client.1.vm09.stdout:7/894: write d3/db/ff8 [2503215,126628] 0 2026-03-09T15:01:04.271 INFO:tasks.workunit.client.1.vm09.stdout:9/836: dwrite d1/f1f [0,4194304] 0 2026-03-09T15:01:04.276 INFO:tasks.workunit.client.1.vm09.stdout:4/977: mkdir db/d12/d9e/dd0/d138 0 2026-03-09T15:01:04.288 INFO:tasks.workunit.client.1.vm09.stdout:1/806: dread d8/d50/fa7 [0,4194304] 0 2026-03-09T15:01:04.289 INFO:tasks.workunit.client.1.vm09.stdout:0/997: write da/dc/d1c/d3c/f13a [2403632,94861] 0 2026-03-09T15:01:04.291 INFO:tasks.workunit.client.1.vm09.stdout:0/998: dread da/d30/f3d [0,4194304] 0 2026-03-09T15:01:04.291 INFO:tasks.workunit.client.1.vm09.stdout:8/911: link df/d2d/f2f df/d38/f10e 0 2026-03-09T15:01:04.298 INFO:tasks.workunit.client.1.vm09.stdout:7/895: readlink d3/db/d15/d5f/l6b 0 2026-03-09T15:01:04.299 INFO:tasks.workunit.client.1.vm09.stdout:4/978: symlink db/d12/d9e/l139 0 2026-03-09T15:01:04.302 INFO:tasks.workunit.client.1.vm09.stdout:2/911: rename 
df/d1f/d47/d5d/fd8 to df/d1f/d47/f127 0 2026-03-09T15:01:04.305 INFO:tasks.workunit.client.1.vm09.stdout:9/837: rename d1/d7/d1e/d2b/d2e/d56/d6d/d104/lca to d1/d7/d1e/d2b/d2e/d56/d6d/d104/l10a 0 2026-03-09T15:01:04.312 INFO:tasks.workunit.client.1.vm09.stdout:7/896: fsync d3/db/d46/f5b 0 2026-03-09T15:01:04.315 INFO:tasks.workunit.client.1.vm09.stdout:9/838: rmdir d1/d7/d1e/d2b/d2e 39 2026-03-09T15:01:04.319 INFO:tasks.workunit.client.1.vm09.stdout:1/807: link d8/d50/d39/d95/d72/c8a d8/d10/d73/cf8 0 2026-03-09T15:01:04.320 INFO:tasks.workunit.client.1.vm09.stdout:4/979: rmdir db/d19/d23/d44/dd2/d103 0 2026-03-09T15:01:04.320 INFO:tasks.workunit.client.1.vm09.stdout:4/980: readlink db/d12/d16/d5b/lca 0 2026-03-09T15:01:04.320 INFO:tasks.workunit.client.1.vm09.stdout:9/839: fsync d1/d7/d1e/d2b/d2e/f95 0 2026-03-09T15:01:04.327 INFO:tasks.workunit.client.1.vm09.stdout:9/840: dwrite d1/d7/d1e/d2b/d40/fe2 [0,4194304] 0 2026-03-09T15:01:04.327 INFO:tasks.workunit.client.1.vm09.stdout:1/808: sync 2026-03-09T15:01:04.329 INFO:tasks.workunit.client.1.vm09.stdout:9/841: fdatasync d1/d7/d1e/d2b/d8d/f9d 0 2026-03-09T15:01:04.330 INFO:tasks.workunit.client.1.vm09.stdout:9/842: stat d1/d7/d1e/d2b/d2e/d56/d5e 0 2026-03-09T15:01:04.337 INFO:tasks.workunit.client.1.vm09.stdout:4/981: mknod db/d12/d16/d5b/d78/d11e/c13a 0 2026-03-09T15:01:04.338 INFO:tasks.workunit.client.1.vm09.stdout:4/982: write db/d19/d23/d71/d53/dcf/dfb/ffd [284301,9743] 0 2026-03-09T15:01:04.359 INFO:tasks.workunit.client.1.vm09.stdout:6/879: dwrite d6/df/d23/fae [0,4194304] 0 2026-03-09T15:01:04.360 INFO:tasks.workunit.client.1.vm09.stdout:9/843: symlink d1/d7/d1e/d2b/d2e/d56/d6d/d104/l10b 0 2026-03-09T15:01:04.360 INFO:tasks.workunit.client.1.vm09.stdout:5/993: write d2/d37/d3c/d36/d4c/d89/fcf [5233179,23430] 0 2026-03-09T15:01:04.362 INFO:tasks.workunit.client.1.vm09.stdout:3/928: dwrite d3/d3a/d2b/d36/f46 [0,4194304] 0 2026-03-09T15:01:04.365 INFO:tasks.workunit.client.1.vm09.stdout:5/994: chown 
d2/d37/d3c/d36/d45/dae/dc3/f130 4512 1 2026-03-09T15:01:04.372 INFO:tasks.workunit.client.1.vm09.stdout:6/880: dread - d6/d20/d2a/dc4/dba/fe7 zero size 2026-03-09T15:01:04.373 INFO:tasks.workunit.client.1.vm09.stdout:1/809: creat d8/ff9 x:0 0 0 2026-03-09T15:01:04.379 INFO:tasks.workunit.client.1.vm09.stdout:9/844: symlink d1/d7/l10c 0 2026-03-09T15:01:04.382 INFO:tasks.workunit.client.1.vm09.stdout:3/929: creat d3/d5b/d79/d9d/d116/f13f x:0 0 0 2026-03-09T15:01:04.384 INFO:tasks.workunit.client.1.vm09.stdout:4/983: rename db/d12/d9e to db/d19/d23/d44/d136/d13b 0 2026-03-09T15:01:04.385 INFO:tasks.workunit.client.1.vm09.stdout:6/881: mknod d6/db/d8b/c121 0 2026-03-09T15:01:04.385 INFO:tasks.workunit.client.1.vm09.stdout:1/810: sync 2026-03-09T15:01:04.386 INFO:tasks.workunit.client.1.vm09.stdout:5/995: mknod d2/c15f 0 2026-03-09T15:01:04.386 INFO:tasks.workunit.client.1.vm09.stdout:4/984: readlink db/d19/d81/l3c 0 2026-03-09T15:01:04.386 INFO:tasks.workunit.client.1.vm09.stdout:8/912: write df/d5c/f8b [3928901,89091] 0 2026-03-09T15:01:04.386 INFO:tasks.workunit.client.1.vm09.stdout:6/882: readlink d6/d20/d44/d45/ld0 0 2026-03-09T15:01:04.393 INFO:tasks.workunit.client.1.vm09.stdout:8/913: dwrite df/d2d/d46/d33/ddc/dfe/f87 [4194304,4194304] 0 2026-03-09T15:01:04.400 INFO:tasks.workunit.client.1.vm09.stdout:0/999: dwrite da/dc/d22/f3b [0,4194304] 0 2026-03-09T15:01:04.404 INFO:tasks.workunit.client.1.vm09.stdout:4/985: dwrite db/d19/d23/d71/fb3 [4194304,4194304] 0 2026-03-09T15:01:04.404 INFO:tasks.workunit.client.1.vm09.stdout:2/912: write df/d1f/d6d/f7f [2819845,8616] 0 2026-03-09T15:01:04.414 INFO:tasks.workunit.client.1.vm09.stdout:7/897: dwrite d3/db/d15/d5f/f89 [0,4194304] 0 2026-03-09T15:01:04.417 INFO:tasks.workunit.client.1.vm09.stdout:7/898: readlink d3/d1d/d94/d107/d7d/dc6/le1 0 2026-03-09T15:01:04.419 INFO:tasks.workunit.client.1.vm09.stdout:4/986: dwrite db/d19/d23/d44/d84/fd5 [0,4194304] 0 2026-03-09T15:01:04.422 
INFO:tasks.workunit.client.1.vm09.stdout:7/899: chown d3/d1d/d94/d107/d7d/dd5/fd7 36910 1 2026-03-09T15:01:04.425 INFO:tasks.workunit.client.1.vm09.stdout:4/987: write db/d19/d23/d71/d53/fa0 [172277,25280] 0 2026-03-09T15:01:04.426 INFO:tasks.workunit.client.1.vm09.stdout:2/913: mkdir df/d1f/d6d/d8f/d5f/ddf/d128 0 2026-03-09T15:01:04.435 INFO:tasks.workunit.client.1.vm09.stdout:1/811: mkdir d8/d10/d24/d48/d9b/dfa 0 2026-03-09T15:01:04.441 INFO:tasks.workunit.client.1.vm09.stdout:2/914: write df/f5b [4852179,111317] 0 2026-03-09T15:01:04.443 INFO:tasks.workunit.client.1.vm09.stdout:5/996: link d2/d37/d67/df6/c143 d2/d37/d53/dc4/c160 0 2026-03-09T15:01:04.445 INFO:tasks.workunit.client.1.vm09.stdout:7/900: rename d3/d1d/d94/d107/db7/fd2 to d3/d61/f10e 0 2026-03-09T15:01:04.446 INFO:tasks.workunit.client.1.vm09.stdout:7/901: dread - d3/d1d/d65/da3/ff6 zero size 2026-03-09T15:01:04.446 INFO:tasks.workunit.client.1.vm09.stdout:2/915: symlink df/d1f/d47/d84/db7/dc3/da7/l129 0 2026-03-09T15:01:04.463 INFO:tasks.workunit.client.1.vm09.stdout:7/902: dread d3/d28/fb3 [0,4194304] 0 2026-03-09T15:01:04.505 INFO:tasks.workunit.client.1.vm09.stdout:7/903: symlink d3/d1d/d65/da3/de4/l10f 0 2026-03-09T15:01:04.509 INFO:tasks.workunit.client.1.vm09.stdout:7/904: dwrite d3/db/f42 [0,4194304] 0 2026-03-09T15:01:04.516 INFO:tasks.workunit.client.1.vm09.stdout:3/930: write d3/d74/f88 [790865,116783] 0 2026-03-09T15:01:04.517 INFO:tasks.workunit.client.1.vm09.stdout:9/845: write d1/d7/d1e/d2b/d2e/d56/d6d/d104/d91/fa5 [1393469,76118] 0 2026-03-09T15:01:04.519 INFO:tasks.workunit.client.1.vm09.stdout:3/931: dread - d3/d5b/d79/d9d/d116/f13f zero size 2026-03-09T15:01:04.520 INFO:tasks.workunit.client.1.vm09.stdout:9/846: chown d1/d4f/d52/f94 1913124 1 2026-03-09T15:01:04.523 INFO:tasks.workunit.client.1.vm09.stdout:6/883: dwrite d6/d20/d38/d56/fd3 [0,4194304] 0 2026-03-09T15:01:04.531 INFO:tasks.workunit.client.1.vm09.stdout:8/914: write df/d38/d64/d5f/f6f [2914064,42926] 0 
2026-03-09T15:01:04.532 INFO:tasks.workunit.client.1.vm09.stdout:8/915: fsync df/d2d/d42/f7c 0 2026-03-09T15:01:04.559 INFO:tasks.workunit.client.1.vm09.stdout:4/988: dwrite db/d19/d52/d76/d3b/f49 [0,4194304] 0 2026-03-09T15:01:04.565 INFO:tasks.workunit.client.1.vm09.stdout:7/905: mkdir d3/d1d/d94/d107/d5c/d110 0 2026-03-09T15:01:04.566 INFO:tasks.workunit.client.1.vm09.stdout:1/812: write d8/d10/d24/d48/d9b/d68/fc4 [469854,58018] 0 2026-03-09T15:01:04.577 INFO:tasks.workunit.client.1.vm09.stdout:6/884: creat d6/d20/d44/d8f/f122 x:0 0 0 2026-03-09T15:01:04.580 INFO:tasks.workunit.client.1.vm09.stdout:5/997: dwrite d2/d37/d3c/d36/d45/fa0 [0,4194304] 0 2026-03-09T15:01:04.591 INFO:tasks.workunit.client.1.vm09.stdout:2/916: dwrite df/f33 [0,4194304] 0 2026-03-09T15:01:04.619 INFO:tasks.workunit.client.1.vm09.stdout:4/989: dread db/d19/f38 [0,4194304] 0 2026-03-09T15:01:04.628 INFO:tasks.workunit.client.1.vm09.stdout:4/990: sync 2026-03-09T15:01:04.628 INFO:tasks.workunit.client.1.vm09.stdout:4/991: truncate db/d19/f129 105915 0 2026-03-09T15:01:04.632 INFO:tasks.workunit.client.1.vm09.stdout:4/992: dwrite db/d19/d23/d71/d5f/f66 [0,4194304] 0 2026-03-09T15:01:04.641 INFO:tasks.workunit.client.1.vm09.stdout:2/917: fsync df/f13 0 2026-03-09T15:01:04.644 INFO:tasks.workunit.client.1.vm09.stdout:9/847: link d1/la0 d1/d7/d1e/d2b/d8d/dc8/l10d 0 2026-03-09T15:01:04.647 INFO:tasks.workunit.client.1.vm09.stdout:3/932: link d3/d9a/de3/cf1 d3/d74/c140 0 2026-03-09T15:01:04.655 INFO:tasks.workunit.client.1.vm09.stdout:1/813: mkdir d8/d10/d24/d48/d9b/df5/dfb 0 2026-03-09T15:01:04.671 INFO:tasks.workunit.client.1.vm09.stdout:3/933: creat d3/d3a/d2b/dee/f141 x:0 0 0 2026-03-09T15:01:04.673 INFO:tasks.workunit.client.1.vm09.stdout:3/934: sync 2026-03-09T15:01:04.674 INFO:tasks.workunit.client.1.vm09.stdout:3/935: sync 2026-03-09T15:01:04.680 INFO:tasks.workunit.client.1.vm09.stdout:7/906: symlink d3/d3d/d9b/da9/l111 0 2026-03-09T15:01:04.681 
INFO:tasks.workunit.client.1.vm09.stdout:7/907: write d3/db/d15/d5f/f89 [531280,86624] 0 2026-03-09T15:01:04.686 INFO:tasks.workunit.client.1.vm09.stdout:8/916: rename df/d2d/d42/d70/dc0 to df/d24/d95/d10f 0 2026-03-09T15:01:04.686 INFO:tasks.workunit.client.1.vm09.stdout:8/917: fsync df/d2d/d90/f10d 0 2026-03-09T15:01:04.687 INFO:tasks.workunit.client.1.vm09.stdout:8/918: sync 2026-03-09T15:01:04.695 INFO:tasks.workunit.client.1.vm09.stdout:4/993: symlink db/d12/d16/d5b/da5/l13c 0 2026-03-09T15:01:04.696 INFO:tasks.workunit.client.1.vm09.stdout:4/994: write db/d19/d52/fb5 [240849,126157] 0 2026-03-09T15:01:04.698 INFO:tasks.workunit.client.1.vm09.stdout:4/995: dread - db/d19/d23/d71/d53/ded/f133 zero size 2026-03-09T15:01:04.698 INFO:tasks.workunit.client.1.vm09.stdout:5/998: link d2/d37/d3c/d36/d45/fab d2/f161 0 2026-03-09T15:01:04.699 INFO:tasks.workunit.client.1.vm09.stdout:9/848: creat d1/d4f/f10e x:0 0 0 2026-03-09T15:01:04.700 INFO:tasks.workunit.client.1.vm09.stdout:4/996: write db/d19/d23/d44/d7c/d7d/d97/da3/fcb [759087,13409] 0 2026-03-09T15:01:04.701 INFO:tasks.workunit.client.1.vm09.stdout:9/849: chown d1/d7/d1e/d2b/d40 28870800 1 2026-03-09T15:01:04.705 INFO:tasks.workunit.client.1.vm09.stdout:6/885: rmdir d6/db/d10/d4f 39 2026-03-09T15:01:04.706 INFO:tasks.workunit.client.1.vm09.stdout:6/886: write d6/d20/d2a/d3b/d91/ffb [185561,76805] 0 2026-03-09T15:01:04.707 INFO:tasks.workunit.client.1.vm09.stdout:6/887: chown d6/d20/d2a/f37 2096 1 2026-03-09T15:01:04.707 INFO:tasks.workunit.client.1.vm09.stdout:3/936: truncate d3/f3e 2516076 0 2026-03-09T15:01:04.710 INFO:tasks.workunit.client.1.vm09.stdout:8/919: creat df/d2d/d46/d33/d103/f110 x:0 0 0 2026-03-09T15:01:04.711 INFO:tasks.workunit.client.1.vm09.stdout:6/888: dread d6/d20/d38/d56/d65/f100 [0,4194304] 0 2026-03-09T15:01:04.715 INFO:tasks.workunit.client.1.vm09.stdout:4/997: mkdir db/d19/d52/d13d 0 2026-03-09T15:01:04.717 INFO:tasks.workunit.client.1.vm09.stdout:7/908: dread d3/db/d46/db2/df5/f84 
[0,4194304] 0 2026-03-09T15:01:04.740 INFO:tasks.workunit.client.1.vm09.stdout:6/889: dwrite d6/df/d23/f78 [0,4194304] 0 2026-03-09T15:01:04.741 INFO:tasks.workunit.client.1.vm09.stdout:7/909: creat d3/d1d/d65/f112 x:0 0 0 2026-03-09T15:01:04.749 INFO:tasks.workunit.client.1.vm09.stdout:3/937: link d3/d100/d6a/dd5/f119 d3/d100/d48/dc5/f142 0 2026-03-09T15:01:04.764 INFO:tasks.workunit.client.1.vm09.stdout:3/938: symlink d3/d3a/d2b/d7b/l143 0 2026-03-09T15:01:04.765 INFO:tasks.workunit.client.1.vm09.stdout:9/850: link d1/d7/d1e/d2b/d2e/d56/d6d/d104/d91/l9a d1/d7/d1e/d2b/d2e/l10f 0 2026-03-09T15:01:04.772 INFO:tasks.workunit.client.1.vm09.stdout:9/851: mknod d1/d7/d1e/d2b/d2e/d56/d5e/c110 0 2026-03-09T15:01:04.774 INFO:tasks.workunit.client.1.vm09.stdout:4/998: getdents db/d19/d23/d71/d53/dcf/dfb 0 2026-03-09T15:01:04.785 INFO:tasks.workunit.client.1.vm09.stdout:4/999: mkdir db/d12/d16/d5b/d105/d13e 0 2026-03-09T15:01:04.814 INFO:tasks.workunit.client.1.vm09.stdout:6/890: rmdir d6/d20/d44 39 2026-03-09T15:01:04.819 INFO:tasks.workunit.client.1.vm09.stdout:6/891: creat d6/d20/d38/d56/d65/d68/d86/dc0/f123 x:0 0 0 2026-03-09T15:01:04.821 INFO:tasks.workunit.client.1.vm09.stdout:6/892: stat d6/d20/d44/cf3 0 2026-03-09T15:01:04.872 INFO:tasks.workunit.client.1.vm09.stdout:2/918: dwrite df/d58/d67/f61 [0,4194304] 0 2026-03-09T15:01:04.887 INFO:tasks.workunit.client.1.vm09.stdout:2/919: creat df/d20/d2e/f12a x:0 0 0 2026-03-09T15:01:04.897 INFO:tasks.workunit.client.1.vm09.stdout:2/920: dread df/d58/d74/f94 [0,4194304] 0 2026-03-09T15:01:04.897 INFO:tasks.workunit.client.1.vm09.stdout:2/921: dread - df/d1f/d47/d5d/f113 zero size 2026-03-09T15:01:04.903 INFO:tasks.workunit.client.1.vm09.stdout:2/922: mknod df/d1f/d47/d5d/d90/c12b 0 2026-03-09T15:01:04.905 INFO:tasks.workunit.client.1.vm09.stdout:8/920: dwrite df/d5b/f31 [0,4194304] 0 2026-03-09T15:01:04.908 INFO:tasks.workunit.client.1.vm09.stdout:8/921: readlink df/d38/d64/lea 0 2026-03-09T15:01:04.924 
INFO:tasks.workunit.client.1.vm09.stdout:9/852: truncate d1/d7/d1e/d2b/d8d/dc8/fde 3179746 0 2026-03-09T15:01:04.927 INFO:tasks.workunit.client.1.vm09.stdout:9/853: link d1/d7/d1e/d2b/d8d/dc8/fe7 d1/d7/d9f/daa/f111 0 2026-03-09T15:01:04.928 INFO:tasks.workunit.client.1.vm09.stdout:9/854: fdatasync d1/d7/d1e/d2b/d2e/f95 0 2026-03-09T15:01:04.928 INFO:tasks.workunit.client.1.vm09.stdout:9/855: fsync d1/d7/d1e/d2b/d8d/dd5/fdb 0 2026-03-09T15:01:04.929 INFO:tasks.workunit.client.1.vm09.stdout:9/856: readlink d1/d7/l1c 0 2026-03-09T15:01:04.929 INFO:tasks.workunit.client.1.vm09.stdout:9/857: write d1/d4f/f10e [94894,31005] 0 2026-03-09T15:01:04.932 INFO:tasks.workunit.client.1.vm09.stdout:1/814: rename d8/d50/d39/d95/d56/db5 to d8/dfc 0 2026-03-09T15:01:04.932 INFO:tasks.workunit.client.1.vm09.stdout:1/815: chown d8/d50/d39/la9 2242 1 2026-03-09T15:01:04.933 INFO:tasks.workunit.client.1.vm09.stdout:5/999: rename d2/f34 to d2/d37/d3c/dbf/d125/f162 0 2026-03-09T15:01:04.933 INFO:tasks.workunit.client.1.vm09.stdout:9/858: dwrite d1/d7/f83 [0,4194304] 0 2026-03-09T15:01:04.937 INFO:tasks.workunit.client.1.vm09.stdout:7/910: rename d3/db/ff8 to d3/d1d/d94/d107/d7d/dc6/f113 0 2026-03-09T15:01:04.938 INFO:tasks.workunit.client.1.vm09.stdout:9/859: creat d1/d7/d1e/d2b/d2e/d56/d6d/d104/d91/f112 x:0 0 0 2026-03-09T15:01:04.939 INFO:tasks.workunit.client.1.vm09.stdout:1/816: creat d8/d10/d24/d48/d9b/df5/dfb/ffd x:0 0 0 2026-03-09T15:01:04.940 INFO:tasks.workunit.client.1.vm09.stdout:3/939: rename d3/d3a/d2b/d31/d9e/ff4 to d3/d100/d6a/f144 0 2026-03-09T15:01:04.941 INFO:tasks.workunit.client.1.vm09.stdout:3/940: stat d3/d3a/d2b/d7b/f121 0 2026-03-09T15:01:04.946 INFO:tasks.workunit.client.1.vm09.stdout:6/893: dwrite d6/db/d10/d7a/f80 [0,4194304] 0 2026-03-09T15:01:04.947 INFO:tasks.workunit.client.1.vm09.stdout:2/923: rename df/d1f/d47/c68 to df/d1f/d47/d71/c12c 0 2026-03-09T15:01:04.950 INFO:tasks.workunit.client.1.vm09.stdout:3/941: rmdir d3/d100/d48/dc5 39 
2026-03-09T15:01:04.950 INFO:tasks.workunit.client.1.vm09.stdout:1/817: creat d8/d10/d73/ffe x:0 0 0 2026-03-09T15:01:04.951 INFO:tasks.workunit.client.1.vm09.stdout:9/860: rename d1/d58/da8/ld8 to d1/d7/da6/db3/l113 0 2026-03-09T15:01:04.952 INFO:tasks.workunit.client.1.vm09.stdout:1/818: chown d8/d10/d24/d45/d5f/ccb 279 1 2026-03-09T15:01:04.954 INFO:tasks.workunit.client.1.vm09.stdout:1/819: write d8/d10/d73/fef [730531,25880] 0 2026-03-09T15:01:04.957 INFO:tasks.workunit.client.1.vm09.stdout:3/942: creat d3/d9a/de3/dc4/f145 x:0 0 0 2026-03-09T15:01:04.961 INFO:tasks.workunit.client.1.vm09.stdout:9/861: symlink d1/d7/db8/l114 0 2026-03-09T15:01:04.964 INFO:tasks.workunit.client.1.vm09.stdout:9/862: dwrite d1/d7/d1e/d2b/d8d/dd5/fdb [0,4194304] 0 2026-03-09T15:01:04.974 INFO:tasks.workunit.client.1.vm09.stdout:9/863: read d1/d7/d1e/f5a [351071,85164] 0 2026-03-09T15:01:04.974 INFO:tasks.workunit.client.1.vm09.stdout:9/864: write d1/d7/fba [1733623,54350] 0 2026-03-09T15:01:04.975 INFO:tasks.workunit.client.1.vm09.stdout:9/865: mkdir d1/d7/d9f/daa/d115 0 2026-03-09T15:01:04.977 INFO:tasks.workunit.client.1.vm09.stdout:1/820: sync 2026-03-09T15:01:04.979 INFO:tasks.workunit.client.1.vm09.stdout:1/821: rename d8/f59 to d8/d10/d24/d48/d9b/fff 0 2026-03-09T15:01:05.006 INFO:tasks.workunit.client.1.vm09.stdout:8/922: truncate df/deb/fbc 2426936 0 2026-03-09T15:01:05.007 INFO:tasks.workunit.client.1.vm09.stdout:8/923: read df/d5c/f8b [1066128,90100] 0 2026-03-09T15:01:05.008 INFO:tasks.workunit.client.1.vm09.stdout:8/924: chown df/d24/d99/db6 1 1 2026-03-09T15:01:05.011 INFO:tasks.workunit.client.1.vm09.stdout:9/866: dread d1/d7/fba [0,4194304] 0 2026-03-09T15:01:05.011 INFO:tasks.workunit.client.1.vm09.stdout:8/925: sync 2026-03-09T15:01:05.013 INFO:tasks.workunit.client.1.vm09.stdout:8/926: dread - df/d5c/ff7 zero size 2026-03-09T15:01:05.013 INFO:tasks.workunit.client.1.vm09.stdout:8/927: chown df/d5b/d65/d1d/l5d 7 1 2026-03-09T15:01:05.014 
INFO:tasks.workunit.client.1.vm09.stdout:8/928: dread - df/d5c/ff7 zero size 2026-03-09T15:01:05.017 INFO:tasks.workunit.client.1.vm09.stdout:1/822: rmdir d8/d10/d24/d48/d9b/df5/dfb 39 2026-03-09T15:01:05.019 INFO:tasks.workunit.client.1.vm09.stdout:8/929: dwrite df/d38/d64/d5f/f62 [0,4194304] 0 2026-03-09T15:01:05.023 INFO:tasks.workunit.client.1.vm09.stdout:6/894: write d6/d20/d2a/dc4/dba/fe7 [777899,19239] 0 2026-03-09T15:01:05.028 INFO:tasks.workunit.client.1.vm09.stdout:2/924: dwrite df/d93/ff5 [0,4194304] 0 2026-03-09T15:01:05.044 INFO:tasks.workunit.client.1.vm09.stdout:7/911: creat d3/db/d46/db2/f114 x:0 0 0 2026-03-09T15:01:05.052 INFO:tasks.workunit.client.1.vm09.stdout:2/925: truncate df/d58/f65 4972471 0 2026-03-09T15:01:05.053 INFO:tasks.workunit.client.1.vm09.stdout:8/930: mkdir df/d2d/d42/dfb/d111 0 2026-03-09T15:01:05.053 INFO:tasks.workunit.client.1.vm09.stdout:9/867: link d1/d7/d9f/daa/cdc d1/d7/d1e/d2b/d2e/d56/d6d/d104/d91/c116 0 2026-03-09T15:01:05.053 INFO:tasks.workunit.client.1.vm09.stdout:6/895: unlink d6/d20/l106 0 2026-03-09T15:01:05.053 INFO:tasks.workunit.client.1.vm09.stdout:1/823: rename d8/d10/dc9 to d8/d10/d24/d48/d9b/d100 0 2026-03-09T15:01:05.053 INFO:tasks.workunit.client.1.vm09.stdout:9/868: creat d1/d7/d1e/d2b/d2e/d56/d6d/d104/d91/f117 x:0 0 0 2026-03-09T15:01:05.053 INFO:tasks.workunit.client.1.vm09.stdout:6/896: mknod d6/d20/d24/d7e/c124 0 2026-03-09T15:01:05.053 INFO:tasks.workunit.client.1.vm09.stdout:7/912: fsync d3/d3d/f5a 0 2026-03-09T15:01:05.053 INFO:tasks.workunit.client.1.vm09.stdout:1/824: truncate d8/d10/d24/d48/d9b/d78/f7c 4771636 0 2026-03-09T15:01:05.056 INFO:tasks.workunit.client.1.vm09.stdout:6/897: read d6/d20/d38/d56/d65/d68/d86/dc0/ddb/fa9 [952214,52164] 0 2026-03-09T15:01:05.058 INFO:tasks.workunit.client.1.vm09.stdout:6/898: write d6/d20/d38/d56/d65/f7b [3399338,18963] 0 2026-03-09T15:01:05.061 INFO:tasks.workunit.client.1.vm09.stdout:1/825: chown d8/d10/f2f 55286 1 2026-03-09T15:01:05.062 
INFO:tasks.workunit.client.1.vm09.stdout:9/869: rmdir d1/d7/d9f/daa/d115 0 2026-03-09T15:01:05.064 INFO:tasks.workunit.client.1.vm09.stdout:6/899: write d6/d20/d2a/d3b/fdc [3446840,3444] 0 2026-03-09T15:01:05.067 INFO:tasks.workunit.client.1.vm09.stdout:6/900: creat d6/d20/d24/da5/f125 x:0 0 0 2026-03-09T15:01:05.071 INFO:tasks.workunit.client.1.vm09.stdout:9/870: readlink d1/d7/d1e/d2b/d2e/d56/d6d/d104/l10b 0 2026-03-09T15:01:05.071 INFO:tasks.workunit.client.1.vm09.stdout:1/826: rmdir d8/d50/d39/d95/d56/dc7/dd9 39 2026-03-09T15:01:05.071 INFO:tasks.workunit.client.1.vm09.stdout:7/913: dread d3/d1d/d94/d107/d7d/dc6/f113 [0,4194304] 0 2026-03-09T15:01:05.073 INFO:tasks.workunit.client.1.vm09.stdout:6/901: dread d6/d20/d24/d7e/fb9 [0,4194304] 0 2026-03-09T15:01:05.073 INFO:tasks.workunit.client.1.vm09.stdout:6/902: chown d6/df/d23/c28 857805 1 2026-03-09T15:01:05.076 INFO:tasks.workunit.client.1.vm09.stdout:1/827: rename d8/d10/d24/l31 to d8/d50/df6/l101 0 2026-03-09T15:01:05.083 INFO:tasks.workunit.client.1.vm09.stdout:7/914: rename d3/db/d15/d5f/d6e/d83/c8e to d3/db/d15/d5f/d6e/d10d/c115 0 2026-03-09T15:01:05.085 INFO:tasks.workunit.client.1.vm09.stdout:6/903: dwrite d6/db/d8b/de8/d116/fda [0,4194304] 0 2026-03-09T15:01:05.088 INFO:tasks.workunit.client.1.vm09.stdout:3/943: dwrite d3/d100/d6a/f144 [0,4194304] 0 2026-03-09T15:01:05.089 INFO:tasks.workunit.client.1.vm09.stdout:7/915: mkdir d3/d1d/d94/d116 0 2026-03-09T15:01:05.097 INFO:tasks.workunit.client.1.vm09.stdout:2/926: dwrite df/d2d/f41 [0,4194304] 0 2026-03-09T15:01:05.119 INFO:tasks.workunit.client.1.vm09.stdout:3/944: unlink d3/d3a/d2b/d31/d9e/fb7 0 2026-03-09T15:01:05.120 INFO:tasks.workunit.client.1.vm09.stdout:7/916: truncate d3/d1d/d65/fb0 487644 0 2026-03-09T15:01:05.123 INFO:tasks.workunit.client.1.vm09.stdout:8/931: dwrite df/d38/f80 [0,4194304] 0 2026-03-09T15:01:05.133 INFO:tasks.workunit.client.1.vm09.stdout:8/932: mkdir df/d2d/d42/d112 0 2026-03-09T15:01:05.133 
INFO:tasks.workunit.client.1.vm09.stdout:9/871: write d1/d7/fb0 [183497,20816] 0 2026-03-09T15:01:05.146 INFO:tasks.workunit.client.1.vm09.stdout:7/917: symlink d3/d1d/d94/d107/d5c/d110/l117 0 2026-03-09T15:01:05.153 INFO:tasks.workunit.client.1.vm09.stdout:3/945: creat d3/d3a/d2b/d7b/db6/f146 x:0 0 0 2026-03-09T15:01:05.153 INFO:tasks.workunit.client.1.vm09.stdout:7/918: chown d3/d3d/d9b/fa2 1007851 1 2026-03-09T15:01:05.153 INFO:tasks.workunit.client.1.vm09.stdout:1/828: rename d8/d10/d73/ffe to d8/d10/d24/d48/d9b/d78/db4/f102 0 2026-03-09T15:01:05.153 INFO:tasks.workunit.client.1.vm09.stdout:9/872: creat d1/d7/d1e/d2b/d8d/dd5/d100/f118 x:0 0 0 2026-03-09T15:01:05.153 INFO:tasks.workunit.client.1.vm09.stdout:2/927: getdents df/d1f/d47/d5d/d90 0 2026-03-09T15:01:05.153 INFO:tasks.workunit.client.1.vm09.stdout:3/946: dwrite d3/d5b/d79/d9d/df9/f129 [0,4194304] 0 2026-03-09T15:01:05.153 INFO:tasks.workunit.client.1.vm09.stdout:7/919: getdents d3/db/d46/db2 0 2026-03-09T15:01:05.155 INFO:tasks.workunit.client.1.vm09.stdout:1/829: mknod d8/d50/d39/d95/d56/c103 0 2026-03-09T15:01:05.164 INFO:tasks.workunit.client.1.vm09.stdout:3/947: mkdir d3/d100/d6a/dd5/d147 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:3/948: readlink d3/d5b/le7 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:7/920: mknod d3/db/d15/c118 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:2/928: mkdir df/d1f/d47/d5d/d90/d107/d12d 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:1/830: unlink d8/d10/d24/d48/d9b/d100/fc8 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:9/873: dwrite d1/d7/d1e/d2b/d2e/d56/d6d/fbd [0,4194304] 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:3/949: creat d3/d3a/d2b/d31/f148 x:0 0 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:3/950: chown d3/d5b/d79/cba 1263566 1 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:3/951: readlink 
d3/d60/lf5 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:2/929: chown df/d1f/d6d/d8f/la6 2690 1 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:7/921: chown d3/d1d/d65/c101 36 1 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:3/952: creat d3/d3a/d2b/d31/d4a/d62/f149 x:0 0 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:2/930: symlink df/d1f/d47/d84/db7/dc3/da7/l12e 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:9/874: dwrite d1/d7/d1e/d2b/d8d/dd5/fdb [0,4194304] 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:9/875: chown d1/d4f/d52/fec 2972 1 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:2/931: mkdir df/d93/d12f 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:3/953: mknod d3/d3a/d2b/d12f/c14a 0 2026-03-09T15:01:05.186 INFO:tasks.workunit.client.1.vm09.stdout:2/932: readlink df/d1f/d47/d84/db7/dc3/dd9/lfd 0 2026-03-09T15:01:05.188 INFO:tasks.workunit.client.1.vm09.stdout:1/831: dread d8/d10/d24/d48/fd4 [0,4194304] 0 2026-03-09T15:01:05.192 INFO:tasks.workunit.client.1.vm09.stdout:3/954: truncate d3/f3b 4633724 0 2026-03-09T15:01:05.193 INFO:tasks.workunit.client.1.vm09.stdout:2/933: readlink df/d20/l75 0 2026-03-09T15:01:05.196 INFO:tasks.workunit.client.1.vm09.stdout:3/955: rename d3/d100/d6a/dd5/d147 to d3/d5b/d79/d14b 0 2026-03-09T15:01:05.198 INFO:tasks.workunit.client.1.vm09.stdout:2/934: mkdir df/d1f/d47/d5d/dfb/d130 0 2026-03-09T15:01:05.198 INFO:tasks.workunit.client.1.vm09.stdout:1/832: creat d8/d10/d24/d83/f104 x:0 0 0 2026-03-09T15:01:05.206 INFO:tasks.workunit.client.1.vm09.stdout:3/956: dwrite d3/d5b/d79/d9d/d116/f13f [0,4194304] 0 2026-03-09T15:01:05.207 INFO:tasks.workunit.client.1.vm09.stdout:1/833: dread d8/d10/d24/d48/d9b/d100/fd8 [4194304,4194304] 0 2026-03-09T15:01:05.212 INFO:tasks.workunit.client.1.vm09.stdout:6/904: write d6/d20/d38/d56/d65/d68/d6f/fa7 [13580,9776] 0 2026-03-09T15:01:05.212 
INFO:tasks.workunit.client.1.vm09.stdout:1/834: creat d8/d10/d24/d48/d9b/d78/db4/f105 x:0 0 0 2026-03-09T15:01:05.214 INFO:tasks.workunit.client.1.vm09.stdout:9/876: dread d1/d6e/f88 [0,4194304] 0 2026-03-09T15:01:05.217 INFO:tasks.workunit.client.1.vm09.stdout:3/957: read d3/d3a/d2b/d31/d4a/ff2 [125039,55183] 0 2026-03-09T15:01:05.222 INFO:tasks.workunit.client.1.vm09.stdout:6/905: rmdir d6/df/d23/d89 39 2026-03-09T15:01:05.222 INFO:tasks.workunit.client.1.vm09.stdout:6/906: chown d6/d20/d2a/d3b/d91/d11b 585168 1 2026-03-09T15:01:05.223 INFO:tasks.workunit.client.1.vm09.stdout:1/835: rename d8/d10/d24/d48/l8e to d8/d10/d24/d48/d9b/d78/db4/l106 0 2026-03-09T15:01:05.226 INFO:tasks.workunit.client.1.vm09.stdout:1/836: dread - d8/d10/d24/d48/d9b/d78/db4/fe6 zero size 2026-03-09T15:01:05.226 INFO:tasks.workunit.client.1.vm09.stdout:6/907: symlink d6/d20/d38/d56/d65/d68/l126 0 2026-03-09T15:01:05.226 INFO:tasks.workunit.client.1.vm09.stdout:3/958: creat d3/d100/f14c x:0 0 0 2026-03-09T15:01:05.230 INFO:tasks.workunit.client.1.vm09.stdout:3/959: mknod d3/d3a/d2b/d31/d4a/d62/c14d 0 2026-03-09T15:01:05.234 INFO:tasks.workunit.client.1.vm09.stdout:6/908: rename d6/df/d23/d89/f8e to d6/d20/d44/d45/f127 0 2026-03-09T15:01:05.235 INFO:tasks.workunit.client.1.vm09.stdout:3/960: mkdir d3/d100/d48/dc5/d10a/d14e 0 2026-03-09T15:01:05.238 INFO:tasks.workunit.client.1.vm09.stdout:6/909: rename d6/d20/d44/c11f to d6/db/d8b/c128 0 2026-03-09T15:01:05.238 INFO:tasks.workunit.client.1.vm09.stdout:1/837: dread d8/d10/d24/d45/ddc/fe5 [0,4194304] 0 2026-03-09T15:01:05.247 INFO:tasks.workunit.client.1.vm09.stdout:2/935: dread df/d20/f52 [0,4194304] 0 2026-03-09T15:01:05.255 INFO:tasks.workunit.client.1.vm09.stdout:9/877: dread d1/d4f/fa3 [0,4194304] 0 2026-03-09T15:01:05.256 INFO:tasks.workunit.client.1.vm09.stdout:2/936: fdatasync df/d20/d2e/fb5 0 2026-03-09T15:01:05.256 INFO:tasks.workunit.client.1.vm09.stdout:9/878: mkdir d1/d7/d1e/d2b/d119 0 2026-03-09T15:01:05.256 
INFO:tasks.workunit.client.1.vm09.stdout:2/937: creat df/d58/d74/f131 x:0 0 0 2026-03-09T15:01:05.256 INFO:tasks.workunit.client.1.vm09.stdout:9/879: chown d1/d7/d1e/d2b/d2e/d56/d6d/d104/d91/fa5 689859821 1 2026-03-09T15:01:05.256 INFO:tasks.workunit.client.1.vm09.stdout:2/938: write df/da0/fc6 [630240,46417] 0 2026-03-09T15:01:05.256 INFO:tasks.workunit.client.1.vm09.stdout:9/880: rename d1/d7/d1e/d2b/d40/cee to d1/d4f/d52/c11a 0 2026-03-09T15:01:05.271 INFO:tasks.workunit.client.1.vm09.stdout:9/881: dread d1/d7/d1e/f34 [0,4194304] 0 2026-03-09T15:01:05.271 INFO:tasks.workunit.client.1.vm09.stdout:2/939: creat df/d58/d67/d10a/f132 x:0 0 0 2026-03-09T15:01:05.276 INFO:tasks.workunit.client.1.vm09.stdout:2/940: dwrite df/d20/d29/da9/fca [0,4194304] 0 2026-03-09T15:01:05.278 INFO:tasks.workunit.client.1.vm09.stdout:2/941: chown df/d2d/l119 2047616 1 2026-03-09T15:01:05.279 INFO:tasks.workunit.client.1.vm09.stdout:9/882: rename d1/d7/fba to d1/d58/f11b 0 2026-03-09T15:01:05.285 INFO:tasks.workunit.client.1.vm09.stdout:9/883: fsync d1/d7/f3e 0 2026-03-09T15:01:05.290 INFO:tasks.workunit.client.1.vm09.stdout:2/942: dwrite df/d1f/d6d/d8f/f99 [0,4194304] 0 2026-03-09T15:01:05.299 INFO:tasks.workunit.client.1.vm09.stdout:8/933: write df/d38/d64/fa7 [1452131,109225] 0 2026-03-09T15:01:05.302 INFO:tasks.workunit.client.1.vm09.stdout:8/934: dread df/d2d/d42/fd2 [0,4194304] 0 2026-03-09T15:01:05.303 INFO:tasks.workunit.client.1.vm09.stdout:8/935: truncate df/d24/d95/d10f/dc9/ff1 894558 0 2026-03-09T15:01:05.309 INFO:tasks.workunit.client.1.vm09.stdout:9/884: link d1/d7/d1e/d2b/f32 d1/d7/da6/db3/f11c 0 2026-03-09T15:01:05.315 INFO:tasks.workunit.client.1.vm09.stdout:9/885: dwrite d1/d7/d1e/d2b/f5f [4194304,4194304] 0 2026-03-09T15:01:05.316 INFO:tasks.workunit.client.1.vm09.stdout:8/936: creat df/d24/d99/db6/dca/f113 x:0 0 0 2026-03-09T15:01:05.320 INFO:tasks.workunit.client.1.vm09.stdout:9/886: dwrite d1/fb6 [0,4194304] 0 2026-03-09T15:01:05.323 
INFO:tasks.workunit.client.1.vm09.stdout:9/887: write d1/d7/d9f/fa4 [243317,96780] 0 2026-03-09T15:01:05.324 INFO:tasks.workunit.client.1.vm09.stdout:9/888: write d1/d7/d1e/d2b/d2e/d56/f103 [263354,98799] 0 2026-03-09T15:01:05.332 INFO:tasks.workunit.client.1.vm09.stdout:7/922: truncate d3/d1d/d94/d107/d7d/dc6/f113 403193 0 2026-03-09T15:01:05.337 INFO:tasks.workunit.client.1.vm09.stdout:8/937: rmdir df/d2d/d46/d33/d103 39 2026-03-09T15:01:05.342 INFO:tasks.workunit.client.1.vm09.stdout:9/889: symlink d1/d7/d1e/d2b/d8d/dd5/d100/l11d 0 2026-03-09T15:01:05.344 INFO:tasks.workunit.client.1.vm09.stdout:7/923: mknod d3/db/d15/d5f/d6e/d10d/c119 0 2026-03-09T15:01:05.347 INFO:tasks.workunit.client.1.vm09.stdout:8/938: mknod df/d38/d64/d5f/c114 0 2026-03-09T15:01:05.351 INFO:tasks.workunit.client.1.vm09.stdout:1/838: rmdir d8/d10/d24/d48/d9b/d78/db4 39 2026-03-09T15:01:05.352 INFO:tasks.workunit.client.1.vm09.stdout:7/924: creat d3/d61/f11a x:0 0 0 2026-03-09T15:01:05.358 INFO:tasks.workunit.client.1.vm09.stdout:7/925: mknod d3/d61/c11b 0 2026-03-09T15:01:05.358 INFO:tasks.workunit.client.1.vm09.stdout:8/939: getdents df/d24/d99/db6/dee 0 2026-03-09T15:01:05.364 INFO:tasks.workunit.client.1.vm09.stdout:9/890: getdents d1/d7/d1e/d2b/d2e/d56/d6d/d104/d91 0 2026-03-09T15:01:05.366 INFO:tasks.workunit.client.1.vm09.stdout:1/839: creat d8/d10/d24/d48/d9b/d78/f107 x:0 0 0 2026-03-09T15:01:05.366 INFO:tasks.workunit.client.1.vm09.stdout:3/961: dwrite d3/d3a/d2b/ff7 [4194304,4194304] 0 2026-03-09T15:01:05.366 INFO:tasks.workunit.client.1.vm09.stdout:7/926: mkdir d3/db/d15/d5f/d6e/d11c 0 2026-03-09T15:01:05.366 INFO:tasks.workunit.client.1.vm09.stdout:8/940: getdents df/d24/d99/db6/ddd 0 2026-03-09T15:01:05.369 INFO:tasks.workunit.client.1.vm09.stdout:6/910: dwrite d6/d20/d2a/f37 [0,4194304] 0 2026-03-09T15:01:05.375 INFO:tasks.workunit.client.1.vm09.stdout:8/941: chown df/d24/d95/d10f/dc9 57218612 1 2026-03-09T15:01:05.382 INFO:tasks.workunit.client.1.vm09.stdout:9/891: rename 
d1/d6e/l90 to d1/d7/da6/db3/l11e 0 2026-03-09T15:01:05.385 INFO:tasks.workunit.client.1.vm09.stdout:2/943: dwrite df/d1f/d47/d5d/dbc/ff2 [0,4194304] 0 2026-03-09T15:01:05.390 INFO:tasks.workunit.client.1.vm09.stdout:3/962: truncate d3/d100/d48/dc5/f142 3711943 0 2026-03-09T15:01:05.391 INFO:tasks.workunit.client.1.vm09.stdout:3/963: stat d3/fe6 0 2026-03-09T15:01:05.394 INFO:tasks.workunit.client.1.vm09.stdout:8/942: rename f8 to df/d2d/d42/f115 0 2026-03-09T15:01:05.395 INFO:tasks.workunit.client.1.vm09.stdout:9/892: truncate d1/d7/d1e/fd2 461959 0 2026-03-09T15:01:05.397 INFO:tasks.workunit.client.1.vm09.stdout:8/943: readlink df/d38/d64/lea 0 2026-03-09T15:01:05.397 INFO:tasks.workunit.client.1.vm09.stdout:1/840: dread d8/d10/d24/d48/f76 [0,4194304] 0 2026-03-09T15:01:05.401 INFO:tasks.workunit.client.1.vm09.stdout:3/964: link d3/d3a/d2b/d31/f148 d3/d60/df6/f14f 0 2026-03-09T15:01:05.417 INFO:tasks.workunit.client.1.vm09.stdout:7/927: dread d3/d1d/f79 [4194304,4194304] 0 2026-03-09T15:01:05.418 INFO:tasks.workunit.client.1.vm09.stdout:1/841: mkdir d8/d10/d24/d48/d9b/d78/db4/d108 0 2026-03-09T15:01:05.418 INFO:tasks.workunit.client.1.vm09.stdout:3/965: truncate d3/ff 171024 0 2026-03-09T15:01:05.418 INFO:tasks.workunit.client.1.vm09.stdout:7/928: mkdir d3/d1d/d65/da3/d11d 0 2026-03-09T15:01:05.418 INFO:tasks.workunit.client.1.vm09.stdout:3/966: truncate d3/d5b/d79/d9d/ffc 738764 0 2026-03-09T15:01:05.418 INFO:tasks.workunit.client.1.vm09.stdout:9/893: getdents d1/d7/d1e/d2b/d2e/d56/d6d/d104 0 2026-03-09T15:01:05.418 INFO:tasks.workunit.client.1.vm09.stdout:7/929: rmdir d3/db 39 2026-03-09T15:01:05.418 INFO:tasks.workunit.client.1.vm09.stdout:7/930: write d3/d1d/f30 [4648748,36018] 0 2026-03-09T15:01:05.418 INFO:tasks.workunit.client.1.vm09.stdout:6/911: sync 2026-03-09T15:01:05.421 INFO:tasks.workunit.client.1.vm09.stdout:9/894: mknod d1/d7/db8/c11f 0 2026-03-09T15:01:05.422 INFO:tasks.workunit.client.1.vm09.stdout:6/912: creat d6/d20/d24/da5/f129 x:0 0 0 
2026-03-09T15:01:05.425 INFO:tasks.workunit.client.1.vm09.stdout:8/944: dread df/d2d/d42/f96 [0,4194304] 0
2026-03-09T15:01:05.425 INFO:tasks.workunit.client.1.vm09.stdout:9/895: fdatasync d1/d7/d1e/d2b/d40/fd9 0
2026-03-09T15:01:05.427 INFO:tasks.workunit.client.1.vm09.stdout:8/945: creat df/d24/d95/d10f/dc9/f116 x:0 0 0
2026-03-09T15:01:05.428 INFO:tasks.workunit.client.1.vm09.stdout:9/896: fsync d1/d7/d1e/f2a 0
2026-03-09T15:01:05.435 INFO:tasks.workunit.client.1.vm09.stdout:8/946: link df/d5b/d65/d1d/l5d df/d5c/l117 0
2026-03-09T15:01:05.435 INFO:tasks.workunit.client.1.vm09.stdout:9/897: dwrite d1/d7/d1e/d2b/d2e/d56/f103 [0,4194304] 0
2026-03-09T15:01:05.438 INFO:tasks.workunit.client.1.vm09.stdout:9/898: dread d1/d7/d1e/f34 [0,4194304] 0
2026-03-09T15:01:05.439 INFO:tasks.workunit.client.1.vm09.stdout:8/947: link df/l19 df/d2d/d4f/l118 0
2026-03-09T15:01:05.440 INFO:tasks.workunit.client.1.vm09.stdout:8/948: stat df/d2d/d90/lbb 0
2026-03-09T15:01:05.444 INFO:tasks.workunit.client.1.vm09.stdout:9/899: rename d1/d7/d1e/d2b/d2e/d56/d6d/d104/d91 to d1/d7/d1e/d2b/d2e/d56/d6d/d120 0
2026-03-09T15:01:05.447 INFO:tasks.workunit.client.1.vm09.stdout:8/949: readlink df/d2d/d4f/l118 0
2026-03-09T15:01:05.448 INFO:tasks.workunit.client.1.vm09.stdout:8/950: fdatasync df/d2d/d46/fef 0
2026-03-09T15:01:05.453 INFO:tasks.workunit.client.1.vm09.stdout:2/944: dread df/d20/d2e/fbb [0,4194304] 0
2026-03-09T15:01:05.453 INFO:tasks.workunit.client.1.vm09.stdout:8/951: mkdir df/d2d/d42/d70/d119 0
2026-03-09T15:01:05.456 INFO:tasks.workunit.client.1.vm09.stdout:8/952: unlink df/d38/d64/l47 0
2026-03-09T15:01:05.460 INFO:tasks.workunit.client.1.vm09.stdout:8/953: dwrite df/d2d/f2f [4194304,4194304] 0
2026-03-09T15:01:05.466 INFO:tasks.workunit.client.1.vm09.stdout:1/842: dread d8/d50/d39/d95/f6e [0,4194304] 0
2026-03-09T15:01:05.467 INFO:tasks.workunit.client.1.vm09.stdout:1/843: stat d8/d10/d24/d48/d9b/d68/fc4 0
2026-03-09T15:01:05.478 INFO:tasks.workunit.client.1.vm09.stdout:2/945: getdents df/d93/da3 0
2026-03-09T15:01:05.479 INFO:tasks.workunit.client.1.vm09.stdout:8/954: mknod df/d5b/d65/dae/c11a 0
2026-03-09T15:01:05.479 INFO:tasks.workunit.client.1.vm09.stdout:8/955: readlink df/deb/led 0
2026-03-09T15:01:05.483 INFO:tasks.workunit.client.1.vm09.stdout:1/844: read d8/d90/fd0 [1744260,104190] 0
2026-03-09T15:01:05.486 INFO:tasks.workunit.client.1.vm09.stdout:2/946: unlink c2 0
2026-03-09T15:01:05.494 INFO:tasks.workunit.client.1.vm09.stdout:2/947: creat df/d1f/d10d/f133 x:0 0 0
2026-03-09T15:01:05.496 INFO:tasks.workunit.client.1.vm09.stdout:2/948: creat df/d93/f134 x:0 0 0
2026-03-09T15:01:05.499 INFO:tasks.workunit.client.1.vm09.stdout:2/949: dwrite df/d58/f110 [0,4194304] 0
2026-03-09T15:01:05.504 INFO:tasks.workunit.client.1.vm09.stdout:2/950: stat df/d1f/d47/d84/lb0 0
2026-03-09T15:01:05.506 INFO:tasks.workunit.client.1.vm09.stdout:2/951: mknod df/d1f/d47/d5d/d90/d107/c135 0
2026-03-09T15:01:05.506 INFO:tasks.workunit.client.1.vm09.stdout:2/952: fsync df/d20/f6a 0
2026-03-09T15:01:05.507 INFO:tasks.workunit.client.1.vm09.stdout:2/953: fsync df/d1f/d6d/d8f/d5f/ddf/f111 0
2026-03-09T15:01:05.510 INFO:tasks.workunit.client.1.vm09.stdout:7/931: write d3/d1d/f79 [6798127,14807] 0
2026-03-09T15:01:05.513 INFO:tasks.workunit.client.1.vm09.stdout:7/932: mknod d3/db/d46/db2/df5/c11e 0
2026-03-09T15:01:05.515 INFO:tasks.workunit.client.1.vm09.stdout:7/933: getdents d3/db/d15/d5f/d6e/d11c 0
2026-03-09T15:01:05.519 INFO:tasks.workunit.client.1.vm09.stdout:7/934: symlink d3/d28/l11f 0
2026-03-09T15:01:05.521 INFO:tasks.workunit.client.1.vm09.stdout:7/935: getdents d3/d1d/d94/d107 0
2026-03-09T15:01:05.522 INFO:tasks.workunit.client.1.vm09.stdout:7/936: write d3/d1d/d94/d107/d5c/d75/db4/fc5 [1686234,53543] 0
2026-03-09T15:01:05.526 INFO:tasks.workunit.client.1.vm09.stdout:7/937: symlink d3/d1d/d65/da3/de4/l120 0
2026-03-09T15:01:05.526 INFO:tasks.workunit.client.1.vm09.stdout:7/938: dread - d3/d1d/d94/d107/d5c/d75/db4/f10a zero size
2026-03-09T15:01:05.531 INFO:tasks.workunit.client.1.vm09.stdout:7/939: mkdir d3/d1d/d65/da3/d121 0
2026-03-09T15:01:05.533 INFO:tasks.workunit.client.1.vm09.stdout:6/913: write d6/df/f40 [387623,25485] 0
2026-03-09T15:01:05.544 INFO:tasks.workunit.client.1.vm09.stdout:6/914: dread d6/d20/d38/d56/d65/fb2 [0,4194304] 0
2026-03-09T15:01:05.546 INFO:tasks.workunit.client.1.vm09.stdout:7/940: link d3/db/d46/db2/df5/c11e d3/d1d/d94/d107/d7d/dd5/c122 0
2026-03-09T15:01:05.551 INFO:tasks.workunit.client.1.vm09.stdout:6/915: creat d6/d20/d24/d7e/f12a x:0 0 0
2026-03-09T15:01:05.552 INFO:tasks.workunit.client.1.vm09.stdout:9/900: truncate d1/d58/f72 4175824 0
2026-03-09T15:01:05.553 INFO:tasks.workunit.client.1.vm09.stdout:6/916: write d6/d20/d38/d4e/d55/dd2/f11a [466044,2589] 0
2026-03-09T15:01:05.559 INFO:tasks.workunit.client.1.vm09.stdout:9/901: creat d1/d7/d1e/d2b/d2e/d56/f121 x:0 0 0
2026-03-09T15:01:05.560 INFO:tasks.workunit.client.1.vm09.stdout:9/902: chown d1/d7/d1e/d2b/d2e/f12 2018 1
2026-03-09T15:01:05.562 INFO:tasks.workunit.client.1.vm09.stdout:9/903: fdatasync d1/d7/d1e/d2b/d40/fd9 0
2026-03-09T15:01:05.564 INFO:tasks.workunit.client.1.vm09.stdout:3/967: dwrite d3/d74/fc3 [0,4194304] 0
2026-03-09T15:01:05.565 INFO:tasks.workunit.client.1.vm09.stdout:6/917: sync
2026-03-09T15:01:05.565 INFO:tasks.workunit.client.1.vm09.stdout:9/904: mknod d1/d7/d1e/d2b/d2e/d56/d5e/c122 0
2026-03-09T15:01:05.572 INFO:tasks.workunit.client.1.vm09.stdout:9/905: dread - d1/d7/db8/ff4 zero size
2026-03-09T15:01:05.573 INFO:tasks.workunit.client.1.vm09.stdout:9/906: read - d1/d7/f45 zero size
2026-03-09T15:01:05.573 INFO:tasks.workunit.client.1.vm09.stdout:6/918: readlink d6/d20/d38/d56/d65/d68/d86/dc0/lfe 0
2026-03-09T15:01:05.574 INFO:tasks.workunit.client.1.vm09.stdout:3/968: read d3/d5b/d79/f83 [3661728,93692] 0
2026-03-09T15:01:05.578 INFO:tasks.workunit.client.1.vm09.stdout:8/956: dwrite df/d2d/dff/fcf [0,4194304] 0
2026-03-09T15:01:05.582 INFO:tasks.workunit.client.1.vm09.stdout:6/919: rename d6/d20/f6e to d6/d20/d44/d8f/f12b 0
2026-03-09T15:01:05.586 INFO:tasks.workunit.client.1.vm09.stdout:8/957: rmdir df/d24/d95/d10f/dc9 39
2026-03-09T15:01:05.586 INFO:tasks.workunit.client.1.vm09.stdout:9/907: truncate d1/d7/d1e/d2b/d2e/d56/d6d/d104/dc0/fe9 1661280 0
2026-03-09T15:01:05.586 INFO:tasks.workunit.client.1.vm09.stdout:6/920: mkdir d6/d12c 0
2026-03-09T15:01:05.587 INFO:tasks.workunit.client.1.vm09.stdout:2/954: dwrite df/d20/f49 [0,4194304] 0
2026-03-09T15:01:05.587 INFO:tasks.workunit.client.1.vm09.stdout:3/969: rmdir d3/d3a/d2b/d7b/dd3/d13c 0
2026-03-09T15:01:05.588 INFO:tasks.workunit.client.1.vm09.stdout:9/908: write d1/d7/d1e/d2b/d2e/d56/feb [675744,52593] 0
2026-03-09T15:01:05.593 INFO:tasks.workunit.client.1.vm09.stdout:3/970: chown d3/d3a/d2b/d31/f34 2271 1
2026-03-09T15:01:05.594 INFO:tasks.workunit.client.1.vm09.stdout:9/909: sync
2026-03-09T15:01:05.595 INFO:tasks.workunit.client.1.vm09.stdout:9/910: chown d1/d7/d1e/d2b/d2e/d56/d6d/d120/f117 1172 1
2026-03-09T15:01:05.600 INFO:tasks.workunit.client.1.vm09.stdout:3/971: dwrite d3/d100/d6a/f125 [0,4194304] 0
2026-03-09T15:01:05.603 INFO:tasks.workunit.client.1.vm09.stdout:3/972: stat d3/d3a/c101 0
2026-03-09T15:01:05.603 INFO:tasks.workunit.client.1.vm09.stdout:9/911: unlink d1/d4f/lc6 0
2026-03-09T15:01:05.604 INFO:tasks.workunit.client.1.vm09.stdout:3/973: truncate d3/d5b/d79/d9d/f106 292276 0
2026-03-09T15:01:05.605 INFO:tasks.workunit.client.1.vm09.stdout:3/974: rmdir d3/d3a/d2b/d36/dac 39
2026-03-09T15:01:05.607 INFO:tasks.workunit.client.1.vm09.stdout:9/912: link d1/fe8 d1/d7/d1e/d2b/d119/f123 0
2026-03-09T15:01:05.610 INFO:tasks.workunit.client.1.vm09.stdout:9/913: fsync d1/d7/d1e/d2b/f32 0
2026-03-09T15:01:05.611 INFO:tasks.workunit.client.1.vm09.stdout:3/975: dwrite d3/d100/d6a/dd5/f108 [0,4194304] 0
2026-03-09T15:01:05.611 INFO:tasks.workunit.client.1.vm09.stdout:7/941: truncate d3/f9 6358554 0
2026-03-09T15:01:05.627 INFO:tasks.workunit.client.1.vm09.stdout:7/942: rename d3/d1d/d94/d107/d5c to d3/d1d/d94/d107/d7d/d123 0
2026-03-09T15:01:05.636 INFO:tasks.workunit.client.1.vm09.stdout:3/976: dwrite d3/d100/d48/da0/fef [0,4194304] 0
2026-03-09T15:01:05.637 INFO:tasks.workunit.client.1.vm09.stdout:9/914: dwrite d1/d7/d1e/d2b/d2e/d56/d6d/d104/fcb [0,4194304] 0
2026-03-09T15:01:05.639 INFO:tasks.workunit.client.1.vm09.stdout:7/943: dread d3/d28/fb3 [4194304,4194304] 0
2026-03-09T15:01:05.650 INFO:tasks.workunit.client.1.vm09.stdout:9/915: fdatasync d1/d7/d1e/d2b/f81 0
2026-03-09T15:01:05.658 INFO:tasks.workunit.client.1.vm09.stdout:2/955: rmdir df/d58/d74/def/d105 0
2026-03-09T15:01:05.664 INFO:tasks.workunit.client.1.vm09.stdout:2/956: getdents df/d1f/d47/d71/d124 0
2026-03-09T15:01:05.677 INFO:tasks.workunit.client.1.vm09.stdout:6/921: fsync d6/df/d23/f29 0
2026-03-09T15:01:05.677 INFO:tasks.workunit.client.1.vm09.stdout:8/958: dwrite df/d38/f58 [0,4194304] 0
2026-03-09T15:01:05.688 INFO:tasks.workunit.client.1.vm09.stdout:6/922: mkdir d6/d20/d24/da5/daf/d12d 0
2026-03-09T15:01:05.692 INFO:tasks.workunit.client.1.vm09.stdout:6/923: creat d6/d20/d24/da5/daf/f12e x:0 0 0
2026-03-09T15:01:05.696 INFO:tasks.workunit.client.1.vm09.stdout:8/959: getdents df/d24/d95/d10f 0
2026-03-09T15:01:05.696 INFO:tasks.workunit.client.1.vm09.stdout:3/977: dread d3/fe6 [0,4194304] 0
2026-03-09T15:01:05.696 INFO:tasks.workunit.client.1.vm09.stdout:8/960: read - df/d5b/d65/fd1 zero size
2026-03-09T15:01:05.697 INFO:tasks.workunit.client.1.vm09.stdout:3/978: read d3/d60/f6e [864819,35416] 0
2026-03-09T15:01:05.702 INFO:tasks.workunit.client.1.vm09.stdout:1/845: write d8/d10/d24/d48/d9b/fbd [867,8851] 0
2026-03-09T15:01:05.702 INFO:tasks.workunit.client.1.vm09.stdout:8/961: truncate df/d2d/d46/d33/ddc/f9f 386529 0
2026-03-09T15:01:05.703 INFO:tasks.workunit.client.1.vm09.stdout:6/924: dread d6/d20/d2a/f61 [0,4194304] 0
2026-03-09T15:01:05.704 INFO:tasks.workunit.client.1.vm09.stdout:8/962: dread - df/d24/d99/dfc/f109 zero size
2026-03-09T15:01:05.706 INFO:tasks.workunit.client.1.vm09.stdout:6/925: dread - d6/df/ffd zero size
2026-03-09T15:01:05.706 INFO:tasks.workunit.client.1.vm09.stdout:1/846: fdatasync d8/d10/d24/d45/ddc/fe9 0
2026-03-09T15:01:05.706 INFO:tasks.workunit.client.1.vm09.stdout:7/944: truncate d3/d1d/d94/d107/d7d/d123/f5e 158354 0
2026-03-09T15:01:05.709 INFO:tasks.workunit.client.1.vm09.stdout:3/979: dread d3/d100/d6a/f144 [0,4194304] 0
2026-03-09T15:01:05.711 INFO:tasks.workunit.client.1.vm09.stdout:2/957: write df/f9b [440946,1479] 0
2026-03-09T15:01:05.715 INFO:tasks.workunit.client.1.vm09.stdout:7/945: read d3/d3d/d9b/fac [308063,117122] 0
2026-03-09T15:01:05.716 INFO:tasks.workunit.client.1.vm09.stdout:3/980: symlink d3/d100/d48/da0/l150 0
2026-03-09T15:01:05.717 INFO:tasks.workunit.client.1.vm09.stdout:6/926: truncate d6/d20/d38/d4e/d55/fac 222867 0
2026-03-09T15:01:05.718 INFO:tasks.workunit.client.1.vm09.stdout:9/916: dwrite d1/d7/d1e/f20 [0,4194304] 0
2026-03-09T15:01:05.720 INFO:tasks.workunit.client.1.vm09.stdout:8/963: rename df/d24/d99/db6/d60/l77 to df/d38/l11b 0
2026-03-09T15:01:05.732 INFO:tasks.workunit.client.1.vm09.stdout:1/847: mkdir d8/d50/d39/d95/d72/d64/d109 0
2026-03-09T15:01:05.733 INFO:tasks.workunit.client.1.vm09.stdout:8/964: dwrite df/d24/f32 [0,4194304] 0
2026-03-09T15:01:05.751 INFO:tasks.workunit.client.1.vm09.stdout:1/848: read d8/d90/f99 [259643,39787] 0
2026-03-09T15:01:05.753 INFO:tasks.workunit.client.1.vm09.stdout:1/849: write d8/d10/d24/d45/fee [794328,109202] 0
2026-03-09T15:01:05.757 INFO:tasks.workunit.client.1.vm09.stdout:1/850: chown d8/d10/d24/d48/d9b/fbd 77 1
2026-03-09T15:01:05.759 INFO:tasks.workunit.client.1.vm09.stdout:7/946: dread d3/d1d/d94/d107/fbb [0,4194304] 0
2026-03-09T15:01:05.759 INFO:tasks.workunit.client.1.vm09.stdout:8/965: mknod df/d38/d64/d5f/c11c 0
2026-03-09T15:01:05.761 INFO:tasks.workunit.client.1.vm09.stdout:1/851: truncate d8/d50/fd3 412178 0
2026-03-09T15:01:05.764 INFO:tasks.workunit.client.1.vm09.stdout:9/917: getdents d1/d4f/d52 0
2026-03-09T15:01:05.764 INFO:tasks.workunit.client.1.vm09.stdout:6/927: getdents d6/db/d10/d7a 0
2026-03-09T15:01:05.765 INFO:tasks.workunit.client.1.vm09.stdout:2/958: dwrite df/f14 [0,4194304] 0
2026-03-09T15:01:05.767 INFO:tasks.workunit.client.1.vm09.stdout:1/852: chown d8/d10/d24/d48/d9b/d78/db4/fe6 59286 1
2026-03-09T15:01:05.767 INFO:tasks.workunit.client.1.vm09.stdout:1/853: dread - d8/ff9 zero size
2026-03-09T15:01:05.769 INFO:tasks.workunit.client.1.vm09.stdout:9/918: read - d1/d7/d1e/d2b/d2e/d56/d6d/d120/fe3 zero size
2026-03-09T15:01:05.769 INFO:tasks.workunit.client.1.vm09.stdout:7/947: creat d3/d1d/d94/d107/d7d/d123/d110/f124 x:0 0 0
2026-03-09T15:01:05.771 INFO:tasks.workunit.client.1.vm09.stdout:7/948: chown d3/db/d15/d5f/d6e 107 1
2026-03-09T15:01:05.779 INFO:tasks.workunit.client.1.vm09.stdout:3/981: write d3/d5b/d79/d9d/f106 [388772,98075] 0
2026-03-09T15:01:05.780 INFO:tasks.workunit.client.1.vm09.stdout:9/919: unlink d1/d7/d1e/d2b/f5f 0
2026-03-09T15:01:05.788 INFO:tasks.workunit.client.1.vm09.stdout:2/959: rename df/d1f/d47/d84/db7/dc3/f100 to df/d93/dd3/f136 0
2026-03-09T15:01:05.792 INFO:tasks.workunit.client.1.vm09.stdout:9/920: unlink d1/d7/d1e/d2b/l9c 0
2026-03-09T15:01:05.795 INFO:tasks.workunit.client.1.vm09.stdout:1/854: symlink d8/d10/d24/d45/l10a 0
2026-03-09T15:01:05.795 INFO:tasks.workunit.client.1.vm09.stdout:8/966: getdents df/d2d/d46/d33 0
2026-03-09T15:01:05.797 INFO:tasks.workunit.client.1.vm09.stdout:2/960: symlink df/d1f/d47/d5d/d90/d107/l137 0
2026-03-09T15:01:05.797 INFO:tasks.workunit.client.1.vm09.stdout:3/982: creat d3/d60/d12a/f151 x:0 0 0
2026-03-09T15:01:05.797 INFO:tasks.workunit.client.1.vm09.stdout:6/928: getdents d6/d20/d2a/dc4/dba/dd8 0
2026-03-09T15:01:05.800 INFO:tasks.workunit.client.1.vm09.stdout:6/929: chown d6/df/d23/d89 73342930 1
2026-03-09T15:01:05.803 INFO:tasks.workunit.client.1.vm09.stdout:7/949: dwrite d3/d3d/f51 [0,4194304] 0
2026-03-09T15:01:05.804 INFO:tasks.workunit.client.1.vm09.stdout:1/855: creat d8/d10/d24/d48/d9b/d78/db4/f10b x:0 0 0
2026-03-09T15:01:05.809 INFO:tasks.workunit.client.1.vm09.stdout:9/921: dread d1/d7/f83 [0,4194304] 0
2026-03-09T15:01:05.811 INFO:tasks.workunit.client.1.vm09.stdout:3/983: read d3/d3a/d2b/d31/d4a/fd2 [1035071,92673] 0
2026-03-09T15:01:05.823 INFO:tasks.workunit.client.1.vm09.stdout:6/930: write d6/db/d10/d4f/f11c [1003264,8453] 0
2026-03-09T15:01:05.829 INFO:tasks.workunit.client.1.vm09.stdout:8/967: dwrite df/d2d/d46/d33/f8e [0,4194304] 0
2026-03-09T15:01:05.831 INFO:tasks.workunit.client.1.vm09.stdout:9/922: dwrite d1/d7/d1e/d2b/d2e/d56/d6d/d104/dc0/ffb [0,4194304] 0
2026-03-09T15:01:05.834 INFO:tasks.workunit.client.1.vm09.stdout:3/984: creat d3/d3a/d54/f152 x:0 0 0
2026-03-09T15:01:05.839 INFO:tasks.workunit.client.1.vm09.stdout:2/961: dwrite df/d1f/d6d/fb3 [0,4194304] 0
2026-03-09T15:01:05.846 INFO:tasks.workunit.client.1.vm09.stdout:1/856: dwrite d8/d10/f12 [0,4194304] 0
2026-03-09T15:01:05.847 INFO:tasks.workunit.client.1.vm09.stdout:9/923: creat d1/d7/d1e/d2b/d8d/dd5/d100/f124 x:0 0 0
2026-03-09T15:01:05.847 INFO:tasks.workunit.client.1.vm09.stdout:6/931: creat d6/d12c/f12f x:0 0 0
2026-03-09T15:01:05.847 INFO:tasks.workunit.client.1.vm09.stdout:2/962: chown df/d1f/d47/d5d/dbc/ff2 2 1
2026-03-09T15:01:05.851 INFO:tasks.workunit.client.1.vm09.stdout:9/924: write d1/d7/d9f/fa4 [1265436,21490] 0
2026-03-09T15:01:05.853 INFO:tasks.workunit.client.1.vm09.stdout:7/950: dwrite d3/d1d/d94/d107/d7d/d123/d75/db4/fc5 [0,4194304] 0
2026-03-09T15:01:05.856 INFO:tasks.workunit.client.1.vm09.stdout:7/951: chown d3/d1d/d94/d107/d7d/d123/d75/f7e 248 1
2026-03-09T15:01:05.857 INFO:tasks.workunit.client.1.vm09.stdout:8/968: dwrite df/d38/f58 [0,4194304] 0
2026-03-09T15:01:05.863 INFO:tasks.workunit.client.1.vm09.stdout:6/932: unlink d6/db/fb3 0
2026-03-09T15:01:05.863 INFO:tasks.workunit.client.1.vm09.stdout:2/963: chown df/d1f/f7e 179 1
2026-03-09T15:01:05.868 INFO:tasks.workunit.client.1.vm09.stdout:9/925: dread d1/d7/d1e/d2b/d2e/d56/feb [0,4194304] 0
2026-03-09T15:01:05.869 INFO:tasks.workunit.client.1.vm09.stdout:1/857: getdents d8/d10/d24 0
2026-03-09T15:01:05.870 INFO:tasks.workunit.client.1.vm09.stdout:8/969: dread df/d2d/d46/d33/ddc/f9f [0,4194304] 0
2026-03-09T15:01:05.870 INFO:tasks.workunit.client.1.vm09.stdout:2/964: mkdir df/d1f/d6d/dd1/d138 0
2026-03-09T15:01:05.871 INFO:tasks.workunit.client.1.vm09.stdout:1/858: chown d8/d10/l11 216615 1
2026-03-09T15:01:05.877 INFO:tasks.workunit.client.1.vm09.stdout:8/970: creat df/d2d/df4/f11d x:0 0 0
2026-03-09T15:01:05.879 INFO:tasks.workunit.client.1.vm09.stdout:1/859: mknod d8/d50/d39/d95/d56/dc7/dd9/c10c 0
2026-03-09T15:01:05.879 INFO:tasks.workunit.client.1.vm09.stdout:8/971: rmdir df/d2d/d46/d33/ddc/dfe/dcc 39
2026-03-09T15:01:05.881 INFO:tasks.workunit.client.1.vm09.stdout:1/860: write d8/d10/d24/d48/d9b/d78/db4/f10b [105727,46665] 0
2026-03-09T15:01:05.884 INFO:tasks.workunit.client.1.vm09.stdout:8/972: creat df/d38/f11e x:0 0 0
2026-03-09T15:01:05.887 INFO:tasks.workunit.client.1.vm09.stdout:8/973: creat df/d5b/d65/d1d/f11f x:0 0 0
2026-03-09T15:01:05.890 INFO:tasks.workunit.client.1.vm09.stdout:1/861: dwrite d8/f57 [8388608,4194304] 0
2026-03-09T15:01:05.896 INFO:tasks.workunit.client.1.vm09.stdout:8/974: dread df/d24/d99/db6/d60/fc6 [0,4194304] 0
2026-03-09T15:01:05.896 INFO:tasks.workunit.client.1.vm09.stdout:7/952: sync
2026-03-09T15:01:05.899 INFO:tasks.workunit.client.1.vm09.stdout:8/975: stat df/d24/l3d 0
2026-03-09T15:01:05.900 INFO:tasks.workunit.client.1.vm09.stdout:7/953: fsync d3/db/d15/d5f/d6e/d83/fd1 0
2026-03-09T15:01:05.901 INFO:tasks.workunit.client.1.vm09.stdout:7/954: stat d3/db/d46/db2/df5/cba 0
2026-03-09T15:01:05.904 INFO:tasks.workunit.client.1.vm09.stdout:9/926: dread d1/d7/da6/db3/fce [0,4194304] 0
2026-03-09T15:01:05.911 INFO:tasks.workunit.client.1.vm09.stdout:8/976: creat df/d24/d99/db6/d60/f120 x:0 0 0
2026-03-09T15:01:05.912 INFO:tasks.workunit.client.1.vm09.stdout:8/977: truncate df/d24/d99/db6/fa6 771112 0
2026-03-09T15:01:05.912 INFO:tasks.workunit.client.1.vm09.stdout:8/978: stat df/d2d/dff 0
2026-03-09T15:01:05.950 INFO:tasks.workunit.client.1.vm09.stdout:3/985: write d3/d3a/d2b/d7b/db0/fc7 [632533,11998] 0
2026-03-09T15:01:05.951 INFO:tasks.workunit.client.1.vm09.stdout:6/933: truncate d6/d20/f52 874919 0
2026-03-09T15:01:05.952 INFO:tasks.workunit.client.1.vm09.stdout:3/986: chown d3/d5b/d79/d9d/f106 15112 1
2026-03-09T15:01:05.952 INFO:tasks.workunit.client.1.vm09.stdout:2/965: write df/d1f/d47/f127 [5971094,125688] 0
2026-03-09T15:01:05.953 INFO:tasks.workunit.client.1.vm09.stdout:6/934: fdatasync d6/d20/d38/d56/fd3 0
2026-03-09T15:01:05.957 INFO:tasks.workunit.client.1.vm09.stdout:3/987: dread d3/d3a/d2b/f92 [0,4194304] 0
2026-03-09T15:01:05.963 INFO:tasks.workunit.client.1.vm09.stdout:6/935: creat d6/d20/d44/d45/f130 x:0 0 0
2026-03-09T15:01:05.964 INFO:tasks.workunit.client.1.vm09.stdout:6/936: creat d6/db/f131 x:0 0 0
2026-03-09T15:01:05.964 INFO:tasks.workunit.client.1.vm09.stdout:6/937: mknod d6/d20/d2a/dc4/dba/c132 0
2026-03-09T15:01:05.964 INFO:tasks.workunit.client.1.vm09.stdout:6/938: write d6/df/d23/fae [4113481,120343] 0
2026-03-09T15:01:05.967 INFO:tasks.workunit.client.1.vm09.stdout:2/966: dread df/d1f/d6d/d8f/d5f/fe4 [0,4194304] 0
2026-03-09T15:01:05.968 INFO:tasks.workunit.client.1.vm09.stdout:3/988: sync
2026-03-09T15:01:05.970 INFO:tasks.workunit.client.1.vm09.stdout:3/989: readlink d3/d3a/d2b/d7b/l143 0
2026-03-09T15:01:05.973 INFO:tasks.workunit.client.1.vm09.stdout:6/939: dwrite d6/d20/d24/da5/daf/f12e [0,4194304] 0
2026-03-09T15:01:05.978 INFO:tasks.workunit.client.1.vm09.stdout:3/990: truncate d3/d3a/d2b/f32 681776 0
2026-03-09T15:01:05.980 INFO:tasks.workunit.client.1.vm09.stdout:3/991: sync
2026-03-09T15:01:05.981 INFO:tasks.workunit.client.1.vm09.stdout:3/992: write d3/d5b/d79/d9d/f106 [1265132,33118] 0
2026-03-09T15:01:05.985 INFO:tasks.workunit.client.1.vm09.stdout:6/940: mknod d6/d20/d38/d56/d65/d68/d86/dc0/ddb/c133 0
2026-03-09T15:01:05.998 INFO:tasks.workunit.client.1.vm09.stdout:3/993: symlink d3/d100/d48/l153 0
2026-03-09T15:01:05.998 INFO:tasks.workunit.client.1.vm09.stdout:3/994: readlink d3/le 0
2026-03-09T15:01:06.007 INFO:tasks.workunit.client.1.vm09.stdout:8/979: rmdir df/d38 39
2026-03-09T15:01:06.007 INFO:tasks.workunit.client.1.vm09.stdout:8/980: write df/d2d/d46/d33/ddc/dfe/f87 [1868177,27096] 0
2026-03-09T15:01:06.013 INFO:tasks.workunit.client.1.vm09.stdout:3/995: creat d3/d3a/d2b/d12f/f154 x:0 0 0
2026-03-09T15:01:06.014 INFO:tasks.workunit.client.1.vm09.stdout:1/862: truncate d8/fa 2456658 0
2026-03-09T15:01:06.014 INFO:tasks.workunit.client.1.vm09.stdout:9/927: write d1/d7/da6/db3/f11c [5046887,120402] 0
2026-03-09T15:01:06.014 INFO:tasks.workunit.client.1.vm09.stdout:6/941: link d6/df/l30 d6/d20/d2a/d3d/l134 0
2026-03-09T15:01:06.023 INFO:tasks.workunit.client.1.vm09.stdout:7/955: dwrite d3/f97 [0,4194304] 0
2026-03-09T15:01:06.024 INFO:tasks.workunit.client.1.vm09.stdout:2/967: dwrite df/d1f/d47/d5d/f96 [0,4194304] 0
2026-03-09T15:01:06.035 INFO:tasks.workunit.client.1.vm09.stdout:6/942: dwrite d6/db/d8b/f73 [0,4194304] 0
2026-03-09T15:01:06.044 INFO:tasks.workunit.client.1.vm09.stdout:8/981: truncate df/deb/fbc 1491307 0
2026-03-09T15:01:06.045 INFO:tasks.workunit.client.1.vm09.stdout:2/968: chown df/d2d/f8c 17 1
2026-03-09T15:01:06.046 INFO:tasks.workunit.client.1.vm09.stdout:6/943: truncate d6/d20/d44/d45/f4c 3213212 0
2026-03-09T15:01:06.048 INFO:tasks.workunit.client.1.vm09.stdout:2/969: truncate df/d1f/d47/d84/db7/dc3/fe0 1256740 0
2026-03-09T15:01:06.062 INFO:tasks.workunit.client.1.vm09.stdout:7/956: mknod d3/db/d15/d5f/c125 0
2026-03-09T15:01:06.065 INFO:tasks.workunit.client.1.vm09.stdout:8/982: rename df/d2d/d42/f96 to df/d24/d99/f121 0
2026-03-09T15:01:06.065 INFO:tasks.workunit.client.1.vm09.stdout:6/944: mknod d6/d20/d38/d56/d65/d68/c135 0
2026-03-09T15:01:06.068 INFO:tasks.workunit.client.1.vm09.stdout:2/970: creat df/d58/d74/def/f139 x:0 0 0
2026-03-09T15:01:06.069 INFO:tasks.workunit.client.1.vm09.stdout:9/928: link d1/d7/d1e/d2b/d40/l70 d1/d58/da8/l125 0
2026-03-09T15:01:06.077 INFO:tasks.workunit.client.1.vm09.stdout:6/945: rename d6/df/f40 to d6/d20/d2a/d3b/d91/d11b/f136 0
2026-03-09T15:01:06.080 INFO:tasks.workunit.client.1.vm09.stdout:9/929: symlink d1/d58/l126 0
2026-03-09T15:01:06.085 INFO:tasks.workunit.client.1.vm09.stdout:2/971: dwrite df/d20/d2e/f12a [0,4194304] 0
2026-03-09T15:01:06.091 INFO:tasks.workunit.client.1.vm09.stdout:2/972: write df/d58/d74/fcd [164901,58168] 0
2026-03-09T15:01:06.097 INFO:tasks.workunit.client.1.vm09.stdout:2/973: dwrite df/d20/f24 [0,4194304] 0
2026-03-09T15:01:06.101 INFO:tasks.workunit.client.1.vm09.stdout:8/983: link df/d24/d99/l107 df/d24/d95/l122 0
2026-03-09T15:01:06.107 INFO:tasks.workunit.client.1.vm09.stdout:1/863: truncate d8/ff4 764691 0
2026-03-09T15:01:06.112 INFO:tasks.workunit.client.1.vm09.stdout:1/864: dwrite d8/d10/f69 [0,4194304] 0
2026-03-09T15:01:06.127 INFO:tasks.workunit.client.1.vm09.stdout:8/984: creat df/d2d/d4f/f123 x:0 0 0
2026-03-09T15:01:06.127 INFO:tasks.workunit.client.1.vm09.stdout:3/996: dwrite d3/d3a/d2b/f32 [0,4194304] 0
2026-03-09T15:01:06.136 INFO:tasks.workunit.client.1.vm09.stdout:7/957: write d3/d61/f102 [2295506,39391] 0
2026-03-09T15:01:06.137 INFO:tasks.workunit.client.1.vm09.stdout:2/974: dread df/f9b [0,4194304] 0
2026-03-09T15:01:06.137 INFO:tasks.workunit.client.1.vm09.stdout:7/958: chown d3/db/d15 221 1
2026-03-09T15:01:06.140 INFO:tasks.workunit.client.1.vm09.stdout:9/930: dread d1/d7/d1e/f2a [0,4194304] 0
2026-03-09T15:01:06.140 INFO:tasks.workunit.client.1.vm09.stdout:7/959: write d3/d1d/d94/d107/d7d/d123/d75/db4/f10a [340621,61842] 0
2026-03-09T15:01:06.150 INFO:tasks.workunit.client.1.vm09.stdout:7/960: dwrite d3/db/d15/d5f/f89 [0,4194304] 0
2026-03-09T15:01:06.189 INFO:tasks.workunit.client.1.vm09.stdout:7/961: creat d3/d1d/d94/d107/d7d/d123/d75/db4/f126 x:0 0 0
2026-03-09T15:01:06.189 INFO:tasks.workunit.client.1.vm09.stdout:2/975: dwrite df/f16 [0,4194304] 0
2026-03-09T15:01:06.210 INFO:tasks.workunit.client.1.vm09.stdout:1/865: link d8/d50/d39/d95/d56/dc7/dd9/lf2 d8/d10/d73/l10d 0
2026-03-09T15:01:06.214 INFO:tasks.workunit.client.1.vm09.stdout:8/985: dwrite df/d2d/d46/fa9 [0,4194304] 0
2026-03-09T15:01:06.225 INFO:tasks.workunit.client.1.vm09.stdout:7/962: rename d3/d1d/d94/d107/d7d/dc6/le1 to d3/l127 0
2026-03-09T15:01:06.227 INFO:tasks.workunit.client.1.vm09.stdout:8/986: write df/d2d/f57 [7661908,79786] 0
2026-03-09T15:01:06.231 INFO:tasks.workunit.client.1.vm09.stdout:1/866: rename d8/d50/d39/d95/d56/fde to d8/d90/f10e 0
2026-03-09T15:01:06.235 INFO:tasks.workunit.client.1.vm09.stdout:1/867: dread d8/d10/d24/d45/fee [0,4194304] 0
2026-03-09T15:01:06.239 INFO:tasks.workunit.client.1.vm09.stdout:1/868: creat d8/d50/d39/d95/f10f x:0 0 0
2026-03-09T15:01:06.240 INFO:tasks.workunit.client.1.vm09.stdout:6/946: write d6/d20/f52 [1328419,31500] 0
2026-03-09T15:01:06.240 INFO:tasks.workunit.client.1.vm09.stdout:9/931: write d1/d7/d1e/d2b/d2e/d56/d6d/d104/fcf [594766,101526] 0
2026-03-09T15:01:06.245 INFO:tasks.workunit.client.1.vm09.stdout:3/997: dwrite d3/d3a/d2b/d31/f45 [0,4194304] 0
2026-03-09T15:01:06.245 INFO:tasks.workunit.client.1.vm09.stdout:8/987: dread df/d2d/d46/d33/ddc/fc3 [0,4194304] 0
2026-03-09T15:01:06.254 INFO:tasks.workunit.client.1.vm09.stdout:1/869: write d8/d50/d5b/f6f [1675241,75687] 0
2026-03-09T15:01:06.256 INFO:tasks.workunit.client.1.vm09.stdout:8/988: readlink df/d5b/d65/dae/l9d 0
2026-03-09T15:01:06.257 INFO:tasks.workunit.client.1.vm09.stdout:1/870: write d8/d10/d24/d48/d9b/d100/fd8 [6991993,114488] 0
2026-03-09T15:01:06.259 INFO:tasks.workunit.client.1.vm09.stdout:6/947: rmdir d6/db/d10/d4f/d110 0
2026-03-09T15:01:06.264 INFO:tasks.workunit.client.1.vm09.stdout:6/948: chown d6/d20/d38/d56/d65/l9a 28441436 1
2026-03-09T15:01:06.264 INFO:tasks.workunit.client.1.vm09.stdout:2/976: write df/d1f/d6d/d8f/d5f/f63 [1933186,89039] 0
2026-03-09T15:01:06.264 INFO:tasks.workunit.client.1.vm09.stdout:6/949: readlink d6/d20/d2a/d3d/d46/lfc 0
2026-03-09T15:01:06.264 INFO:tasks.workunit.client.1.vm09.stdout:6/950: write d6/d20/d24/f67 [4453210,129399] 0
2026-03-09T15:01:06.264 INFO:tasks.workunit.client.1.vm09.stdout:2/977: rmdir df/d93/da3/dcf 39
2026-03-09T15:01:06.268 INFO:tasks.workunit.client.1.vm09.stdout:2/978: mknod df/d20/c13a 0
2026-03-09T15:01:06.268 INFO:tasks.workunit.client.1.vm09.stdout:2/979: readlink df/d20/l75 0
2026-03-09T15:01:06.269 INFO:tasks.workunit.client.1.vm09.stdout:8/989: link df/d24/d99/f121 df/d2d/f124 0
2026-03-09T15:01:06.269 INFO:tasks.workunit.client.1.vm09.stdout:2/980: rmdir df/d1f/d47/d84/db7/dc3/dc4 39
2026-03-09T15:01:06.274 INFO:tasks.workunit.client.1.vm09.stdout:6/951: dread d6/d20/f27 [0,4194304] 0
2026-03-09T15:01:06.276 INFO:tasks.workunit.client.1.vm09.stdout:2/981: dread df/d20/f9d [0,4194304] 0
2026-03-09T15:01:06.285 INFO:tasks.workunit.client.1.vm09.stdout:2/982: mknod df/d1f/d47/d71/d124/c13b 0
2026-03-09T15:01:06.290 INFO:tasks.workunit.client.1.vm09.stdout:2/983: rename df/d93/d12f to df/d1f/d47/d84/db7/dc3/dc4/d13c 0
2026-03-09T15:01:06.290 INFO:tasks.workunit.client.1.vm09.stdout:7/963: write d3/d1d/d94/fa4 [992349,54576] 0
2026-03-09T15:01:06.292 INFO:tasks.workunit.client.1.vm09.stdout:7/964: fdatasync d3/f26 0
2026-03-09T15:01:06.293 INFO:tasks.workunit.client.1.vm09.stdout:7/965: write d3/d1d/d94/d107/d7d/d123/fd3 [1195654,61539] 0
2026-03-09T15:01:06.294 INFO:tasks.workunit.client.1.vm09.stdout:7/966: readlink d3/d1d/d65/da3/de4/l10f 0
2026-03-09T15:01:06.295 INFO:tasks.workunit.client.1.vm09.stdout:2/984: mkdir df/d1f/d47/d84/db7/dc3/dc4/d13c/d13d 0
2026-03-09T15:01:06.296 INFO:tasks.workunit.client.1.vm09.stdout:9/932: write d1/d7/d1e/d2b/f30 [2668653,33224] 0
2026-03-09T15:01:06.300 INFO:tasks.workunit.client.1.vm09.stdout:2/985: read - df/d1f/fe5 zero size
2026-03-09T15:01:06.301 INFO:tasks.workunit.client.1.vm09.stdout:9/933: stat d1/d4f/fc3 0
2026-03-09T15:01:06.301 INFO:tasks.workunit.client.1.vm09.stdout:9/934: readlink d1/l82 0
2026-03-09T15:01:06.306 INFO:tasks.workunit.client.1.vm09.stdout:3/998: dwrite d3/d74/fb4 [0,4194304] 0
2026-03-09T15:01:06.308 INFO:tasks.workunit.client.1.vm09.stdout:9/935: symlink d1/d4f/d52/l127 0
2026-03-09T15:01:06.310 INFO:tasks.workunit.client.1.vm09.stdout:9/936: truncate d1/d7/d1e/d2b/f32 6069694 0
2026-03-09T15:01:06.315 INFO:tasks.workunit.client.1.vm09.stdout:8/990: dwrite df/d38/f52 [0,4194304] 0
2026-03-09T15:01:06.315 INFO:tasks.workunit.client.1.vm09.stdout:2/986: mkdir df/d1f/d47/d84/db7/dc3/da7/d13e 0
2026-03-09T15:01:06.316 INFO:tasks.workunit.client.1.vm09.stdout:6/952: write d6/df/d23/d89/fe1 [289086,59728] 0
2026-03-09T15:01:06.321 INFO:tasks.workunit.client.1.vm09.stdout:2/987: dread - df/d1f/d6d/d8f/d5f/f72 zero size
2026-03-09T15:01:06.326 INFO:tasks.workunit.client.1.vm09.stdout:7/967: dwrite d3/f5 [0,4194304] 0
2026-03-09T15:01:06.331 INFO:tasks.workunit.client.1.vm09.stdout:8/991: creat df/d2d/d46/d33/f125 x:0 0 0
2026-03-09T15:01:06.331 INFO:tasks.workunit.client.1.vm09.stdout:6/953: mkdir d6/d20/d24/da5/daf/d12d/d137 0
2026-03-09T15:01:06.331 INFO:tasks.workunit.client.1.vm09.stdout:1/871: dwrite d8/ff4 [0,4194304] 0
2026-03-09T15:01:06.331 INFO:tasks.workunit.client.1.vm09.stdout:7/968: write d3/d1d/d94/d107/d7d/d123/fd3 [1412413,128051] 0
2026-03-09T15:01:06.332 INFO:tasks.workunit.client.1.vm09.stdout:1/872: chown d8/d10/d24/cdf 6272280 1
2026-03-09T15:01:06.357 INFO:tasks.workunit.client.1.vm09.stdout:2/988: rename df/d1f/d47/d84/db7/dc3/dd9/lfe to df/d2d/l13f 0
2026-03-09T15:01:06.357 INFO:tasks.workunit.client.1.vm09.stdout:7/969: chown d3/db/d15/d5f/d6e/d10d/c115 4187 1
2026-03-09T15:01:06.357 INFO:tasks.workunit.client.1.vm09.stdout:2/989: write df/d20/d29/fc0 [7432152,101452] 0
2026-03-09T15:01:06.359 INFO:tasks.workunit.client.1.vm09.stdout:6/954: link d6/df/d23/ca8 d6/d20/d44/d45/c138 0
2026-03-09T15:01:06.361 INFO:tasks.workunit.client.1.vm09.stdout:1/873: creat d8/d10/d24/d48/d9b/f110 x:0 0 0
2026-03-09T15:01:06.362 INFO:tasks.workunit.client.1.vm09.stdout:6/955: creat d6/d20/d38/d56/d65/d68/d6f/db6/f139 x:0 0 0
2026-03-09T15:01:06.363 INFO:tasks.workunit.client.1.vm09.stdout:2/990: dwrite df/d20/d29/f51 [0,4194304] 0
2026-03-09T15:01:06.363 INFO:tasks.workunit.client.1.vm09.stdout:7/970: getdents d3/d1d/d65 0
2026-03-09T15:01:06.364 INFO:tasks.workunit.client.1.vm09.stdout:1/874: truncate d8/d10/f2f 1481656 0
2026-03-09T15:01:06.365 INFO:tasks.workunit.client.1.vm09.stdout:6/956: mkdir d6/d20/d24/da5/daf/d12d/d137/d13a 0
2026-03-09T15:01:06.365 INFO:tasks.workunit.client.1.vm09.stdout:1/875: chown d8/ff4 93587821 1
2026-03-09T15:01:06.367 INFO:tasks.workunit.client.1.vm09.stdout:1/876: write d8/d10/d24/d48/d9b/d78/f107 [707938,36179] 0
2026-03-09T15:01:06.375 INFO:tasks.workunit.client.1.vm09.stdout:1/877: dread d8/d10/d24/d48/fd4 [0,4194304] 0
2026-03-09T15:01:06.380 INFO:tasks.workunit.client.1.vm09.stdout:2/991: symlink df/d1f/d47/d5d/d90/d107/l140 0
2026-03-09T15:01:06.384 INFO:tasks.workunit.client.1.vm09.stdout:2/992: chown df/d1f/d47 4465 1
2026-03-09T15:01:06.384 INFO:tasks.workunit.client.1.vm09.stdout:7/971: mkdir d3/d1d/d65/da3/d11d/d128 0
2026-03-09T15:01:06.384 INFO:tasks.workunit.client.1.vm09.stdout:7/972: readlink d3/d3d/d9b/l9e 0
2026-03-09T15:01:06.384 INFO:tasks.workunit.client.1.vm09.stdout:6/957: rename d6/d20/d38/d4e/d55/dd2 to d6/d20/d38/d56/d65/d68/d86/d103/d102/d13b 0
2026-03-09T15:01:06.385 INFO:tasks.workunit.client.1.vm09.stdout:6/958: readlink d6/d20/d24/l51 0
2026-03-09T15:01:06.385 INFO:tasks.workunit.client.1.vm09.stdout:1/878: chown d8/d10/f13 34604 1
2026-03-09T15:01:06.385 INFO:tasks.workunit.client.1.vm09.stdout:7/973: rmdir d3/d3d/d9b/da9 39
2026-03-09T15:01:06.385 INFO:tasks.workunit.client.1.vm09.stdout:6/959: write d6/d20/d38/d56/d65/d68/f99 [225332,76160] 0
2026-03-09T15:01:06.391 INFO:tasks.workunit.client.1.vm09.stdout:1/879: rmdir d8/d10/d24/d45/d5f 39
2026-03-09T15:01:06.393 INFO:tasks.workunit.client.1.vm09.stdout:2/993: dread df/d1f/d47/d5d/f6c [0,4194304] 0
2026-03-09T15:01:06.393 INFO:tasks.workunit.client.1.vm09.stdout:2/994: chown df/d58/d67/f11d 7762 1
2026-03-09T15:01:06.394 INFO:tasks.workunit.client.1.vm09.stdout:2/995: chown df/d93/fb9 3 1
2026-03-09T15:01:06.395 INFO:tasks.workunit.client.1.vm09.stdout:9/937: truncate d1/d7/da6/db3/f11c 1881017 0
2026-03-09T15:01:06.397 INFO:tasks.workunit.client.1.vm09.stdout:3/999: dwrite d3/d3a/d2b/d31/d4a/d62/f8 [0,4194304] 0
2026-03-09T15:01:06.397 INFO:tasks.workunit.client.1.vm09.stdout:9/938: readlink d1/d7/d1e/d2b/d40/ld3 0
2026-03-09T15:01:06.397 INFO:tasks.workunit.client.1.vm09.stdout:2/996: truncate df/d1f/f7e 2333093 0
2026-03-09T15:01:06.398 INFO:tasks.workunit.client.1.vm09.stdout:2/997: dread - df/d58/fc2 zero size
2026-03-09T15:01:06.400 INFO:tasks.workunit.client.1.vm09.stdout:7/974: link d3/d1d/d65/fbe d3/d61/f129 0
2026-03-09T15:01:06.400 INFO:tasks.workunit.client.1.vm09.stdout:6/960: link d6/d20/d38/d56/d65/d68/d86/f92 d6/d20/d2a/d3d/d46/f13c 0
2026-03-09T15:01:06.408 INFO:tasks.workunit.client.1.vm09.stdout:9/939: mkdir d1/d7/d1e/d2b/d2e/d128 0
2026-03-09T15:01:06.409 INFO:tasks.workunit.client.1.vm09.stdout:6/961: symlink d6/d20/d2a/dc4/dba/l13d 0
2026-03-09T15:01:06.410 INFO:tasks.workunit.client.1.vm09.stdout:2/998: mknod df/d1f/d47/d84/c141 0
2026-03-09T15:01:06.414 INFO:tasks.workunit.client.1.vm09.stdout:8/992: dwrite df/d24/d95/da4/f108 [0,4194304] 0
2026-03-09T15:01:06.422 INFO:tasks.workunit.client.1.vm09.stdout:2/999: chown df/d1f/l4d 1526420333 1
2026-03-09T15:01:06.424 INFO:tasks.workunit.client.1.vm09.stdout:6/962: rename d6/d20/d38/d56/d65/d68/d86/f92 to d6/db/d10/d4f/f13e 0
2026-03-09T15:01:06.426 INFO:tasks.workunit.client.1.vm09.stdout:8/993: chown df/d5b/d65/d1d/l5d 2482720 1
2026-03-09T15:01:06.426 INFO:tasks.workunit.client.1.vm09.stdout:9/940: creat d1/d7/d9f/f129 x:0 0 0
2026-03-09T15:01:06.431 INFO:tasks.workunit.client.1.vm09.stdout:7/975: dread d3/db/f42 [0,4194304] 0
2026-03-09T15:01:06.435 INFO:tasks.workunit.client.1.vm09.stdout:6/963: mkdir d6/d20/d2a/dde/d13f 0
2026-03-09T15:01:06.435 INFO:tasks.workunit.client.1.vm09.stdout:9/941: dread d1/d7/d1e/d2b/d2e/d56/d6d/d104/fcf [0,4194304] 0
2026-03-09T15:01:06.435 INFO:tasks.workunit.client.1.vm09.stdout:8/994: creat df/d24/d99/db6/f126 x:0 0 0
2026-03-09T15:01:06.449 INFO:tasks.workunit.client.1.vm09.stdout:9/942: creat d1/d4f/d52/f12a x:0 0 0
2026-03-09T15:01:06.468 INFO:tasks.workunit.client.1.vm09.stdout:7/976: creat d3/d1d/d94/d107/d7d/d123/d75/dcb/f12a x:0 0 0
2026-03-09T15:01:06.472 INFO:tasks.workunit.client.1.vm09.stdout:1/880: dwrite d8/d10/d24/d45/d5f/fc5 [0,4194304] 0
2026-03-09T15:01:06.473 INFO:tasks.workunit.client.1.vm09.stdout:1/881: chown d8/d50/d39/d95/d72/d64/d109 0 1
2026-03-09T15:01:06.481 INFO:tasks.workunit.client.1.vm09.stdout:7/977: rename d3/db/d46/db2/f114 to d3/db/d46/db2/f12b 0
2026-03-09T15:01:06.481 INFO:tasks.workunit.client.1.vm09.stdout:7/978: readlink d3/db/d46/db2/df5/l39 0
2026-03-09T15:01:06.482 INFO:tasks.workunit.client.1.vm09.stdout:8/995: chown df/d2d/d46/d33/ddc/dfe/dcc/ld9 40617546 1
2026-03-09T15:01:06.483 INFO:tasks.workunit.client.1.vm09.stdout:8/996: read - df/d2d/d90/f10d zero size
2026-03-09T15:01:06.492 INFO:tasks.workunit.client.1.vm09.stdout:9/943: symlink d1/d7/d1e/d2b/l12b 0
2026-03-09T15:01:06.494 INFO:tasks.workunit.client.1.vm09.stdout:6/964: write d6/df/fea [118940,106785] 0
2026-03-09T15:01:06.497 INFO:tasks.workunit.client.1.vm09.stdout:7/979: symlink d3/db/d46/db2/l12c 0
2026-03-09T15:01:06.502 INFO:tasks.workunit.client.1.vm09.stdout:8/997: rename df/d24/d99/db6/l6b to df/d38/d64/l127 0
2026-03-09T15:01:06.504 INFO:tasks.workunit.client.1.vm09.stdout:8/998: dread df/d2d/d46/d33/ddc/fc3 [0,4194304] 0
2026-03-09T15:01:06.512 INFO:tasks.workunit.client.1.vm09.stdout:1/882: dwrite d8/d10/d24/f8f [0,4194304] 0
2026-03-09T15:01:06.518 INFO:tasks.workunit.client.1.vm09.stdout:9/944: dread d1/d7/d1e/f46 [0,4194304] 0
2026-03-09T15:01:06.523 INFO:tasks.workunit.client.1.vm09.stdout:7/980: getdents d3/d1d/d65 0
2026-03-09T15:01:06.525 INFO:tasks.workunit.client.1.vm09.stdout:6/965: rename d6/db/f42 to d6/df/f140 0
2026-03-09T15:01:06.527 INFO:tasks.workunit.client.1.vm09.stdout:8/999: creat df/d2d/d46/d33/ddc/dfe/f128 x:0 0 0
2026-03-09T15:01:06.531 INFO:tasks.workunit.client.1.vm09.stdout:7/981: rename d3/d1d/d65/fff to d3/d1d/d94/d107/d7d/dd5/f12d 0
2026-03-09T15:01:06.533 INFO:tasks.workunit.client.1.vm09.stdout:1/883: getdents d8/d10/d24/d48/d9b/d78/db4 0
2026-03-09T15:01:06.533 INFO:tasks.workunit.client.1.vm09.stdout:7/982: truncate d3/db/d46/db2/f106 768588 0
2026-03-09T15:01:06.534 INFO:tasks.workunit.client.1.vm09.stdout:7/983: write d3/d61/f90 [615040,67270] 0
2026-03-09T15:01:06.535 INFO:tasks.workunit.client.1.vm09.stdout:9/945: creat d1/d7/d1e/d2b/f12c x:0 0 0
2026-03-09T15:01:06.540 INFO:tasks.workunit.client.1.vm09.stdout:6/966: creat d6/d20/f141 x:0 0 0
2026-03-09T15:01:06.544 INFO:tasks.workunit.client.1.vm09.stdout:7/984: fdatasync d3/db/d46/f5b 0
2026-03-09T15:01:06.548 INFO:tasks.workunit.client.1.vm09.stdout:9/946: rename d1/d7/f67 to d1/d7/d1e/d2b/d40/f12d 0
2026-03-09T15:01:06.549 INFO:tasks.workunit.client.1.vm09.stdout:9/947: write d1/d4f/fe0 [836622,68933] 0
2026-03-09T15:01:06.553 INFO:tasks.workunit.client.1.vm09.stdout:6/967: symlink d6/d20/d38/d56/d65/d68/d6f/l142 0
2026-03-09T15:01:06.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:06 vm05.local ceph-mon[50611]: pgmap v159: 65 pgs: 65 active+clean; 1.9 GiB data, 6.3 GiB used, 114 GiB / 120 GiB avail; 63 MiB/s rd, 142 MiB/s wr, 399 op/s
2026-03-09T15:01:06.554 INFO:tasks.workunit.client.1.vm09.stdout:1/884: creat d8/d10/d24/d48/d9b/dfa/f111 x:0 0 0
2026-03-09T15:01:06.557 INFO:tasks.workunit.client.1.vm09.stdout:7/985: dread d3/db/d46/f5b [0,4194304] 0
2026-03-09T15:01:06.564 INFO:tasks.workunit.client.1.vm09.stdout:6/968: fdatasync d6/d20/d24/d7e/fe0 0
2026-03-09T15:01:06.565 INFO:tasks.workunit.client.1.vm09.stdout:6/969: chown d6/d20/d24 2345 1
2026-03-09T15:01:06.568 INFO:tasks.workunit.client.1.vm09.stdout:7/986: unlink d3/d1d/d94/d107/d7d/c98 0
2026-03-09T15:01:06.569 INFO:tasks.workunit.client.1.vm09.stdout:6/970: dwrite d6/df/d23/f78 [0,4194304] 0
2026-03-09T15:01:06.599 INFO:tasks.workunit.client.1.vm09.stdout:1/885: creat d8/d10/d24/d48/d9b/d78/f112 x:0 0 0
2026-03-09T15:01:06.601 INFO:tasks.workunit.client.1.vm09.stdout:9/948: write d1/d7/d1e/f5a [764725,108244] 0
2026-03-09T15:01:06.602 INFO:tasks.workunit.client.1.vm09.stdout:7/987: mknod d3/d1d/d94/d116/c12e 0
2026-03-09T15:01:06.605 INFO:tasks.workunit.client.1.vm09.stdout:6/971: symlink d6/db/d10/l143 0
2026-03-09T15:01:06.606 INFO:tasks.workunit.client.1.vm09.stdout:1/886: mkdir d8/d50/d39/d95/d113 0
2026-03-09T15:01:06.607 INFO:tasks.workunit.client.1.vm09.stdout:9/949: creat d1/d58/f12e x:0 0 0
2026-03-09T15:01:06.607 INFO:tasks.workunit.client.1.vm09.stdout:1/887: stat d8/d10/d24/d48/d9b/d78/d8b/cdb 0
2026-03-09T15:01:06.609 INFO:tasks.workunit.client.1.vm09.stdout:7/988: rename d3/db/d15/l4a to d3/db/d15/l12f 0
2026-03-09T15:01:06.609 INFO:tasks.workunit.client.1.vm09.stdout:6/972: mknod d6/d20/d44/d8f/c144 0
2026-03-09T15:01:06.609 INFO:tasks.workunit.client.1.vm09.stdout:1/888: rmdir d8/d10 39
2026-03-09T15:01:06.611 INFO:tasks.workunit.client.1.vm09.stdout:1/889: mkdir d8/d50/d5b/d114 0
2026-03-09T15:01:06.612 INFO:tasks.workunit.client.1.vm09.stdout:7/989: creat d3/d1d/d94/d107/d7d/dd5/f130 x:0 0 0
2026-03-09T15:01:06.613 INFO:tasks.workunit.client.1.vm09.stdout:7/990: fsync d3/d61/f11a 0
2026-03-09T15:01:06.615
INFO:tasks.workunit.client.1.vm09.stdout:9/950: dwrite d1/d58/f12e [0,4194304] 0 2026-03-09T15:01:06.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:06 vm09.local ceph-mon[59673]: pgmap v159: 65 pgs: 65 active+clean; 1.9 GiB data, 6.3 GiB used, 114 GiB / 120 GiB avail; 63 MiB/s rd, 142 MiB/s wr, 399 op/s 2026-03-09T15:01:06.622 INFO:tasks.workunit.client.1.vm09.stdout:6/973: rename d6/df/d23/d89 to d6/db/d145 0 2026-03-09T15:01:06.623 INFO:tasks.workunit.client.1.vm09.stdout:1/890: stat d8/d10/d73/l26 0 2026-03-09T15:01:06.624 INFO:tasks.workunit.client.1.vm09.stdout:7/991: mknod d3/db/d15/d5f/d6e/d10d/c131 0 2026-03-09T15:01:06.624 INFO:tasks.workunit.client.1.vm09.stdout:1/891: readlink d8/d10/d24/d48/d9b/le1 0 2026-03-09T15:01:06.624 INFO:tasks.workunit.client.1.vm09.stdout:9/951: creat d1/d7/d1e/d2b/d2e/d56/d6d/d120/f12f x:0 0 0 2026-03-09T15:01:06.624 INFO:tasks.workunit.client.1.vm09.stdout:6/974: fdatasync d6/d20/d24/d7e/fbc 0 2026-03-09T15:01:06.625 INFO:tasks.workunit.client.1.vm09.stdout:9/952: chown d1/d4f/d52/ffa 1117 1 2026-03-09T15:01:06.626 INFO:tasks.workunit.client.1.vm09.stdout:7/992: truncate d3/f8 4697995 0 2026-03-09T15:01:06.627 INFO:tasks.workunit.client.1.vm09.stdout:6/975: write d6/d20/d38/d56/d65/d68/d6f/f85 [547557,56048] 0 2026-03-09T15:01:06.627 INFO:tasks.workunit.client.1.vm09.stdout:1/892: rename d8/d10/d24/d45/d5f/fc5 to d8/dfc/f115 0 2026-03-09T15:01:06.628 INFO:tasks.workunit.client.1.vm09.stdout:1/893: chown d8/d10/l4f 170 1 2026-03-09T15:01:06.631 INFO:tasks.workunit.client.1.vm09.stdout:1/894: read d8/d10/d24/d48/d9b/d68/fc4 [295227,28437] 0 2026-03-09T15:01:06.634 INFO:tasks.workunit.client.1.vm09.stdout:9/953: chown d1/d7/d1e/d2b/d2e/d56/d6d/d104/cac 31486 1 2026-03-09T15:01:06.636 INFO:tasks.workunit.client.1.vm09.stdout:7/993: truncate d3/d1d/d94/d107/db7/dd4/fe0 523682 0 2026-03-09T15:01:06.638 INFO:tasks.workunit.client.1.vm09.stdout:1/895: rename d8/d10/c4b to d8/d10/d24/d48/d9b/d78/db4/d108/c116 0 
2026-03-09T15:01:06.642 INFO:tasks.workunit.client.1.vm09.stdout:9/954: dwrite d1/d7/d1e/d2b/d40/f57 [0,4194304] 0 2026-03-09T15:01:06.644 INFO:tasks.workunit.client.1.vm09.stdout:7/994: read d3/db/d15/d5f/f36 [22535,38988] 0 2026-03-09T15:01:06.653 INFO:tasks.workunit.client.1.vm09.stdout:7/995: mknod d3/db/d46/dc9/c132 0 2026-03-09T15:01:06.653 INFO:tasks.workunit.client.1.vm09.stdout:9/955: creat d1/d7/d1e/d2b/d8d/dc8/f130 x:0 0 0 2026-03-09T15:01:06.661 INFO:tasks.workunit.client.1.vm09.stdout:9/956: symlink d1/d58/da8/l131 0 2026-03-09T15:01:06.664 INFO:tasks.workunit.client.1.vm09.stdout:9/957: write d1/d7/d1e/f46 [755174,116499] 0 2026-03-09T15:01:06.671 INFO:tasks.workunit.client.1.vm09.stdout:6/976: write d6/d20/d38/d56/d65/d68/d6f/fc3 [468011,119786] 0 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:9/958: symlink d1/l132 0 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:9/959: write d1/d4f/d52/ffd [870245,124321] 0 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:6/977: dread d6/d20/d38/d56/d65/f100 [0,4194304] 0 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:6/978: stat d6/d20/d38/d56/d65/d68/lcf 0 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:6/979: stat d6/l9 0 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:6/980: rename d6/df/d23/f29 to d6/d20/d38/d56/d65/d68/d86/d103/d102/f146 0 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:6/981: mkdir d6/d20/d24/d147 0 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:6/982: read - d6/db/f131 zero size 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:6/983: mknod d6/d20/d24/d7e/c148 0 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:6/984: truncate d6/f39 487095 0 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:6/985: dread - d6/d20/d2a/d3b/d91/f10a zero size 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:6/986: 
read - d6/d20/d44/d8f/f122 zero size 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:6/987: fsync d6/d20/d38/d4e/d55/f77 0 2026-03-09T15:01:06.692 INFO:tasks.workunit.client.1.vm09.stdout:6/988: fsync d6/d20/d38/d4e/d55/f77 0 2026-03-09T15:01:06.696 INFO:tasks.workunit.client.1.vm09.stdout:1/896: sync 2026-03-09T15:01:06.696 INFO:tasks.workunit.client.1.vm09.stdout:7/996: sync 2026-03-09T15:01:06.698 INFO:tasks.workunit.client.1.vm09.stdout:1/897: write d8/d50/fd3 [180740,104814] 0 2026-03-09T15:01:06.700 INFO:tasks.workunit.client.1.vm09.stdout:1/898: chown d8/d50/d5b 1971054 1 2026-03-09T15:01:06.704 INFO:tasks.workunit.client.1.vm09.stdout:7/997: dwrite d3/db/d46/db2/f106 [0,4194304] 0 2026-03-09T15:01:06.710 INFO:tasks.workunit.client.1.vm09.stdout:1/899: mknod d8/d10/d73/c117 0 2026-03-09T15:01:06.714 INFO:tasks.workunit.client.1.vm09.stdout:7/998: read d3/db/d15/d5f/d6e/fdb [445037,86765] 0 2026-03-09T15:01:06.718 INFO:tasks.workunit.client.1.vm09.stdout:9/960: truncate d1/d7/d1e/d2b/d2e/d56/d6d/d104/dc0/ffb 3247132 0 2026-03-09T15:01:06.718 INFO:tasks.workunit.client.1.vm09.stdout:9/961: readlink d1/d7/d1e/d2b/d2e/l4c 0 2026-03-09T15:01:06.719 INFO:tasks.workunit.client.1.vm09.stdout:1/900: mkdir d8/d50/d39/d95/d72/d64/d118 0 2026-03-09T15:01:06.720 INFO:tasks.workunit.client.1.vm09.stdout:7/999: mkdir d3/db/d15/d133 0 2026-03-09T15:01:06.722 INFO:tasks.workunit.client.1.vm09.stdout:6/989: write d6/f17 [1300048,7340] 0 2026-03-09T15:01:06.724 INFO:tasks.workunit.client.1.vm09.stdout:9/962: mkdir d1/d7/d1e/d2b/d2e/d56/d133 0 2026-03-09T15:01:06.724 INFO:tasks.workunit.client.1.vm09.stdout:1/901: fsync d8/d10/d73/f37 0 2026-03-09T15:01:06.725 INFO:tasks.workunit.client.1.vm09.stdout:9/963: mknod d1/d4f/d52/c134 0 2026-03-09T15:01:06.726 INFO:tasks.workunit.client.1.vm09.stdout:6/990: mknod d6/d20/d38/d56/c149 0 2026-03-09T15:01:06.728 INFO:tasks.workunit.client.1.vm09.stdout:6/991: dread - d6/d20/d24/f10b zero size 2026-03-09T15:01:06.728 
INFO:tasks.workunit.client.1.vm09.stdout:1/902: fdatasync d8/d10/d24/d48/d9b/d78/d8b/fce 0 2026-03-09T15:01:06.728 INFO:tasks.workunit.client.1.vm09.stdout:9/964: symlink d1/d7/d1e/d2b/d8d/dd5/d100/l135 0 2026-03-09T15:01:06.734 INFO:tasks.workunit.client.1.vm09.stdout:1/903: unlink d8/d10/d73/f37 0 2026-03-09T15:01:06.740 INFO:tasks.workunit.client.1.vm09.stdout:1/904: sync 2026-03-09T15:01:06.741 INFO:tasks.workunit.client.1.vm09.stdout:6/992: dread d6/df/f140 [0,4194304] 0 2026-03-09T15:01:06.742 INFO:tasks.workunit.client.1.vm09.stdout:9/965: mknod d1/c136 0 2026-03-09T15:01:06.747 INFO:tasks.workunit.client.1.vm09.stdout:1/905: write d8/d90/f10e [201426,127170] 0 2026-03-09T15:01:06.747 INFO:tasks.workunit.client.1.vm09.stdout:1/906: chown d8/d10/d24/d48/d9b/d78/d8b/cdb 0 1 2026-03-09T15:01:06.747 INFO:tasks.workunit.client.1.vm09.stdout:6/993: unlink d6/df/d23/de3/cf7 0 2026-03-09T15:01:06.752 INFO:tasks.workunit.client.1.vm09.stdout:9/966: symlink d1/d58/da8/l137 0 2026-03-09T15:01:06.754 INFO:tasks.workunit.client.1.vm09.stdout:6/994: dread d6/d20/d38/d56/d65/d68/d6f/fa7 [0,4194304] 0 2026-03-09T15:01:06.756 INFO:tasks.workunit.client.1.vm09.stdout:1/907: dwrite d8/d10/d24/d45/ddc/fe9 [0,4194304] 0 2026-03-09T15:01:06.757 INFO:tasks.workunit.client.1.vm09.stdout:1/908: chown d8/d10/d73/f41 14413851 1 2026-03-09T15:01:06.764 INFO:tasks.workunit.client.1.vm09.stdout:6/995: rename d6/d20/d38/d56/d65/d68/d6f/fc3 to d6/d20/d38/d4e/d55/f14a 0 2026-03-09T15:01:06.764 INFO:tasks.workunit.client.1.vm09.stdout:9/967: link d1/la0 d1/d7/d1e/d2b/d8d/dc8/l138 0 2026-03-09T15:01:06.767 INFO:tasks.workunit.client.1.vm09.stdout:1/909: rename d8/d10/d24/d48/d9b/d78/d8b/cf3 to d8/d50/d39/d95/d56/c119 0 2026-03-09T15:01:06.768 INFO:tasks.workunit.client.1.vm09.stdout:6/996: creat d6/d20/d38/d56/d65/f14b x:0 0 0 2026-03-09T15:01:06.769 INFO:tasks.workunit.client.1.vm09.stdout:6/997: truncate d6/d20/d38/fa3 4911113 0 2026-03-09T15:01:06.770 
INFO:tasks.workunit.client.1.vm09.stdout:9/968: symlink d1/d7/d1e/d2b/d8d/dd5/d100/l139 0 2026-03-09T15:01:06.773 INFO:tasks.workunit.client.1.vm09.stdout:1/910: creat d8/d50/d5b/d114/f11a x:0 0 0 2026-03-09T15:01:06.779 INFO:tasks.workunit.client.1.vm09.stdout:1/911: mkdir d8/d10/d24/d48/d9b/dfa/d11b 0 2026-03-09T15:01:06.782 INFO:tasks.workunit.client.1.vm09.stdout:9/969: creat d1/d7/d1e/f13a x:0 0 0 2026-03-09T15:01:06.782 INFO:tasks.workunit.client.1.vm09.stdout:6/998: getdents d6/db/d8b 0 2026-03-09T15:01:06.784 INFO:tasks.workunit.client.1.vm09.stdout:9/970: getdents d1/d7/d1e/d2b/d2e/d56/d133 0 2026-03-09T15:01:06.786 INFO:tasks.workunit.client.1.vm09.stdout:1/912: dread d8/d10/d24/d48/d9b/fbd [0,4194304] 0 2026-03-09T15:01:06.789 INFO:tasks.workunit.client.1.vm09.stdout:6/999: rename d6/d20/d38/d56/d65/d68/d86/d103 to d6/db/d145/d14c 0 2026-03-09T15:01:06.790 INFO:tasks.workunit.client.1.vm09.stdout:1/913: dread - d8/d10/d73/feb zero size 2026-03-09T15:01:06.793 INFO:tasks.workunit.client.1.vm09.stdout:9/971: rename d1/d7/d1e/d2b/d2e/l61 to d1/d4f/d52/l13b 0 2026-03-09T15:01:06.798 INFO:tasks.workunit.client.1.vm09.stdout:9/972: mkdir d1/d7/d1e/d2b/d8d/dd5/d13c 0 2026-03-09T15:01:06.800 INFO:tasks.workunit.client.1.vm09.stdout:9/973: creat d1/d7/da6/db3/f13d x:0 0 0 2026-03-09T15:01:06.802 INFO:tasks.workunit.client.1.vm09.stdout:1/914: rename d8/d10/d24/d45/dcf to d8/d10/d11c 0 2026-03-09T15:01:06.803 INFO:tasks.workunit.client.1.vm09.stdout:1/915: fdatasync d8/d50/d39/d95/f10f 0 2026-03-09T15:01:06.807 INFO:tasks.workunit.client.1.vm09.stdout:1/916: rmdir d8/d10/d24/d48/d9b/d78/db4/d108 39 2026-03-09T15:01:06.808 INFO:tasks.workunit.client.1.vm09.stdout:1/917: chown d8/d10/d73/c117 1951 1 2026-03-09T15:01:06.809 INFO:tasks.workunit.client.1.vm09.stdout:9/974: link d1/d7/fbb d1/d7/d1e/d2b/d2e/d56/d6d/d104/dc0/f13e 0 2026-03-09T15:01:06.813 INFO:tasks.workunit.client.1.vm09.stdout:1/918: dwrite d8/d10/d24/d48/d9b/dfa/f111 [0,4194304] 0 
2026-03-09T15:01:06.824 INFO:tasks.workunit.client.1.vm09.stdout:9/975: rmdir d1/d7/d1e/d2b/d8d/dd5/d13c 0 2026-03-09T15:01:06.825 INFO:tasks.workunit.client.1.vm09.stdout:1/919: creat d8/d50/d39/d95/d72/d64/d109/f11d x:0 0 0 2026-03-09T15:01:06.828 INFO:tasks.workunit.client.1.vm09.stdout:1/920: creat d8/d10/d24/d83/f11e x:0 0 0 2026-03-09T15:01:06.842 INFO:tasks.workunit.client.1.vm09.stdout:9/976: sync 2026-03-09T15:01:06.846 INFO:tasks.workunit.client.1.vm09.stdout:1/921: write d8/d10/d24/d48/fc2 [1010131,36177] 0 2026-03-09T15:01:06.847 INFO:tasks.workunit.client.1.vm09.stdout:9/977: dwrite d1/d7/d1e/f9e [0,4194304] 0 2026-03-09T15:01:06.849 INFO:tasks.workunit.client.1.vm09.stdout:1/922: creat d8/d50/df6/f11f x:0 0 0 2026-03-09T15:01:06.850 INFO:tasks.workunit.client.1.vm09.stdout:9/978: creat d1/d7/d1e/d2b/d40/f13f x:0 0 0 2026-03-09T15:01:06.855 INFO:tasks.workunit.client.1.vm09.stdout:1/923: rename d8/d10/d73 to d8/d50/d39/d95/d72/d64/d120 0 2026-03-09T15:01:06.864 INFO:tasks.workunit.client.1.vm09.stdout:1/924: rmdir d8/d10/d24/d48/d9b/d68 39 2026-03-09T15:01:06.869 INFO:tasks.workunit.client.1.vm09.stdout:9/979: mkdir d1/d7/d1e/d140 0 2026-03-09T15:01:06.869 INFO:tasks.workunit.client.1.vm09.stdout:9/980: fsync d1/d7/d1e/f22 0 2026-03-09T15:01:06.870 INFO:tasks.workunit.client.1.vm09.stdout:1/925: mkdir d8/d10/d24/d48/d9b/d78/db4/d108/d121 0 2026-03-09T15:01:06.874 INFO:tasks.workunit.client.1.vm09.stdout:1/926: chown d8/d10/d24/d48/d9b/d78/d8b/cb1 3246765 1 2026-03-09T15:01:06.874 INFO:tasks.workunit.client.1.vm09.stdout:9/981: mknod d1/d58/c141 0 2026-03-09T15:01:06.874 INFO:tasks.workunit.client.1.vm09.stdout:9/982: chown d1/d7/d1e/d2b/d8d/dc8/l10d 28 1 2026-03-09T15:01:06.880 INFO:tasks.workunit.client.1.vm09.stdout:9/983: symlink d1/d7/d1e/d2b/d8d/dc8/l142 0 2026-03-09T15:01:06.886 INFO:tasks.workunit.client.1.vm09.stdout:1/927: truncate d8/d10/d24/d48/f9a 3795198 0 2026-03-09T15:01:06.897 INFO:tasks.workunit.client.1.vm09.stdout:9/984: write 
d1/d7/da6/fd1 [1322575,120167] 0 2026-03-09T15:01:06.898 INFO:tasks.workunit.client.1.vm09.stdout:9/985: chown d1/d7/d1e/f34 21046 1 2026-03-09T15:01:06.904 INFO:tasks.workunit.client.1.vm09.stdout:1/928: dwrite d8/d10/d24/d48/f76 [4194304,4194304] 0 2026-03-09T15:01:06.918 INFO:tasks.workunit.client.1.vm09.stdout:1/929: rename d8/d10/d24/d48/d9b/dfa/f111 to d8/d10/d24/d48/d9b/df5/dfb/f122 0 2026-03-09T15:01:06.919 INFO:tasks.workunit.client.1.vm09.stdout:9/986: dwrite d1/d7/d1e/d2b/d2e/d56/d5e/fd7 [0,4194304] 0 2026-03-09T15:01:06.922 INFO:tasks.workunit.client.1.vm09.stdout:1/930: mknod d8/d50/d39/d95/c123 0 2026-03-09T15:01:06.922 INFO:tasks.workunit.client.1.vm09.stdout:9/987: dread - d1/d7/d1e/d2b/d2e/d56/d6d/d120/f112 zero size 2026-03-09T15:01:06.927 INFO:tasks.workunit.client.1.vm09.stdout:9/988: stat d1/d6e/la2 0 2026-03-09T15:01:06.928 INFO:tasks.workunit.client.1.vm09.stdout:1/931: mknod d8/d10/d24/d83/c124 0 2026-03-09T15:01:06.932 INFO:tasks.workunit.client.1.vm09.stdout:1/932: fdatasync d8/d10/d24/d48/d9b/d78/f7c 0 2026-03-09T15:01:06.933 INFO:tasks.workunit.client.1.vm09.stdout:9/989: getdents d1/d7/d1e/d2b/d2e/d56/d6d/d104/dc0 0 2026-03-09T15:01:06.934 INFO:tasks.workunit.client.1.vm09.stdout:9/990: stat d1/d7/d1e/d2b/d2e/d56/d6d/cb2 0 2026-03-09T15:01:06.936 INFO:tasks.workunit.client.1.vm09.stdout:1/933: creat d8/d50/d39/d95/d56/dc7/dd9/f125 x:0 0 0 2026-03-09T15:01:06.937 INFO:tasks.workunit.client.1.vm09.stdout:9/991: fsync d1/d58/f75 0 2026-03-09T15:01:06.941 INFO:tasks.workunit.client.1.vm09.stdout:9/992: link d1/d7/d1e/d2b/d2e/d56/d5e/c110 d1/d7/d9f/c143 0 2026-03-09T15:01:06.959 INFO:tasks.workunit.client.1.vm09.stdout:1/934: write d8/d10/f2f [706460,25962] 0 2026-03-09T15:01:06.959 INFO:tasks.workunit.client.1.vm09.stdout:9/993: write d1/d7/d1e/d2b/d2e/d56/f103 [4517006,5340] 0 2026-03-09T15:01:06.960 INFO:tasks.workunit.client.1.vm09.stdout:9/994: dread - d1/d7/d1e/d2b/d2e/ff2 zero size 2026-03-09T15:01:06.962 
INFO:tasks.workunit.client.1.vm09.stdout:1/935: creat d8/d50/d39/d95/d72/d64/f126 x:0 0 0 2026-03-09T15:01:06.962 INFO:tasks.workunit.client.1.vm09.stdout:9/995: symlink d1/d6e/l144 0 2026-03-09T15:01:06.963 INFO:tasks.workunit.client.1.vm09.stdout:1/936: chown d8/d50/d39/d95/d72/d64/d120/fef 12279659 1 2026-03-09T15:01:06.966 INFO:tasks.workunit.client.1.vm09.stdout:1/937: creat d8/d10/d24/d48/d9b/d100/f127 x:0 0 0 2026-03-09T15:01:06.970 INFO:tasks.workunit.client.1.vm09.stdout:9/996: dwrite d1/d7/d1e/d2b/d2e/d56/d6d/ff0 [0,4194304] 0 2026-03-09T15:01:06.986 INFO:tasks.workunit.client.1.vm09.stdout:9/997: creat d1/d7/da6/f145 x:0 0 0 2026-03-09T15:01:06.987 INFO:tasks.workunit.client.1.vm09.stdout:9/998: rmdir d1/d7/d1e/d2b/d119 39 2026-03-09T15:01:06.990 INFO:tasks.workunit.client.1.vm09.stdout:1/938: rmdir d8/d50/d39/d95/d72/d64/d118 0 2026-03-09T15:01:06.990 INFO:tasks.workunit.client.1.vm09.stdout:9/999: dread - d1/d4f/fc3 zero size 2026-03-09T15:01:06.991 INFO:tasks.workunit.client.1.vm09.stdout:1/939: chown d8/d10/d24/d48/d9b/d78/db4/d108/d121 11538 1 2026-03-09T15:01:06.995 INFO:tasks.workunit.client.1.vm09.stdout:1/940: rename d8/d10/d24/d45/d5f/c6d to d8/d50/c128 0 2026-03-09T15:01:06.995 INFO:tasks.workunit.client.1.vm09.stdout:1/941: write d8/d10/d24/d48/f7f [4130,50755] 0 2026-03-09T15:01:07.010 INFO:tasks.workunit.client.1.vm09.stdout:1/942: write d8/d10/d24/d48/d9b/d78/fa3 [945861,107793] 0 2026-03-09T15:01:07.011 INFO:tasks.workunit.client.1.vm09.stdout:1/943: stat d8 0 2026-03-09T15:01:07.012 INFO:tasks.workunit.client.1.vm09.stdout:1/944: write d8/d50/d5b/f6f [2209946,25537] 0 2026-03-09T15:01:07.014 INFO:tasks.workunit.client.1.vm09.stdout:1/945: read d8/d10/f69 [899303,112485] 0 2026-03-09T15:01:07.017 INFO:tasks.workunit.client.1.vm09.stdout:1/946: rmdir d8/d50/df6 39 2026-03-09T15:01:07.021 INFO:tasks.workunit.client.1.vm09.stdout:1/947: fsync d8/d10/d24/d45/f92 0 2026-03-09T15:01:07.024 INFO:tasks.workunit.client.1.vm09.stdout:1/948: mknod 
d8/d10/d24/d45/ddc/c129 0 2026-03-09T15:01:07.025 INFO:tasks.workunit.client.1.vm09.stdout:1/949: chown d8/d10/d24/d48/d9b/d78/d8b/lbb 707242766 1 2026-03-09T15:01:07.028 INFO:tasks.workunit.client.1.vm09.stdout:1/950: dread d8/d50/fd3 [0,4194304] 0 2026-03-09T15:01:07.030 INFO:tasks.workunit.client.1.vm09.stdout:1/951: symlink d8/d10/d24/d83/l12a 0 2026-03-09T15:01:07.037 INFO:tasks.workunit.client.1.vm09.stdout:1/952: creat d8/d10/d11c/f12b x:0 0 0 2026-03-09T15:01:07.038 INFO:tasks.workunit.client.1.vm09.stdout:1/953: dread d8/d10/d24/d48/d9b/fbd [0,4194304] 0 2026-03-09T15:01:07.039 INFO:tasks.workunit.client.1.vm09.stdout:1/954: dread - d8/d50/d39/d95/d72/d64/f126 zero size 2026-03-09T15:01:07.049 INFO:tasks.workunit.client.1.vm09.stdout:1/955: sync 2026-03-09T15:01:07.050 INFO:tasks.workunit.client.1.vm09.stdout:1/956: dread - d8/d10/d24/d48/d9b/d78/f112 zero size 2026-03-09T15:01:07.051 INFO:tasks.workunit.client.1.vm09.stdout:1/957: readlink d8/l3f 0 2026-03-09T15:01:07.052 INFO:tasks.workunit.client.1.vm09.stdout:1/958: write d8/ff9 [677321,29581] 0 2026-03-09T15:01:07.057 INFO:tasks.workunit.client.1.vm09.stdout:1/959: rename d8/d10/d24/d45/l70 to d8/d50/d39/d95/d56/l12c 0 2026-03-09T15:01:07.058 INFO:tasks.workunit.client.1.vm09.stdout:1/960: read d8/d50/d39/d95/d72/d64/d120/f41 [4745192,88855] 0 2026-03-09T15:01:07.072 INFO:tasks.workunit.client.1.vm09.stdout:1/961: truncate d8/d50/fd3 426940 0 2026-03-09T15:01:07.084 INFO:tasks.workunit.client.1.vm09.stdout:1/962: dwrite d8/d10/d24/f2a [0,4194304] 0 2026-03-09T15:01:07.087 INFO:tasks.workunit.client.1.vm09.stdout:1/963: sync 2026-03-09T15:01:07.114 INFO:tasks.workunit.client.1.vm09.stdout:1/964: link d8/f57 d8/d10/d24/d48/d9b/d78/db4/f12d 0 2026-03-09T15:01:07.118 INFO:tasks.workunit.client.1.vm09.stdout:1/965: symlink d8/d10/d24/d48/d9b/df5/l12e 0 2026-03-09T15:01:07.119 INFO:tasks.workunit.client.1.vm09.stdout:1/966: stat d8/d10/d24/d48/d9b/d78/db4/f12d 0 2026-03-09T15:01:07.125 
INFO:tasks.workunit.client.1.vm09.stdout:1/967: symlink d8/d50/l12f 0 2026-03-09T15:01:07.127 INFO:tasks.workunit.client.1.vm09.stdout:1/968: mkdir d8/d10/d24/d45/d5f/d130 0 2026-03-09T15:01:07.132 INFO:tasks.workunit.client.1.vm09.stdout:1/969: dwrite d8/f7e [0,4194304] 0 2026-03-09T15:01:07.147 INFO:tasks.workunit.client.1.vm09.stdout:1/970: dread d8/d50/d5b/f6f [0,4194304] 0 2026-03-09T15:01:07.183 INFO:tasks.workunit.client.1.vm09.stdout:1/971: link d8/d10/l11 d8/d50/d39/d95/l131 0 2026-03-09T15:01:07.189 INFO:tasks.workunit.client.1.vm09.stdout:1/972: truncate d8/d10/d24/d48/f9a 3768728 0 2026-03-09T15:01:07.190 INFO:tasks.workunit.client.1.vm09.stdout:1/973: chown d8/d50/d39/d95/f10f 22 1 2026-03-09T15:01:07.192 INFO:tasks.workunit.client.1.vm09.stdout:1/974: mkdir d8/d10/d24/d48/d9b/df5/dfb/d132 0 2026-03-09T15:01:07.230 INFO:tasks.workunit.client.1.vm09.stdout:1/975: dwrite d8/d10/d24/d48/d9b/d78/db4/f12d [0,4194304] 0 2026-03-09T15:01:07.304 INFO:tasks.workunit.client.1.vm09.stdout:1/976: write d8/d50/d39/d95/f6e [2477941,90037] 0 2026-03-09T15:01:07.306 INFO:tasks.workunit.client.1.vm09.stdout:1/977: mknod d8/d50/d39/d95/d72/c133 0 2026-03-09T15:01:07.310 INFO:tasks.workunit.client.1.vm09.stdout:1/978: stat d8/d50/d5b/caf 0 2026-03-09T15:01:07.321 INFO:tasks.workunit.client.1.vm09.stdout:1/979: symlink d8/d50/d39/d95/d56/l134 0 2026-03-09T15:01:07.327 INFO:tasks.workunit.client.1.vm09.stdout:1/980: link d8/d10/d24/d48/d9b/d100/fd8 d8/d10/d11c/f135 0 2026-03-09T15:01:07.331 INFO:tasks.workunit.client.1.vm09.stdout:1/981: truncate d8/d50/d39/d95/d56/f9f 3627873 0 2026-03-09T15:01:07.358 INFO:tasks.workunit.client.1.vm09.stdout:1/982: dwrite d8/d10/d11c/fa0 [4194304,4194304] 0 2026-03-09T15:01:07.359 INFO:tasks.workunit.client.1.vm09.stdout:1/983: write d8/d10/d24/d48/d9b/d78/f112 [190919,83479] 0 2026-03-09T15:01:07.372 INFO:tasks.workunit.client.1.vm09.stdout:1/984: creat d8/d50/d39/d95/d56/dc7/f136 x:0 0 0 2026-03-09T15:01:07.389 
INFO:tasks.workunit.client.1.vm09.stdout:1/985: dwrite d8/d10/f69 [0,4194304] 0 2026-03-09T15:01:07.401 INFO:tasks.workunit.client.1.vm09.stdout:1/986: creat d8/d10/d24/f137 x:0 0 0 2026-03-09T15:01:07.437 INFO:tasks.workunit.client.1.vm09.stdout:1/987: write d8/d10/d24/d48/d9b/fbd [1034240,17170] 0 2026-03-09T15:01:07.440 INFO:tasks.workunit.client.1.vm09.stdout:1/988: chown d8/d10/d24/d45/d5f/ld7 154800 1 2026-03-09T15:01:07.440 INFO:tasks.workunit.client.1.vm09.stdout:1/989: fsync d8/d50/d39/d95/f6e 0 2026-03-09T15:01:07.443 INFO:tasks.workunit.client.1.vm09.stdout:1/990: mkdir d8/d10/d24/d48/d9b/dfa/d138 0 2026-03-09T15:01:07.452 INFO:tasks.workunit.client.1.vm09.stdout:1/991: getdents d8/d50/d39/d95/d56/dc7 0 2026-03-09T15:01:07.459 INFO:tasks.workunit.client.1.vm09.stdout:1/992: symlink d8/d10/d24/d83/l139 0 2026-03-09T15:01:07.467 INFO:tasks.workunit.client.1.vm09.stdout:1/993: mkdir d8/d50/d39/d95/d113/d13a 0 2026-03-09T15:01:07.473 INFO:tasks.workunit.client.1.vm09.stdout:1/994: dwrite d8/d10/d24/d48/d9b/d78/fa3 [0,4194304] 0 2026-03-09T15:01:07.474 INFO:tasks.workunit.client.1.vm09.stdout:1/995: stat d8/d10/d24/d48/d9b/d78/f7c 0 2026-03-09T15:01:07.476 INFO:tasks.workunit.client.1.vm09.stdout:1/996: chown d8/d10/d24/d48/la4 47779 1 2026-03-09T15:01:07.540 INFO:tasks.workunit.client.1.vm09.stdout:1/997: write d8/d10/d24/d48/d9b/d68/fc4 [176610,128491] 0 2026-03-09T15:01:07.540 INFO:tasks.workunit.client.1.vm09.stdout:1/998: dread - d8/d10/d24/d48/d9b/d78/d8b/fce zero size 2026-03-09T15:01:07.543 INFO:tasks.workunit.client.1.vm09.stdout:1/999: dread d8/d10/d24/d48/d9b/d78/fa3 [0,4194304] 0 2026-03-09T15:01:07.547 INFO:tasks.workunit.client.1.vm09.stderr:+ rm -rf -- ./tmp.eO2gMYMgiH 2026-03-09T15:01:07.582 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:07 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:01:07.616 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:07 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:01:08.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:08 vm09.local ceph-mon[59673]: pgmap v160: 65 pgs: 65 active+clean; 2.0 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail; 54 MiB/s rd, 127 MiB/s wr, 327 op/s 2026-03-09T15:01:08.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:08 vm05.local ceph-mon[50611]: pgmap v160: 65 pgs: 65 active+clean; 2.0 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail; 54 MiB/s rd, 127 MiB/s wr, 327 op/s 2026-03-09T15:01:09.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.140+0000 7fadaaa91700 1 -- 192.168.123.105:0/1507926380 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fada4072360 msgr2=0x7fada40770e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:09.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.140+0000 7fadaaa91700 1 --2- 192.168.123.105:0/1507926380 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fada4072360 0x7fada40770e0 secure :-1 s=READY pgs=304 cs=0 l=1 rev1=1 crypto rx=0x7fad9c00d3f0 tx=0x7fad9c00d700 comp rx=0 tx=0).stop 2026-03-09T15:01:09.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.141+0000 7fadaaa91700 1 -- 192.168.123.105:0/1507926380 shutdown_connections 2026-03-09T15:01:09.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.141+0000 7fadaaa91700 1 --2- 192.168.123.105:0/1507926380 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fada4072360 0x7fada40770e0 unknown :-1 s=CLOSED pgs=304 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.141+0000 7fadaaa91700 1 --2- 192.168.123.105:0/1507926380 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fada4071980 0x7fada4071d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.141+0000 7fadaaa91700 1 -- 192.168.123.105:0/1507926380 >> 192.168.123.105:0/1507926380 conn(0x7fada406d1a0 msgr2=0x7fada406f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:09.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.142+0000 7fadaaa91700 1 -- 192.168.123.105:0/1507926380 shutdown_connections 2026-03-09T15:01:09.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.142+0000 7fadaaa91700 1 -- 192.168.123.105:0/1507926380 wait complete. 2026-03-09T15:01:09.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.143+0000 7fadaaa91700 1 Processor -- start 2026-03-09T15:01:09.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.143+0000 7fadaaa91700 1 -- start start 2026-03-09T15:01:09.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.143+0000 7fadaaa91700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fada4071980 0x7fada41312c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:09.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.143+0000 7fadaaa91700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fada4072360 0x7fada4131800 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:09.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.143+0000 7fadaaa91700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fada4131e20 con 0x7fada4072360 2026-03-09T15:01:09.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.143+0000 7fadaaa91700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7fada407f460 con 0x7fada4071980 2026-03-09T15:01:09.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.144+0000 7fada882d700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fada4071980 0x7fada41312c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:09.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.144+0000 7fada882d700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fada4071980 0x7fada41312c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:38440/0 (socket says 192.168.123.105:38440) 2026-03-09T15:01:09.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.144+0000 7fada882d700 1 -- 192.168.123.105:0/3543856743 learned_addr learned my addr 192.168.123.105:0/3543856743 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:01:09.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.144+0000 7fada882d700 1 -- 192.168.123.105:0/3543856743 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fada4072360 msgr2=0x7fada4131800 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:09.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.144+0000 7fada882d700 1 --2- 192.168.123.105:0/3543856743 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fada4072360 0x7fada4131800 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.144+0000 7fada882d700 1 -- 192.168.123.105:0/3543856743 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fad9c007ed0 con 0x7fada4071980 2026-03-09T15:01:09.146 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.144+0000 7fada882d700 1 --2- 192.168.123.105:0/3543856743 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fada4071980 0x7fada41312c0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fad9400b700 tx=0x7fad9400bac0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:09.146 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.144+0000 7fada1ffb700 1 -- 192.168.123.105:0/3543856743 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fad94010840 con 0x7fada4071980 2026-03-09T15:01:09.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.145+0000 7fadaaa91700 1 -- 192.168.123.105:0/3543856743 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fada407f740 con 0x7fada4071980 2026-03-09T15:01:09.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.145+0000 7fadaaa91700 1 -- 192.168.123.105:0/3543856743 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fada407fc90 con 0x7fada4071980 2026-03-09T15:01:09.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.146+0000 7fada1ffb700 1 -- 192.168.123.105:0/3543856743 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fad94010e80 con 0x7fada4071980 2026-03-09T15:01:09.147 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.146+0000 7fada1ffb700 1 -- 192.168.123.105:0/3543856743 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fad9400d590 con 0x7fada4071980 2026-03-09T15:01:09.148 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.147+0000 7fada1ffb700 1 -- 192.168.123.105:0/3543856743 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fad9400d6f0 con 0x7fada4071980 
2026-03-09T15:01:09.149 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.148+0000 7fada1ffb700 1 --2- 192.168.123.105:0/3543856743 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fad8c06c6d0 0x7fad8c06eb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:09.149 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.148+0000 7fada3fff700 1 --2- 192.168.123.105:0/3543856743 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fad8c06c6d0 0x7fad8c06eb80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:09.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.149+0000 7fada1ffb700 1 -- 192.168.123.105:0/3543856743 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fad9408c450 con 0x7fada4071980 2026-03-09T15:01:09.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.149+0000 7fada3fff700 1 --2- 192.168.123.105:0/3543856743 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fad8c06c6d0 0x7fad8c06eb80 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7fad9c00db80 tx=0x7fad9c007480 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:09.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.149+0000 7fadaaa91700 1 -- 192.168.123.105:0/3543856743 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fad90005320 con 0x7fada4071980 2026-03-09T15:01:09.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.153+0000 7fada1ffb700 1 -- 192.168.123.105:0/3543856743 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fad94056b50 con 
0x7fada4071980 2026-03-09T15:01:09.339 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.338+0000 7fadaaa91700 1 -- 192.168.123.105:0/3543856743 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fad90000bf0 con 0x7fad8c06c6d0 2026-03-09T15:01:09.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.339+0000 7fada1ffb700 1 -- 192.168.123.105:0/3543856743 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+434 (secure 0 0 0) 0x7fad90000bf0 con 0x7fad8c06c6d0 2026-03-09T15:01:09.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.345+0000 7fad8b7fe700 1 -- 192.168.123.105:0/3543856743 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fad8c06c6d0 msgr2=0x7fad8c06eb80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:09.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.345+0000 7fad8b7fe700 1 --2- 192.168.123.105:0/3543856743 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fad8c06c6d0 0x7fad8c06eb80 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7fad9c00db80 tx=0x7fad9c007480 comp rx=0 tx=0).stop 2026-03-09T15:01:09.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.345+0000 7fad8b7fe700 1 -- 192.168.123.105:0/3543856743 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fada4071980 msgr2=0x7fada41312c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:09.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.345+0000 7fad8b7fe700 1 --2- 192.168.123.105:0/3543856743 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fada4071980 0x7fada41312c0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fad9400b700 tx=0x7fad9400bac0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.345+0000 
7fad8b7fe700 1 -- 192.168.123.105:0/3543856743 shutdown_connections 2026-03-09T15:01:09.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.345+0000 7fad8b7fe700 1 --2- 192.168.123.105:0/3543856743 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fad8c06c6d0 0x7fad8c06eb80 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.345+0000 7fad8b7fe700 1 --2- 192.168.123.105:0/3543856743 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fada4071980 0x7fada41312c0 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.345+0000 7fad8b7fe700 1 --2- 192.168.123.105:0/3543856743 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fada4072360 0x7fada4131800 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.345+0000 7fad8b7fe700 1 -- 192.168.123.105:0/3543856743 >> 192.168.123.105:0/3543856743 conn(0x7fada406d1a0 msgr2=0x7fada40766b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:09.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.345+0000 7fad8b7fe700 1 -- 192.168.123.105:0/3543856743 shutdown_connections 2026-03-09T15:01:09.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.345+0000 7fad8b7fe700 1 -- 192.168.123.105:0/3543856743 wait complete. 
2026-03-09T15:01:09.358 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:01:09.450 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:09 vm05.local ceph-mon[50611]: pgmap v161: 65 pgs: 65 active+clean; 2.0 GiB data, 6.8 GiB used, 113 GiB / 120 GiB avail; 48 MiB/s rd, 112 MiB/s wr, 308 op/s 2026-03-09T15:01:09.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.447+0000 7f68cdf9a700 1 -- 192.168.123.105:0/974191846 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f68c80722f0 msgr2=0x7f68c8077070 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:09.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.447+0000 7f68cdf9a700 1 --2- 192.168.123.105:0/974191846 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f68c80722f0 0x7f68c8077070 secure :-1 s=READY pgs=305 cs=0 l=1 rev1=1 crypto rx=0x7f68b80099c0 tx=0x7f68b8009cd0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.447+0000 7f68cdf9a700 1 -- 192.168.123.105:0/974191846 shutdown_connections 2026-03-09T15:01:09.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.447+0000 7f68cdf9a700 1 --2- 192.168.123.105:0/974191846 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f68c80722f0 0x7f68c8077070 unknown :-1 s=CLOSED pgs=305 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.447+0000 7f68cdf9a700 1 --2- 192.168.123.105:0/974191846 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f68c8071910 0x7f68c8071d20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.447+0000 7f68cdf9a700 1 -- 192.168.123.105:0/974191846 >> 192.168.123.105:0/974191846 conn(0x7f68c806d160 msgr2=0x7f68c806f5b0 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-09T15:01:09.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.451+0000 7f68cdf9a700 1 -- 192.168.123.105:0/974191846 shutdown_connections 2026-03-09T15:01:09.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.451+0000 7f68cdf9a700 1 -- 192.168.123.105:0/974191846 wait complete. 2026-03-09T15:01:09.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.451+0000 7f68cdf9a700 1 Processor -- start 2026-03-09T15:01:09.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.451+0000 7f68cdf9a700 1 -- start start 2026-03-09T15:01:09.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.452+0000 7f68cdf9a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f68c8071910 0x7f68c81b6010 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:09.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.453+0000 7f68cdf9a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f68c81b6550 0x7f68c807f4c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:09.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.453+0000 7f68cdf9a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f68c81b69c0 con 0x7f68c81b6550 2026-03-09T15:01:09.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.453+0000 7f68cdf9a700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f68c81b6b30 con 0x7f68c8071910 2026-03-09T15:01:09.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.453+0000 7f68c6ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f68c81b6550 0x7f68c807f4c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:09.455 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.453+0000 7f68c6ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f68c81b6550 0x7f68c807f4c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44620/0 (socket says 192.168.123.105:44620) 2026-03-09T15:01:09.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.453+0000 7f68c6ffd700 1 -- 192.168.123.105:0/880013526 learned_addr learned my addr 192.168.123.105:0/880013526 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:01:09.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.454+0000 7f68c77fe700 1 --2- 192.168.123.105:0/880013526 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f68c8071910 0x7f68c81b6010 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:09.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.455+0000 7f68c77fe700 1 -- 192.168.123.105:0/880013526 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f68c81b6550 msgr2=0x7f68c807f4c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:09.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.455+0000 7f68c77fe700 1 --2- 192.168.123.105:0/880013526 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f68c81b6550 0x7f68c807f4c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.455+0000 7f68c77fe700 1 -- 192.168.123.105:0/880013526 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f68b80096b0 con 0x7f68c8071910 2026-03-09T15:01:09.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.456+0000 
7f68c77fe700 1 --2- 192.168.123.105:0/880013526 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f68c8071910 0x7f68c81b6010 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f68c000bee0 tx=0x7f68c000bf10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:09.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.456+0000 7f68c4ff9700 1 -- 192.168.123.105:0/880013526 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f68c000cdb0 con 0x7f68c8071910 2026-03-09T15:01:09.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.456+0000 7f68cdf9a700 1 -- 192.168.123.105:0/880013526 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f68c807fa00 con 0x7f68c8071910 2026-03-09T15:01:09.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.456+0000 7f68cdf9a700 1 -- 192.168.123.105:0/880013526 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f68c807fef0 con 0x7f68c8071910 2026-03-09T15:01:09.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.457+0000 7f68cdf9a700 1 -- 192.168.123.105:0/880013526 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f68c81b0210 con 0x7f68c8071910 2026-03-09T15:01:09.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.457+0000 7f68c4ff9700 1 -- 192.168.123.105:0/880013526 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f68c0014920 con 0x7f68c8071910 2026-03-09T15:01:09.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.457+0000 7f68c4ff9700 1 -- 192.168.123.105:0/880013526 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f68c0012ac0 con 0x7f68c8071910 2026-03-09T15:01:09.459 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.458+0000 7f68c4ff9700 1 -- 192.168.123.105:0/880013526 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f68c0012c20 con 0x7f68c8071910 2026-03-09T15:01:09.460 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.459+0000 7f68c4ff9700 1 --2- 192.168.123.105:0/880013526 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f68b006c6d0 0x7f68b006eb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:09.460 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.459+0000 7f68c4ff9700 1 -- 192.168.123.105:0/880013526 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f68c008cb60 con 0x7f68c8071910 2026-03-09T15:01:09.462 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.461+0000 7f68c6ffd700 1 --2- 192.168.123.105:0/880013526 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f68b006c6d0 0x7f68b006eb80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:09.462 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.461+0000 7f68c4ff9700 1 -- 192.168.123.105:0/880013526 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f68c0057410 con 0x7f68c8071910 2026-03-09T15:01:09.466 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.465+0000 7f68c6ffd700 1 --2- 192.168.123.105:0/880013526 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f68b006c6d0 0x7f68b006eb80 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f68b8005fd0 tx=0x7f68b800c650 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:09.612 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.611+0000 7f68cdf9a700 1 -- 192.168.123.105:0/880013526 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f68c8061190 con 0x7f68b006c6d0 2026-03-09T15:01:09.615 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.614+0000 7f68c4ff9700 1 -- 192.168.123.105:0/880013526 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+434 (secure 0 0 0) 0x7f68c8061190 con 0x7f68b006c6d0 2026-03-09T15:01:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:09 vm09.local ceph-mon[59673]: pgmap v161: 65 pgs: 65 active+clean; 2.0 GiB data, 6.8 GiB used, 113 GiB / 120 GiB avail; 48 MiB/s rd, 112 MiB/s wr, 308 op/s 2026-03-09T15:01:09.618 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.617+0000 7f68ae7fc700 1 -- 192.168.123.105:0/880013526 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f68b006c6d0 msgr2=0x7f68b006eb80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:09.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.618+0000 7f68ae7fc700 1 --2- 192.168.123.105:0/880013526 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f68b006c6d0 0x7f68b006eb80 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f68b8005fd0 tx=0x7f68b800c650 comp rx=0 tx=0).stop 2026-03-09T15:01:09.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.618+0000 7f68ae7fc700 1 -- 192.168.123.105:0/880013526 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f68c8071910 msgr2=0x7f68c81b6010 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:09.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.618+0000 7f68ae7fc700 1 --2- 192.168.123.105:0/880013526 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f68c8071910 0x7f68c81b6010 secure :-1 
s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f68c000bee0 tx=0x7f68c000bf10 comp rx=0 tx=0).stop 2026-03-09T15:01:09.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.618+0000 7f68ae7fc700 1 -- 192.168.123.105:0/880013526 shutdown_connections 2026-03-09T15:01:09.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.618+0000 7f68ae7fc700 1 --2- 192.168.123.105:0/880013526 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f68b006c6d0 0x7f68b006eb80 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.618+0000 7f68ae7fc700 1 --2- 192.168.123.105:0/880013526 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f68c8071910 0x7f68c81b6010 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.618+0000 7f68ae7fc700 1 --2- 192.168.123.105:0/880013526 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f68c81b6550 0x7f68c807f4c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.618+0000 7f68ae7fc700 1 -- 192.168.123.105:0/880013526 >> 192.168.123.105:0/880013526 conn(0x7f68c806d160 msgr2=0x7f68c8076380 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:09.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.619+0000 7f68ae7fc700 1 -- 192.168.123.105:0/880013526 shutdown_connections 2026-03-09T15:01:09.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.619+0000 7f68ae7fc700 1 -- 192.168.123.105:0/880013526 wait complete. 
2026-03-09T15:01:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.745+0000 7fbe17f9d700 1 -- 192.168.123.105:0/2986956003 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe10072470 msgr2=0x7fbe1010beb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.745+0000 7fbe17f9d700 1 --2- 192.168.123.105:0/2986956003 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe10072470 0x7fbe1010beb0 secure :-1 s=READY pgs=306 cs=0 l=1 rev1=1 crypto rx=0x7fbe0800b3a0 tx=0x7fbe0800b6b0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.745+0000 7fbe17f9d700 1 -- 192.168.123.105:0/2986956003 shutdown_connections 2026-03-09T15:01:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.745+0000 7fbe17f9d700 1 --2- 192.168.123.105:0/2986956003 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe10072470 0x7fbe1010beb0 unknown :-1 s=CLOSED pgs=306 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.745+0000 7fbe17f9d700 1 --2- 192.168.123.105:0/2986956003 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbe10071a90 0x7fbe10071ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.745+0000 7fbe17f9d700 1 -- 192.168.123.105:0/2986956003 >> 192.168.123.105:0/2986956003 conn(0x7fbe1006d1a0 msgr2=0x7fbe1006f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.746+0000 7fbe17f9d700 1 -- 192.168.123.105:0/2986956003 shutdown_connections 2026-03-09T15:01:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.746+0000 7fbe17f9d700 1 -- 192.168.123.105:0/2986956003 
wait complete. 2026-03-09T15:01:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.746+0000 7fbe17f9d700 1 Processor -- start 2026-03-09T15:01:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.746+0000 7fbe17f9d700 1 -- start start 2026-03-09T15:01:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.746+0000 7fbe17f9d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe10071a90 0x7fbe10116af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.746+0000 7fbe17f9d700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbe10117030 0x7fbe101a14c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.746+0000 7fbe17f9d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbe10117560 con 0x7fbe10071a90 2026-03-09T15:01:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.746+0000 7fbe17f9d700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbe101176d0 con 0x7fbe10117030 2026-03-09T15:01:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.747+0000 7fbe15d39700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe10071a90 0x7fbe10116af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.747+0000 7fbe15538700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbe10117030 0x7fbe101a14c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T15:01:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.747+0000 7fbe15538700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbe10117030 0x7fbe101a14c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:38462/0 (socket says 192.168.123.105:38462) 2026-03-09T15:01:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.747+0000 7fbe15538700 1 -- 192.168.123.105:0/3163566427 learned_addr learned my addr 192.168.123.105:0/3163566427 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:01:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.747+0000 7fbe15538700 1 -- 192.168.123.105:0/3163566427 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe10071a90 msgr2=0x7fbe10116af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.747+0000 7fbe15538700 1 --2- 192.168.123.105:0/3163566427 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe10071a90 0x7fbe10116af0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.747+0000 7fbe15538700 1 -- 192.168.123.105:0/3163566427 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbe0800b050 con 0x7fbe10117030 2026-03-09T15:01:09.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.749+0000 7fbe15538700 1 --2- 192.168.123.105:0/3163566427 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbe10117030 0x7fbe101a14c0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7fbe08007b60 tx=0x7fbe0800bce0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:09.750 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.749+0000 7fbe06ffd700 1 -- 192.168.123.105:0/3163566427 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbe0800e050 con 0x7fbe10117030 2026-03-09T15:01:09.751 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.749+0000 7fbe17f9d700 1 -- 192.168.123.105:0/3163566427 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbe101a1a60 con 0x7fbe10117030 2026-03-09T15:01:09.751 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.749+0000 7fbe17f9d700 1 -- 192.168.123.105:0/3163566427 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbe101a1f50 con 0x7fbe10117030 2026-03-09T15:01:09.751 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.750+0000 7fbe06ffd700 1 -- 192.168.123.105:0/3163566427 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fbe08007ce0 con 0x7fbe10117030 2026-03-09T15:01:09.751 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.750+0000 7fbe06ffd700 1 -- 192.168.123.105:0/3163566427 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbe0801b720 con 0x7fbe10117030 2026-03-09T15:01:09.753 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.752+0000 7fbe06ffd700 1 -- 192.168.123.105:0/3163566427 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7fbe08019040 con 0x7fbe10117030 2026-03-09T15:01:09.754 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.752+0000 7fbe17f9d700 1 -- 192.168.123.105:0/3163566427 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbdf4005320 con 0x7fbe10117030 2026-03-09T15:01:09.754 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.752+0000 7fbe06ffd700 1 --2- 
192.168.123.105:0/3163566427 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbdfc06c680 0x7fbdfc06eb30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:09.754 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.753+0000 7fbe06ffd700 1 -- 192.168.123.105:0/3163566427 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fbe0808c960 con 0x7fbe10117030 2026-03-09T15:01:09.754 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.753+0000 7fbe15d39700 1 --2- 192.168.123.105:0/3163566427 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbdfc06c680 0x7fbdfc06eb30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:09.754 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.753+0000 7fbe15d39700 1 --2- 192.168.123.105:0/3163566427 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbdfc06c680 0x7fbdfc06eb30 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7fbe101ae840 tx=0x7fbe00009380 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:09.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.755+0000 7fbe06ffd700 1 -- 192.168.123.105:0/3163566427 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fbe080570e0 con 0x7fbe10117030 2026-03-09T15:01:09.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.955+0000 7fbe17f9d700 1 -- 192.168.123.105:0/3163566427 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fbdf4000bf0 con 0x7fbdfc06c680 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED 
AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (4m) 2m ago 5m 22.6M - 0.25.0 c8568f914cd2 35e160b8d1de 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (5m) 2m ago 5m 8061k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (4m) 2m ago 4m 8242k - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (5m) 2m ago 5m 7411k - 18.2.0 dc2bc1663786 1c577d7a0de0 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (4m) 2m ago 4m 7402k - 18.2.0 dc2bc1663786 9e4961442551 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (4m) 2m ago 4m 82.7M - 9.4.7 954c08fa6188 46e00e5e5b38 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (2m) 2m ago 2m 16.0M - 18.2.0 dc2bc1663786 ea3dca51957f 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (2m) 2m ago 2m 12.8M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (2m) 2m ago 2m 12.9M - 18.2.0 dc2bc1663786 6c77fb591d5a 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (2m) 2m ago 2m 15.7M - 18.2.0 dc2bc1663786 b5ad1c71089a 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:9283,8765,8443 running (5m) 2m ago 5m 499M - 18.2.0 dc2bc1663786 528c75e7c581 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (4m) 2m ago 4m 444M - 18.2.0 dc2bc1663786 b7db289ecc14 2026-03-09T15:01:09.965 
INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (5m) 2m ago 5m 49.2M 2048M 18.2.0 dc2bc1663786 c83e96b62251 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (4m) 2m ago 4m 45.0M 2048M 18.2.0 dc2bc1663786 7963792b5376 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (5m) 2m ago 5m 13.9M - 1.5.0 0da6a335fe13 925d94d1da6f 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (4m) 2m ago 4m 14.6M - 1.5.0 0da6a335fe13 e0b25e3a046e 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (4m) 2m ago 4m 45.7M 4096M 18.2.0 dc2bc1663786 50f3ca995318 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (3m) 2m ago 3m 46.9M 4096M 18.2.0 dc2bc1663786 23e35bdafe50 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (3m) 2m ago 3m 45.8M 4096M 18.2.0 dc2bc1663786 75097dc12979 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (3m) 2m ago 3m 44.4M 4096M 18.2.0 dc2bc1663786 e79644a0564f 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (3m) 2m ago 3m 43.6M 4096M 18.2.0 dc2bc1663786 4239752204df 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (3m) 2m ago 3m 42.5M 4096M 18.2.0 dc2bc1663786 85fde149396e 2026-03-09T15:01:09.965 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (4m) 2m ago 4m 38.9M - 2.43.0 a07b618ecd1d c36363ff6641 2026-03-09T15:01:09.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.962+0000 7fbe06ffd700 1 -- 192.168.123.105:0/3163566427 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3216 (secure 0 0 0) 0x7fbdf4000bf0 con 0x7fbdfc06c680 2026-03-09T15:01:09.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.965+0000 
7fbe04ff9700 1 -- 192.168.123.105:0/3163566427 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbdfc06c680 msgr2=0x7fbdfc06eb30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:09.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.965+0000 7fbe04ff9700 1 --2- 192.168.123.105:0/3163566427 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbdfc06c680 0x7fbdfc06eb30 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7fbe101ae840 tx=0x7fbe00009380 comp rx=0 tx=0).stop 2026-03-09T15:01:09.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.965+0000 7fbe04ff9700 1 -- 192.168.123.105:0/3163566427 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbe10117030 msgr2=0x7fbe101a14c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:09.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.965+0000 7fbe04ff9700 1 --2- 192.168.123.105:0/3163566427 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbe10117030 0x7fbe101a14c0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7fbe08007b60 tx=0x7fbe0800bce0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.965+0000 7fbe04ff9700 1 -- 192.168.123.105:0/3163566427 shutdown_connections 2026-03-09T15:01:09.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.965+0000 7fbe04ff9700 1 --2- 192.168.123.105:0/3163566427 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fbdfc06c680 0x7fbdfc06eb30 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.965+0000 7fbe04ff9700 1 --2- 192.168.123.105:0/3163566427 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbe10071a90 0x7fbe10116af0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:01:09.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.965+0000 7fbe04ff9700 1 --2- 192.168.123.105:0/3163566427 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbe10117030 0x7fbe101a14c0 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:09.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.965+0000 7fbe04ff9700 1 -- 192.168.123.105:0/3163566427 >> 192.168.123.105:0/3163566427 conn(0x7fbe1006d1a0 msgr2=0x7fbe1010b300 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:09.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.965+0000 7fbe04ff9700 1 -- 192.168.123.105:0/3163566427 shutdown_connections 2026-03-09T15:01:09.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:09.965+0000 7fbe04ff9700 1 -- 192.168.123.105:0/3163566427 wait complete. 2026-03-09T15:01:10.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 -- 192.168.123.105:0/1235076917 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a3c072360 msgr2=0x7f2a3c0770e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:10.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 --2- 192.168.123.105:0/1235076917 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a3c072360 0x7f2a3c0770e0 secure :-1 s=READY pgs=307 cs=0 l=1 rev1=1 crypto rx=0x7f2a3400cd40 tx=0x7f2a3400a320 comp rx=0 tx=0).stop 2026-03-09T15:01:10.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 -- 192.168.123.105:0/1235076917 shutdown_connections 2026-03-09T15:01:10.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 --2- 192.168.123.105:0/1235076917 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a3c072360 0x7f2a3c0770e0 unknown :-1 s=CLOSED pgs=307 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-09T15:01:10.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 --2- 192.168.123.105:0/1235076917 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2a3c071980 0x7f2a3c071d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:10.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 -- 192.168.123.105:0/1235076917 >> 192.168.123.105:0/1235076917 conn(0x7f2a3c06d1a0 msgr2=0x7f2a3c06f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:10.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 -- 192.168.123.105:0/1235076917 shutdown_connections 2026-03-09T15:01:10.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 -- 192.168.123.105:0/1235076917 wait complete. 2026-03-09T15:01:10.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 Processor -- start 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 -- start start 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a3c071980 0x7f2a3c0824d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2a3c082a10 0x7f2a3c082e80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2a3c1b2a90 con 0x7f2a3c071980 2026-03-09T15:01:10.070 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.068+0000 7f2a40d63700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2a3c1b2bd0 con 0x7f2a3c082a10 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.069+0000 7f2a39d9b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2a3c082a10 0x7f2a3c082e80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.069+0000 7f2a39d9b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2a3c082a10 0x7f2a3c082e80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:38472/0 (socket says 192.168.123.105:38472) 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.069+0000 7f2a39d9b700 1 -- 192.168.123.105:0/3159707545 learned_addr learned my addr 192.168.123.105:0/3159707545 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.069+0000 7f2a3a59c700 1 --2- 192.168.123.105:0/3159707545 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a3c071980 0x7f2a3c0824d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.069+0000 7f2a39d9b700 1 -- 192.168.123.105:0/3159707545 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a3c071980 msgr2=0x7f2a3c0824d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.069+0000 7f2a39d9b700 1 --2- 
192.168.123.105:0/3159707545 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a3c071980 0x7f2a3c0824d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.069+0000 7f2a39d9b700 1 -- 192.168.123.105:0/3159707545 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2a3400c9f0 con 0x7f2a3c082a10 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.069+0000 7f2a3a59c700 1 --2- 192.168.123.105:0/3159707545 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a3c071980 0x7f2a3c0824d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.069+0000 7f2a39d9b700 1 --2- 192.168.123.105:0/3159707545 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2a3c082a10 0x7f2a3c082e80 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f2a340046c0 tx=0x7f2a340047a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:10.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.070+0000 7f2a2b7fe700 1 -- 192.168.123.105:0/3159707545 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2a34009d70 con 0x7f2a3c082a10 2026-03-09T15:01:10.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.070+0000 7f2a40d63700 1 -- 192.168.123.105:0/3159707545 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2a3c1b2d10 con 0x7f2a3c082a10 2026-03-09T15:01:10.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.070+0000 7f2a40d63700 1 -- 192.168.123.105:0/3159707545 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- 
mon_subscribe({osdmap=0}) v3 -- 0x7f2a3c1b31b0 con 0x7f2a3c082a10 2026-03-09T15:01:10.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.070+0000 7f2a2b7fe700 1 -- 192.168.123.105:0/3159707545 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f2a3400ce90 con 0x7f2a3c082a10 2026-03-09T15:01:10.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.070+0000 7f2a2b7fe700 1 -- 192.168.123.105:0/3159707545 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2a3400c3a0 con 0x7f2a3c082a10 2026-03-09T15:01:10.072 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.071+0000 7f2a2b7fe700 1 -- 192.168.123.105:0/3159707545 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 0) 0x7f2a34004030 con 0x7f2a3c082a10 2026-03-09T15:01:10.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.072+0000 7f2a2b7fe700 1 --2- 192.168.123.105:0/3159707545 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f2a2406c6d0 0x7f2a2406eb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:10.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.072+0000 7f2a2b7fe700 1 -- 192.168.123.105:0/3159707545 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7f2a3408ced0 con 0x7f2a3c082a10 2026-03-09T15:01:10.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.073+0000 7f2a40d63700 1 -- 192.168.123.105:0/3159707545 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2a1c005320 con 0x7f2a3c082a10 2026-03-09T15:01:10.074 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.073+0000 7f2a3a59c700 1 --2- 192.168.123.105:0/3159707545 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f2a2406c6d0 0x7f2a2406eb80 
unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:10.075 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.074+0000 7f2a3a59c700 1 --2- 192.168.123.105:0/3159707545 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f2a2406c6d0 0x7f2a2406eb80 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f2a2c005950 tx=0x7f2a2c005f50 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:10.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.076+0000 7f2a2b7fe700 1 -- 192.168.123.105:0/3159707545 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f2a3405b0e0 con 0x7f2a3c082a10 2026-03-09T15:01:10.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.270+0000 7f2a40d63700 1 -- 192.168.123.105:0/3159707545 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f2a1c006200 con 0x7f2a3c082a10 2026-03-09T15:01:10.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.270+0000 7f2a2b7fe700 1 -- 192.168.123.105:0/3159707545 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+558 (secure 0 0 0) 0x7f2a34007e60 con 0x7f2a3c082a10 2026-03-09T15:01:10.271 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:01:10.271 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:01:10.271 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 
(5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 6 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 14 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:01:10.272 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:01:10.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.273+0000 7f2a297fa700 1 -- 192.168.123.105:0/3159707545 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f2a2406c6d0 msgr2=0x7f2a2406eb80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:10.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.273+0000 7f2a297fa700 1 --2- 192.168.123.105:0/3159707545 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f2a2406c6d0 0x7f2a2406eb80 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f2a2c005950 tx=0x7f2a2c005f50 comp rx=0 tx=0).stop 2026-03-09T15:01:10.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.273+0000 7f2a297fa700 1 -- 192.168.123.105:0/3159707545 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2a3c082a10 msgr2=0x7f2a3c082e80 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:10.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.273+0000 7f2a297fa700 1 --2- 192.168.123.105:0/3159707545 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2a3c082a10 0x7f2a3c082e80 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f2a340046c0 tx=0x7f2a340047a0 comp rx=0 tx=0).stop 2026-03-09T15:01:10.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.274+0000 7f2a297fa700 1 -- 192.168.123.105:0/3159707545 shutdown_connections 2026-03-09T15:01:10.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.274+0000 7f2a297fa700 1 --2- 192.168.123.105:0/3159707545 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7f2a2406c6d0 0x7f2a2406eb80 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:10.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.274+0000 7f2a297fa700 1 --2- 192.168.123.105:0/3159707545 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2a3c071980 0x7f2a3c0824d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:10.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.274+0000 7f2a297fa700 1 --2- 192.168.123.105:0/3159707545 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2a3c082a10 0x7f2a3c082e80 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:10.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.274+0000 7f2a297fa700 1 -- 192.168.123.105:0/3159707545 >> 192.168.123.105:0/3159707545 conn(0x7f2a3c06d1a0 msgr2=0x7f2a3c0764c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:10.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.274+0000 7f2a297fa700 1 -- 192.168.123.105:0/3159707545 shutdown_connections 2026-03-09T15:01:10.275 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.274+0000 7f2a297fa700 1 -- 192.168.123.105:0/3159707545 wait complete. 2026-03-09T15:01:10.365 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.364+0000 7fe1b5a5a700 1 -- 192.168.123.105:0/3786978362 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1b0072360 msgr2=0x7fe1b00770e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:10.365 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.364+0000 7fe1b5a5a700 1 --2- 192.168.123.105:0/3786978362 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1b0072360 0x7fe1b00770e0 secure :-1 s=READY pgs=308 cs=0 l=1 rev1=1 crypto rx=0x7fe1a8009230 tx=0x7fe1a8009260 comp rx=0 tx=0).stop 2026-03-09T15:01:10.365 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.364+0000 7fe1b5a5a700 1 -- 192.168.123.105:0/3786978362 shutdown_connections 2026-03-09T15:01:10.365 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.364+0000 7fe1b5a5a700 1 --2- 192.168.123.105:0/3786978362 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1b0072360 0x7fe1b00770e0 unknown :-1 s=CLOSED pgs=308 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:10.365 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.364+0000 7fe1b5a5a700 1 --2- 192.168.123.105:0/3786978362 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe1b0071980 0x7fe1b0071d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:10.365 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.364+0000 7fe1b5a5a700 1 -- 192.168.123.105:0/3786978362 >> 192.168.123.105:0/3786978362 conn(0x7fe1b006d1a0 msgr2=0x7fe1b006f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:10.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.364+0000 7fe1b5a5a700 1 -- 192.168.123.105:0/3786978362 shutdown_connections 
2026-03-09T15:01:10.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.364+0000 7fe1b5a5a700 1 -- 192.168.123.105:0/3786978362 wait complete. 2026-03-09T15:01:10.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.365+0000 7fe1b5a5a700 1 Processor -- start 2026-03-09T15:01:10.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.365+0000 7fe1b5a5a700 1 -- start start 2026-03-09T15:01:10.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.365+0000 7fe1b5a5a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe1b0071980 0x7fe1b0082560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:10.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.365+0000 7fe1b5a5a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1b0082aa0 0x7fe1b0082f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:10.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.365+0000 7fe1b5a5a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe1b01b2a90 con 0x7fe1b0082aa0 2026-03-09T15:01:10.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.365+0000 7fe1b5a5a700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe1b01b2bd0 con 0x7fe1b0071980 2026-03-09T15:01:10.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.366+0000 7fe1ae7fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1b0082aa0 0x7fe1b0082f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:10.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.366+0000 7fe1ae7fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1b0082aa0 
0x7fe1b0082f10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:44684/0 (socket says 192.168.123.105:44684) 2026-03-09T15:01:10.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.366+0000 7fe1ae7fc700 1 -- 192.168.123.105:0/638587144 learned_addr learned my addr 192.168.123.105:0/638587144 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:01:10.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.366+0000 7fe1aeffd700 1 --2- 192.168.123.105:0/638587144 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe1b0071980 0x7fe1b0082560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:10.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.367+0000 7fe1ae7fc700 1 -- 192.168.123.105:0/638587144 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe1b0071980 msgr2=0x7fe1b0082560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:10.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.367+0000 7fe1ae7fc700 1 --2- 192.168.123.105:0/638587144 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe1b0071980 0x7fe1b0082560 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:10.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.367+0000 7fe1ae7fc700 1 -- 192.168.123.105:0/638587144 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe1a8008ee0 con 0x7fe1b0082aa0 2026-03-09T15:01:10.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.367+0000 7fe1ae7fc700 1 --2- 192.168.123.105:0/638587144 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1b0082aa0 0x7fe1b0082f10 secure :-1 s=READY pgs=309 cs=0 
l=1 rev1=1 crypto rx=0x7fe1a800edd0 tx=0x7fe1a800ee00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:10.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.368+0000 7fe1b4a58700 1 -- 192.168.123.105:0/638587144 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe1a801c070 con 0x7fe1b0082aa0 2026-03-09T15:01:10.370 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.368+0000 7fe1b5a5a700 1 -- 192.168.123.105:0/638587144 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe1b01b2d10 con 0x7fe1b0082aa0 2026-03-09T15:01:10.370 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.368+0000 7fe1b5a5a700 1 -- 192.168.123.105:0/638587144 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe1b01b3200 con 0x7fe1b0082aa0 2026-03-09T15:01:10.370 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.369+0000 7fe1b4a58700 1 -- 192.168.123.105:0/638587144 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fe1a8008020 con 0x7fe1b0082aa0 2026-03-09T15:01:10.370 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.369+0000 7fe1b4a58700 1 -- 192.168.123.105:0/638587144 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe1a8005560 con 0x7fe1b0082aa0 2026-03-09T15:01:10.371 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.370+0000 7fe1b5a5a700 1 -- 192.168.123.105:0/638587144 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe19c005320 con 0x7fe1b0082aa0 2026-03-09T15:01:10.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.375+0000 7fe1b4a58700 1 -- 192.168.123.105:0/638587144 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 19) v1 ==== 89910+0+0 (secure 0 0 
0) 0x7fe1a800e470 con 0x7fe1b0082aa0 2026-03-09T15:01:10.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.376+0000 7fe1b4a58700 1 --2- 192.168.123.105:0/638587144 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe19806c6d0 0x7fe19806eb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:10.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.376+0000 7fe1aeffd700 1 --2- 192.168.123.105:0/638587144 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe19806c6d0 0x7fe19806eb80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:10.378 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.377+0000 7fe1aeffd700 1 --2- 192.168.123.105:0/638587144 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe19806c6d0 0x7fe19806eb80 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7fe1a0005950 tx=0x7fe1a000b8f0 comp rx=0 tx=0).ready entity=mgr.14249 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:10.379 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.378+0000 7fe1b4a58700 1 -- 192.168.123.105:0/638587144 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(37..37 src has 1..37) v4 ==== 5276+0+0 (secure 0 0 0) 0x7fe1a800d3d0 con 0x7fe1b0082aa0 2026-03-09T15:01:10.381 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.378+0000 7fe1b4a58700 1 -- 192.168.123.105:0/638587144 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7fe1a805b710 con 0x7fe1b0082aa0 2026-03-09T15:01:10.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.544+0000 7fe1b5a5a700 1 -- 192.168.123.105:0/638587144 --> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": 
["mon-mgr", ""]}) v1 -- 0x7fe19c000bf0 con 0x7fe19806c6d0 2026-03-09T15:01:10.549 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.548+0000 7fe1b4a58700 1 -- 192.168.123.105:0/638587144 <== mgr.14249 v2:192.168.123.105:6800/2 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+434 (secure 0 0 0) 0x7fe19c000bf0 con 0x7fe19806c6d0 2026-03-09T15:01:10.549 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:01:10.549 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T15:01:10.549 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-09T15:01:10.549 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) mgr", 2026-03-09T15:01:10.549 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [], 2026-03-09T15:01:10.549 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "0/2 daemons upgraded", 2026-03-09T15:01:10.549 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Pulling quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc image on host vm09", 2026-03-09T15:01:10.549 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:01:10.549 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:01:10.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.553+0000 7fe1967fc700 1 -- 192.168.123.105:0/638587144 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe19806c6d0 msgr2=0x7fe19806eb80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:10.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.553+0000 7fe1967fc700 1 --2- 192.168.123.105:0/638587144 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe19806c6d0 0x7fe19806eb80 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7fe1a0005950 tx=0x7fe1a000b8f0 comp rx=0 tx=0).stop 
2026-03-09T15:01:10.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.553+0000 7fe1967fc700 1 -- 192.168.123.105:0/638587144 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1b0082aa0 msgr2=0x7fe1b0082f10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:10.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.553+0000 7fe1967fc700 1 --2- 192.168.123.105:0/638587144 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1b0082aa0 0x7fe1b0082f10 secure :-1 s=READY pgs=309 cs=0 l=1 rev1=1 crypto rx=0x7fe1a800edd0 tx=0x7fe1a800ee00 comp rx=0 tx=0).stop 2026-03-09T15:01:10.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.553+0000 7fe1967fc700 1 -- 192.168.123.105:0/638587144 shutdown_connections 2026-03-09T15:01:10.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.553+0000 7fe1967fc700 1 --2- 192.168.123.105:0/638587144 >> [v2:192.168.123.105:6800/2,v1:192.168.123.105:6801/2] conn(0x7fe19806c6d0 0x7fe19806eb80 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:10.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.553+0000 7fe1967fc700 1 --2- 192.168.123.105:0/638587144 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe1b0071980 0x7fe1b0082560 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:10.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.553+0000 7fe1967fc700 1 --2- 192.168.123.105:0/638587144 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1b0082aa0 0x7fe1b0082f10 unknown :-1 s=CLOSED pgs=309 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:10.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.553+0000 7fe1967fc700 1 -- 192.168.123.105:0/638587144 >> 192.168.123.105:0/638587144 conn(0x7fe1b006d1a0 msgr2=0x7fe1b00764c0 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T15:01:10.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.554+0000 7fe1967fc700 1 -- 192.168.123.105:0/638587144 shutdown_connections 2026-03-09T15:01:10.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:10.554+0000 7fe1967fc700 1 -- 192.168.123.105:0/638587144 wait complete. 2026-03-09T15:01:10.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:10 vm09.local ceph-mon[59673]: from='client.24399 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:10.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:10 vm09.local ceph-mon[59673]: from='client.24401 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:10.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:10 vm09.local ceph-mon[59673]: from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:10.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:10 vm09.local ceph-mon[59673]: from='client.? 
192.168.123.105:0/3159707545' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:01:10.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:10 vm05.local ceph-mon[50611]: from='client.24399 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:10.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:10 vm05.local ceph-mon[50611]: from='client.24401 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:10.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:10 vm05.local ceph-mon[50611]: from='client.24403 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:10.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:10 vm05.local ceph-mon[50611]: from='client.? 192.168.123.105:0/3159707545' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:01:11.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:11 vm05.local ceph-mon[50611]: from='client.14616 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:11.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:11 vm05.local ceph-mon[50611]: pgmap v162: 65 pgs: 65 active+clean; 2.1 GiB data, 7.1 GiB used, 113 GiB / 120 GiB avail; 48 MiB/s rd, 120 MiB/s wr, 389 op/s 2026-03-09T15:01:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:11 vm09.local ceph-mon[59673]: from='client.14616 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:11 vm09.local ceph-mon[59673]: pgmap v162: 65 pgs: 65 active+clean; 2.1 GiB data, 7.1 GiB used, 113 GiB / 120 GiB avail; 48 MiB/s rd, 120 MiB/s wr, 389 op/s 2026-03-09T15:01:14.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:01:13 vm05.local ceph-mon[50611]: pgmap v163: 65 pgs: 65 active+clean; 2.1 GiB data, 7.1 GiB used, 113 GiB / 120 GiB avail; 33 MiB/s rd, 81 MiB/s wr, 282 op/s 2026-03-09T15:01:14.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:13 vm09.local ceph-mon[59673]: pgmap v163: 65 pgs: 65 active+clean; 2.1 GiB data, 7.1 GiB used, 113 GiB / 120 GiB avail; 33 MiB/s rd, 81 MiB/s wr, 282 op/s 2026-03-09T15:01:15.642 INFO:tasks.workunit.client.0.vm05.stderr:+ pushd ltp-full-20091231/testcases/kernel/fs/fsstress 2026-03-09T15:01:15.645 INFO:tasks.workunit.client.0.vm05.stdout:~/cephtest/mnt.0/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress ~/cephtest/mnt.0/client.0/tmp/fsstress ~/cephtest/mnt.0/client.0/tmp 2026-03-09T15:01:15.645 INFO:tasks.workunit.client.0.vm05.stderr:+ make 2026-03-09T15:01:15.682 INFO:tasks.workunit.client.0.vm05.stdout:cc -DNO_XFS -I/home/ubuntu/cephtest/mnt.0/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress -D_LARGEFILE64_SOURCE -D_GNU_SOURCE -I../../../../include -I../../../../include -L../../../../lib fsstress.c -o fsstress 2026-03-09T15:01:16.028 INFO:tasks.workunit.client.0.vm05.stderr:++ readlink -f fsstress 2026-03-09T15:01:16.030 INFO:tasks.workunit.client.0.vm05.stderr:+ BIN=/home/ubuntu/cephtest/mnt.0/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress 2026-03-09T15:01:16.030 INFO:tasks.workunit.client.0.vm05.stderr:+ popd 2026-03-09T15:01:16.031 INFO:tasks.workunit.client.0.vm05.stdout:~/cephtest/mnt.0/client.0/tmp/fsstress ~/cephtest/mnt.0/client.0/tmp 2026-03-09T15:01:16.031 INFO:tasks.workunit.client.0.vm05.stderr:+ popd 2026-03-09T15:01:16.032 INFO:tasks.workunit.client.0.vm05.stdout:~/cephtest/mnt.0/client.0/tmp 2026-03-09T15:01:16.032 INFO:tasks.workunit.client.0.vm05.stderr:++ mktemp -d -p . 
2026-03-09T15:01:16.035 INFO:tasks.workunit.client.0.vm05.stderr:+ T=./tmp.LfZX0Lp45U 2026-03-09T15:01:16.035 INFO:tasks.workunit.client.0.vm05.stderr:+ /home/ubuntu/cephtest/mnt.0/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress -d ./tmp.LfZX0Lp45U -l 1 -n 1000 -p 10 -v 2026-03-09T15:01:16.040 INFO:tasks.workunit.client.0.vm05.stdout:seed = 1773027330 2026-03-09T15:01:16.045 INFO:tasks.workunit.client.0.vm05.stdout:0/0: truncate - no filename 2026-03-09T15:01:16.046 INFO:tasks.workunit.client.0.vm05.stdout:2/0: dwrite - no filename 2026-03-09T15:01:16.046 INFO:tasks.workunit.client.0.vm05.stdout:2/1: fsync - no filename 2026-03-09T15:01:16.046 INFO:tasks.workunit.client.0.vm05.stdout:2/2: dwrite - no filename 2026-03-09T15:01:16.047 INFO:tasks.workunit.client.0.vm05.stdout:2/3: dwrite - no filename 2026-03-09T15:01:16.047 INFO:tasks.workunit.client.0.vm05.stdout:2/4: dread - no filename 2026-03-09T15:01:16.047 INFO:tasks.workunit.client.0.vm05.stdout:2/5: readlink - no filename 2026-03-09T15:01:16.048 INFO:tasks.workunit.client.0.vm05.stdout:3/0: write - no filename 2026-03-09T15:01:16.051 INFO:tasks.workunit.client.0.vm05.stdout:0/1: creat f0 x:0 0 0 2026-03-09T15:01:16.051 INFO:tasks.workunit.client.0.vm05.stdout:0/2: write f0 [385322,26914] 0 2026-03-09T15:01:16.054 INFO:tasks.workunit.client.0.vm05.stdout:4/0: dwrite - no filename 2026-03-09T15:01:16.054 INFO:tasks.workunit.client.0.vm05.stdout:4/1: dwrite - no filename 2026-03-09T15:01:16.055 INFO:tasks.workunit.client.0.vm05.stdout:2/6: creat f0 x:0 0 0 2026-03-09T15:01:16.056 INFO:tasks.workunit.client.0.vm05.stdout:2/7: chown f0 1912317635 1 2026-03-09T15:01:16.056 INFO:tasks.workunit.client.0.vm05.stdout:2/8: dread - f0 zero size 2026-03-09T15:01:16.057 INFO:tasks.workunit.client.0.vm05.stdout:2/9: write f0 [583664,39998] 0 2026-03-09T15:01:16.058 INFO:tasks.workunit.client.0.vm05.stdout:2/10: chown f0 14677023 1 2026-03-09T15:01:16.060 
INFO:tasks.workunit.client.0.vm05.stdout:0/3: mknod c1 0 2026-03-09T15:01:16.062 INFO:tasks.workunit.client.0.vm05.stdout:5/0: chown . 1 1 2026-03-09T15:01:16.062 INFO:tasks.workunit.client.0.vm05.stdout:5/1: write - no filename 2026-03-09T15:01:16.064 INFO:tasks.workunit.client.0.vm05.stdout:3/1: symlink l0 0 2026-03-09T15:01:16.064 INFO:tasks.workunit.client.0.vm05.stdout:3/2: dread - no filename 2026-03-09T15:01:16.064 INFO:tasks.workunit.client.0.vm05.stdout:2/11: mknod c1 0 2026-03-09T15:01:16.067 INFO:tasks.workunit.client.0.vm05.stdout:2/12: dread f0 [0,4194304] 0 2026-03-09T15:01:16.068 INFO:tasks.workunit.client.0.vm05.stdout:0/4: unlink c1 0 2026-03-09T15:01:16.068 INFO:tasks.workunit.client.0.vm05.stdout:2/13: dread f0 [0,4194304] 0 2026-03-09T15:01:16.077 INFO:tasks.workunit.client.0.vm05.stdout:4/2: symlink l0 0 2026-03-09T15:01:16.077 INFO:tasks.workunit.client.0.vm05.stdout:4/3: dread - no filename 2026-03-09T15:01:16.077 INFO:tasks.workunit.client.0.vm05.stdout:6/0: link - no file 2026-03-09T15:01:16.077 INFO:tasks.workunit.client.0.vm05.stdout:3/3: rename l0 to l1 0 2026-03-09T15:01:16.079 INFO:tasks.workunit.client.0.vm05.stdout:5/2: mknod c0 0 2026-03-09T15:01:16.079 INFO:tasks.workunit.client.0.vm05.stdout:5/3: truncate - no filename 2026-03-09T15:01:16.079 INFO:tasks.workunit.client.0.vm05.stdout:5/4: truncate - no filename 2026-03-09T15:01:16.079 INFO:tasks.workunit.client.0.vm05.stdout:5/5: fdatasync - no filename 2026-03-09T15:01:16.079 INFO:tasks.workunit.client.0.vm05.stdout:5/6: dwrite - no filename 2026-03-09T15:01:16.080 INFO:tasks.workunit.client.0.vm05.stdout:8/0: unlink - no file 2026-03-09T15:01:16.080 INFO:tasks.workunit.client.0.vm05.stdout:8/1: chown . 4963 1 2026-03-09T15:01:16.081 INFO:tasks.workunit.client.0.vm05.stdout:4/4: symlink l1 0 2026-03-09T15:01:16.083 INFO:tasks.workunit.client.0.vm05.stdout:9/0: chown . 
10729241 1 2026-03-09T15:01:16.083 INFO:tasks.workunit.client.0.vm05.stdout:9/1: dread - no filename 2026-03-09T15:01:16.083 INFO:tasks.workunit.client.0.vm05.stdout:9/2: fsync - no filename 2026-03-09T15:01:16.085 INFO:tasks.workunit.client.0.vm05.stdout:6/1: mknod c0 0 2026-03-09T15:01:16.085 INFO:tasks.workunit.client.0.vm05.stdout:6/2: truncate - no filename 2026-03-09T15:01:16.085 INFO:tasks.workunit.client.0.vm05.stdout:6/3: dwrite - no filename 2026-03-09T15:01:16.085 INFO:tasks.workunit.client.0.vm05.stdout:6/4: stat c0 0 2026-03-09T15:01:16.086 INFO:tasks.workunit.client.0.vm05.stdout:4/5: mkdir d2 0 2026-03-09T15:01:16.087 INFO:tasks.workunit.client.0.vm05.stdout:4/6: dwrite - no filename 2026-03-09T15:01:16.089 INFO:tasks.workunit.client.0.vm05.stdout:7/0: chown . 466927 1 2026-03-09T15:01:16.091 INFO:tasks.workunit.client.0.vm05.stdout:8/2: mkdir d0 0 2026-03-09T15:01:16.092 INFO:tasks.workunit.client.0.vm05.stdout:2/14: fsync f0 0 2026-03-09T15:01:16.104 INFO:tasks.workunit.client.0.vm05.stdout:6/5: symlink l1 0 2026-03-09T15:01:16.105 INFO:tasks.workunit.client.0.vm05.stdout:4/7: unlink l0 0 2026-03-09T15:01:16.106 INFO:tasks.workunit.client.0.vm05.stdout:4/8: chown d2 264996666 1 2026-03-09T15:01:16.107 INFO:tasks.workunit.client.0.vm05.stdout:9/3: mknod c0 0 2026-03-09T15:01:16.107 INFO:tasks.workunit.client.0.vm05.stdout:9/4: write - no filename 2026-03-09T15:01:16.113 INFO:tasks.workunit.client.0.vm05.stdout:6/6: rename l1 to l2 0 2026-03-09T15:01:16.113 INFO:tasks.workunit.client.0.vm05.stdout:6/7: dread - no filename 2026-03-09T15:01:16.113 INFO:tasks.workunit.client.0.vm05.stdout:3/4: chown l1 3327915 1 2026-03-09T15:01:16.113 INFO:tasks.workunit.client.0.vm05.stdout:3/5: write - no filename 2026-03-09T15:01:16.113 INFO:tasks.workunit.client.0.vm05.stdout:3/6: rmdir - no directory 2026-03-09T15:01:16.113 INFO:tasks.workunit.client.0.vm05.stdout:3/7: dwrite - no filename 2026-03-09T15:01:16.113 INFO:tasks.workunit.client.0.vm05.stdout:3/8: 
fdatasync - no filename 2026-03-09T15:01:16.113 INFO:tasks.workunit.client.0.vm05.stdout:3/9: dread - no filename 2026-03-09T15:01:16.113 INFO:tasks.workunit.client.0.vm05.stdout:3/10: dread - no filename 2026-03-09T15:01:16.115 INFO:tasks.workunit.client.0.vm05.stdout:2/15: dwrite f0 [0,4194304] 0 2026-03-09T15:01:16.117 INFO:tasks.workunit.client.0.vm05.stdout:7/1: creat f0 x:0 0 0 2026-03-09T15:01:16.117 INFO:tasks.workunit.client.0.vm05.stdout:1/0: dread - no filename 2026-03-09T15:01:16.118 INFO:tasks.workunit.client.0.vm05.stdout:1/1: chown . 6 1 2026-03-09T15:01:16.118 INFO:tasks.workunit.client.0.vm05.stdout:1/2: stat - no entries 2026-03-09T15:01:16.133 INFO:tasks.workunit.client.0.vm05.stdout:2/16: dread f0 [0,4194304] 0 2026-03-09T15:01:16.135 INFO:tasks.workunit.client.0.vm05.stdout:8/3: mkdir d0/d1 0 2026-03-09T15:01:16.142 INFO:tasks.workunit.client.0.vm05.stdout:9/5: unlink c0 0 2026-03-09T15:01:16.145 INFO:tasks.workunit.client.0.vm05.stdout:3/11: rename l1 to l2 0 2026-03-09T15:01:16.148 INFO:tasks.workunit.client.0.vm05.stdout:4/9: mknod d2/c3 0 2026-03-09T15:01:16.149 INFO:tasks.workunit.client.0.vm05.stdout:4/10: stat d2 0 2026-03-09T15:01:16.151 INFO:tasks.workunit.client.0.vm05.stdout:7/2: mkdir d1 0 2026-03-09T15:01:16.198 INFO:tasks.workunit.client.0.vm05.stdout:7/3: fdatasync f0 0 2026-03-09T15:01:16.198 INFO:tasks.workunit.client.0.vm05.stdout:9/6: symlink l1 0 2026-03-09T15:01:16.198 INFO:tasks.workunit.client.0.vm05.stdout:9/7: write - no filename 2026-03-09T15:01:16.198 INFO:tasks.workunit.client.0.vm05.stdout:1/3: creat f0 x:0 0 0 2026-03-09T15:01:16.198 INFO:tasks.workunit.client.0.vm05.stdout:1/4: chown f0 7413 1 2026-03-09T15:01:16.198 INFO:tasks.workunit.client.0.vm05.stdout:2/17: rename f0 to f2 0 2026-03-09T15:01:16.198 INFO:tasks.workunit.client.0.vm05.stdout:1/5: dwrite f0 [0,4194304] 0 2026-03-09T15:01:16.198 INFO:tasks.workunit.client.0.vm05.stdout:8/4: creat d0/d1/f2 x:0 0 0 2026-03-09T15:01:16.198 
INFO:tasks.workunit.client.0.vm05.stdout:9/8: mkdir d2 0 2026-03-09T15:01:16.198 INFO:tasks.workunit.client.0.vm05.stdout:9/9: dwrite - no filename 2026-03-09T15:01:16.198 INFO:tasks.workunit.client.0.vm05.stdout:7/4: mknod d1/c2 0 2026-03-09T15:01:16.198 INFO:tasks.workunit.client.0.vm05.stdout:8/5: dwrite d0/d1/f2 [0,4194304] 0 2026-03-09T15:01:16.198 INFO:tasks.workunit.client.0.vm05.stdout:2/18: dwrite f2 [0,4194304] 0 2026-03-09T15:01:16.203 INFO:tasks.workunit.client.0.vm05.stdout:1/6: dread f0 [0,4194304] 0 2026-03-09T15:01:16.213 INFO:tasks.workunit.client.0.vm05.stdout:7/5: unlink f0 0 2026-03-09T15:01:16.232 INFO:tasks.workunit.client.0.vm05.stdout:8/6: unlink d0/d1/f2 0 2026-03-09T15:01:16.232 INFO:tasks.workunit.client.0.vm05.stdout:8/7: link - no file 2026-03-09T15:01:16.232 INFO:tasks.workunit.client.0.vm05.stdout:1/7: mknod c1 0 2026-03-09T15:01:16.232 INFO:tasks.workunit.client.0.vm05.stdout:7/6: mknod d1/c3 0 2026-03-09T15:01:16.232 INFO:tasks.workunit.client.0.vm05.stdout:7/7: dread - no filename 2026-03-09T15:01:16.232 INFO:tasks.workunit.client.0.vm05.stdout:7/8: chown d1/c2 5091032 1 2026-03-09T15:01:16.232 INFO:tasks.workunit.client.0.vm05.stdout:8/8: mknod d0/d1/c3 0 2026-03-09T15:01:16.232 INFO:tasks.workunit.client.0.vm05.stdout:1/8: symlink l2 0 2026-03-09T15:01:16.232 INFO:tasks.workunit.client.0.vm05.stdout:1/9: dread f0 [0,4194304] 0 2026-03-09T15:01:16.232 INFO:tasks.workunit.client.0.vm05.stdout:7/9: symlink d1/l4 0 2026-03-09T15:01:16.233 INFO:tasks.workunit.client.0.vm05.stdout:7/10: link d1/l4 d1/l5 0 2026-03-09T15:01:16.233 INFO:tasks.workunit.client.0.vm05.stdout:7/11: dread - no filename 2026-03-09T15:01:16.233 INFO:tasks.workunit.client.0.vm05.stdout:7/12: write - no filename 2026-03-09T15:01:16.327 INFO:tasks.workunit.client.0.vm05.stdout:6/8: chown l2 32623 1 2026-03-09T15:01:16.327 INFO:tasks.workunit.client.0.vm05.stdout:6/9: write - no filename 2026-03-09T15:01:16.329 INFO:tasks.workunit.client.0.vm05.stdout:3/12: stat l2 
0 2026-03-09T15:01:16.333 INFO:tasks.workunit.client.0.vm05.stdout:2/19: dwrite f2 [4194304,4194304] 0 2026-03-09T15:01:16.368 INFO:tasks.workunit.client.0.vm05.stdout:2/20: readlink - no filename 2026-03-09T15:01:16.369 INFO:tasks.workunit.client.0.vm05.stdout:4/11: getdents d2 0 2026-03-09T15:01:16.369 INFO:tasks.workunit.client.0.vm05.stdout:4/12: dwrite - no filename 2026-03-09T15:01:16.369 INFO:tasks.workunit.client.0.vm05.stdout:4/13: dread - no filename 2026-03-09T15:01:16.369 INFO:tasks.workunit.client.0.vm05.stdout:2/21: dread f2 [0,4194304] 0 2026-03-09T15:01:16.369 INFO:tasks.workunit.client.0.vm05.stdout:6/10: symlink l3 0 2026-03-09T15:01:16.369 INFO:tasks.workunit.client.0.vm05.stdout:2/22: dread f2 [4194304,4194304] 0 2026-03-09T15:01:16.369 INFO:tasks.workunit.client.0.vm05.stdout:2/23: readlink - no filename 2026-03-09T15:01:16.369 INFO:tasks.workunit.client.0.vm05.stdout:3/13: mkdir d3 0 2026-03-09T15:01:16.369 INFO:tasks.workunit.client.0.vm05.stdout:3/14: dwrite - no filename 2026-03-09T15:01:16.369 INFO:tasks.workunit.client.0.vm05.stdout:3/15: read - no filename 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:6/11: mknod c4 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:6/12: chown c4 0 1 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:2/24: symlink l3 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:2/25: chown c1 141195666 1 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:2/26: dread f2 [0,4194304] 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:4/14: getdents d2 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:6/13: creat f5 x:0 0 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:6/14: write f5 [715944,63362] 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:1/10: write f0 [4295308,84299] 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:1/11: write f0 
[4667985,63741] 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:1/12: dread f0 [0,4194304] 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:4/15: mkdir d2/d4 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:4/16: chown d2/c3 29729675 1 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:4/17: fsync - no filename 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:1/13: dread f0 [0,4194304] 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:6/15: creat f6 x:0 0 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:6/16: dread - f6 zero size 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:3/16: link l2 d3/l4 0 2026-03-09T15:01:16.370 INFO:tasks.workunit.client.0.vm05.stdout:3/17: dread - no filename 2026-03-09T15:01:16.372 INFO:tasks.workunit.client.0.vm05.stdout:2/27: link f2 f4 0 2026-03-09T15:01:16.375 INFO:tasks.workunit.client.0.vm05.stdout:8/9: rmdir d0/d1 39 2026-03-09T15:01:16.375 INFO:tasks.workunit.client.0.vm05.stdout:8/10: dwrite - no filename 2026-03-09T15:01:16.376 INFO:tasks.workunit.client.0.vm05.stdout:4/18: creat d2/f5 x:0 0 0 2026-03-09T15:01:16.377 INFO:tasks.workunit.client.0.vm05.stdout:4/19: write d2/f5 [233996,99480] 0 2026-03-09T15:01:16.381 INFO:tasks.workunit.client.0.vm05.stdout:7/13: getdents d1 0 2026-03-09T15:01:16.381 INFO:tasks.workunit.client.0.vm05.stdout:2/28: write f4 [4895816,55024] 0 2026-03-09T15:01:16.384 INFO:tasks.workunit.client.0.vm05.stdout:2/29: dread f2 [4194304,4194304] 0 2026-03-09T15:01:16.385 INFO:tasks.workunit.client.0.vm05.stdout:8/11: chown d0/d1 46062 1 2026-03-09T15:01:16.385 INFO:tasks.workunit.client.0.vm05.stdout:8/12: read - no filename 2026-03-09T15:01:16.385 INFO:tasks.workunit.client.0.vm05.stdout:8/13: dwrite - no filename 2026-03-09T15:01:16.385 INFO:tasks.workunit.client.0.vm05.stdout:8/14: fsync - no filename 2026-03-09T15:01:16.387 
INFO:tasks.workunit.client.0.vm05.stdout:4/20: rename d2/c3 to d2/c6 0 2026-03-09T15:01:16.388 INFO:tasks.workunit.client.0.vm05.stdout:7/14: rmdir d1 39 2026-03-09T15:01:16.388 INFO:tasks.workunit.client.0.vm05.stdout:3/18: mknod d3/c5 0 2026-03-09T15:01:16.389 INFO:tasks.workunit.client.0.vm05.stdout:3/19: chown d3 16303204 1 2026-03-09T15:01:16.392 INFO:tasks.workunit.client.0.vm05.stdout:3/20: unlink d3/c5 0 2026-03-09T15:01:16.392 INFO:tasks.workunit.client.0.vm05.stdout:3/21: write - no filename 2026-03-09T15:01:16.392 INFO:tasks.workunit.client.0.vm05.stdout:3/22: dwrite - no filename 2026-03-09T15:01:16.392 INFO:tasks.workunit.client.0.vm05.stdout:3/23: rename d3 to d3/d6 22 2026-03-09T15:01:16.393 INFO:tasks.workunit.client.0.vm05.stdout:2/30: link f2 f5 0 2026-03-09T15:01:16.393 INFO:tasks.workunit.client.0.vm05.stdout:4/21: mkdir d2/d4/d7 0 2026-03-09T15:01:16.394 INFO:tasks.workunit.client.0.vm05.stdout:7/15: rmdir d1 39 2026-03-09T15:01:16.394 INFO:tasks.workunit.client.0.vm05.stdout:3/24: creat d3/f7 x:0 0 0 2026-03-09T15:01:16.395 INFO:tasks.workunit.client.0.vm05.stdout:2/31: creat f6 x:0 0 0 2026-03-09T15:01:16.396 INFO:tasks.workunit.client.0.vm05.stdout:8/15: creat d0/f4 x:0 0 0 2026-03-09T15:01:16.399 INFO:tasks.workunit.client.0.vm05.stdout:7/16: mkdir d1/d6 0 2026-03-09T15:01:16.400 INFO:tasks.workunit.client.0.vm05.stdout:8/16: mknod d0/c5 0 2026-03-09T15:01:16.402 INFO:tasks.workunit.client.0.vm05.stdout:8/17: creat d0/d1/f6 x:0 0 0 2026-03-09T15:01:16.407 INFO:tasks.workunit.client.0.vm05.stdout:8/18: dwrite d0/d1/f6 [0,4194304] 0 2026-03-09T15:01:16.409 INFO:tasks.workunit.client.0.vm05.stdout:8/19: mkdir d0/d7 0 2026-03-09T15:01:16.411 INFO:tasks.workunit.client.0.vm05.stdout:8/20: getdents d0/d7 0 2026-03-09T15:01:16.557 INFO:tasks.workunit.client.0.vm05.stdout:6/17: fsync f6 0 2026-03-09T15:01:16.560 INFO:tasks.workunit.client.0.vm05.stdout:2/32: truncate f5 5950266 0 2026-03-09T15:01:16.560 
INFO:tasks.workunit.client.0.vm05.stdout:2/33: readlink l3 0 2026-03-09T15:01:16.561 INFO:tasks.workunit.client.0.vm05.stdout:4/22: getdents d2/d4 0 2026-03-09T15:01:16.563 INFO:tasks.workunit.client.0.vm05.stdout:3/25: rmdir d3 39 2026-03-09T15:01:16.563 INFO:tasks.workunit.client.0.vm05.stdout:7/17: getdents d1 0 2026-03-09T15:01:16.566 INFO:tasks.workunit.client.0.vm05.stdout:8/21: rmdir d0 39 2026-03-09T15:01:16.649 INFO:tasks.workunit.client.0.vm05.stdout:6/18: mknod c7 0 2026-03-09T15:01:16.650 INFO:tasks.workunit.client.0.vm05.stdout:6/19: dread - f6 zero size 2026-03-09T15:01:16.650 INFO:tasks.workunit.client.0.vm05.stdout:6/20: write f5 [1253538,37036] 0 2026-03-09T15:01:16.652 INFO:tasks.workunit.client.0.vm05.stdout:2/34: symlink l7 0 2026-03-09T15:01:16.653 INFO:tasks.workunit.client.0.vm05.stdout:4/23: mkdir d2/d4/d8 0 2026-03-09T15:01:16.655 INFO:tasks.workunit.client.0.vm05.stdout:2/35: dread f4 [0,4194304] 0 2026-03-09T15:01:16.655 INFO:tasks.workunit.client.0.vm05.stdout:4/24: dread d2/f5 [0,4194304] 0 2026-03-09T15:01:16.656 INFO:tasks.workunit.client.0.vm05.stdout:4/25: dread d2/f5 [0,4194304] 0 2026-03-09T15:01:16.658 INFO:tasks.workunit.client.0.vm05.stdout:7/18: symlink d1/l7 0 2026-03-09T15:01:16.661 INFO:tasks.workunit.client.0.vm05.stdout:6/21: rename c7 to c8 0 2026-03-09T15:01:16.662 INFO:tasks.workunit.client.0.vm05.stdout:6/22: write f6 [180394,105233] 0 2026-03-09T15:01:16.667 INFO:tasks.workunit.client.0.vm05.stdout:6/23: dwrite f6 [0,4194304] 0 2026-03-09T15:01:16.670 INFO:tasks.workunit.client.0.vm05.stdout:2/36: symlink l8 0 2026-03-09T15:01:16.673 INFO:tasks.workunit.client.0.vm05.stdout:6/24: dwrite f6 [0,4194304] 0 2026-03-09T15:01:16.677 INFO:tasks.workunit.client.0.vm05.stdout:3/26: creat d3/f8 x:0 0 0 2026-03-09T15:01:16.682 INFO:tasks.workunit.client.0.vm05.stdout:3/27: chown d3 217910 1 2026-03-09T15:01:16.682 INFO:tasks.workunit.client.0.vm05.stdout:7/19: chown d1/l4 195627399 1 2026-03-09T15:01:16.682 
INFO:tasks.workunit.client.0.vm05.stdout:4/26: getdents d2/d4/d8 0 2026-03-09T15:01:16.682 INFO:tasks.workunit.client.0.vm05.stdout:2/37: write f2 [2415407,121849] 0 2026-03-09T15:01:16.687 INFO:tasks.workunit.client.0.vm05.stdout:8/22: creat d0/d7/f8 x:0 0 0 2026-03-09T15:01:16.696 INFO:tasks.workunit.client.0.vm05.stdout:8/23: write d0/d7/f8 [675118,126283] 0 2026-03-09T15:01:16.696 INFO:tasks.workunit.client.0.vm05.stdout:7/20: chown d1/d6 0 1 2026-03-09T15:01:16.696 INFO:tasks.workunit.client.0.vm05.stdout:2/38: mkdir d9 0 2026-03-09T15:01:16.696 INFO:tasks.workunit.client.0.vm05.stdout:4/27: creat d2/d4/d7/f9 x:0 0 0 2026-03-09T15:01:16.696 INFO:tasks.workunit.client.0.vm05.stdout:8/24: mknod d0/d1/c9 0 2026-03-09T15:01:16.696 INFO:tasks.workunit.client.0.vm05.stdout:8/25: dwrite d0/d7/f8 [0,4194304] 0 2026-03-09T15:01:16.705 INFO:tasks.workunit.client.0.vm05.stdout:4/28: fsync d2/f5 0 2026-03-09T15:01:16.705 INFO:tasks.workunit.client.0.vm05.stdout:7/21: creat d1/d6/f8 x:0 0 0 2026-03-09T15:01:16.706 INFO:tasks.workunit.client.0.vm05.stdout:7/22: truncate d1/d6/f8 42881 0 2026-03-09T15:01:16.710 INFO:tasks.workunit.client.0.vm05.stdout:3/28: link d3/l4 d3/l9 0 2026-03-09T15:01:16.711 INFO:tasks.workunit.client.0.vm05.stdout:2/39: rmdir d9 0 2026-03-09T15:01:16.713 INFO:tasks.workunit.client.0.vm05.stdout:7/23: mkdir d1/d9 0 2026-03-09T15:01:16.714 INFO:tasks.workunit.client.0.vm05.stdout:7/24: chown d1/l4 74768 1 2026-03-09T15:01:16.719 INFO:tasks.workunit.client.0.vm05.stdout:2/40: dread f4 [0,4194304] 0 2026-03-09T15:01:16.719 INFO:tasks.workunit.client.0.vm05.stdout:7/25: write d1/d6/f8 [508741,7622] 0 2026-03-09T15:01:16.719 INFO:tasks.workunit.client.0.vm05.stdout:2/41: truncate f6 411036 0 2026-03-09T15:01:16.719 INFO:tasks.workunit.client.0.vm05.stdout:2/42: mkdir da 0 2026-03-09T15:01:16.720 INFO:tasks.workunit.client.0.vm05.stdout:2/43: symlink da/lb 0 2026-03-09T15:01:16.721 INFO:tasks.workunit.client.0.vm05.stdout:7/26: creat d1/fa x:0 0 0 
2026-03-09T15:01:16.722 INFO:tasks.workunit.client.0.vm05.stdout:2/44: rename c1 to da/cc 0 2026-03-09T15:01:16.723 INFO:tasks.workunit.client.0.vm05.stdout:2/45: mkdir da/dd 0 2026-03-09T15:01:16.725 INFO:tasks.workunit.client.0.vm05.stdout:2/46: dread f5 [4194304,4194304] 0 2026-03-09T15:01:16.725 INFO:tasks.workunit.client.0.vm05.stdout:2/47: readlink l7 0 2026-03-09T15:01:16.726 INFO:tasks.workunit.client.0.vm05.stdout:7/27: creat d1/fb x:0 0 0 2026-03-09T15:01:16.732 INFO:tasks.workunit.client.0.vm05.stdout:7/28: chown d1/l5 1 1 2026-03-09T15:01:16.732 INFO:tasks.workunit.client.0.vm05.stdout:7/29: unlink d1/c2 0 2026-03-09T15:01:16.732 INFO:tasks.workunit.client.0.vm05.stdout:7/30: creat d1/d9/fc x:0 0 0 2026-03-09T15:01:16.735 INFO:tasks.workunit.client.0.vm05.stdout:7/31: dwrite d1/fa [0,4194304] 0 2026-03-09T15:01:16.739 INFO:tasks.workunit.client.0.vm05.stdout:7/32: creat d1/d9/fd x:0 0 0 2026-03-09T15:01:16.778 INFO:tasks.workunit.client.0.vm05.stdout:1/14: truncate f0 909778 0 2026-03-09T15:01:16.779 INFO:tasks.workunit.client.0.vm05.stdout:1/15: mknod c3 0 2026-03-09T15:01:16.779 INFO:tasks.workunit.client.0.vm05.stdout:6/25: fsync f6 0 2026-03-09T15:01:16.780 INFO:tasks.workunit.client.0.vm05.stdout:1/16: mknod c4 0 2026-03-09T15:01:16.781 INFO:tasks.workunit.client.0.vm05.stdout:6/26: mknod c9 0 2026-03-09T15:01:16.782 INFO:tasks.workunit.client.0.vm05.stdout:1/17: creat f5 x:0 0 0 2026-03-09T15:01:16.782 INFO:tasks.workunit.client.0.vm05.stdout:1/18: stat c1 0 2026-03-09T15:01:16.782 INFO:tasks.workunit.client.0.vm05.stdout:1/19: dread - f5 zero size 2026-03-09T15:01:16.784 INFO:tasks.workunit.client.0.vm05.stdout:1/20: creat f6 x:0 0 0 2026-03-09T15:01:16.784 INFO:tasks.workunit.client.0.vm05.stdout:1/21: chown c1 0 1 2026-03-09T15:01:16.790 INFO:tasks.workunit.client.0.vm05.stdout:6/27: dread f5 [0,4194304] 0 2026-03-09T15:01:16.792 INFO:tasks.workunit.client.0.vm05.stdout:1/22: dwrite f6 [0,4194304] 0 2026-03-09T15:01:16.839 
INFO:tasks.workunit.client.0.vm05.stdout:6/28: write f6 [4607377,51642] 0 2026-03-09T15:01:16.850 INFO:tasks.workunit.client.0.vm05.stdout:6/29: mkdir da 0 2026-03-09T15:01:16.861 INFO:tasks.workunit.client.0.vm05.stdout:8/26: truncate d0/d1/f6 2045771 0 2026-03-09T15:01:16.865 INFO:tasks.workunit.client.0.vm05.stdout:2/48: truncate f2 4798779 0 2026-03-09T15:01:16.869 INFO:tasks.workunit.client.0.vm05.stdout:2/49: chown da/dd 1384804 1 2026-03-09T15:01:16.876 INFO:tasks.workunit.client.0.vm05.stdout:7/33: rmdir d1 39 2026-03-09T15:01:16.901 INFO:tasks.workunit.client.0.vm05.stdout:7/34: getdents d1/d6 0 2026-03-09T15:01:16.901 INFO:tasks.workunit.client.0.vm05.stdout:7/35: write d1/fa [551608,56531] 0 2026-03-09T15:01:16.901 INFO:tasks.workunit.client.0.vm05.stdout:7/36: truncate d1/d9/fc 805618 0 2026-03-09T15:01:16.901 INFO:tasks.workunit.client.0.vm05.stdout:7/37: fsync d1/d9/fc 0 2026-03-09T15:01:16.901 INFO:tasks.workunit.client.0.vm05.stdout:7/38: dread d1/fa [0,4194304] 0 2026-03-09T15:01:16.901 INFO:tasks.workunit.client.0.vm05.stdout:7/39: chown d1/d6 14379 1 2026-03-09T15:01:16.901 INFO:tasks.workunit.client.0.vm05.stdout:7/40: getdents d1 0 2026-03-09T15:01:16.901 INFO:tasks.workunit.client.0.vm05.stdout:7/41: unlink d1/l5 0 2026-03-09T15:01:16.952 INFO:tasks.workunit.client.0.vm05.stdout:6/30: truncate f6 1458509 0 2026-03-09T15:01:16.952 INFO:tasks.workunit.client.0.vm05.stdout:6/31: chown l3 619905 1 2026-03-09T15:01:16.954 INFO:tasks.workunit.client.0.vm05.stdout:8/27: dread d0/d1/f6 [0,4194304] 0 2026-03-09T15:01:16.954 INFO:tasks.workunit.client.0.vm05.stdout:8/28: write d0/f4 [440787,119983] 0 2026-03-09T15:01:16.963 INFO:tasks.workunit.client.0.vm05.stdout:2/50: dwrite f2 [4194304,4194304] 0 2026-03-09T15:01:16.964 INFO:tasks.workunit.client.0.vm05.stdout:2/51: read f5 [3061566,126520] 0 2026-03-09T15:01:16.966 INFO:tasks.workunit.client.0.vm05.stdout:6/32: getdents da 0 2026-03-09T15:01:16.974 INFO:tasks.workunit.client.0.vm05.stdout:2/52: 
dwrite f6 [0,4194304] 0 2026-03-09T15:01:16.978 INFO:tasks.workunit.client.0.vm05.stdout:8/29: creat d0/fa x:0 0 0 2026-03-09T15:01:16.988 INFO:tasks.workunit.client.0.vm05.stdout:2/53: symlink da/le 0 2026-03-09T15:01:16.994 INFO:tasks.workunit.client.0.vm05.stdout:8/30: getdents d0/d7 0 2026-03-09T15:01:17.046 INFO:tasks.workunit.client.0.vm05.stdout:7/42: dwrite d1/fa [0,4194304] 0 2026-03-09T15:01:17.049 INFO:tasks.workunit.client.0.vm05.stdout:7/43: write d1/fb [107534,45634] 0 2026-03-09T15:01:17.058 INFO:tasks.workunit.client.0.vm05.stdout:7/44: rename d1/d6/f8 to d1/d6/fe 0 2026-03-09T15:01:17.059 INFO:tasks.workunit.client.0.vm05.stdout:7/45: chown d1/d6 33225 1 2026-03-09T15:01:17.060 INFO:tasks.workunit.client.0.vm05.stdout:7/46: dread d1/d9/fc [0,4194304] 0 2026-03-09T15:01:17.060 INFO:tasks.workunit.client.0.vm05.stdout:7/47: chown d1/d6 32300846 1 2026-03-09T15:01:17.063 INFO:tasks.workunit.client.0.vm05.stdout:7/48: dread d1/fa [0,4194304] 0 2026-03-09T15:01:17.063 INFO:tasks.workunit.client.0.vm05.stdout:7/49: write d1/d9/fc [1075434,30886] 0 2026-03-09T15:01:17.065 INFO:tasks.workunit.client.0.vm05.stdout:7/50: mknod d1/d6/cf 0 2026-03-09T15:01:17.066 INFO:tasks.workunit.client.0.vm05.stdout:7/51: unlink d1/fb 0 2026-03-09T15:01:17.162 INFO:tasks.workunit.client.0.vm05.stdout:2/54: truncate f5 3397542 0 2026-03-09T15:01:17.162 INFO:tasks.workunit.client.0.vm05.stdout:6/33: write f6 [240481,106588] 0 2026-03-09T15:01:17.162 INFO:tasks.workunit.client.0.vm05.stdout:1/23: dwrite f0 [0,4194304] 0 2026-03-09T15:01:17.174 INFO:tasks.workunit.client.0.vm05.stdout:1/24: unlink f0 0 2026-03-09T15:01:17.176 INFO:tasks.workunit.client.0.vm05.stdout:8/31: write d0/d7/f8 [5159160,84049] 0 2026-03-09T15:01:17.178 INFO:tasks.workunit.client.0.vm05.stdout:2/55: creat da/dd/ff x:0 0 0 2026-03-09T15:01:17.182 INFO:tasks.workunit.client.0.vm05.stdout:2/56: chown da 0 1 2026-03-09T15:01:17.182 INFO:tasks.workunit.client.0.vm05.stdout:8/32: rmdir d0/d1 39 
2026-03-09T15:01:17.182 INFO:tasks.workunit.client.0.vm05.stdout:2/57: creat da/f10 x:0 0 0
2026-03-09T15:01:17.182 INFO:tasks.workunit.client.0.vm05.stdout:2/58: chown da/f10 193981 1
2026-03-09T15:01:17.183 INFO:tasks.workunit.client.0.vm05.stdout:8/33: mknod d0/d1/cb 0
2026-03-09T15:01:17.183 INFO:tasks.workunit.client.0.vm05.stdout:8/34: readlink - no filename
2026-03-09T15:01:17.185 INFO:tasks.workunit.client.0.vm05.stdout:8/35: mkdir d0/dc 0
2026-03-09T15:01:17.306 INFO:tasks.workunit.client.0.vm05.stdout:7/52: rename d1/d6/fe to d1/d9/f10 0
2026-03-09T15:01:17.307 INFO:tasks.workunit.client.0.vm05.stdout:7/53: write d1/d9/fd [149548,83243] 0
2026-03-09T15:01:17.311 INFO:tasks.workunit.client.0.vm05.stdout:7/54: dwrite d1/d9/fc [0,4194304] 0
2026-03-09T15:01:17.325 INFO:tasks.workunit.client.0.vm05.stdout:6/34: rename f6 to da/fb 0
2026-03-09T15:01:17.325 INFO:tasks.workunit.client.0.vm05.stdout:6/35: dread da/fb [0,4194304] 0
2026-03-09T15:01:17.325 INFO:tasks.workunit.client.0.vm05.stdout:6/36: mknod da/cc 0
2026-03-09T15:01:17.325 INFO:tasks.workunit.client.0.vm05.stdout:6/37: rmdir da 39
2026-03-09T15:01:17.325 INFO:tasks.workunit.client.0.vm05.stdout:6/38: dread f5 [0,4194304] 0
2026-03-09T15:01:17.325 INFO:tasks.workunit.client.0.vm05.stdout:7/55: dwrite d1/fa [0,4194304] 0
2026-03-09T15:01:17.325 INFO:tasks.workunit.client.0.vm05.stdout:6/39: stat c4 0
2026-03-09T15:01:17.325 INFO:tasks.workunit.client.0.vm05.stdout:7/56: write d1/d9/fc [2727566,25892] 0
2026-03-09T15:01:17.328 INFO:tasks.workunit.client.0.vm05.stdout:7/57: chown d1/d9/f10 23165 1
2026-03-09T15:01:17.337 INFO:tasks.workunit.client.0.vm05.stdout:7/58: creat d1/d6/f11 x:0 0 0
2026-03-09T15:01:17.349 INFO:tasks.workunit.client.0.vm05.stdout:2/59: rmdir da/dd 39
2026-03-09T15:01:17.353 INFO:tasks.workunit.client.0.vm05.stdout:2/60: write f4 [181348,24731] 0
2026-03-09T15:01:17.353 INFO:tasks.workunit.client.0.vm05.stdout:2/61: fdatasync da/f10 0
2026-03-09T15:01:17.356 INFO:tasks.workunit.client.0.vm05.stdout:2/62: write da/dd/ff [571062,63425] 0
2026-03-09T15:01:17.373 INFO:tasks.workunit.client.0.vm05.stdout:0/5: sync
2026-03-09T15:01:17.373 INFO:tasks.workunit.client.0.vm05.stdout:3/29: sync
2026-03-09T15:01:17.373 INFO:tasks.workunit.client.0.vm05.stdout:4/29: sync
2026-03-09T15:01:17.373 INFO:tasks.workunit.client.0.vm05.stdout:8/36: sync
2026-03-09T15:01:17.373 INFO:tasks.workunit.client.0.vm05.stdout:5/7: sync
2026-03-09T15:01:17.374 INFO:tasks.workunit.client.0.vm05.stdout:1/25: sync
2026-03-09T15:01:17.374 INFO:tasks.workunit.client.0.vm05.stdout:9/10: sync
2026-03-09T15:01:17.375 INFO:tasks.workunit.client.0.vm05.stdout:4/30: truncate d2/f5 392383 0
2026-03-09T15:01:17.381 INFO:tasks.workunit.client.0.vm05.stdout:5/8: mkdir d1 0
2026-03-09T15:01:17.381 INFO:tasks.workunit.client.0.vm05.stdout:0/6: mknod c2 0
2026-03-09T15:01:17.382 INFO:tasks.workunit.client.0.vm05.stdout:1/26: creat f7 x:0 0 0
2026-03-09T15:01:17.382 INFO:tasks.workunit.client.0.vm05.stdout:8/37: mknod d0/d7/cd 0
2026-03-09T15:01:17.383 INFO:tasks.workunit.client.0.vm05.stdout:1/27: chown f6 15 1
2026-03-09T15:01:17.384 INFO:tasks.workunit.client.0.vm05.stdout:1/28: chown l2 10294 1
2026-03-09T15:01:17.384 INFO:tasks.workunit.client.0.vm05.stdout:1/29: dread - f7 zero size
2026-03-09T15:01:17.385 INFO:tasks.workunit.client.0.vm05.stdout:8/38: truncate d0/fa 495222 0
2026-03-09T15:01:17.386 INFO:tasks.workunit.client.0.vm05.stdout:5/9: sync
2026-03-09T15:01:17.390 INFO:tasks.workunit.client.0.vm05.stdout:0/7: dwrite f0 [0,4194304] 0
2026-03-09T15:01:17.392 INFO:tasks.workunit.client.0.vm05.stdout:0/8: write f0 [3023578,99364] 0
2026-03-09T15:01:17.401 INFO:tasks.workunit.client.0.vm05.stdout:9/11: rename l1 to d2/l3 0
2026-03-09T15:01:17.401 INFO:tasks.workunit.client.0.vm05.stdout:9/12: write - no filename
2026-03-09T15:01:17.412 INFO:tasks.workunit.client.0.vm05.stdout:1/30: symlink l8 0
2026-03-09T15:01:17.413 INFO:tasks.workunit.client.0.vm05.stdout:8/39: creat d0/d7/fe x:0 0 0
2026-03-09T15:01:17.423 INFO:tasks.workunit.client.0.vm05.stdout:6/40: getdents da 0
2026-03-09T15:01:17.424 INFO:tasks.workunit.client.0.vm05.stdout:9/13: creat d2/f4 x:0 0 0
2026-03-09T15:01:17.425 INFO:tasks.workunit.client.0.vm05.stdout:7/59: rename d1/d6 to d1/d12 0
2026-03-09T15:01:17.428 INFO:tasks.workunit.client.0.vm05.stdout:2/63: truncate f6 3513890 0
2026-03-09T15:01:17.428 INFO:tasks.workunit.client.0.vm05.stdout:4/31: symlink d2/d4/d8/la 0
2026-03-09T15:01:17.429 INFO:tasks.workunit.client.0.vm05.stdout:4/32: chown d2/d4/d7 435 1
2026-03-09T15:01:17.438 INFO:tasks.workunit.client.0.vm05.stdout:5/10: symlink d1/l2 0
2026-03-09T15:01:17.438 INFO:tasks.workunit.client.0.vm05.stdout:5/11: dwrite - no filename
2026-03-09T15:01:17.447 INFO:tasks.workunit.client.0.vm05.stdout:9/14: creat d2/f5 x:0 0 0
2026-03-09T15:01:17.448 INFO:tasks.workunit.client.0.vm05.stdout:9/15: dread - d2/f4 zero size
2026-03-09T15:01:17.448 INFO:tasks.workunit.client.0.vm05.stdout:7/60: write d1/d9/f10 [1432683,118171] 0
2026-03-09T15:01:17.449 INFO:tasks.workunit.client.0.vm05.stdout:9/16: dread - d2/f4 zero size
2026-03-09T15:01:17.449 INFO:tasks.workunit.client.0.vm05.stdout:9/17: chown d2/f4 280221 1
2026-03-09T15:01:17.451 INFO:tasks.workunit.client.0.vm05.stdout:7/61: dread d1/d9/fd [0,4194304] 0
2026-03-09T15:01:17.454 INFO:tasks.workunit.client.0.vm05.stdout:4/33: mknod d2/d4/cb 0
2026-03-09T15:01:17.455 INFO:tasks.workunit.client.0.vm05.stdout:4/34: write d2/f5 [1425499,55955] 0
2026-03-09T15:01:17.456 INFO:tasks.workunit.client.0.vm05.stdout:4/35: fsync d2/d4/d7/f9 0
2026-03-09T15:01:17.456 INFO:tasks.workunit.client.0.vm05.stdout:4/36: dread - d2/d4/d7/f9 zero size
2026-03-09T15:01:17.457 INFO:tasks.workunit.client.0.vm05.stdout:4/37: chown d2/d4/d7 1705458 1
2026-03-09T15:01:17.459 INFO:tasks.workunit.client.0.vm05.stdout:3/30: link d3/l4 d3/la 0
2026-03-09T15:01:17.459 INFO:tasks.workunit.client.0.vm05.stdout:8/40: mkdir d0/dc/df 0
2026-03-09T15:01:17.459 INFO:tasks.workunit.client.0.vm05.stdout:3/31: dread - d3/f7 zero size
2026-03-09T15:01:17.460 INFO:tasks.workunit.client.0.vm05.stdout:8/41: chown d0/d1 1771855 1
2026-03-09T15:01:17.461 INFO:tasks.workunit.client.0.vm05.stdout:7/62: sync
2026-03-09T15:01:17.463 INFO:tasks.workunit.client.0.vm05.stdout:7/63: sync
2026-03-09T15:01:17.465 INFO:tasks.workunit.client.0.vm05.stdout:4/38: dwrite d2/f5 [0,4194304] 0
2026-03-09T15:01:17.466 INFO:tasks.workunit.client.0.vm05.stdout:4/39: read d2/f5 [1552773,48738] 0
2026-03-09T15:01:17.468 INFO:tasks.workunit.client.0.vm05.stdout:7/64: dread d1/fa [0,4194304] 0
2026-03-09T15:01:17.470 INFO:tasks.workunit.client.0.vm05.stdout:5/12: creat d1/f3 x:0 0 0
2026-03-09T15:01:17.475 INFO:tasks.workunit.client.0.vm05.stdout:9/18: rename d2/f4 to d2/f6 0
2026-03-09T15:01:17.477 INFO:tasks.workunit.client.0.vm05.stdout:3/32: readlink d3/la 0
2026-03-09T15:01:17.481 INFO:tasks.workunit.client.0.vm05.stdout:4/40: mkdir d2/d4/d7/dc 0
2026-03-09T15:01:17.482 INFO:tasks.workunit.client.0.vm05.stdout:5/13: mkdir d1/d4 0
2026-03-09T15:01:17.486 INFO:tasks.workunit.client.0.vm05.stdout:3/33: creat d3/fb x:0 0 0
2026-03-09T15:01:17.487 INFO:tasks.workunit.client.0.vm05.stdout:3/34: write d3/f8 [245645,11232] 0
2026-03-09T15:01:17.492 INFO:tasks.workunit.client.0.vm05.stdout:7/65: link d1/d9/f10 d1/d9/f13 0
2026-03-09T15:01:17.493 INFO:tasks.workunit.client.0.vm05.stdout:4/41: rmdir d2/d4 39
2026-03-09T15:01:17.496 INFO:tasks.workunit.client.0.vm05.stdout:8/42: creat d0/f10 x:0 0 0
2026-03-09T15:01:17.497 INFO:tasks.workunit.client.0.vm05.stdout:8/43: dread d0/fa [0,4194304] 0
2026-03-09T15:01:17.498 INFO:tasks.workunit.client.0.vm05.stdout:8/44: chown d0 3062 1
2026-03-09T15:01:17.500 INFO:tasks.workunit.client.0.vm05.stdout:7/66: dread d1/d9/f13 [0,4194304] 0
2026-03-09T15:01:17.501 INFO:tasks.workunit.client.0.vm05.stdout:3/35: symlink d3/lc 0
2026-03-09T15:01:17.502 INFO:tasks.workunit.client.0.vm05.stdout:3/36: write d3/f8 [480355,36382] 0
2026-03-09T15:01:17.502 INFO:tasks.workunit.client.0.vm05.stdout:3/37: chown d3/fb 24915 1
2026-03-09T15:01:17.503 INFO:tasks.workunit.client.0.vm05.stdout:4/42: write d2/d4/d7/f9 [117587,88020] 0
2026-03-09T15:01:17.507 INFO:tasks.workunit.client.0.vm05.stdout:5/14: creat d1/d4/f5 x:0 0 0
2026-03-09T15:01:17.508 INFO:tasks.workunit.client.0.vm05.stdout:7/67: sync
2026-03-09T15:01:17.521 INFO:tasks.workunit.client.0.vm05.stdout:4/43: symlink d2/d4/d8/ld 0
2026-03-09T15:01:17.523 INFO:tasks.workunit.client.0.vm05.stdout:5/15: creat d1/f6 x:0 0 0
2026-03-09T15:01:17.524 INFO:tasks.workunit.client.0.vm05.stdout:4/44: dread d2/d4/d7/f9 [0,4194304] 0
2026-03-09T15:01:17.529 INFO:tasks.workunit.client.0.vm05.stdout:5/16: sync
2026-03-09T15:01:17.530 INFO:tasks.workunit.client.0.vm05.stdout:8/45: fsync d0/d7/fe 0
2026-03-09T15:01:17.531 INFO:tasks.workunit.client.0.vm05.stdout:8/46: dread - d0/d7/fe zero size
2026-03-09T15:01:17.533 INFO:tasks.workunit.client.0.vm05.stdout:2/64: fsync f6 0
2026-03-09T15:01:17.535 INFO:tasks.workunit.client.0.vm05.stdout:0/9: truncate f0 3181099 0
2026-03-09T15:01:17.544 INFO:tasks.workunit.client.0.vm05.stdout:2/65: dread da/dd/ff [0,4194304] 0
2026-03-09T15:01:17.544 INFO:tasks.workunit.client.0.vm05.stdout:2/66: truncate f4 3574382 0
2026-03-09T15:01:17.545 INFO:tasks.workunit.client.0.vm05.stdout:1/31: truncate f6 945838 0
2026-03-09T15:01:17.563 INFO:tasks.workunit.client.0.vm05.stdout:6/41: truncate f5 619141 0
2026-03-09T15:01:17.587 INFO:tasks.workunit.client.0.vm05.stdout:9/19: fsync d2/f6 0
2026-03-09T15:01:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:17 vm09.local ceph-mon[59673]: pgmap v164: 65 pgs: 65 active+clean; 1.9 GiB data, 6.6 GiB used, 113 GiB / 120 GiB avail; 34 MiB/s rd, 82 MiB/s wr, 363 op/s
2026-03-09T15:01:17.647 INFO:tasks.workunit.client.0.vm05.stdout:3/38: rename d3/l4 to d3/ld 0
2026-03-09T15:01:17.649 INFO:tasks.workunit.client.0.vm05.stdout:4/45: rename d2/d4/cb to d2/d4/ce 0
2026-03-09T15:01:17.657 INFO:tasks.workunit.client.0.vm05.stdout:5/17: creat d1/f7 x:0 0 0
2026-03-09T15:01:17.658 INFO:tasks.workunit.client.0.vm05.stdout:5/18: truncate d1/d4/f5 393858 0
2026-03-09T15:01:17.659 INFO:tasks.workunit.client.0.vm05.stdout:8/47: creat d0/d7/f11 x:0 0 0
2026-03-09T15:01:17.662 INFO:tasks.workunit.client.0.vm05.stdout:8/48: dwrite d0/d7/fe [0,4194304] 0
2026-03-09T15:01:17.664 INFO:tasks.workunit.client.0.vm05.stdout:0/10: symlink l3 0
2026-03-09T15:01:17.664 INFO:tasks.workunit.client.0.vm05.stdout:8/49: fdatasync d0/d7/f8 0
2026-03-09T15:01:17.665 INFO:tasks.workunit.client.0.vm05.stdout:8/50: write d0/f10 [766009,105070] 0
2026-03-09T15:01:17.667 INFO:tasks.workunit.client.0.vm05.stdout:1/32: mkdir d9 0
2026-03-09T15:01:17.675 INFO:tasks.workunit.client.0.vm05.stdout:6/42: mknod da/cd 0
2026-03-09T15:01:17.675 INFO:tasks.workunit.client.0.vm05.stdout:6/43: chown c0 2342 1
2026-03-09T15:01:17.676 INFO:tasks.workunit.client.0.vm05.stdout:9/20: mkdir d2/d7 0
2026-03-09T15:01:17.678 INFO:tasks.workunit.client.0.vm05.stdout:3/39: creat d3/fe x:0 0 0
2026-03-09T15:01:17.682 INFO:tasks.workunit.client.0.vm05.stdout:3/40: dwrite d3/f7 [0,4194304] 0
2026-03-09T15:01:17.683 INFO:tasks.workunit.client.0.vm05.stdout:5/19: symlink d1/l8 0
2026-03-09T15:01:17.706 INFO:tasks.workunit.client.0.vm05.stdout:0/11: fdatasync f0 0
2026-03-09T15:01:17.706 INFO:tasks.workunit.client.0.vm05.stdout:0/12: dread f0 [0,4194304] 0
2026-03-09T15:01:17.706 INFO:tasks.workunit.client.0.vm05.stdout:0/13: write f0 [1737536,69110] 0
2026-03-09T15:01:17.706 INFO:tasks.workunit.client.0.vm05.stdout:2/67: mknod da/c11 0
2026-03-09T15:01:17.706 INFO:tasks.workunit.client.0.vm05.stdout:9/21: creat d2/f8 x:0 0 0
2026-03-09T15:01:17.706 INFO:tasks.workunit.client.0.vm05.stdout:5/20: creat d1/f9 x:0 0 0
2026-03-09T15:01:17.706 INFO:tasks.workunit.client.0.vm05.stdout:5/21: read - d1/f6 zero size
2026-03-09T15:01:17.706 INFO:tasks.workunit.client.0.vm05.stdout:2/68: rmdir da/dd 39
2026-03-09T15:01:17.706 INFO:tasks.workunit.client.0.vm05.stdout:9/22: creat d2/f9 x:0 0 0
2026-03-09T15:01:17.706 INFO:tasks.workunit.client.0.vm05.stdout:9/23: fsync d2/f5 0
2026-03-09T15:01:17.706 INFO:tasks.workunit.client.0.vm05.stdout:9/24: write d2/f6 [689467,12090] 0
2026-03-09T15:01:17.707 INFO:tasks.workunit.client.0.vm05.stdout:9/25: dwrite d2/f6 [0,4194304] 0
2026-03-09T15:01:17.709 INFO:tasks.workunit.client.0.vm05.stdout:5/22: mkdir d1/da 0
2026-03-09T15:01:17.709 INFO:tasks.workunit.client.0.vm05.stdout:9/26: write d2/f9 [824790,87241] 0
2026-03-09T15:01:17.710 INFO:tasks.workunit.client.0.vm05.stdout:8/51: sync
2026-03-09T15:01:17.710 INFO:tasks.workunit.client.0.vm05.stdout:5/23: write d1/f9 [23058,23449] 0
2026-03-09T15:01:17.711 INFO:tasks.workunit.client.0.vm05.stdout:9/27: rename d2/d7 to d2/d7/da 22
2026-03-09T15:01:17.716 INFO:tasks.workunit.client.0.vm05.stdout:9/28: dwrite d2/f6 [0,4194304] 0
2026-03-09T15:01:17.722 INFO:tasks.workunit.client.0.vm05.stdout:1/33: link l2 d9/la 0
2026-03-09T15:01:17.724 INFO:tasks.workunit.client.0.vm05.stdout:8/52: rename d0/dc/df to d0/d1/d12 0
2026-03-09T15:01:17.726 INFO:tasks.workunit.client.0.vm05.stdout:5/24: symlink d1/d4/lb 0
2026-03-09T15:01:17.726 INFO:tasks.workunit.client.0.vm05.stdout:0/14: link l3 l4 0
2026-03-09T15:01:17.726 INFO:tasks.workunit.client.0.vm05.stdout:5/25: chown d1/f9 24448 1
2026-03-09T15:01:17.727 INFO:tasks.workunit.client.0.vm05.stdout:9/29: sync
2026-03-09T15:01:17.728 INFO:tasks.workunit.client.0.vm05.stdout:5/26: dread d1/d4/f5 [0,4194304] 0
2026-03-09T15:01:17.732 INFO:tasks.workunit.client.0.vm05.stdout:2/69: dwrite da/dd/ff [0,4194304] 0
2026-03-09T15:01:17.737 INFO:tasks.workunit.client.0.vm05.stdout:8/53: rename d0/d1/f6 to d0/d1/f13 0
2026-03-09T15:01:17.738 INFO:tasks.workunit.client.0.vm05.stdout:5/27: dwrite d1/d4/f5 [0,4194304] 0
2026-03-09T15:01:17.740 INFO:tasks.workunit.client.0.vm05.stdout:0/15: mknod c5 0
2026-03-09T15:01:17.741 INFO:tasks.workunit.client.0.vm05.stdout:5/28: readlink d1/l2 0
2026-03-09T15:01:17.743 INFO:tasks.workunit.client.0.vm05.stdout:2/70: dwrite f2 [0,4194304] 0
2026-03-09T15:01:17.746 INFO:tasks.workunit.client.0.vm05.stdout:2/71: write f6 [3675168,1373] 0
2026-03-09T15:01:17.747 INFO:tasks.workunit.client.0.vm05.stdout:2/72: write da/dd/ff [1472469,72581] 0
2026-03-09T15:01:17.750 INFO:tasks.workunit.client.0.vm05.stdout:9/30: creat d2/fb x:0 0 0
2026-03-09T15:01:17.750 INFO:tasks.workunit.client.0.vm05.stdout:1/34: symlink d9/lb 0
2026-03-09T15:01:17.750 INFO:tasks.workunit.client.0.vm05.stdout:9/31: fdatasync d2/fb 0
2026-03-09T15:01:17.778 INFO:tasks.workunit.client.0.vm05.stdout:9/32: rename d2/fb to d2/fc 0
2026-03-09T15:01:17.792 INFO:tasks.workunit.client.0.vm05.stdout:0/16: link l3 l6 0
2026-03-09T15:01:17.793 INFO:tasks.workunit.client.0.vm05.stdout:0/17: fsync f0 0
2026-03-09T15:01:17.798 INFO:tasks.workunit.client.0.vm05.stdout:9/33: rmdir d2/d7 0
2026-03-09T15:01:17.800 INFO:tasks.workunit.client.0.vm05.stdout:0/18: rename l3 to l7 0
2026-03-09T15:01:17.800 INFO:tasks.workunit.client.0.vm05.stdout:0/19: read f0 [1229055,8541] 0
2026-03-09T15:01:17.802 INFO:tasks.workunit.client.0.vm05.stdout:0/20: unlink c2 0
2026-03-09T15:01:17.808 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:17 vm05.local ceph-mon[50611]: pgmap v164: 65 pgs: 65 active+clean; 1.9 GiB data, 6.6 GiB used, 113 GiB / 120 GiB avail; 34 MiB/s rd, 82 MiB/s wr, 363 op/s
2026-03-09T15:01:17.808 INFO:tasks.workunit.client.0.vm05.stdout:0/21: dwrite f0 [0,4194304] 0
2026-03-09T15:01:17.839 INFO:tasks.workunit.client.0.vm05.stdout:7/68: truncate d1/d9/fc 2781968 0
2026-03-09T15:01:17.842 INFO:tasks.workunit.client.0.vm05.stdout:4/46: link d2/d4/ce d2/d4/d7/dc/cf 0
2026-03-09T15:01:17.843 INFO:tasks.workunit.client.0.vm05.stdout:8/54: getdents d0/d7 0
2026-03-09T15:01:17.844 INFO:tasks.workunit.client.0.vm05.stdout:4/47: dread d2/d4/d7/f9 [0,4194304] 0
2026-03-09T15:01:17.845 INFO:tasks.workunit.client.0.vm05.stdout:4/48: chown d2/d4/d7/f9 241870 1
2026-03-09T15:01:17.847 INFO:tasks.workunit.client.0.vm05.stdout:4/49: mknod d2/d4/d7/c10 0
2026-03-09T15:01:17.847 INFO:tasks.workunit.client.0.vm05.stdout:4/50: readlink l1 0
2026-03-09T15:01:17.849 INFO:tasks.workunit.client.0.vm05.stdout:4/51: unlink l1 0
2026-03-09T15:01:17.850 INFO:tasks.workunit.client.0.vm05.stdout:3/41: getdents d3 0
2026-03-09T15:01:17.852 INFO:tasks.workunit.client.0.vm05.stdout:1/35: rename f6 to d9/fc 0
2026-03-09T15:01:17.854 INFO:tasks.workunit.client.0.vm05.stdout:3/42: mkdir d3/df 0
2026-03-09T15:01:17.855 INFO:tasks.workunit.client.0.vm05.stdout:1/36: unlink l8 0
2026-03-09T15:01:17.856 INFO:tasks.workunit.client.0.vm05.stdout:2/73: getdents da 0
2026-03-09T15:01:17.859 INFO:tasks.workunit.client.0.vm05.stdout:3/43: write d3/f7 [4346497,3096] 0
2026-03-09T15:01:17.861 INFO:tasks.workunit.client.0.vm05.stdout:1/37: readlink d9/la 0
2026-03-09T15:01:17.863 INFO:tasks.workunit.client.0.vm05.stdout:2/74: creat da/dd/f12 x:0 0 0
2026-03-09T15:01:17.865 INFO:tasks.workunit.client.0.vm05.stdout:2/75: dread f2 [0,4194304] 0
2026-03-09T15:01:17.866 INFO:tasks.workunit.client.0.vm05.stdout:2/76: write f5 [5136179,46661] 0
2026-03-09T15:01:17.866 INFO:tasks.workunit.client.0.vm05.stdout:2/77: chown l3 27151 1
2026-03-09T15:01:17.872 INFO:tasks.workunit.client.0.vm05.stdout:2/78: dwrite f6 [0,4194304] 0
2026-03-09T15:01:17.873 INFO:tasks.workunit.client.0.vm05.stdout:2/79: chown da/dd/ff 686 1
2026-03-09T15:01:17.877 INFO:tasks.workunit.client.0.vm05.stdout:3/44: dwrite d3/fb [0,4194304] 0
2026-03-09T15:01:17.891 INFO:tasks.workunit.client.0.vm05.stdout:1/38: sync
2026-03-09T15:01:17.893 INFO:tasks.workunit.client.0.vm05.stdout:3/45: truncate d3/fe 837919 0
2026-03-09T15:01:17.893 INFO:tasks.workunit.client.0.vm05.stdout:3/46: truncate d3/fe 1369123 0
2026-03-09T15:01:17.896 INFO:tasks.workunit.client.0.vm05.stdout:1/39: symlink d9/ld 0
2026-03-09T15:01:17.897 INFO:tasks.workunit.client.0.vm05.stdout:3/47: chown d3/ld 18563 1
2026-03-09T15:01:17.897 INFO:tasks.workunit.client.0.vm05.stdout:5/29: rmdir d1 39
2026-03-09T15:01:17.898 INFO:tasks.workunit.client.0.vm05.stdout:1/40: symlink d9/le 0
2026-03-09T15:01:17.898 INFO:tasks.workunit.client.0.vm05.stdout:1/41: read - f7 zero size
2026-03-09T15:01:17.899 INFO:tasks.workunit.client.0.vm05.stdout:8/55: rename d0/d1/f13 to d0/d7/f14 0
2026-03-09T15:01:17.900 INFO:tasks.workunit.client.0.vm05.stdout:8/56: chown d0/d1/c9 223496427 1
2026-03-09T15:01:17.900 INFO:tasks.workunit.client.0.vm05.stdout:8/57: stat d0/d7/f11 0
2026-03-09T15:01:17.901 INFO:tasks.workunit.client.0.vm05.stdout:5/30: rename d1 to d1/d4/dc 22
2026-03-09T15:01:17.901 INFO:tasks.workunit.client.0.vm05.stdout:5/31: dread - d1/f3 zero size
2026-03-09T15:01:17.902 INFO:tasks.workunit.client.0.vm05.stdout:3/48: mkdir d3/df/d10 0
2026-03-09T15:01:17.903 INFO:tasks.workunit.client.0.vm05.stdout:8/58: creat d0/dc/f15 x:0 0 0
2026-03-09T15:01:17.905 INFO:tasks.workunit.client.0.vm05.stdout:3/49: readlink d3/la 0
2026-03-09T15:01:17.905 INFO:tasks.workunit.client.0.vm05.stdout:3/50: fsync d3/f7 0
2026-03-09T15:01:17.908 INFO:tasks.workunit.client.0.vm05.stdout:5/32: symlink d1/da/ld 0
2026-03-09T15:01:17.908 INFO:tasks.workunit.client.0.vm05.stdout:5/33: dread - d1/f3 zero size
2026-03-09T15:01:17.908 INFO:tasks.workunit.client.0.vm05.stdout:1/42: sync
2026-03-09T15:01:17.908 INFO:tasks.workunit.client.0.vm05.stdout:5/34: read - d1/f3 zero size
2026-03-09T15:01:17.909 INFO:tasks.workunit.client.0.vm05.stdout:1/43: truncate f5 194881 0
2026-03-09T15:01:17.917 INFO:tasks.workunit.client.0.vm05.stdout:9/34: rmdir d2 39
2026-03-09T15:01:17.918 INFO:tasks.workunit.client.0.vm05.stdout:3/51: chown d3/l9 272651655 1
2026-03-09T15:01:17.920 INFO:tasks.workunit.client.0.vm05.stdout:5/35: creat d1/da/fe x:0 0 0
2026-03-09T15:01:17.921 INFO:tasks.workunit.client.0.vm05.stdout:0/22: truncate f0 1504496 0
2026-03-09T15:01:17.922 INFO:tasks.workunit.client.0.vm05.stdout:9/35: dread - d2/f8 zero size
2026-03-09T15:01:17.925 INFO:tasks.workunit.client.0.vm05.stdout:3/52: unlink d3/fb 0
2026-03-09T15:01:17.928 INFO:tasks.workunit.client.0.vm05.stdout:5/36: write d1/d4/f5 [1862099,41743] 0
2026-03-09T15:01:17.931 INFO:tasks.workunit.client.0.vm05.stdout:0/23: symlink l8 0
2026-03-09T15:01:17.933 INFO:tasks.workunit.client.0.vm05.stdout:9/36: creat d2/fd x:0 0 0
2026-03-09T15:01:17.938 INFO:tasks.workunit.client.0.vm05.stdout:0/24: sync
2026-03-09T15:01:17.942 INFO:tasks.workunit.client.0.vm05.stdout:3/53: dwrite d3/f8 [0,4194304] 0
2026-03-09T15:01:17.945 INFO:tasks.workunit.client.0.vm05.stdout:5/37: rename d1/f7 to d1/ff 0
2026-03-09T15:01:17.945 INFO:tasks.workunit.client.0.vm05.stdout:5/38: rename d1 to d1/d10 22
2026-03-09T15:01:17.947 INFO:tasks.workunit.client.0.vm05.stdout:9/37: creat d2/fe x:0 0 0
2026-03-09T15:01:17.947 INFO:tasks.workunit.client.0.vm05.stdout:9/38: truncate d2/f8 229666 0
2026-03-09T15:01:17.948 INFO:tasks.workunit.client.0.vm05.stdout:9/39: write d2/f5 [115650,30977] 0
2026-03-09T15:01:17.948 INFO:tasks.workunit.client.0.vm05.stdout:9/40: dread - d2/fd zero size
2026-03-09T15:01:17.948 INFO:tasks.workunit.client.0.vm05.stdout:9/41: stat d2/fd 0
2026-03-09T15:01:17.955 INFO:tasks.workunit.client.0.vm05.stdout:7/69: write d1/d9/f13 [2388655,27393] 0
2026-03-09T15:01:17.959 INFO:tasks.workunit.client.0.vm05.stdout:6/44: link f5 da/fe 0
2026-03-09T15:01:17.962 INFO:tasks.workunit.client.0.vm05.stdout:3/54: creat d3/df/f11 x:0 0 0
2026-03-09T15:01:17.964 INFO:tasks.workunit.client.0.vm05.stdout:3/55: dread d3/f7 [0,4194304] 0
2026-03-09T15:01:17.968 INFO:tasks.workunit.client.0.vm05.stdout:5/39: creat d1/d4/f11 x:0 0 0
2026-03-09T15:01:17.968 INFO:tasks.workunit.client.0.vm05.stdout:5/40: rename d1/d4 to d1/d4/d12 22
2026-03-09T15:01:17.972 INFO:tasks.workunit.client.0.vm05.stdout:7/70: dread d1/d9/f13 [0,4194304] 0
2026-03-09T15:01:17.972 INFO:tasks.workunit.client.0.vm05.stdout:7/71: write d1/d9/f10 [2871226,14233] 0
2026-03-09T15:01:17.976 INFO:tasks.workunit.client.0.vm05.stdout:4/52: truncate d2/f5 2347546 0
2026-03-09T15:01:17.977 INFO:tasks.workunit.client.0.vm05.stdout:2/80: rmdir da/dd 39
2026-03-09T15:01:18.023 INFO:tasks.workunit.client.0.vm05.stdout:6/45: rmdir da 39
2026-03-09T15:01:18.023 INFO:tasks.workunit.client.0.vm05.stdout:8/59: getdents d0/d7 0
2026-03-09T15:01:18.023 INFO:tasks.workunit.client.0.vm05.stdout:6/46: stat c0 0
2026-03-09T15:01:18.024 INFO:tasks.workunit.client.0.vm05.stdout:1/44: dread d9/fc [0,4194304] 0
2026-03-09T15:01:18.029 INFO:tasks.workunit.client.0.vm05.stdout:5/41: creat d1/f13 x:0 0 0
2026-03-09T15:01:18.037 INFO:tasks.workunit.client.0.vm05.stdout:2/81: unlink f2 0
2026-03-09T15:01:18.038 INFO:tasks.workunit.client.0.vm05.stdout:3/56: creat d3/df/d10/f12 x:0 0 0
2026-03-09T15:01:18.038 INFO:tasks.workunit.client.0.vm05.stdout:1/45: fsync d9/fc 0
2026-03-09T15:01:18.038 INFO:tasks.workunit.client.0.vm05.stdout:5/42: creat d1/f14 x:0 0 0
2026-03-09T15:01:18.040 INFO:tasks.workunit.client.0.vm05.stdout:1/46: dread d9/fc [0,4194304] 0
2026-03-09T15:01:18.041 INFO:tasks.workunit.client.0.vm05.stdout:0/25: dread f0 [0,4194304] 0
2026-03-09T15:01:18.041 INFO:tasks.workunit.client.0.vm05.stdout:3/57: dread d3/f8 [0,4194304] 0
2026-03-09T15:01:18.043 INFO:tasks.workunit.client.0.vm05.stdout:3/58: dread d3/fe [0,4194304] 0
2026-03-09T15:01:18.049 INFO:tasks.workunit.client.0.vm05.stdout:7/72: mkdir d1/d14 0
2026-03-09T15:01:18.050 INFO:tasks.workunit.client.0.vm05.stdout:1/47: sync
2026-03-09T15:01:18.051 INFO:tasks.workunit.client.0.vm05.stdout:1/48: dread d9/fc [0,4194304] 0
2026-03-09T15:01:18.055 INFO:tasks.workunit.client.0.vm05.stdout:1/49: sync
2026-03-09T15:01:18.056 INFO:tasks.workunit.client.0.vm05.stdout:9/42: rmdir d2 39
2026-03-09T15:01:18.058 INFO:tasks.workunit.client.0.vm05.stdout:5/43: mknod d1/c15 0
2026-03-09T15:01:18.062 INFO:tasks.workunit.client.0.vm05.stdout:0/26: mkdir d9 0
2026-03-09T15:01:18.067 INFO:tasks.workunit.client.0.vm05.stdout:8/60: rename d0/c5 to d0/c16 0
2026-03-09T15:01:18.068 INFO:tasks.workunit.client.0.vm05.stdout:8/61: read d0/d7/f14 [1521401,30300] 0
2026-03-09T15:01:18.068 INFO:tasks.workunit.client.0.vm05.stdout:8/62: stat d0/d7/cd 0
2026-03-09T15:01:18.069 INFO:tasks.workunit.client.0.vm05.stdout:8/63: fdatasync d0/d7/f11 0
2026-03-09T15:01:18.071 INFO:tasks.workunit.client.0.vm05.stdout:1/50: rename c1 to d9/cf 0
2026-03-09T15:01:18.078 INFO:tasks.workunit.client.0.vm05.stdout:9/43: write d2/fd [945892,15228] 0
2026-03-09T15:01:18.082 INFO:tasks.workunit.client.0.vm05.stdout:5/44: creat d1/f16 x:0 0 0
2026-03-09T15:01:18.087 INFO:tasks.workunit.client.0.vm05.stdout:0/27: dwrite f0 [0,4194304] 0
2026-03-09T15:01:18.088 INFO:tasks.workunit.client.0.vm05.stdout:2/82: mkdir da/d13 0
2026-03-09T15:01:18.090 INFO:tasks.workunit.client.0.vm05.stdout:0/28: write f0 [2789109,16877] 0
2026-03-09T15:01:18.094 INFO:tasks.workunit.client.0.vm05.stdout:1/51: rmdir d9 39
2026-03-09T15:01:18.098 INFO:tasks.workunit.client.0.vm05.stdout:5/45: symlink d1/l17 0
2026-03-09T15:01:18.098 INFO:tasks.workunit.client.0.vm05.stdout:3/59: creat d3/f13 x:0 0 0
2026-03-09T15:01:18.098 INFO:tasks.workunit.client.0.vm05.stdout:9/44: creat d2/ff x:0 0 0
2026-03-09T15:01:18.098 INFO:tasks.workunit.client.0.vm05.stdout:5/46: chown d1/d4 7354 1
2026-03-09T15:01:18.099 INFO:tasks.workunit.client.0.vm05.stdout:3/60: chown d3/fe 3987768 1
2026-03-09T15:01:18.103 INFO:tasks.workunit.client.0.vm05.stdout:8/64: symlink d0/l17 0
2026-03-09T15:01:18.103 INFO:tasks.workunit.client.0.vm05.stdout:7/73: creat d1/f15 x:0 0 0
2026-03-09T15:01:18.104 INFO:tasks.workunit.client.0.vm05.stdout:0/29: symlink d9/la 0
2026-03-09T15:01:18.104 INFO:tasks.workunit.client.0.vm05.stdout:9/45: mkdir d2/d10 0
2026-03-09T15:01:18.107 INFO:tasks.workunit.client.0.vm05.stdout:5/47: symlink d1/l18 0
2026-03-09T15:01:18.114 INFO:tasks.workunit.client.0.vm05.stdout:1/52: mknod d9/c10 0
2026-03-09T15:01:18.119 INFO:tasks.workunit.client.0.vm05.stdout:8/65: dwrite d0/d7/f14 [0,4194304] 0
2026-03-09T15:01:18.121 INFO:tasks.workunit.client.0.vm05.stdout:8/66: chown d0/d7 423 1
2026-03-09T15:01:18.122 INFO:tasks.workunit.client.0.vm05.stdout:0/30: rmdir d9 39
2026-03-09T15:01:18.123 INFO:tasks.workunit.client.0.vm05.stdout:0/31: chown l8 0 1
2026-03-09T15:01:18.131 INFO:tasks.workunit.client.0.vm05.stdout:5/48: dwrite d1/ff [0,4194304] 0
2026-03-09T15:01:18.132 INFO:tasks.workunit.client.0.vm05.stdout:5/49: write d1/f16 [658655,78133] 0
2026-03-09T15:01:18.133 INFO:tasks.workunit.client.0.vm05.stdout:5/50: chown d1/c15 699587 1
2026-03-09T15:01:18.133 INFO:tasks.workunit.client.0.vm05.stdout:5/51: truncate d1/f16 1710528 0
2026-03-09T15:01:18.138 INFO:tasks.workunit.client.0.vm05.stdout:1/53: unlink d9/fc 0
2026-03-09T15:01:18.143 INFO:tasks.workunit.client.0.vm05.stdout:4/53: dwrite d2/d4/d7/f9 [0,4194304] 0
2026-03-09T15:01:18.148 INFO:tasks.workunit.client.0.vm05.stdout:9/46: link d2/f6 d2/f11 0
2026-03-09T15:01:18.154 INFO:tasks.workunit.client.0.vm05.stdout:7/74: creat d1/f16 x:0 0 0
2026-03-09T15:01:18.157 INFO:tasks.workunit.client.0.vm05.stdout:5/52: mkdir d1/d4/d19 0
2026-03-09T15:01:18.160 INFO:tasks.workunit.client.0.vm05.stdout:5/53: dwrite d1/f3 [0,4194304] 0
2026-03-09T15:01:18.163 INFO:tasks.workunit.client.0.vm05.stdout:1/54: symlink d9/l11 0
2026-03-09T15:01:18.163 INFO:tasks.workunit.client.0.vm05.stdout:1/55: fdatasync f5 0
2026-03-09T15:01:18.169 INFO:tasks.workunit.client.0.vm05.stdout:8/67: mknod d0/c18 0
2026-03-09T15:01:18.169 INFO:tasks.workunit.client.0.vm05.stdout:8/68: read d0/d7/fe [3507672,86095] 0
2026-03-09T15:01:18.170 INFO:tasks.workunit.client.0.vm05.stdout:8/69: rename d0/d7 to d0/d7/d19 22
2026-03-09T15:01:18.176 INFO:tasks.workunit.client.0.vm05.stdout:8/70: dwrite d0/d7/fe [0,4194304] 0
2026-03-09T15:01:18.179 INFO:tasks.workunit.client.0.vm05.stdout:4/54: mknod d2/d4/d7/c11 0
2026-03-09T15:01:18.180 INFO:tasks.workunit.client.0.vm05.stdout:9/47: readlink d2/l3 0
2026-03-09T15:01:18.180 INFO:tasks.workunit.client.0.vm05.stdout:9/48: write d2/f9 [1318217,124866] 0
2026-03-09T15:01:18.186 INFO:tasks.workunit.client.0.vm05.stdout:7/75: dwrite d1/d9/f10 [0,4194304] 0
2026-03-09T15:01:18.189 INFO:tasks.workunit.client.0.vm05.stdout:1/56: creat d9/f12 x:0 0 0
2026-03-09T15:01:18.203 INFO:tasks.workunit.client.0.vm05.stdout:8/71: rename d0/d7/f11 to d0/d7/f1a 0
2026-03-09T15:01:18.203 INFO:tasks.workunit.client.0.vm05.stdout:1/57: symlink d9/l13 0
2026-03-09T15:01:18.203 INFO:tasks.workunit.client.0.vm05.stdout:9/49: creat d2/f12 x:0 0 0
2026-03-09T15:01:18.203 INFO:tasks.workunit.client.0.vm05.stdout:1/58: fsync f7 0
2026-03-09T15:01:18.204 INFO:tasks.workunit.client.0.vm05.stdout:9/50: read - d2/fc zero size
2026-03-09T15:01:18.204 INFO:tasks.workunit.client.0.vm05.stdout:9/51: readlink d2/l3 0
2026-03-09T15:01:18.204 INFO:tasks.workunit.client.0.vm05.stdout:1/59: write f5 [814555,111474] 0
2026-03-09T15:01:18.205 INFO:tasks.workunit.client.0.vm05.stdout:1/60: write d9/f12 [585095,37901] 0
2026-03-09T15:01:18.214 INFO:tasks.workunit.client.0.vm05.stdout:9/52: rename d2/f9 to d2/f13 0
2026-03-09T15:01:18.214 INFO:tasks.workunit.client.0.vm05.stdout:9/53: fdatasync d2/fe 0
2026-03-09T15:01:18.215 INFO:tasks.workunit.client.0.vm05.stdout:9/54: chown d2/f5 127702269 1
2026-03-09T15:01:18.219 INFO:tasks.workunit.client.0.vm05.stdout:9/55: dwrite d2/fe [0,4194304] 0
2026-03-09T15:01:18.241 INFO:tasks.workunit.client.0.vm05.stdout:8/72: dread d0/f4 [0,4194304] 0
2026-03-09T15:01:18.248 INFO:tasks.workunit.client.0.vm05.stdout:1/61: sync
2026-03-09T15:01:18.250 INFO:tasks.workunit.client.0.vm05.stdout:1/62: symlink d9/l14 0
2026-03-09T15:01:18.256 INFO:tasks.workunit.client.0.vm05.stdout:9/56: sync
2026-03-09T15:01:18.257 INFO:tasks.workunit.client.0.vm05.stdout:9/57: chown d2/l3 3768 1
2026-03-09T15:01:18.264 INFO:tasks.workunit.client.0.vm05.stdout:1/63: dwrite f5 [0,4194304] 0
2026-03-09T15:01:18.267 INFO:tasks.workunit.client.0.vm05.stdout:9/58: write d2/f11 [4921494,126520] 0
2026-03-09T15:01:18.269 INFO:tasks.workunit.client.0.vm05.stdout:1/64: creat d9/f15 x:0 0 0
2026-03-09T15:01:18.270 INFO:tasks.workunit.client.0.vm05.stdout:9/59: write d2/fc [424170,102761] 0
2026-03-09T15:01:18.270 INFO:tasks.workunit.client.0.vm05.stdout:9/60: chown d2/d10 948166897 1
2026-03-09T15:01:18.273 INFO:tasks.workunit.client.0.vm05.stdout:1/65: rename d9/lb to d9/l16 0
2026-03-09T15:01:18.276 INFO:tasks.workunit.client.0.vm05.stdout:9/61: link d2/fd d2/d10/f14 0
2026-03-09T15:01:18.280 INFO:tasks.workunit.client.0.vm05.stdout:6/47: dwrite f5 [0,4194304] 0
2026-03-09T15:01:18.285 INFO:tasks.workunit.client.0.vm05.stdout:6/48: dread da/fb [0,4194304] 0
2026-03-09T15:01:18.288 INFO:tasks.workunit.client.0.vm05.stdout:9/62: mkdir d2/d10/d15 0
2026-03-09T15:01:18.289 INFO:tasks.workunit.client.0.vm05.stdout:6/49: dread da/fe [0,4194304] 0
2026-03-09T15:01:18.289 INFO:tasks.workunit.client.0.vm05.stdout:9/63: stat d2/f8 0
2026-03-09T15:01:18.289 INFO:tasks.workunit.client.0.vm05.stdout:6/50: stat da 0
2026-03-09T15:01:18.291 INFO:tasks.workunit.client.0.vm05.stdout:9/64: symlink d2/d10/l16 0
2026-03-09T15:01:18.296 INFO:tasks.workunit.client.0.vm05.stdout:9/65: dwrite d2/f11 [4194304,4194304] 0
2026-03-09T15:01:18.297 INFO:tasks.workunit.client.0.vm05.stdout:9/66: chown d2/l3 82 1
2026-03-09T15:01:18.297 INFO:tasks.workunit.client.0.vm05.stdout:9/67: readlink d2/l3 0
2026-03-09T15:01:18.302 INFO:tasks.workunit.client.0.vm05.stdout:9/68: creat d2/f17 x:0 0 0
2026-03-09T15:01:18.313 INFO:tasks.workunit.client.0.vm05.stdout:0/32: symlink d9/lb 0
2026-03-09T15:01:18.317 INFO:tasks.workunit.client.0.vm05.stdout:0/33: dread f0 [0,4194304] 0
2026-03-09T15:01:18.318 INFO:tasks.workunit.client.0.vm05.stdout:0/34: truncate f0 4551896 0
2026-03-09T15:01:18.325 INFO:tasks.workunit.client.0.vm05.stdout:3/61: truncate d3/fe 690739 0
2026-03-09T15:01:18.330 INFO:tasks.workunit.client.0.vm05.stdout:2/83: mknod da/d13/c14 0
2026-03-09T15:01:18.332 INFO:tasks.workunit.client.0.vm05.stdout:2/84: creat da/d13/f15 x:0 0 0
2026-03-09T15:01:18.334 INFO:tasks.workunit.client.0.vm05.stdout:9/69: dread d2/fc [0,4194304] 0
2026-03-09T15:01:18.337 INFO:tasks.workunit.client.0.vm05.stdout:3/62: sync
2026-03-09T15:01:18.338 INFO:tasks.workunit.client.0.vm05.stdout:3/63: truncate d3/f13 421456 0
2026-03-09T15:01:18.340 INFO:tasks.workunit.client.0.vm05.stdout:3/64: dread d3/f13 [0,4194304] 0
2026-03-09T15:01:18.341 INFO:tasks.workunit.client.0.vm05.stdout:3/65: truncate d3/df/f11 504171 0
2026-03-09T15:01:18.341 INFO:tasks.workunit.client.0.vm05.stdout:9/70: dwrite d2/f11 [4194304,4194304] 0
2026-03-09T15:01:18.341 INFO:tasks.workunit.client.0.vm05.stdout:3/66: stat d3/df 0
2026-03-09T15:01:18.344 INFO:tasks.workunit.client.0.vm05.stdout:2/85: stat da/c11 0
2026-03-09T15:01:18.349 INFO:tasks.workunit.client.0.vm05.stdout:3/67: creat d3/df/f14 x:0 0 0
2026-03-09T15:01:18.350 INFO:tasks.workunit.client.0.vm05.stdout:5/54: getdents d1 0
2026-03-09T15:01:18.352 INFO:tasks.workunit.client.0.vm05.stdout:9/71: rename d2/d10/f14 to d2/d10/d15/f18 0
2026-03-09T15:01:18.356 INFO:tasks.workunit.client.0.vm05.stdout:9/72: dwrite d2/f6 [0,4194304] 0
2026-03-09T15:01:18.367 INFO:tasks.workunit.client.0.vm05.stdout:5/55: getdents d1 0
2026-03-09T15:01:18.367 INFO:tasks.workunit.client.0.vm05.stdout:5/56: stat d1/d4 0
2026-03-09T15:01:18.367 INFO:tasks.workunit.client.0.vm05.stdout:5/57: dread - d1/f13 zero size
2026-03-09T15:01:18.368 INFO:tasks.workunit.client.0.vm05.stdout:5/58: chown d1/ff 3336 1
2026-03-09T15:01:18.374 INFO:tasks.workunit.client.0.vm05.stdout:5/59: dwrite d1/f16 [0,4194304] 0
2026-03-09T15:01:18.376 INFO:tasks.workunit.client.0.vm05.stdout:7/76: fdatasync d1/d9/f13 0
2026-03-09T15:01:18.376 INFO:tasks.workunit.client.0.vm05.stdout:7/77: chown d1/f15 459717 1
2026-03-09T15:01:18.377 INFO:tasks.workunit.client.0.vm05.stdout:7/78: rename d1 to d1/d14/d17 22
2026-03-09T15:01:18.386 INFO:tasks.workunit.client.0.vm05.stdout:5/60: symlink d1/l1a 0
2026-03-09T15:01:18.389 INFO:tasks.workunit.client.0.vm05.stdout:5/61: mknod d1/d4/c1b 0
2026-03-09T15:01:18.394 INFO:tasks.workunit.client.0.vm05.stdout:5/62: symlink d1/da/l1c 0
2026-03-09T15:01:18.395 INFO:tasks.workunit.client.0.vm05.stdout:5/63: chown d1/d4/d19 15251308 1
2026-03-09T15:01:18.398 INFO:tasks.workunit.client.0.vm05.stdout:5/64: unlink d1/l1a 0
2026-03-09T15:01:18.483 INFO:tasks.workunit.client.0.vm05.stdout:9/73: sync
2026-03-09T15:01:18.484 INFO:tasks.workunit.client.0.vm05.stdout:9/74: truncate d2/d10/d15/f18 1021963 0
2026-03-09T15:01:18.485 INFO:tasks.workunit.client.0.vm05.stdout:9/75: mkdir d2/d19 0
2026-03-09T15:01:18.490 INFO:tasks.workunit.client.0.vm05.stdout:9/76: dwrite d2/d10/d15/f18 [0,4194304] 0
2026-03-09T15:01:18.492 INFO:tasks.workunit.client.0.vm05.stdout:9/77: mkdir d2/d1a 0
2026-03-09T15:01:18.496 INFO:tasks.workunit.client.0.vm05.stdout:9/78: dwrite d2/ff [0,4194304] 0
2026-03-09T15:01:18.497 INFO:tasks.workunit.client.0.vm05.stdout:5/65: sync
2026-03-09T15:01:18.500 INFO:tasks.workunit.client.0.vm05.stdout:5/66: chown d1/d4 277799231 1
2026-03-09T15:01:18.507 INFO:tasks.workunit.client.0.vm05.stdout:5/67: write d1/f6 [485905,2793] 0
2026-03-09T15:01:18.511 INFO:tasks.workunit.client.0.vm05.stdout:5/68: rename d1/f16 to d1/f1d 0
2026-03-09T15:01:18.512 INFO:tasks.workunit.client.0.vm05.stdout:5/69: write d1/d4/f11 [42877,125643] 0
2026-03-09T15:01:18.515 INFO:tasks.workunit.client.0.vm05.stdout:9/79: mkdir d2/d1a/d1b 0
2026-03-09T15:01:18.520 INFO:tasks.workunit.client.0.vm05.stdout:8/73: getdents d0/d7 0
2026-03-09T15:01:18.520 INFO:tasks.workunit.client.0.vm05.stdout:8/74: chown d0/d7/f1a 1 1
2026-03-09T15:01:18.545 INFO:tasks.workunit.client.0.vm05.stdout:5/70: symlink d1/l1e 0
2026-03-09T15:01:18.545 INFO:tasks.workunit.client.0.vm05.stdout:9/80: creat d2/d1a/f1c x:0 0 0
2026-03-09T15:01:18.547 INFO:tasks.workunit.client.0.vm05.stdout:5/71: dread d1/f9 [0,4194304] 0
2026-03-09T15:01:18.547 INFO:tasks.workunit.client.0.vm05.stdout:9/81: dread d2/fc [0,4194304] 0
2026-03-09T15:01:18.548 INFO:tasks.workunit.client.0.vm05.stdout:5/72: write d1/f3 [2750196,115496] 0
2026-03-09T15:01:18.559 INFO:tasks.workunit.client.0.vm05.stdout:9/82: sync
2026-03-09T15:01:18.559 INFO:tasks.workunit.client.0.vm05.stdout:9/83: fdatasync d2/f11 0
2026-03-09T15:01:18.562 INFO:tasks.workunit.client.0.vm05.stdout:1/66: getdents d9 0
2026-03-09T15:01:18.583 INFO:tasks.workunit.client.0.vm05.stdout:6/51: truncate da/fb 433568 0
2026-03-09T15:01:18.583 INFO:tasks.workunit.client.0.vm05.stdout:6/52: chown c4 267084 1
2026-03-09T15:01:18.585 INFO:tasks.workunit.client.0.vm05.stdout:0/35: truncate f0 1168871 0
2026-03-09T15:01:18.593 INFO:tasks.workunit.client.0.vm05.stdout:3/68: write d3/f8 [3891196,37809] 0
2026-03-09T15:01:18.601 INFO:tasks.workunit.client.0.vm05.stdout:7/79: dwrite d1/d9/fc [0,4194304] 0
2026-03-09T15:01:18.770 INFO:tasks.workunit.client.0.vm05.stdout:1/67: mkdir d9/d17 0
2026-03-09T15:01:18.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:18 vm09.local ceph-mon[59673]: pgmap v165: 65 pgs: 65 active+clean; 1.8 GiB data, 6.3 GiB used, 114 GiB / 120 GiB avail; 19 MiB/s rd, 48 MiB/s wr, 273 op/s
2026-03-09T15:01:18.773 INFO:tasks.workunit.client.0.vm05.stdout:4/55: truncate d2/f5 2386008 0
2026-03-09T15:01:18.775 INFO:tasks.workunit.client.0.vm05.stdout:2/86: write f5 [6074774,130960] 0
2026-03-09T15:01:18.776 INFO:tasks.workunit.client.0.vm05.stdout:3/69: mknod d3/df/d10/c15 0
2026-03-09T15:01:18.777 INFO:tasks.workunit.client.0.vm05.stdout:3/70: write d3/df/f11 [611937,20439] 0
2026-03-09T15:01:18.777 INFO:tasks.workunit.client.0.vm05.stdout:3/71: readlink l2 0
2026-03-09T15:01:18.778 INFO:tasks.workunit.client.0.vm05.stdout:3/72: stat d3/la 0
2026-03-09T15:01:18.778 INFO:tasks.workunit.client.0.vm05.stdout:3/73: write d3/df/f11 [1499724,19314] 0
2026-03-09T15:01:18.779 INFO:tasks.workunit.client.0.vm05.stdout:3/74: write d3/df/d10/f12 [540001,65060] 0
2026-03-09T15:01:18.787 INFO:tasks.workunit.client.0.vm05.stdout:5/73: mknod d1/d4/d19/c1f 0
2026-03-09T15:01:18.787 INFO:tasks.workunit.client.0.vm05.stdout:5/74: read d1/f1d [1868496,39742] 0
2026-03-09T15:01:18.788 INFO:tasks.workunit.client.0.vm05.stdout:5/75: chown d1/da/ld 6333990 1
2026-03-09T15:01:18.788 INFO:tasks.workunit.client.0.vm05.stdout:5/76: readlink d1/da/l1c 0
2026-03-09T15:01:18.790 INFO:tasks.workunit.client.0.vm05.stdout:1/68: readlink d9/l13 0
2026-03-09T15:01:18.797 INFO:tasks.workunit.client.0.vm05.stdout:3/75: fdatasync d3/f13 0
2026-03-09T15:01:18.797 INFO:tasks.workunit.client.0.vm05.stdout:3/76: dread - d3/df/f14 zero size
2026-03-09T15:01:18.802 INFO:tasks.workunit.client.0.vm05.stdout:8/75: getdents d0 0
2026-03-09T15:01:18.802 INFO:tasks.workunit.client.0.vm05.stdout:8/76: chown d0/f10 50913201 1
2026-03-09T15:01:18.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:18 vm05.local ceph-mon[50611]: pgmap v165: 65 pgs: 65 active+clean; 1.8 GiB data, 6.3 GiB used, 114 GiB / 120 GiB avail; 19 MiB/s rd, 48 MiB/s wr, 273 op/s
2026-03-09T15:01:18.804 INFO:tasks.workunit.client.0.vm05.stdout:5/77: creat d1/d4/f20 x:0 0 0
2026-03-09T15:01:18.808 INFO:tasks.workunit.client.0.vm05.stdout:5/78: dwrite d1/f6 [0,4194304] 0
2026-03-09T15:01:18.808 INFO:tasks.workunit.client.0.vm05.stdout:5/79: chown d1/f14 72 1
2026-03-09T15:01:18.808 INFO:tasks.workunit.client.0.vm05.stdout:5/80: chown d1 5710 1
2026-03-09T15:01:18.814 INFO:tasks.workunit.client.0.vm05.stdout:3/77: creat d3/df/f16 x:0 0 0
2026-03-09T15:01:18.818 INFO:tasks.workunit.client.0.vm05.stdout:7/80: truncate d1/d9/f10 8524 0
2026-03-09T15:01:18.820 INFO:tasks.workunit.client.0.vm05.stdout:8/77: mkdir d0/d1/d12/d1b 0
2026-03-09T15:01:18.821 INFO:tasks.workunit.client.0.vm05.stdout:9/84: getdents d2/d10/d15 0
2026-03-09T15:01:18.822 INFO:tasks.workunit.client.0.vm05.stdout:9/85: dread d2/f8 [0,4194304] 0
2026-03-09T15:01:18.833 INFO:tasks.workunit.client.0.vm05.stdout:8/78: dwrite d0/d7/f1a [0,4194304] 0
2026-03-09T15:01:18.840 INFO:tasks.workunit.client.0.vm05.stdout:1/69: rename c4 to d9/c18 0
2026-03-09T15:01:18.841 INFO:tasks.workunit.client.0.vm05.stdout:2/87: getdents da/d13 0
2026-03-09T15:01:18.844 INFO:tasks.workunit.client.0.vm05.stdout:8/79: creat d0/d1/d12/f1c x:0 0 0
2026-03-09T15:01:18.849 INFO:tasks.workunit.client.0.vm05.stdout:5/81: rename c0 to d1/d4/c21 0
2026-03-09T15:01:18.850 INFO:tasks.workunit.client.0.vm05.stdout:5/82: dread - d1/da/fe zero size
2026-03-09T15:01:18.852 INFO:tasks.workunit.client.0.vm05.stdout:2/88: chown da/dd 2191 1
2026-03-09T15:01:18.855 INFO:tasks.workunit.client.0.vm05.stdout:8/80: symlink d0/dc/l1d 0
2026-03-09T15:01:18.855 INFO:tasks.workunit.client.0.vm05.stdout:8/81: stat d0/d7/fe 0
2026-03-09T15:01:18.856 INFO:tasks.workunit.client.0.vm05.stdout:8/82: write d0/d1/d12/f1c [744812,97596] 0
2026-03-09T15:01:18.861 INFO:tasks.workunit.client.0.vm05.stdout:1/70: mknod d9/c19 0
2026-03-09T15:01:18.861 INFO:tasks.workunit.client.0.vm05.stdout:1/71: chown f7 63006 1
2026-03-09T15:01:18.863 INFO:tasks.workunit.client.0.vm05.stdout:9/86: rename d2/d10/l16 to d2/l1d 0
2026-03-09T15:01:18.867 INFO:tasks.workunit.client.0.vm05.stdout:8/83: symlink d0/d1/d12/l1e 0
2026-03-09T15:01:18.867 INFO:tasks.workunit.client.0.vm05.stdout:8/84: readlink d0/d1/d12/l1e 0
2026-03-09T15:01:18.868 INFO:tasks.workunit.client.0.vm05.stdout:8/85: chown d0/d1/d12/f1c 12 1
2026-03-09T15:01:18.869 INFO:tasks.workunit.client.0.vm05.stdout:2/89: dwrite f6 [4194304,4194304] 0
2026-03-09T15:01:18.874 INFO:tasks.workunit.client.0.vm05.stdout:4/56: fsync d2/f5 0
2026-03-09T15:01:18.875 INFO:tasks.workunit.client.0.vm05.stdout:4/57: stat d2/f5 0 2026-03-09T15:01:18.875 INFO:tasks.workunit.client.0.vm05.stdout:6/53: dread da/fb [0,4194304] 0 2026-03-09T15:01:18.875 INFO:tasks.workunit.client.0.vm05.stdout:3/78: rmdir d3/df 39 2026-03-09T15:01:18.883 INFO:tasks.workunit.client.0.vm05.stdout:1/72: dread d9/f12 [0,4194304] 0 2026-03-09T15:01:18.883 INFO:tasks.workunit.client.0.vm05.stdout:1/73: read - f7 zero size 2026-03-09T15:01:18.887 INFO:tasks.workunit.client.0.vm05.stdout:8/86: dread d0/d7/f8 [4194304,4194304] 0 2026-03-09T15:01:18.890 INFO:tasks.workunit.client.0.vm05.stdout:8/87: dread d0/d1/d12/f1c [0,4194304] 0 2026-03-09T15:01:18.891 INFO:tasks.workunit.client.0.vm05.stdout:9/87: symlink d2/d1a/l1e 0 2026-03-09T15:01:18.892 INFO:tasks.workunit.client.0.vm05.stdout:7/81: dread d1/d9/f10 [0,4194304] 0 2026-03-09T15:01:18.897 INFO:tasks.workunit.client.0.vm05.stdout:7/82: dwrite d1/d9/fc [4194304,4194304] 0 2026-03-09T15:01:18.918 INFO:tasks.workunit.client.0.vm05.stdout:7/83: write d1/f16 [907121,3014] 0 2026-03-09T15:01:18.918 INFO:tasks.workunit.client.0.vm05.stdout:0/36: truncate f0 1066641 0 2026-03-09T15:01:18.918 INFO:tasks.workunit.client.0.vm05.stdout:4/58: mknod d2/d4/d7/c12 0 2026-03-09T15:01:18.918 INFO:tasks.workunit.client.0.vm05.stdout:6/54: creat da/ff x:0 0 0 2026-03-09T15:01:18.918 INFO:tasks.workunit.client.0.vm05.stdout:4/59: read d2/d4/d7/f9 [895768,11810] 0 2026-03-09T15:01:18.918 INFO:tasks.workunit.client.0.vm05.stdout:4/60: write d2/d4/d7/f9 [2709994,100173] 0 2026-03-09T15:01:18.918 INFO:tasks.workunit.client.0.vm05.stdout:6/55: dread f5 [0,4194304] 0 2026-03-09T15:01:18.918 INFO:tasks.workunit.client.0.vm05.stdout:8/88: write d0/f4 [1210372,5825] 0 2026-03-09T15:01:18.918 INFO:tasks.workunit.client.0.vm05.stdout:9/88: creat d2/f1f x:0 0 0 2026-03-09T15:01:18.918 INFO:tasks.workunit.client.0.vm05.stdout:9/89: truncate d2/f1f 250548 0 2026-03-09T15:01:18.921 
INFO:tasks.workunit.client.0.vm05.stdout:0/37: creat d9/fc x:0 0 0 2026-03-09T15:01:18.922 INFO:tasks.workunit.client.0.vm05.stdout:5/83: unlink d1/d4/c21 0 2026-03-09T15:01:18.929 INFO:tasks.workunit.client.0.vm05.stdout:1/74: mkdir d9/d1a 0 2026-03-09T15:01:18.929 INFO:tasks.workunit.client.0.vm05.stdout:8/89: rmdir d0 39 2026-03-09T15:01:18.929 INFO:tasks.workunit.client.0.vm05.stdout:6/56: dwrite da/fe [4194304,4194304] 0 2026-03-09T15:01:18.931 INFO:tasks.workunit.client.0.vm05.stdout:0/38: chown l7 6727859 1 2026-03-09T15:01:18.932 INFO:tasks.workunit.client.0.vm05.stdout:5/84: symlink d1/da/l22 0 2026-03-09T15:01:18.936 INFO:tasks.workunit.client.0.vm05.stdout:2/90: sync 2026-03-09T15:01:18.939 INFO:tasks.workunit.client.0.vm05.stdout:1/75: fdatasync d9/f15 0 2026-03-09T15:01:18.940 INFO:tasks.workunit.client.0.vm05.stdout:1/76: write f5 [3746403,73455] 0 2026-03-09T15:01:18.943 INFO:tasks.workunit.client.0.vm05.stdout:9/90: symlink d2/d19/l20 0 2026-03-09T15:01:18.943 INFO:tasks.workunit.client.0.vm05.stdout:9/91: chown d2/l3 7064 1 2026-03-09T15:01:18.945 INFO:tasks.workunit.client.0.vm05.stdout:6/57: unlink c0 0 2026-03-09T15:01:18.946 INFO:tasks.workunit.client.0.vm05.stdout:7/84: rmdir d1/d14 0 2026-03-09T15:01:18.947 INFO:tasks.workunit.client.0.vm05.stdout:3/79: rename d3/fe to d3/f17 0 2026-03-09T15:01:18.949 INFO:tasks.workunit.client.0.vm05.stdout:0/39: creat d9/fd x:0 0 0 2026-03-09T15:01:18.950 INFO:tasks.workunit.client.0.vm05.stdout:5/85: mkdir d1/d4/d19/d23 0 2026-03-09T15:01:18.951 INFO:tasks.workunit.client.0.vm05.stdout:2/91: write da/f10 [182279,102109] 0 2026-03-09T15:01:18.955 INFO:tasks.workunit.client.0.vm05.stdout:9/92: dread d2/f8 [0,4194304] 0 2026-03-09T15:01:18.957 INFO:tasks.workunit.client.0.vm05.stdout:2/92: dwrite f5 [4194304,4194304] 0 2026-03-09T15:01:18.957 INFO:tasks.workunit.client.0.vm05.stdout:9/93: stat d2/d10/d15 0 2026-03-09T15:01:18.958 INFO:tasks.workunit.client.0.vm05.stdout:6/58: creat da/f10 x:0 0 0 
2026-03-09T15:01:18.958 INFO:tasks.workunit.client.0.vm05.stdout:9/94: read d2/f6 [281071,130145] 0 2026-03-09T15:01:18.958 INFO:tasks.workunit.client.0.vm05.stdout:2/93: chown da/f10 5 1 2026-03-09T15:01:18.970 INFO:tasks.workunit.client.0.vm05.stdout:1/77: mknod d9/d17/c1b 0 2026-03-09T15:01:18.972 INFO:tasks.workunit.client.0.vm05.stdout:5/86: dread d1/f9 [0,4194304] 0 2026-03-09T15:01:18.975 INFO:tasks.workunit.client.0.vm05.stdout:1/78: dwrite f5 [0,4194304] 0 2026-03-09T15:01:18.978 INFO:tasks.workunit.client.0.vm05.stdout:1/79: dread f5 [0,4194304] 0 2026-03-09T15:01:18.978 INFO:tasks.workunit.client.0.vm05.stdout:1/80: write f5 [1009403,51745] 0 2026-03-09T15:01:18.987 INFO:tasks.workunit.client.0.vm05.stdout:9/95: symlink d2/d10/d15/l21 0 2026-03-09T15:01:18.990 INFO:tasks.workunit.client.0.vm05.stdout:2/94: dread f4 [4194304,4194304] 0 2026-03-09T15:01:18.990 INFO:tasks.workunit.client.0.vm05.stdout:2/95: stat da/f10 0 2026-03-09T15:01:18.991 INFO:tasks.workunit.client.0.vm05.stdout:2/96: write da/d13/f15 [704630,42496] 0 2026-03-09T15:01:18.991 INFO:tasks.workunit.client.0.vm05.stdout:2/97: fsync f6 0 2026-03-09T15:01:18.994 INFO:tasks.workunit.client.0.vm05.stdout:1/81: rename l2 to d9/d17/l1c 0 2026-03-09T15:01:18.995 INFO:tasks.workunit.client.0.vm05.stdout:1/82: write d9/f15 [970600,51895] 0 2026-03-09T15:01:18.997 INFO:tasks.workunit.client.0.vm05.stdout:3/80: creat d3/f18 x:0 0 0 2026-03-09T15:01:18.998 INFO:tasks.workunit.client.0.vm05.stdout:5/87: link d1/da/fe d1/d4/d19/f24 0 2026-03-09T15:01:19.002 INFO:tasks.workunit.client.0.vm05.stdout:9/96: mkdir d2/d10/d22 0 2026-03-09T15:01:19.004 INFO:tasks.workunit.client.0.vm05.stdout:5/88: read d1/d4/f11 [3685,65148] 0 2026-03-09T15:01:19.007 INFO:tasks.workunit.client.0.vm05.stdout:5/89: dread d1/f3 [0,4194304] 0 2026-03-09T15:01:19.010 INFO:tasks.workunit.client.0.vm05.stdout:5/90: dwrite d1/f3 [0,4194304] 0 2026-03-09T15:01:19.018 INFO:tasks.workunit.client.0.vm05.stdout:2/98: mkdir da/d16 0 
2026-03-09T15:01:19.019 INFO:tasks.workunit.client.0.vm05.stdout:1/83: getdents d9/d1a 0 2026-03-09T15:01:19.020 INFO:tasks.workunit.client.0.vm05.stdout:8/90: getdents d0 0 2026-03-09T15:01:19.021 INFO:tasks.workunit.client.0.vm05.stdout:8/91: dread d0/fa [0,4194304] 0 2026-03-09T15:01:19.025 INFO:tasks.workunit.client.0.vm05.stdout:8/92: dwrite d0/d7/f8 [0,4194304] 0 2026-03-09T15:01:19.027 INFO:tasks.workunit.client.0.vm05.stdout:5/91: rename d1/da/ld to d1/da/l25 0 2026-03-09T15:01:19.027 INFO:tasks.workunit.client.0.vm05.stdout:9/97: mkdir d2/d1a/d1b/d23 0 2026-03-09T15:01:19.028 INFO:tasks.workunit.client.0.vm05.stdout:3/81: mkdir d3/df/d10/d19 0 2026-03-09T15:01:19.030 INFO:tasks.workunit.client.0.vm05.stdout:9/98: dread d2/f5 [0,4194304] 0 2026-03-09T15:01:19.031 INFO:tasks.workunit.client.0.vm05.stdout:9/99: read d2/f6 [2687786,107276] 0 2026-03-09T15:01:19.035 INFO:tasks.workunit.client.0.vm05.stdout:9/100: dwrite d2/d10/d15/f18 [0,4194304] 0 2026-03-09T15:01:19.035 INFO:tasks.workunit.client.0.vm05.stdout:9/101: chown d2/fc 377 1 2026-03-09T15:01:19.035 INFO:tasks.workunit.client.0.vm05.stdout:9/102: chown d2/d1a/d1b/d23 9107 1 2026-03-09T15:01:19.037 INFO:tasks.workunit.client.0.vm05.stdout:2/99: mkdir da/dd/d17 0 2026-03-09T15:01:19.038 INFO:tasks.workunit.client.0.vm05.stdout:3/82: creat d3/df/d10/f1a x:0 0 0 2026-03-09T15:01:19.040 INFO:tasks.workunit.client.0.vm05.stdout:9/103: dread d2/f5 [0,4194304] 0 2026-03-09T15:01:19.044 INFO:tasks.workunit.client.0.vm05.stdout:5/92: link d1/d4/f11 d1/f26 0 2026-03-09T15:01:19.048 INFO:tasks.workunit.client.0.vm05.stdout:5/93: write d1/da/fe [454494,93908] 0 2026-03-09T15:01:19.048 INFO:tasks.workunit.client.0.vm05.stdout:2/100: creat da/d16/f18 x:0 0 0 2026-03-09T15:01:19.048 INFO:tasks.workunit.client.0.vm05.stdout:9/104: mknod d2/d10/d22/c24 0 2026-03-09T15:01:19.049 INFO:tasks.workunit.client.0.vm05.stdout:5/94: mkdir d1/d4/d27 0 2026-03-09T15:01:19.053 INFO:tasks.workunit.client.0.vm05.stdout:5/95: dwrite 
d1/f14 [0,4194304] 0 2026-03-09T15:01:19.054 INFO:tasks.workunit.client.0.vm05.stdout:8/93: sync 2026-03-09T15:01:19.055 INFO:tasks.workunit.client.0.vm05.stdout:5/96: write d1/d4/f20 [392621,103914] 0 2026-03-09T15:01:19.055 INFO:tasks.workunit.client.0.vm05.stdout:5/97: chown d1/f14 29468 1 2026-03-09T15:01:19.057 INFO:tasks.workunit.client.0.vm05.stdout:2/101: symlink da/d13/l19 0 2026-03-09T15:01:19.062 INFO:tasks.workunit.client.0.vm05.stdout:9/105: symlink d2/d10/d15/l25 0 2026-03-09T15:01:19.062 INFO:tasks.workunit.client.0.vm05.stdout:5/98: dwrite d1/f14 [0,4194304] 0 2026-03-09T15:01:19.064 INFO:tasks.workunit.client.0.vm05.stdout:5/99: readlink d1/d4/lb 0 2026-03-09T15:01:19.065 INFO:tasks.workunit.client.0.vm05.stdout:5/100: fsync d1/d4/f5 0 2026-03-09T15:01:19.067 INFO:tasks.workunit.client.0.vm05.stdout:7/85: write d1/d9/f10 [441286,124423] 0 2026-03-09T15:01:19.067 INFO:tasks.workunit.client.0.vm05.stdout:5/101: rename d1/d4 to d1/d4/d19/d28 22 2026-03-09T15:01:19.072 INFO:tasks.workunit.client.0.vm05.stdout:4/61: dwrite d2/f5 [0,4194304] 0 2026-03-09T15:01:19.073 INFO:tasks.workunit.client.0.vm05.stdout:9/106: dwrite d2/f1f [0,4194304] 0 2026-03-09T15:01:19.074 INFO:tasks.workunit.client.0.vm05.stdout:9/107: write d2/f12 [835124,38825] 0 2026-03-09T15:01:19.075 INFO:tasks.workunit.client.0.vm05.stdout:9/108: dread d2/f8 [0,4194304] 0 2026-03-09T15:01:19.081 INFO:tasks.workunit.client.0.vm05.stdout:2/102: truncate da/f10 470140 0 2026-03-09T15:01:19.081 INFO:tasks.workunit.client.0.vm05.stdout:2/103: write da/d13/f15 [836628,89531] 0 2026-03-09T15:01:19.082 INFO:tasks.workunit.client.0.vm05.stdout:2/104: fdatasync da/f10 0 2026-03-09T15:01:19.083 INFO:tasks.workunit.client.0.vm05.stdout:7/86: creat d1/d12/f18 x:0 0 0 2026-03-09T15:01:19.085 INFO:tasks.workunit.client.0.vm05.stdout:8/94: dwrite d0/d7/fe [0,4194304] 0 2026-03-09T15:01:19.085 INFO:tasks.workunit.client.0.vm05.stdout:5/102: creat d1/d4/d19/f29 x:0 0 0 2026-03-09T15:01:19.089 
INFO:tasks.workunit.client.0.vm05.stdout:9/109: creat d2/d10/f26 x:0 0 0 2026-03-09T15:01:19.090 INFO:tasks.workunit.client.0.vm05.stdout:4/62: creat d2/d4/d8/f13 x:0 0 0 2026-03-09T15:01:19.091 INFO:tasks.workunit.client.0.vm05.stdout:4/63: write d2/d4/d8/f13 [653153,60381] 0 2026-03-09T15:01:19.092 INFO:tasks.workunit.client.0.vm05.stdout:7/87: dwrite d1/d9/f13 [0,4194304] 0 2026-03-09T15:01:19.100 INFO:tasks.workunit.client.0.vm05.stdout:5/103: rename d1/d4/f5 to d1/f2a 0 2026-03-09T15:01:19.100 INFO:tasks.workunit.client.0.vm05.stdout:5/104: write d1/ff [4882081,32755] 0 2026-03-09T15:01:19.101 INFO:tasks.workunit.client.0.vm05.stdout:5/105: write d1/d4/f20 [367946,74640] 0 2026-03-09T15:01:19.119 INFO:tasks.workunit.client.0.vm05.stdout:7/88: symlink d1/l19 0 2026-03-09T15:01:19.126 INFO:tasks.workunit.client.0.vm05.stdout:2/105: rmdir da/dd/d17 0 2026-03-09T15:01:19.129 INFO:tasks.workunit.client.0.vm05.stdout:1/84: dread d9/f15 [0,4194304] 0 2026-03-09T15:01:19.130 INFO:tasks.workunit.client.0.vm05.stdout:4/64: creat d2/f14 x:0 0 0 2026-03-09T15:01:19.134 INFO:tasks.workunit.client.0.vm05.stdout:2/106: creat da/d16/f1a x:0 0 0 2026-03-09T15:01:19.137 INFO:tasks.workunit.client.0.vm05.stdout:4/65: creat d2/d4/f15 x:0 0 0 2026-03-09T15:01:19.140 INFO:tasks.workunit.client.0.vm05.stdout:4/66: chown d2/c6 211509 1 2026-03-09T15:01:19.144 INFO:tasks.workunit.client.0.vm05.stdout:2/107: creat da/f1b x:0 0 0 2026-03-09T15:01:19.148 INFO:tasks.workunit.client.0.vm05.stdout:2/108: dwrite da/dd/ff [0,4194304] 0 2026-03-09T15:01:19.150 INFO:tasks.workunit.client.0.vm05.stdout:2/109: truncate da/d16/f1a 607274 0 2026-03-09T15:01:19.154 INFO:tasks.workunit.client.0.vm05.stdout:2/110: dwrite da/d13/f15 [0,4194304] 0 2026-03-09T15:01:19.158 INFO:tasks.workunit.client.0.vm05.stdout:2/111: creat da/dd/f1c x:0 0 0 2026-03-09T15:01:19.167 INFO:tasks.workunit.client.0.vm05.stdout:5/106: dread d1/d4/f20 [0,4194304] 0 2026-03-09T15:01:19.167 
INFO:tasks.workunit.client.0.vm05.stdout:2/112: mknod da/c1d 0 2026-03-09T15:01:19.169 INFO:tasks.workunit.client.0.vm05.stdout:5/107: mknod d1/d4/d19/c2b 0 2026-03-09T15:01:19.169 INFO:tasks.workunit.client.0.vm05.stdout:5/108: read d1/f1d [3893983,80126] 0 2026-03-09T15:01:19.170 INFO:tasks.workunit.client.0.vm05.stdout:5/109: truncate d1/d4/d19/f24 1260133 0 2026-03-09T15:01:19.174 INFO:tasks.workunit.client.0.vm05.stdout:5/110: symlink d1/d4/d27/l2c 0 2026-03-09T15:01:19.222 INFO:tasks.workunit.client.0.vm05.stdout:0/40: getdents d9 0 2026-03-09T15:01:19.227 INFO:tasks.workunit.client.0.vm05.stdout:0/41: mkdir d9/de 0 2026-03-09T15:01:19.231 INFO:tasks.workunit.client.0.vm05.stdout:0/42: symlink d9/lf 0 2026-03-09T15:01:19.234 INFO:tasks.workunit.client.0.vm05.stdout:0/43: creat d9/de/f10 x:0 0 0 2026-03-09T15:01:19.254 INFO:tasks.workunit.client.0.vm05.stdout:6/59: dwrite da/fb [0,4194304] 0 2026-03-09T15:01:19.255 INFO:tasks.workunit.client.0.vm05.stdout:6/60: dread - da/f10 zero size 2026-03-09T15:01:19.256 INFO:tasks.workunit.client.0.vm05.stdout:6/61: read - da/f10 zero size 2026-03-09T15:01:19.307 INFO:tasks.workunit.client.0.vm05.stdout:7/89: getdents d1/d12 0 2026-03-09T15:01:19.309 INFO:tasks.workunit.client.0.vm05.stdout:7/90: dread d1/d9/f13 [0,4194304] 0 2026-03-09T15:01:19.314 INFO:tasks.workunit.client.0.vm05.stdout:7/91: dread d1/d9/f10 [0,4194304] 0 2026-03-09T15:01:19.316 INFO:tasks.workunit.client.0.vm05.stdout:8/95: write d0/fa [105577,100467] 0 2026-03-09T15:01:19.317 INFO:tasks.workunit.client.0.vm05.stdout:7/92: mknod d1/d12/c1a 0 2026-03-09T15:01:19.320 INFO:tasks.workunit.client.0.vm05.stdout:8/96: rename d0/d1/c3 to d0/d1/c1f 0 2026-03-09T15:01:19.320 INFO:tasks.workunit.client.0.vm05.stdout:7/93: dread d1/f16 [0,4194304] 0 2026-03-09T15:01:19.320 INFO:tasks.workunit.client.0.vm05.stdout:8/97: truncate d0/fa 1092990 0 2026-03-09T15:01:19.323 INFO:tasks.workunit.client.0.vm05.stdout:6/62: fdatasync da/fb 0 2026-03-09T15:01:19.324 
INFO:tasks.workunit.client.0.vm05.stdout:6/63: read - da/ff zero size 2026-03-09T15:01:19.325 INFO:tasks.workunit.client.0.vm05.stdout:8/98: dwrite d0/f4 [0,4194304] 0 2026-03-09T15:01:19.344 INFO:tasks.workunit.client.0.vm05.stdout:9/110: truncate d2/f1f 19897 0 2026-03-09T15:01:19.345 INFO:tasks.workunit.client.0.vm05.stdout:7/94: fdatasync d1/fa 0 2026-03-09T15:01:19.346 INFO:tasks.workunit.client.0.vm05.stdout:7/95: chown d1/d12/f18 0 1 2026-03-09T15:01:19.346 INFO:tasks.workunit.client.0.vm05.stdout:7/96: chown d1/d9 276038780 1 2026-03-09T15:01:19.348 INFO:tasks.workunit.client.0.vm05.stdout:9/111: dwrite d2/d1a/f1c [0,4194304] 0 2026-03-09T15:01:19.357 INFO:tasks.workunit.client.0.vm05.stdout:6/64: mknod da/c11 0 2026-03-09T15:01:19.357 INFO:tasks.workunit.client.0.vm05.stdout:4/67: fsync d2/f14 0 2026-03-09T15:01:19.357 INFO:tasks.workunit.client.0.vm05.stdout:0/44: chown f0 27030 1 2026-03-09T15:01:19.360 INFO:tasks.workunit.client.0.vm05.stdout:1/85: truncate d9/f15 846139 0 2026-03-09T15:01:19.363 INFO:tasks.workunit.client.0.vm05.stdout:4/68: dwrite d2/d4/f15 [0,4194304] 0 2026-03-09T15:01:19.365 INFO:tasks.workunit.client.0.vm05.stdout:9/112: creat d2/d1a/d1b/f27 x:0 0 0 2026-03-09T15:01:19.365 INFO:tasks.workunit.client.0.vm05.stdout:9/113: read d2/fc [113252,38942] 0 2026-03-09T15:01:19.365 INFO:tasks.workunit.client.0.vm05.stdout:9/114: dread - d2/f17 zero size 2026-03-09T15:01:19.371 INFO:tasks.workunit.client.0.vm05.stdout:8/99: sync 2026-03-09T15:01:19.372 INFO:tasks.workunit.client.0.vm05.stdout:8/100: write d0/d7/f14 [1308944,93706] 0 2026-03-09T15:01:19.377 INFO:tasks.workunit.client.0.vm05.stdout:6/65: creat da/f12 x:0 0 0 2026-03-09T15:01:19.380 INFO:tasks.workunit.client.0.vm05.stdout:0/45: read - d9/fd zero size 2026-03-09T15:01:19.385 INFO:tasks.workunit.client.0.vm05.stdout:9/115: creat d2/d10/f28 x:0 0 0 2026-03-09T15:01:19.388 INFO:tasks.workunit.client.0.vm05.stdout:9/116: dread - d2/f17 zero size 2026-03-09T15:01:19.388 
INFO:tasks.workunit.client.0.vm05.stdout:1/86: creat d9/d1a/f1d x:0 0 0 2026-03-09T15:01:19.388 INFO:tasks.workunit.client.0.vm05.stdout:8/101: sync 2026-03-09T15:01:19.393 INFO:tasks.workunit.client.0.vm05.stdout:8/102: dwrite d0/d7/f14 [0,4194304] 0 2026-03-09T15:01:19.401 INFO:tasks.workunit.client.0.vm05.stdout:9/117: unlink d2/d1a/l1e 0 2026-03-09T15:01:19.408 INFO:tasks.workunit.client.0.vm05.stdout:4/69: rename d2/d4/d7/c12 to d2/c16 0 2026-03-09T15:01:19.408 INFO:tasks.workunit.client.0.vm05.stdout:4/70: truncate d2/d4/f15 5062946 0 2026-03-09T15:01:19.409 INFO:tasks.workunit.client.0.vm05.stdout:4/71: write d2/f5 [931476,84693] 0 2026-03-09T15:01:19.410 INFO:tasks.workunit.client.0.vm05.stdout:7/97: getdents d1/d9 0 2026-03-09T15:01:19.410 INFO:tasks.workunit.client.0.vm05.stdout:7/98: dread - d1/d12/f11 zero size 2026-03-09T15:01:19.414 INFO:tasks.workunit.client.0.vm05.stdout:8/103: rename d0/d7/f1a to d0/d7/f20 0 2026-03-09T15:01:19.414 INFO:tasks.workunit.client.0.vm05.stdout:8/104: stat d0 0 2026-03-09T15:01:19.416 INFO:tasks.workunit.client.0.vm05.stdout:9/118: mknod d2/d10/d22/c29 0 2026-03-09T15:01:19.420 INFO:tasks.workunit.client.0.vm05.stdout:2/113: truncate da/d13/f15 802559 0 2026-03-09T15:01:19.420 INFO:tasks.workunit.client.0.vm05.stdout:9/119: dwrite d2/fd [0,4194304] 0 2026-03-09T15:01:19.422 INFO:tasks.workunit.client.0.vm05.stdout:9/120: chown d2/fd 27762 1 2026-03-09T15:01:19.423 INFO:tasks.workunit.client.0.vm05.stdout:0/46: creat d9/f11 x:0 0 0 2026-03-09T15:01:19.423 INFO:tasks.workunit.client.0.vm05.stdout:0/47: write d9/f11 [716715,91801] 0 2026-03-09T15:01:19.424 INFO:tasks.workunit.client.0.vm05.stdout:6/66: rename c9 to da/c13 0 2026-03-09T15:01:19.429 INFO:tasks.workunit.client.0.vm05.stdout:4/72: unlink d2/f5 0 2026-03-09T15:01:19.434 INFO:tasks.workunit.client.0.vm05.stdout:2/114: creat da/d16/f1e x:0 0 0 2026-03-09T15:01:19.435 INFO:tasks.workunit.client.0.vm05.stdout:2/115: write da/dd/f1c [848144,130410] 0 
2026-03-09T15:01:19.437 INFO:tasks.workunit.client.0.vm05.stdout:9/121: rename d2/d1a/d1b/f27 to d2/d1a/d1b/f2a 0 2026-03-09T15:01:19.439 INFO:tasks.workunit.client.0.vm05.stdout:9/122: truncate d2/d1a/d1b/f2a 69820 0 2026-03-09T15:01:19.439 INFO:tasks.workunit.client.0.vm05.stdout:2/116: dwrite da/d16/f1a [0,4194304] 0 2026-03-09T15:01:19.447 INFO:tasks.workunit.client.0.vm05.stdout:6/67: creat da/f14 x:0 0 0 2026-03-09T15:01:19.447 INFO:tasks.workunit.client.0.vm05.stdout:6/68: chown da/ff 630674 1 2026-03-09T15:01:19.449 INFO:tasks.workunit.client.0.vm05.stdout:4/73: write d2/d4/d7/f9 [2019372,85201] 0 2026-03-09T15:01:19.449 INFO:tasks.workunit.client.0.vm05.stdout:4/74: fdatasync d2/f14 0 2026-03-09T15:01:19.452 INFO:tasks.workunit.client.0.vm05.stdout:8/105: mkdir d0/d1/d12/d1b/d21 0 2026-03-09T15:01:19.456 INFO:tasks.workunit.client.0.vm05.stdout:9/123: write d2/d1a/d1b/f2a [977172,114185] 0 2026-03-09T15:01:19.457 INFO:tasks.workunit.client.0.vm05.stdout:2/117: creat da/d16/f1f x:0 0 0 2026-03-09T15:01:19.459 INFO:tasks.workunit.client.0.vm05.stdout:6/69: symlink da/l15 0 2026-03-09T15:01:19.459 INFO:tasks.workunit.client.0.vm05.stdout:6/70: write da/fe [6450556,22415] 0 2026-03-09T15:01:19.459 INFO:tasks.workunit.client.0.vm05.stdout:6/71: read - da/f10 zero size 2026-03-09T15:01:19.460 INFO:tasks.workunit.client.0.vm05.stdout:6/72: chown f5 33206134 1 2026-03-09T15:01:19.461 INFO:tasks.workunit.client.0.vm05.stdout:4/75: creat d2/d4/f17 x:0 0 0 2026-03-09T15:01:19.464 INFO:tasks.workunit.client.0.vm05.stdout:7/99: creat d1/f1b x:0 0 0 2026-03-09T15:01:19.464 INFO:tasks.workunit.client.0.vm05.stdout:7/100: stat d1 0 2026-03-09T15:01:19.495 INFO:tasks.workunit.client.0.vm05.stdout:8/106: symlink d0/dc/l22 0 2026-03-09T15:01:19.495 INFO:tasks.workunit.client.0.vm05.stdout:8/107: chown d0/dc/l22 2266295 1 2026-03-09T15:01:19.497 INFO:tasks.workunit.client.0.vm05.stdout:5/111: truncate d1/f2a 1648359 0 2026-03-09T15:01:19.502 
INFO:tasks.workunit.client.0.vm05.stdout:2/118: creat da/d16/f20 x:0 0 0 2026-03-09T15:01:19.502 INFO:tasks.workunit.client.0.vm05.stdout:2/119: stat f6 0 2026-03-09T15:01:19.505 INFO:tasks.workunit.client.0.vm05.stdout:6/73: creat da/f16 x:0 0 0 2026-03-09T15:01:19.507 INFO:tasks.workunit.client.0.vm05.stdout:7/101: creat d1/d12/f1c x:0 0 0 2026-03-09T15:01:19.508 INFO:tasks.workunit.client.0.vm05.stdout:8/108: unlink d0/d7/fe 0 2026-03-09T15:01:19.510 INFO:tasks.workunit.client.0.vm05.stdout:5/112: symlink d1/d4/d27/l2d 0 2026-03-09T15:01:19.512 INFO:tasks.workunit.client.0.vm05.stdout:9/124: rename d2/l1d to d2/d1a/d1b/l2b 0 2026-03-09T15:01:19.514 INFO:tasks.workunit.client.0.vm05.stdout:2/120: readlink da/le 0 2026-03-09T15:01:19.516 INFO:tasks.workunit.client.0.vm05.stdout:6/74: mkdir da/d17 0 2026-03-09T15:01:19.517 INFO:tasks.workunit.client.0.vm05.stdout:3/83: dwrite d3/f17 [0,4194304] 0 2026-03-09T15:01:19.518 INFO:tasks.workunit.client.0.vm05.stdout:6/75: truncate da/f16 683184 0 2026-03-09T15:01:19.523 INFO:tasks.workunit.client.0.vm05.stdout:3/84: dread d3/df/f11 [0,4194304] 0 2026-03-09T15:01:19.542 INFO:tasks.workunit.client.0.vm05.stdout:4/76: creat d2/d4/d7/dc/f18 x:0 0 0 2026-03-09T15:01:19.543 INFO:tasks.workunit.client.0.vm05.stdout:7/102: truncate d1/d9/fd 822121 0 2026-03-09T15:01:19.543 INFO:tasks.workunit.client.0.vm05.stdout:4/77: dread - d2/d4/d7/dc/f18 zero size 2026-03-09T15:01:19.549 INFO:tasks.workunit.client.0.vm05.stdout:5/113: rename d1/l8 to d1/d4/l2e 0 2026-03-09T15:01:19.551 INFO:tasks.workunit.client.0.vm05.stdout:4/78: dwrite d2/d4/d7/f9 [4194304,4194304] 0 2026-03-09T15:01:19.554 INFO:tasks.workunit.client.0.vm05.stdout:9/125: mkdir d2/d10/d22/d2c 0 2026-03-09T15:01:19.557 INFO:tasks.workunit.client.0.vm05.stdout:4/79: dread - d2/d4/d7/dc/f18 zero size 2026-03-09T15:01:19.561 INFO:tasks.workunit.client.0.vm05.stdout:4/80: dread d2/d4/d7/f9 [0,4194304] 0 2026-03-09T15:01:19.565 INFO:tasks.workunit.client.0.vm05.stdout:0/48: 
getdents d9/de 0 2026-03-09T15:01:19.573 INFO:tasks.workunit.client.0.vm05.stdout:4/81: dread d2/d4/d8/f13 [0,4194304] 0 2026-03-09T15:01:19.587 INFO:tasks.workunit.client.0.vm05.stdout:1/87: write d9/f12 [160627,11040] 0 2026-03-09T15:01:19.687 INFO:tasks.workunit.client.0.vm05.stdout:5/114: fsync d1/f1d 0 2026-03-09T15:01:19.688 INFO:tasks.workunit.client.0.vm05.stdout:9/126: truncate d2/f8 1269064 0 2026-03-09T15:01:19.691 INFO:tasks.workunit.client.0.vm05.stdout:5/115: dwrite d1/f1d [0,4194304] 0 2026-03-09T15:01:19.694 INFO:tasks.workunit.client.0.vm05.stdout:0/49: unlink l6 0 2026-03-09T15:01:19.695 INFO:tasks.workunit.client.0.vm05.stdout:4/82: mknod d2/d4/c19 0 2026-03-09T15:01:19.695 INFO:tasks.workunit.client.0.vm05.stdout:1/88: readlink d9/l14 0 2026-03-09T15:01:19.695 INFO:tasks.workunit.client.0.vm05.stdout:4/83: fsync d2/d4/f15 0 2026-03-09T15:01:19.696 INFO:tasks.workunit.client.0.vm05.stdout:4/84: write d2/f14 [579249,24573] 0 2026-03-09T15:01:19.696 INFO:tasks.workunit.client.0.vm05.stdout:4/85: dread - d2/d4/f17 zero size 2026-03-09T15:01:19.696 INFO:tasks.workunit.client.0.vm05.stdout:4/86: chown d2/d4 4622653 1 2026-03-09T15:01:19.699 INFO:tasks.workunit.client.0.vm05.stdout:9/127: getdents d2/d10/d22/d2c 0 2026-03-09T15:01:19.699 INFO:tasks.workunit.client.0.vm05.stdout:2/121: creat da/f21 x:0 0 0 2026-03-09T15:01:19.700 INFO:tasks.workunit.client.0.vm05.stdout:1/89: creat d9/d17/f1e x:0 0 0 2026-03-09T15:01:19.703 INFO:tasks.workunit.client.0.vm05.stdout:2/122: dwrite da/dd/f1c [0,4194304] 0 2026-03-09T15:01:19.706 INFO:tasks.workunit.client.0.vm05.stdout:2/123: dread - da/f1b zero size 2026-03-09T15:01:19.707 INFO:tasks.workunit.client.0.vm05.stdout:1/90: rename d9 to d9/d1f 22 2026-03-09T15:01:19.707 INFO:tasks.workunit.client.0.vm05.stdout:4/87: unlink d2/d4/d7/dc/cf 0 2026-03-09T15:01:19.709 INFO:tasks.workunit.client.0.vm05.stdout:4/88: truncate d2/d4/d7/dc/f18 954265 0 2026-03-09T15:01:19.721 
INFO:tasks.workunit.client.0.vm05.stdout:2/124: rename f6 to da/dd/f22 0 2026-03-09T15:01:19.733 INFO:tasks.workunit.client.0.vm05.stdout:2/125: write da/dd/f12 [827363,30574] 0 2026-03-09T15:01:19.733 INFO:tasks.workunit.client.0.vm05.stdout:4/89: symlink d2/d4/l1a 0 2026-03-09T15:01:19.733 INFO:tasks.workunit.client.0.vm05.stdout:1/91: symlink d9/l20 0 2026-03-09T15:01:19.733 INFO:tasks.workunit.client.0.vm05.stdout:2/126: rename da/d13/f15 to da/dd/f23 0 2026-03-09T15:01:19.733 INFO:tasks.workunit.client.0.vm05.stdout:2/127: mknod da/dd/c24 0 2026-03-09T15:01:19.738 INFO:tasks.workunit.client.0.vm05.stdout:2/128: dwrite da/f10 [0,4194304] 0 2026-03-09T15:01:19.746 INFO:tasks.workunit.client.0.vm05.stdout:2/129: link da/dd/ff da/dd/f25 0 2026-03-09T15:01:19.748 INFO:tasks.workunit.client.0.vm05.stdout:2/130: rename da/cc to da/dd/c26 0 2026-03-09T15:01:19.751 INFO:tasks.workunit.client.0.vm05.stdout:2/131: write da/dd/f25 [933102,4414] 0 2026-03-09T15:01:19.757 INFO:tasks.workunit.client.0.vm05.stdout:5/116: sync 2026-03-09T15:01:19.758 INFO:tasks.workunit.client.0.vm05.stdout:5/117: chown d1/f6 373553 1 2026-03-09T15:01:19.759 INFO:tasks.workunit.client.0.vm05.stdout:5/118: write d1/da/fe [2121307,26427] 0 2026-03-09T15:01:19.761 INFO:tasks.workunit.client.0.vm05.stdout:5/119: creat d1/da/f2f x:0 0 0 2026-03-09T15:01:19.762 INFO:tasks.workunit.client.0.vm05.stdout:5/120: dread - d1/d4/d19/f29 zero size 2026-03-09T15:01:19.763 INFO:tasks.workunit.client.0.vm05.stdout:5/121: creat d1/f30 x:0 0 0 2026-03-09T15:01:19.818 INFO:tasks.workunit.client.0.vm05.stdout:2/132: getdents da/d16 0 2026-03-09T15:01:19.819 INFO:tasks.workunit.client.0.vm05.stdout:2/133: mknod da/dd/c27 0 2026-03-09T15:01:19.819 INFO:tasks.workunit.client.0.vm05.stdout:2/134: dread - da/f1b zero size 2026-03-09T15:01:19.821 INFO:tasks.workunit.client.0.vm05.stdout:2/135: write da/dd/ff [1887024,9746] 0 2026-03-09T15:01:19.821 INFO:tasks.workunit.client.0.vm05.stdout:2/136: fdatasync da/d16/f20 0 
2026-03-09T15:01:19.823 INFO:tasks.workunit.client.0.vm05.stdout:2/137: dread da/f10 [0,4194304] 0 2026-03-09T15:01:19.825 INFO:tasks.workunit.client.0.vm05.stdout:5/122: getdents d1/d4/d27 0 2026-03-09T15:01:19.829 INFO:tasks.workunit.client.0.vm05.stdout:6/76: rmdir da 39 2026-03-09T15:01:19.832 INFO:tasks.workunit.client.0.vm05.stdout:6/77: rename da/ff to da/f18 0 2026-03-09T15:01:19.832 INFO:tasks.workunit.client.0.vm05.stdout:6/78: fsync da/f16 0 2026-03-09T15:01:19.833 INFO:tasks.workunit.client.0.vm05.stdout:6/79: fsync da/f12 0 2026-03-09T15:01:19.836 INFO:tasks.workunit.client.0.vm05.stdout:6/80: mkdir da/d19 0 2026-03-09T15:01:19.839 INFO:tasks.workunit.client.0.vm05.stdout:6/81: creat da/f1a x:0 0 0 2026-03-09T15:01:19.839 INFO:tasks.workunit.client.0.vm05.stdout:6/82: dread - da/f10 zero size 2026-03-09T15:01:19.839 INFO:tasks.workunit.client.0.vm05.stdout:6/83: stat da/d17 0 2026-03-09T15:01:19.840 INFO:tasks.workunit.client.0.vm05.stdout:6/84: chown da/l15 0 1 2026-03-09T15:01:19.844 INFO:tasks.workunit.client.0.vm05.stdout:6/85: dwrite da/f16 [0,4194304] 0 2026-03-09T15:01:19.848 INFO:tasks.workunit.client.0.vm05.stdout:6/86: symlink da/l1b 0 2026-03-09T15:01:19.855 INFO:tasks.workunit.client.0.vm05.stdout:6/87: link da/c13 da/d19/c1c 0 2026-03-09T15:01:19.857 INFO:tasks.workunit.client.0.vm05.stdout:6/88: creat da/d17/f1d x:0 0 0 2026-03-09T15:01:19.860 INFO:tasks.workunit.client.0.vm05.stdout:6/89: symlink da/d19/l1e 0 2026-03-09T15:01:19.860 INFO:tasks.workunit.client.0.vm05.stdout:6/90: dread - da/f12 zero size 2026-03-09T15:01:19.861 INFO:tasks.workunit.client.0.vm05.stdout:6/91: stat da/cc 0 2026-03-09T15:01:19.861 INFO:tasks.workunit.client.0.vm05.stdout:6/92: write da/f10 [402066,103467] 0 2026-03-09T15:01:19.863 INFO:tasks.workunit.client.0.vm05.stdout:6/93: creat da/f1f x:0 0 0 2026-03-09T15:01:19.864 INFO:tasks.workunit.client.0.vm05.stdout:6/94: chown da 7160689 1 2026-03-09T15:01:19.864 INFO:tasks.workunit.client.0.vm05.stdout:6/95: 
stat da/l1b 0 2026-03-09T15:01:19.864 INFO:tasks.workunit.client.0.vm05.stdout:6/96: readlink da/l15 0 2026-03-09T15:01:19.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:19 vm09.local ceph-mon[59673]: pgmap v166: 65 pgs: 65 active+clean; 1.8 GiB data, 6.2 GiB used, 114 GiB / 120 GiB avail; 9.6 MiB/s rd, 32 MiB/s wr, 249 op/s 2026-03-09T15:01:19.869 INFO:tasks.workunit.client.0.vm05.stdout:6/97: dwrite da/fb [0,4194304] 0 2026-03-09T15:01:19.870 INFO:tasks.workunit.client.0.vm05.stdout:6/98: dread - da/f1f zero size 2026-03-09T15:01:20.024 INFO:tasks.workunit.client.0.vm05.stdout:7/103: write d1/d9/f13 [563776,126903] 0 2026-03-09T15:01:20.024 INFO:tasks.workunit.client.0.vm05.stdout:7/104: dread - d1/f1b zero size 2026-03-09T15:01:20.026 INFO:tasks.workunit.client.0.vm05.stdout:7/105: symlink d1/d12/l1d 0 2026-03-09T15:01:20.030 INFO:tasks.workunit.client.0.vm05.stdout:7/106: dwrite d1/f1b [0,4194304] 0 2026-03-09T15:01:20.031 INFO:tasks.workunit.client.0.vm05.stdout:7/107: write d1/d12/f1c [548984,62462] 0 2026-03-09T15:01:20.031 INFO:tasks.workunit.client.0.vm05.stdout:7/108: dread - d1/d12/f18 zero size 2026-03-09T15:01:20.033 INFO:tasks.workunit.client.0.vm05.stdout:7/109: symlink d1/d12/l1e 0 2026-03-09T15:01:20.034 INFO:tasks.workunit.client.0.vm05.stdout:7/110: unlink d1/f1b 0 2026-03-09T15:01:20.034 INFO:tasks.workunit.client.0.vm05.stdout:7/111: chown d1 6474 1 2026-03-09T15:01:20.038 INFO:tasks.workunit.client.0.vm05.stdout:7/112: creat d1/d12/f1f x:0 0 0 2026-03-09T15:01:20.044 INFO:tasks.workunit.client.0.vm05.stdout:8/109: truncate d0/f4 3020539 0 2026-03-09T15:01:20.045 INFO:tasks.workunit.client.0.vm05.stdout:8/110: mknod d0/d1/d12/d1b/d21/c23 0 2026-03-09T15:01:20.048 INFO:tasks.workunit.client.0.vm05.stdout:8/111: dread d0/d1/d12/f1c [0,4194304] 0 2026-03-09T15:01:20.049 INFO:tasks.workunit.client.0.vm05.stdout:8/112: mkdir d0/d24 0 2026-03-09T15:01:20.051 INFO:tasks.workunit.client.0.vm05.stdout:8/113: symlink d0/d1/d12/l25 0 
2026-03-09T15:01:20.052 INFO:tasks.workunit.client.0.vm05.stdout:8/114: chown d0/c16 138543483 1
2026-03-09T15:01:20.052 INFO:tasks.workunit.client.0.vm05.stdout:8/115: read d0/d7/f8 [1877294,7076] 0
2026-03-09T15:01:20.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:19 vm05.local ceph-mon[50611]: pgmap v166: 65 pgs: 65 active+clean; 1.8 GiB data, 6.2 GiB used, 114 GiB / 120 GiB avail; 9.6 MiB/s rd, 32 MiB/s wr, 249 op/s
2026-03-09T15:01:20.054 INFO:tasks.workunit.client.0.vm05.stdout:8/116: fsync d0/d7/f20 0
2026-03-09T15:01:20.056 INFO:tasks.workunit.client.0.vm05.stdout:8/117: symlink d0/dc/l26 0
2026-03-09T15:01:20.056 INFO:tasks.workunit.client.0.vm05.stdout:8/118: readlink d0/d1/d12/l1e 0
2026-03-09T15:01:20.059 INFO:tasks.workunit.client.0.vm05.stdout:8/119: dread d0/f10 [0,4194304] 0
2026-03-09T15:01:20.059 INFO:tasks.workunit.client.0.vm05.stdout:8/120: readlink d0/d1/d12/l1e 0
2026-03-09T15:01:20.062 INFO:tasks.workunit.client.0.vm05.stdout:9/128: rmdir d2 39
2026-03-09T15:01:20.066 INFO:tasks.workunit.client.0.vm05.stdout:9/129: dread d2/fd [0,4194304] 0
2026-03-09T15:01:20.066 INFO:tasks.workunit.client.0.vm05.stdout:9/130: write d2/d1a/d1b/f2a [1491403,70706] 0
2026-03-09T15:01:20.070 INFO:tasks.workunit.client.0.vm05.stdout:7/113: sync
2026-03-09T15:01:20.075 INFO:tasks.workunit.client.0.vm05.stdout:7/114: dwrite d1/d9/f10 [0,4194304] 0
2026-03-09T15:01:20.088 INFO:tasks.workunit.client.0.vm05.stdout:7/115: dread d1/d9/fd [0,4194304] 0
2026-03-09T15:01:20.089 INFO:tasks.workunit.client.0.vm05.stdout:7/116: creat d1/d12/f20 x:0 0 0
2026-03-09T15:01:20.093 INFO:tasks.workunit.client.0.vm05.stdout:7/117: rename d1/d12/f1c to d1/f21 0
2026-03-09T15:01:20.102 INFO:tasks.workunit.client.0.vm05.stdout:3/85: truncate d3/df/f11 1449943 0
2026-03-09T15:01:20.110 INFO:tasks.workunit.client.0.vm05.stdout:3/86: dwrite d3/df/d10/f1a [0,4194304] 0
2026-03-09T15:01:20.113 INFO:tasks.workunit.client.0.vm05.stdout:3/87: write d3/f17 [400982,110380] 0
2026-03-09T15:01:20.117 INFO:tasks.workunit.client.0.vm05.stdout:3/88: dread d3/df/d10/f1a [0,4194304] 0
2026-03-09T15:01:20.117 INFO:tasks.workunit.client.0.vm05.stdout:3/89: fsync d3/df/f16 0
2026-03-09T15:01:20.122 INFO:tasks.workunit.client.0.vm05.stdout:4/90: rmdir d2/d4/d7/dc 39
2026-03-09T15:01:20.123 INFO:tasks.workunit.client.0.vm05.stdout:0/50: write f0 [523426,52457] 0
2026-03-09T15:01:20.123 INFO:tasks.workunit.client.0.vm05.stdout:0/51: chown d9/f11 18155 1
2026-03-09T15:01:20.126 INFO:tasks.workunit.client.0.vm05.stdout:3/90: creat d3/df/f1b x:0 0 0
2026-03-09T15:01:20.127 INFO:tasks.workunit.client.0.vm05.stdout:4/91: fdatasync d2/d4/d8/f13 0
2026-03-09T15:01:20.129 INFO:tasks.workunit.client.0.vm05.stdout:1/92: getdents d9 0
2026-03-09T15:01:20.131 INFO:tasks.workunit.client.0.vm05.stdout:0/52: mkdir d9/de/d12 0
2026-03-09T15:01:20.137 INFO:tasks.workunit.client.0.vm05.stdout:3/91: rename d3/f8 to d3/df/d10/f1c 0
2026-03-09T15:01:20.137 INFO:tasks.workunit.client.0.vm05.stdout:4/92: write d2/d4/d7/dc/f18 [1561402,20513] 0
2026-03-09T15:01:20.137 INFO:tasks.workunit.client.0.vm05.stdout:1/93: creat d9/f21 x:0 0 0
2026-03-09T15:01:20.137 INFO:tasks.workunit.client.0.vm05.stdout:0/53: creat d9/de/f13 x:0 0 0
2026-03-09T15:01:20.137 INFO:tasks.workunit.client.0.vm05.stdout:3/92: mknod d3/df/c1d 0
2026-03-09T15:01:20.141 INFO:tasks.workunit.client.0.vm05.stdout:4/93: dwrite d2/d4/f15 [4194304,4194304] 0
2026-03-09T15:01:20.150 INFO:tasks.workunit.client.0.vm05.stdout:3/93: mkdir d3/df/d1e 0
2026-03-09T15:01:20.150 INFO:tasks.workunit.client.0.vm05.stdout:1/94: creat d9/d17/f22 x:0 0 0
2026-03-09T15:01:20.152 INFO:tasks.workunit.client.0.vm05.stdout:1/95: write f7 [275993,84593] 0
2026-03-09T15:01:20.153 INFO:tasks.workunit.client.0.vm05.stdout:0/54: mknod d9/c14 0
2026-03-09T15:01:20.153 INFO:tasks.workunit.client.0.vm05.stdout:0/55: chown d9/f11 63028 1
2026-03-09T15:01:20.154 INFO:tasks.workunit.client.0.vm05.stdout:0/56: write d9/f11 [1375401,90341] 0
2026-03-09T15:01:20.154 INFO:tasks.workunit.client.0.vm05.stdout:0/57: chown d9/fd 98879356 1
2026-03-09T15:01:20.155 INFO:tasks.workunit.client.0.vm05.stdout:0/58: truncate d9/f11 1837105 0
2026-03-09T15:01:20.159 INFO:tasks.workunit.client.0.vm05.stdout:4/94: creat d2/f1b x:0 0 0
2026-03-09T15:01:20.159 INFO:tasks.workunit.client.0.vm05.stdout:4/95: write d2/d4/f17 [954608,54214] 0
2026-03-09T15:01:20.163 INFO:tasks.workunit.client.0.vm05.stdout:4/96: dwrite d2/d4/f15 [0,4194304] 0
2026-03-09T15:01:20.165 INFO:tasks.workunit.client.0.vm05.stdout:3/94: creat d3/f1f x:0 0 0
2026-03-09T15:01:20.165 INFO:tasks.workunit.client.0.vm05.stdout:4/97: write d2/f1b [160246,2371] 0
2026-03-09T15:01:20.167 INFO:tasks.workunit.client.0.vm05.stdout:4/98: dread d2/d4/d8/f13 [0,4194304] 0
2026-03-09T15:01:20.170 INFO:tasks.workunit.client.0.vm05.stdout:0/59: mkdir d9/de/d12/d15 0
2026-03-09T15:01:20.172 INFO:tasks.workunit.client.0.vm05.stdout:4/99: dwrite d2/f1b [0,4194304] 0
2026-03-09T15:01:20.174 INFO:tasks.workunit.client.0.vm05.stdout:3/95: mkdir d3/df/d10/d20 0
2026-03-09T15:01:20.179 INFO:tasks.workunit.client.0.vm05.stdout:3/96: unlink d3/df/d10/f12 0
2026-03-09T15:01:20.181 INFO:tasks.workunit.client.0.vm05.stdout:0/60: getdents d9/de/d12/d15 0
2026-03-09T15:01:20.186 INFO:tasks.workunit.client.0.vm05.stdout:0/61: chown l4 106150 1
2026-03-09T15:01:20.190 INFO:tasks.workunit.client.0.vm05.stdout:4/100: rename d2/c16 to d2/d4/d7/c1c 0
2026-03-09T15:01:20.190 INFO:tasks.workunit.client.0.vm05.stdout:0/62: rename d9/de to d9/de/d16 22
2026-03-09T15:01:20.191 INFO:tasks.workunit.client.0.vm05.stdout:0/63: chown d9/de/f13 18343344 1
2026-03-09T15:01:20.193 INFO:tasks.workunit.client.0.vm05.stdout:3/97: rmdir d3/df/d10/d20 0
2026-03-09T15:01:20.195 INFO:tasks.workunit.client.0.vm05.stdout:4/101: rmdir d2/d4/d8 39
2026-03-09T15:01:20.197 INFO:tasks.workunit.client.0.vm05.stdout:3/98: creat d3/df/d10/f21 x:0 0 0
2026-03-09T15:01:20.197 INFO:tasks.workunit.client.0.vm05.stdout:0/64: mknod d9/c17 0
2026-03-09T15:01:20.199 INFO:tasks.workunit.client.0.vm05.stdout:3/99: mknod d3/df/d10/c22 0
2026-03-09T15:01:20.201 INFO:tasks.workunit.client.0.vm05.stdout:4/102: mkdir d2/d1d 0
2026-03-09T15:01:20.202 INFO:tasks.workunit.client.0.vm05.stdout:0/65: dwrite f0 [0,4194304] 0
2026-03-09T15:01:20.204 INFO:tasks.workunit.client.0.vm05.stdout:0/66: rename d9/de/d12 to d9/de/d12/d15/d18 22
2026-03-09T15:01:20.204 INFO:tasks.workunit.client.0.vm05.stdout:3/100: creat d3/df/f23 x:0 0 0
2026-03-09T15:01:20.207 INFO:tasks.workunit.client.0.vm05.stdout:0/67: creat d9/de/f19 x:0 0 0
2026-03-09T15:01:20.212 INFO:tasks.workunit.client.0.vm05.stdout:4/103: mkdir d2/d4/d1e 0
2026-03-09T15:01:20.218 INFO:tasks.workunit.client.0.vm05.stdout:4/104: dwrite d2/d4/d7/dc/f18 [0,4194304] 0
2026-03-09T15:01:20.225 INFO:tasks.workunit.client.0.vm05.stdout:2/138: chown da/dd/c26 53 1
2026-03-09T15:01:20.225 INFO:tasks.workunit.client.0.vm05.stdout:3/101: mkdir d3/df/d1e/d24 0
2026-03-09T15:01:20.225 INFO:tasks.workunit.client.0.vm05.stdout:2/139: write da/dd/ff [212173,120724] 0
2026-03-09T15:01:20.225 INFO:tasks.workunit.client.0.vm05.stdout:0/68: creat d9/de/d12/d15/f1a x:0 0 0
2026-03-09T15:01:20.225 INFO:tasks.workunit.client.0.vm05.stdout:3/102: dread - d3/df/f14 zero size
2026-03-09T15:01:20.226 INFO:tasks.workunit.client.0.vm05.stdout:2/140: mknod da/c28 0
2026-03-09T15:01:20.227 INFO:tasks.workunit.client.0.vm05.stdout:3/103: unlink d3/df/d10/f1a 0
2026-03-09T15:01:20.227 INFO:tasks.workunit.client.0.vm05.stdout:3/104: dread - d3/f18 zero size
2026-03-09T15:01:20.228 INFO:tasks.workunit.client.0.vm05.stdout:0/69: mknod d9/c1b 0
2026-03-09T15:01:20.228 INFO:tasks.workunit.client.0.vm05.stdout:0/70: chown f0 14576853 1
2026-03-09T15:01:20.232 INFO:tasks.workunit.client.0.vm05.stdout:0/71: readlink d9/la 0
2026-03-09T15:01:20.233 INFO:tasks.workunit.client.0.vm05.stdout:0/72: write d9/fd [86710,7016] 0
2026-03-09T15:01:20.233 INFO:tasks.workunit.client.0.vm05.stdout:0/73: truncate d9/de/f10 463832 0
2026-03-09T15:01:20.235 INFO:tasks.workunit.client.0.vm05.stdout:2/141: mkdir da/d29 0
2026-03-09T15:01:20.236 INFO:tasks.workunit.client.0.vm05.stdout:2/142: write da/d16/f20 [1017794,2143] 0
2026-03-09T15:01:20.236 INFO:tasks.workunit.client.0.vm05.stdout:3/105: creat d3/df/d10/d19/f25 x:0 0 0
2026-03-09T15:01:20.238 INFO:tasks.workunit.client.0.vm05.stdout:2/143: symlink da/dd/l2a 0
2026-03-09T15:01:20.240 INFO:tasks.workunit.client.0.vm05.stdout:3/106: creat d3/df/d10/d19/f26 x:0 0 0
2026-03-09T15:01:20.243 INFO:tasks.workunit.client.0.vm05.stdout:2/144: rename da/dd/l2a to da/d13/l2b 0
2026-03-09T15:01:20.244 INFO:tasks.workunit.client.0.vm05.stdout:2/145: chown da/f10 59176 1
2026-03-09T15:01:20.248 INFO:tasks.workunit.client.0.vm05.stdout:3/107: rename d3/ld to d3/df/d10/d19/l27 0
2026-03-09T15:01:20.254 INFO:tasks.workunit.client.0.vm05.stdout:2/146: fdatasync da/dd/f23 0
2026-03-09T15:01:20.254 INFO:tasks.workunit.client.0.vm05.stdout:2/147: write da/d16/f1e [97994,56099] 0
2026-03-09T15:01:20.256 INFO:tasks.workunit.client.0.vm05.stdout:2/148: dwrite da/dd/f25 [0,4194304] 0
2026-03-09T15:01:20.258 INFO:tasks.workunit.client.0.vm05.stdout:4/105: sync
2026-03-09T15:01:20.264 INFO:tasks.workunit.client.0.vm05.stdout:4/106: fsync d2/d4/d7/dc/f18 0
2026-03-09T15:01:20.274 INFO:tasks.workunit.client.0.vm05.stdout:2/149: creat da/f2c x:0 0 0
2026-03-09T15:01:20.279 INFO:tasks.workunit.client.0.vm05.stdout:2/150: dwrite da/dd/f25 [0,4194304] 0
2026-03-09T15:01:20.280 INFO:tasks.workunit.client.0.vm05.stdout:2/151: fsync f4 0
2026-03-09T15:01:20.281 INFO:tasks.workunit.client.0.vm05.stdout:2/152: read da/dd/f22 [4913410,45282] 0
2026-03-09T15:01:20.283 INFO:tasks.workunit.client.0.vm05.stdout:2/153: unlink da/dd/f23 0
2026-03-09T15:01:20.289 INFO:tasks.workunit.client.0.vm05.stdout:2/154: dwrite da/d16/f20 [0,4194304] 0
2026-03-09T15:01:20.292 INFO:tasks.workunit.client.0.vm05.stdout:2/155: stat l3 0
2026-03-09T15:01:20.294 INFO:tasks.workunit.client.0.vm05.stdout:2/156: rename da/d16/f1a to da/d29/f2d 0
2026-03-09T15:01:20.358 INFO:tasks.workunit.client.0.vm05.stdout:5/123: dwrite d1/f2a [0,4194304] 0
2026-03-09T15:01:20.360 INFO:tasks.workunit.client.0.vm05.stdout:5/124: read d1/f9 [27990,25506] 0
2026-03-09T15:01:20.363 INFO:tasks.workunit.client.0.vm05.stdout:5/125: creat d1/da/f31 x:0 0 0
2026-03-09T15:01:20.367 INFO:tasks.workunit.client.0.vm05.stdout:5/126: creat d1/d4/d19/d23/f32 x:0 0 0
2026-03-09T15:01:20.382 INFO:tasks.workunit.client.0.vm05.stdout:5/127: sync
2026-03-09T15:01:20.384 INFO:tasks.workunit.client.0.vm05.stdout:5/128: mknod d1/c33 0
2026-03-09T15:01:20.387 INFO:tasks.workunit.client.0.vm05.stdout:5/129: sync
2026-03-09T15:01:20.388 INFO:tasks.workunit.client.0.vm05.stdout:5/130: write d1/f2a [2761709,101452] 0
2026-03-09T15:01:20.391 INFO:tasks.workunit.client.0.vm05.stdout:5/131: unlink d1/f13 0
2026-03-09T15:01:20.405 INFO:tasks.workunit.client.0.vm05.stdout:5/132: mkdir d1/d4/d34 0
2026-03-09T15:01:20.406 INFO:tasks.workunit.client.0.vm05.stdout:5/133: truncate d1/d4/d19/f29 453851 0
2026-03-09T15:01:20.406 INFO:tasks.workunit.client.0.vm05.stdout:5/134: dread - d1/da/f2f zero size
2026-03-09T15:01:20.410 INFO:tasks.workunit.client.0.vm05.stdout:6/99: getdents da/d17 0
2026-03-09T15:01:20.416 INFO:tasks.workunit.client.0.vm05.stdout:6/100: creat da/d17/f20 x:0 0 0
2026-03-09T15:01:20.417 INFO:tasks.workunit.client.0.vm05.stdout:5/135: mkdir d1/d4/d34/d35 0
2026-03-09T15:01:20.422 INFO:tasks.workunit.client.0.vm05.stdout:6/101: symlink da/d17/l21 0
2026-03-09T15:01:20.423 INFO:tasks.workunit.client.0.vm05.stdout:6/102: dread - da/f1a zero size
2026-03-09T15:01:20.424 INFO:tasks.workunit.client.0.vm05.stdout:5/136: creat d1/d4/d34/d35/f36 x:0 0 0
2026-03-09T15:01:20.430 INFO:tasks.workunit.client.0.vm05.stdout:5/137: sync
2026-03-09T15:01:20.431 INFO:tasks.workunit.client.0.vm05.stdout:5/138: write d1/d4/d19/d23/f32 [207790,71346] 0
2026-03-09T15:01:20.432 INFO:tasks.workunit.client.0.vm05.stdout:5/139: creat d1/d4/d19/d23/f37 x:0 0 0
2026-03-09T15:01:20.436 INFO:tasks.workunit.client.0.vm05.stdout:5/140: dwrite d1/f14 [0,4194304] 0
2026-03-09T15:01:20.450 INFO:tasks.workunit.client.0.vm05.stdout:8/121: dread d0/f4 [0,4194304] 0
2026-03-09T15:01:20.455 INFO:tasks.workunit.client.0.vm05.stdout:8/122: creat d0/d1/d12/d1b/f27 x:0 0 0
2026-03-09T15:01:20.459 INFO:tasks.workunit.client.0.vm05.stdout:8/123: unlink d0/dc/l26 0
2026-03-09T15:01:20.459 INFO:tasks.workunit.client.0.vm05.stdout:8/124: chown d0/dc/l22 4 1
2026-03-09T15:01:20.459 INFO:tasks.workunit.client.0.vm05.stdout:8/125: getdents d0/d24 0
2026-03-09T15:01:20.461 INFO:tasks.workunit.client.0.vm05.stdout:8/126: symlink d0/d1/d12/d1b/l28 0
2026-03-09T15:01:20.462 INFO:tasks.workunit.client.0.vm05.stdout:8/127: write d0/fa [1198336,69231] 0
2026-03-09T15:01:20.463 INFO:tasks.workunit.client.0.vm05.stdout:8/128: read d0/f4 [2014875,72793] 0
2026-03-09T15:01:20.468 INFO:tasks.workunit.client.0.vm05.stdout:8/129: mknod d0/dc/c29 0
2026-03-09T15:01:20.475 INFO:tasks.workunit.client.0.vm05.stdout:7/118: getdents d1 0
2026-03-09T15:01:20.477 INFO:tasks.workunit.client.0.vm05.stdout:3/108: truncate d3/df/f11 2453380 0
2026-03-09T15:01:20.480 INFO:tasks.workunit.client.0.vm05.stdout:7/119: dwrite d1/f21 [0,4194304] 0
2026-03-09T15:01:20.482 INFO:tasks.workunit.client.0.vm05.stdout:7/120: readlink d1/l7 0
2026-03-09T15:01:20.482 INFO:tasks.workunit.client.0.vm05.stdout:7/121: chown d1/d9/f13 75965 1
2026-03-09T15:01:20.487 INFO:tasks.workunit.client.0.vm05.stdout:7/122: dwrite d1/d9/fc [0,4194304] 0
2026-03-09T15:01:20.489 INFO:tasks.workunit.client.0.vm05.stdout:7/123: write d1/d12/f1f [332507,112629] 0
2026-03-09T15:01:20.495 INFO:tasks.workunit.client.0.vm05.stdout:3/109: creat d3/df/d10/f28 x:0 0 0
2026-03-09T15:01:20.499 INFO:tasks.workunit.client.0.vm05.stdout:9/131: write d2/f1f [773134,380] 0
2026-03-09T15:01:20.503 INFO:tasks.workunit.client.0.vm05.stdout:7/124: truncate d1/d9/fd 1617937 0
2026-03-09T15:01:20.504 INFO:tasks.workunit.client.0.vm05.stdout:9/132: symlink d2/d19/l2d 0
2026-03-09T15:01:20.504 INFO:tasks.workunit.client.0.vm05.stdout:9/133: fdatasync d2/d10/f26 0
2026-03-09T15:01:20.505 INFO:tasks.workunit.client.0.vm05.stdout:7/125: mkdir d1/d22 0
2026-03-09T15:01:20.506 INFO:tasks.workunit.client.0.vm05.stdout:3/110: mkdir d3/d29 0
2026-03-09T15:01:20.507 INFO:tasks.workunit.client.0.vm05.stdout:9/134: write d2/fc [1147103,44096] 0
2026-03-09T15:01:20.515 INFO:tasks.workunit.client.0.vm05.stdout:3/111: creat d3/df/d10/f2a x:0 0 0
2026-03-09T15:01:20.515 INFO:tasks.workunit.client.0.vm05.stdout:7/126: rmdir d1/d12 39
2026-03-09T15:01:20.515 INFO:tasks.workunit.client.0.vm05.stdout:9/135: rename d2/ff to d2/d10/f2e 0
2026-03-09T15:01:20.515 INFO:tasks.workunit.client.0.vm05.stdout:3/112: readlink d3/l9 0
2026-03-09T15:01:20.519 INFO:tasks.workunit.client.0.vm05.stdout:3/113: rename d3/df/d10/f21 to d3/df/d1e/f2b 0
2026-03-09T15:01:20.522 INFO:tasks.workunit.client.0.vm05.stdout:3/114: mkdir d3/df/d1e/d2c 0
2026-03-09T15:01:20.522 INFO:tasks.workunit.client.0.vm05.stdout:3/115: write d3/df/d10/d19/f26 [71646,44340] 0
2026-03-09T15:01:20.523 INFO:tasks.workunit.client.0.vm05.stdout:9/136: fsync d2/fc 0
2026-03-09T15:01:20.524 INFO:tasks.workunit.client.0.vm05.stdout:9/137: truncate d2/f17 513324 0
2026-03-09T15:01:20.526 INFO:tasks.workunit.client.0.vm05.stdout:4/107: truncate d2/d4/d8/f13 245877 0
2026-03-09T15:01:20.528 INFO:tasks.workunit.client.0.vm05.stdout:3/116: mkdir d3/d29/d2d 0
2026-03-09T15:01:20.528 INFO:tasks.workunit.client.0.vm05.stdout:9/138: dread d2/f1f [0,4194304] 0
2026-03-09T15:01:20.529 INFO:tasks.workunit.client.0.vm05.stdout:9/139: readlink d2/d19/l20 0
2026-03-09T15:01:20.529 INFO:tasks.workunit.client.0.vm05.stdout:3/117: dread - d3/df/f14 zero size
2026-03-09T15:01:20.529 INFO:tasks.workunit.client.0.vm05.stdout:9/140: chown d2/f5 9 1
2026-03-09T15:01:20.530 INFO:tasks.workunit.client.0.vm05.stdout:9/141: fdatasync d2/f12 0
2026-03-09T15:01:20.535 INFO:tasks.workunit.client.0.vm05.stdout:9/142: rename d2/d19/l20 to d2/d10/d22/l2f 0
2026-03-09T15:01:20.537 INFO:tasks.workunit.client.0.vm05.stdout:3/118: symlink d3/df/d1e/d24/l2e 0
2026-03-09T15:01:20.537 INFO:tasks.workunit.client.0.vm05.stdout:3/119: dread - d3/df/f23 zero size
2026-03-09T15:01:20.538 INFO:tasks.workunit.client.0.vm05.stdout:3/120: chown d3/df/d10/f2a 101394 1
2026-03-09T15:01:20.538 INFO:tasks.workunit.client.0.vm05.stdout:3/121: stat d3/f17 0
2026-03-09T15:01:20.540 INFO:tasks.workunit.client.0.vm05.stdout:3/122: truncate d3/df/d10/d19/f25 257015 0
2026-03-09T15:01:20.547 INFO:tasks.workunit.client.0.vm05.stdout:1/96: truncate f5 1580149 0
2026-03-09T15:01:20.548 INFO:tasks.workunit.client.0.vm05.stdout:1/97: fsync f7 0
2026-03-09T15:01:20.549 INFO:tasks.workunit.client.0.vm05.stdout:9/143: mknod d2/d1a/d1b/d23/c30 0
2026-03-09T15:01:20.550 INFO:tasks.workunit.client.0.vm05.stdout:3/123: mkdir d3/df/d1e/d2f 0
2026-03-09T15:01:20.555 INFO:tasks.workunit.client.0.vm05.stdout:3/124: dwrite d3/df/d10/f2a [0,4194304] 0
2026-03-09T15:01:20.558 INFO:tasks.workunit.client.0.vm05.stdout:3/125: dread d3/df/d10/d19/f26 [0,4194304] 0
2026-03-09T15:01:20.564 INFO:tasks.workunit.client.0.vm05.stdout:0/74: rmdir d9/de/d12/d15 39
2026-03-09T15:01:20.584 INFO:tasks.workunit.client.0.vm05.stdout:2/157: rmdir da 39
2026-03-09T15:01:20.603 INFO:tasks.workunit.client.0.vm05.stdout:6/103: truncate da/fb 159064 0
2026-03-09T15:01:20.603 INFO:tasks.workunit.client.0.vm05.stdout:2/158: sync
2026-03-09T15:01:20.606 INFO:tasks.workunit.client.0.vm05.stdout:6/104: sync
2026-03-09T15:01:20.607 INFO:tasks.workunit.client.0.vm05.stdout:6/105: chown da/f1f 0 1
2026-03-09T15:01:20.613 INFO:tasks.workunit.client.0.vm05.stdout:5/141: write d1/f26 [213629,101253] 0
2026-03-09T15:01:20.616 INFO:tasks.workunit.client.0.vm05.stdout:5/142: dread d1/f3 [0,4194304] 0
2026-03-09T15:01:20.616 INFO:tasks.workunit.client.0.vm05.stdout:5/143: stat d1/da/f2f 0
2026-03-09T15:01:20.626 INFO:tasks.workunit.client.0.vm05.stdout:8/130: write d0/d7/f8 [4657526,81210] 0
2026-03-09T15:01:20.638 INFO:tasks.workunit.client.0.vm05.stdout:7/127: write d1/fa [4417545,7344] 0
2026-03-09T15:01:20.644 INFO:tasks.workunit.client.0.vm05.stdout:4/108: write d2/d4/d7/f9 [7665939,19837] 0
2026-03-09T15:01:20.647 INFO:tasks.workunit.client.0.vm05.stdout:4/109: dwrite d2/f1b [0,4194304] 0
2026-03-09T15:01:20.738 INFO:tasks.workunit.client.0.vm05.stdout:1/98: dwrite d9/d1a/f1d [0,4194304] 0
2026-03-09T15:01:20.742 INFO:tasks.workunit.client.0.vm05.stdout:3/126: creat d3/d29/f30 x:0 0 0
2026-03-09T15:01:20.744 INFO:tasks.workunit.client.0.vm05.stdout:2/159: fdatasync da/d16/f20 0
2026-03-09T15:01:20.747 INFO:tasks.workunit.client.0.vm05.stdout:8/131: dread d0/f10 [0,4194304] 0
2026-03-09T15:01:20.747 INFO:tasks.workunit.client.0.vm05.stdout:8/132: readlink d0/d1/d12/l25 0
2026-03-09T15:01:20.750 INFO:tasks.workunit.client.0.vm05.stdout:8/133: dread d0/d7/f14 [0,4194304] 0
2026-03-09T15:01:20.751 INFO:tasks.workunit.client.0.vm05.stdout:7/128: stat d1/l19 0
2026-03-09T15:01:20.752 INFO:tasks.workunit.client.0.vm05.stdout:4/110: symlink d2/d4/d8/l1f 0
2026-03-09T15:01:20.753 INFO:tasks.workunit.client.0.vm05.stdout:4/111: write d2/d4/d7/f9 [3785800,10461] 0
2026-03-09T15:01:20.753 INFO:tasks.workunit.client.0.vm05.stdout:4/112: chown d2/d4/d8/l1f 4 1
2026-03-09T15:01:20.758 INFO:tasks.workunit.client.0.vm05.stdout:3/127: unlink d3/df/d10/c22 0
2026-03-09T15:01:20.760 INFO:tasks.workunit.client.0.vm05.stdout:0/75: unlink d9/fc 0
2026-03-09T15:01:20.761 INFO:tasks.workunit.client.0.vm05.stdout:2/160: mknod da/d13/c2e 0
2026-03-09T15:01:20.762 INFO:tasks.workunit.client.0.vm05.stdout:2/161: dread - da/d16/f1f zero size
2026-03-09T15:01:20.763 INFO:tasks.workunit.client.0.vm05.stdout:2/162: read da/dd/f12 [500972,48032] 0
2026-03-09T15:01:20.769 INFO:tasks.workunit.client.0.vm05.stdout:8/134: rmdir d0/d7 39
2026-03-09T15:01:20.774 INFO:tasks.workunit.client.0.vm05.stdout:4/113: stat d2/d4/d7/c1c 0
2026-03-09T15:01:20.779 INFO:tasks.workunit.client.0.vm05.stdout:2/163: mkdir da/d13/d2f 0
2026-03-09T15:01:20.782 INFO:tasks.workunit.client.0.vm05.stdout:2/164: dwrite da/f21 [0,4194304] 0
2026-03-09T15:01:20.786 INFO:tasks.workunit.client.0.vm05.stdout:2/165: dwrite da/d16/f20 [0,4194304] 0
2026-03-09T15:01:20.793 INFO:tasks.workunit.client.0.vm05.stdout:2/166: dwrite da/d16/f1e [0,4194304] 0
2026-03-09T15:01:20.808 INFO:tasks.workunit.client.0.vm05.stdout:9/144: write d2/f1f [434166,54379] 0
2026-03-09T15:01:20.811 INFO:tasks.workunit.client.0.vm05.stdout:7/129: mkdir d1/d9/d23 0
2026-03-09T15:01:20.811 INFO:tasks.workunit.client.0.vm05.stdout:1/99: creat d9/f23 x:0 0 0
2026-03-09T15:01:20.812 INFO:tasks.workunit.client.0.vm05.stdout:1/100: write d9/d1a/f1d [308218,110949] 0
2026-03-09T15:01:20.812 INFO:tasks.workunit.client.0.vm05.stdout:1/101: dread - d9/d17/f22 zero size
2026-03-09T15:01:20.813 INFO:tasks.workunit.client.0.vm05.stdout:3/128: creat d3/d29/d2d/f31 x:0 0 0
2026-03-09T15:01:20.816 INFO:tasks.workunit.client.0.vm05.stdout:2/167: mkdir da/d13/d30 0
2026-03-09T15:01:20.817 INFO:tasks.workunit.client.0.vm05.stdout:2/168: fsync da/d16/f18 0
2026-03-09T15:01:20.817 INFO:tasks.workunit.client.0.vm05.stdout:2/169: chown da/dd/c27 26 1
2026-03-09T15:01:20.818 INFO:tasks.workunit.client.0.vm05.stdout:9/145: creat d2/d10/d15/f31 x:0 0 0
2026-03-09T15:01:20.818 INFO:tasks.workunit.client.0.vm05.stdout:9/146: read d2/d1a/f1c [358773,52910] 0
2026-03-09T15:01:20.819 INFO:tasks.workunit.client.0.vm05.stdout:8/135: getdents d0/d24 0
2026-03-09T15:01:20.819 INFO:tasks.workunit.client.0.vm05.stdout:9/147: write d2/d1a/f1c [2706647,121191] 0
2026-03-09T15:01:20.821 INFO:tasks.workunit.client.0.vm05.stdout:2/170: dread da/d29/f2d [0,4194304] 0
2026-03-09T15:01:20.826 INFO:tasks.workunit.client.0.vm05.stdout:7/130: dwrite d1/d12/f18 [0,4194304] 0
2026-03-09T15:01:20.833 INFO:tasks.workunit.client.0.vm05.stdout:9/148: mknod d2/d10/d22/c32 0
2026-03-09T15:01:20.833 INFO:tasks.workunit.client.0.vm05.stdout:9/149: read d2/f17 [248855,60508] 0
2026-03-09T15:01:20.840 INFO:tasks.workunit.client.0.vm05.stdout:3/129: link d3/df/d10/d19/f26 d3/df/d1e/d2c/f32 0
2026-03-09T15:01:20.841 INFO:tasks.workunit.client.0.vm05.stdout:3/130: dread - d3/df/f1b zero size
2026-03-09T15:01:20.848 INFO:tasks.workunit.client.0.vm05.stdout:8/136: mkdir d0/d2a 0
2026-03-09T15:01:20.852 INFO:tasks.workunit.client.0.vm05.stdout:9/150: unlink d2/d10/d15/f31 0
2026-03-09T15:01:20.857 INFO:tasks.workunit.client.0.vm05.stdout:0/76: getdents d9/de/d12/d15 0
2026-03-09T15:01:20.861 INFO:tasks.workunit.client.0.vm05.stdout:0/77: dwrite d9/fd [0,4194304] 0
2026-03-09T15:01:20.869 INFO:tasks.workunit.client.0.vm05.stdout:7/131: symlink d1/d9/d23/l24 0
2026-03-09T15:01:20.869 INFO:tasks.workunit.client.0.vm05.stdout:7/132: readlink d1/d12/l1e 0
2026-03-09T15:01:20.878 INFO:tasks.workunit.client.0.vm05.stdout:1/102: getdents d9/d1a 0
2026-03-09T15:01:20.881 INFO:tasks.workunit.client.0.vm05.stdout:0/78: creat d9/de/d12/d15/f1c x:0 0 0
2026-03-09T15:01:20.888 INFO:tasks.workunit.client.0.vm05.stdout:9/151: mknod d2/d10/d22/d2c/c33 0
2026-03-09T15:01:20.888 INFO:tasks.workunit.client.0.vm05.stdout:9/152: stat d2/f5 0
2026-03-09T15:01:20.888 INFO:tasks.workunit.client.0.vm05.stdout:5/144: truncate d1/ff 4298140 0
2026-03-09T15:01:20.889 INFO:tasks.workunit.client.0.vm05.stdout:5/145: truncate d1/da/f2f 748337 0
2026-03-09T15:01:20.891 INFO:tasks.workunit.client.0.vm05.stdout:1/103: mkdir d9/d24 0
2026-03-09T15:01:20.891 INFO:tasks.workunit.client.0.vm05.stdout:1/104: chown d9/d17/c1b 87431081 1
2026-03-09T15:01:20.893 INFO:tasks.workunit.client.0.vm05.stdout:9/153: symlink d2/l34 0
2026-03-09T15:01:20.893 INFO:tasks.workunit.client.0.vm05.stdout:9/154: write d2/f11 [3729754,26821] 0
2026-03-09T15:01:20.896 INFO:tasks.workunit.client.0.vm05.stdout:1/105: chown d9/l14 3 1
2026-03-09T15:01:20.901 INFO:tasks.workunit.client.0.vm05.stdout:9/155: unlink d2/d10/d22/l2f 0
2026-03-09T15:01:20.901 INFO:tasks.workunit.client.0.vm05.stdout:3/131: sync
2026-03-09T15:01:20.901 INFO:tasks.workunit.client.0.vm05.stdout:0/79: getdents d9/de 0
2026-03-09T15:01:20.902 INFO:tasks.workunit.client.0.vm05.stdout:5/146: sync
2026-03-09T15:01:20.903 INFO:tasks.workunit.client.0.vm05.stdout:0/80: write d9/de/f19 [134153,44312] 0
2026-03-09T15:01:20.904 INFO:tasks.workunit.client.0.vm05.stdout:0/81: write d9/de/d12/d15/f1c [969479,65126] 0
2026-03-09T15:01:20.906 INFO:tasks.workunit.client.0.vm05.stdout:5/147: sync
2026-03-09T15:01:20.906 INFO:tasks.workunit.client.0.vm05.stdout:0/82: sync
2026-03-09T15:01:20.906 INFO:tasks.workunit.client.0.vm05.stdout:9/156: dread d2/d1a/f1c [0,4194304] 0
2026-03-09T15:01:20.907 INFO:tasks.workunit.client.0.vm05.stdout:5/148: truncate d1/da/f2f 905520 0
2026-03-09T15:01:20.911 INFO:tasks.workunit.client.0.vm05.stdout:5/149: dwrite d1/d4/d19/f24 [0,4194304] 0
2026-03-09T15:01:20.913 INFO:tasks.workunit.client.0.vm05.stdout:3/132: readlink d3/l9 0
2026-03-09T15:01:20.914 INFO:tasks.workunit.client.0.vm05.stdout:3/133: fdatasync d3/f7 0
2026-03-09T15:01:20.924 INFO:tasks.workunit.client.0.vm05.stdout:9/157: rename d2/l3 to d2/d10/l35 0
2026-03-09T15:01:20.930 INFO:tasks.workunit.client.0.vm05.stdout:6/106: dwrite da/fb [0,4194304] 0
2026-03-09T15:01:20.930 INFO:tasks.workunit.client.0.vm05.stdout:6/107: read - da/f1a zero size
2026-03-09T15:01:20.936 INFO:tasks.workunit.client.0.vm05.stdout:6/108: dwrite da/f10 [0,4194304] 0
2026-03-09T15:01:20.938 INFO:tasks.workunit.client.0.vm05.stdout:2/171: truncate da/f10 3189222 0
2026-03-09T15:01:20.939 INFO:tasks.workunit.client.0.vm05.stdout:4/114: write d2/d4/d8/f13 [504816,65762] 0
2026-03-09T15:01:20.942 INFO:tasks.workunit.client.0.vm05.stdout:1/106: creat d9/d24/f25 x:0 0 0
2026-03-09T15:01:20.943 INFO:tasks.workunit.client.0.vm05.stdout:5/150: mkdir d1/d4/d19/d23/d38 0
2026-03-09T15:01:20.945 INFO:tasks.workunit.client.0.vm05.stdout:5/151: write d1/d4/d34/d35/f36 [28512,29801] 0
2026-03-09T15:01:20.947 INFO:tasks.workunit.client.0.vm05.stdout:8/137: dwrite d0/f10 [0,4194304] 0
2026-03-09T15:01:20.947 INFO:tasks.workunit.client.0.vm05.stdout:5/152: fdatasync d1/f30 0
2026-03-09T15:01:20.951 INFO:tasks.workunit.client.0.vm05.stdout:5/153: dread d1/f14 [0,4194304] 0
2026-03-09T15:01:20.951 INFO:tasks.workunit.client.0.vm05.stdout:8/138: sync
2026-03-09T15:01:20.952 INFO:tasks.workunit.client.0.vm05.stdout:8/139: readlink d0/dc/l1d 0
2026-03-09T15:01:20.964 INFO:tasks.workunit.client.0.vm05.stdout:0/83: mknod d9/de/d12/d15/c1d 0
2026-03-09T15:01:20.969 INFO:tasks.workunit.client.0.vm05.stdout:7/133: truncate d1/fa 680695 0
2026-03-09T15:01:20.969 INFO:tasks.workunit.client.0.vm05.stdout:2/172: symlink da/d13/l31 0
2026-03-09T15:01:20.974 INFO:tasks.workunit.client.0.vm05.stdout:4/115: rename d2/d4/d7/c10 to d2/d4/d1e/c20 0
2026-03-09T15:01:20.977 INFO:tasks.workunit.client.0.vm05.stdout:4/116: write d2/d4/f17 [1690611,103625] 0
2026-03-09T15:01:20.977 INFO:tasks.workunit.client.0.vm05.stdout:4/117: write d2/d4/d7/dc/f18 [1433597,93787] 0
2026-03-09T15:01:20.990 INFO:tasks.workunit.client.0.vm05.stdout:1/107: dwrite d9/f12 [0,4194304] 0
2026-03-09T15:01:20.991 INFO:tasks.workunit.client.0.vm05.stdout:1/108: write d9/d1a/f1d [3300929,1714] 0
2026-03-09T15:01:21.002 INFO:tasks.workunit.client.0.vm05.stdout:0/84: dread - d9/de/d12/d15/f1a zero size
2026-03-09T15:01:21.003 INFO:tasks.workunit.client.0.vm05.stdout:6/109: link da/f1a da/d19/f22 0
2026-03-09T15:01:21.004 INFO:tasks.workunit.client.0.vm05.stdout:2/173: truncate da/dd/f12 1059534 0
2026-03-09T15:01:21.004 INFO:tasks.workunit.client.0.vm05.stdout:2/174: truncate f5 9016383 0
2026-03-09T15:01:21.005 INFO:tasks.workunit.client.0.vm05.stdout:4/118: unlink d2/d4/d7/c1c 0
2026-03-09T15:01:21.007 INFO:tasks.workunit.client.0.vm05.stdout:8/140: creat d0/d24/f2b x:0 0 0
2026-03-09T15:01:21.008 INFO:tasks.workunit.client.0.vm05.stdout:0/85: rename d9/f11 to d9/de/f1e 0
2026-03-09T15:01:21.009 INFO:tasks.workunit.client.0.vm05.stdout:6/110: symlink da/d19/l23 0
2026-03-09T15:01:21.010 INFO:tasks.workunit.client.0.vm05.stdout:7/134: mkdir d1/d25 0
2026-03-09T15:01:21.010 INFO:tasks.workunit.client.0.vm05.stdout:6/111: readlink da/d17/l21 0
2026-03-09T15:01:21.010 INFO:tasks.workunit.client.0.vm05.stdout:2/175: symlink da/d13/l32 0
2026-03-09T15:01:21.016 INFO:tasks.workunit.client.0.vm05.stdout:0/86: dwrite f0 [0,4194304] 0
2026-03-09T15:01:21.017 INFO:tasks.workunit.client.0.vm05.stdout:6/112: dread da/f10 [0,4194304] 0
2026-03-09T15:01:21.018 INFO:tasks.workunit.client.0.vm05.stdout:6/113: dread - da/f12 zero size
2026-03-09T15:01:21.019 INFO:tasks.workunit.client.0.vm05.stdout:6/114: write da/f16 [1461063,100804] 0
2026-03-09T15:01:21.027 INFO:tasks.workunit.client.0.vm05.stdout:6/115: dwrite da/d17/f1d [0,4194304] 0
2026-03-09T15:01:21.029 INFO:tasks.workunit.client.0.vm05.stdout:6/116: write da/f16 [4440638,36135] 0
2026-03-09T15:01:21.029 INFO:tasks.workunit.client.0.vm05.stdout:6/117: write da/f14 [151940,87581] 0
2026-03-09T15:01:21.040 INFO:tasks.workunit.client.0.vm05.stdout:8/141: rmdir d0/d1 39
2026-03-09T15:01:21.044 INFO:tasks.workunit.client.0.vm05.stdout:0/87: mkdir d9/de/d1f 0
2026-03-09T15:01:21.044 INFO:tasks.workunit.client.0.vm05.stdout:0/88: write d9/de/d12/d15/f1c [240184,60391] 0
2026-03-09T15:01:21.045 INFO:tasks.workunit.client.0.vm05.stdout:0/89: write d9/de/f10 [1427854,46683] 0
2026-03-09T15:01:21.045 INFO:tasks.workunit.client.0.vm05.stdout:0/90: chown d9/lf 0 1
2026-03-09T15:01:21.046 INFO:tasks.workunit.client.0.vm05.stdout:4/119: mkdir d2/d4/d7/d21 0
2026-03-09T15:01:21.051 INFO:tasks.workunit.client.0.vm05.stdout:6/118: mknod da/d19/c24 0
2026-03-09T15:01:21.051 INFO:tasks.workunit.client.0.vm05.stdout:6/119: dread - da/f12 zero size
2026-03-09T15:01:21.052 INFO:tasks.workunit.client.0.vm05.stdout:8/142: rename d0/d24/f2b to d0/d24/f2c 0
2026-03-09T15:01:21.053 INFO:tasks.workunit.client.0.vm05.stdout:4/120: dwrite d2/d4/f17 [0,4194304] 0
2026-03-09T15:01:21.059 INFO:tasks.workunit.client.0.vm05.stdout:6/120: dread da/f10 [0,4194304] 0
2026-03-09T15:01:21.064 INFO:tasks.workunit.client.0.vm05.stdout:4/121: dwrite d2/d4/d7/f9 [0,4194304] 0
2026-03-09T15:01:21.066 INFO:tasks.workunit.client.0.vm05.stdout:4/122: chown d2/d4/d8/l1f 605116 1
2026-03-09T15:01:21.067 INFO:tasks.workunit.client.0.vm05.stdout:4/123: write d2/f1b [1908243,8354] 0
2026-03-09T15:01:21.069 INFO:tasks.workunit.client.0.vm05.stdout:8/143: mkdir d0/d2a/d2d 0
2026-03-09T15:01:21.070 INFO:tasks.workunit.client.0.vm05.stdout:6/121: symlink da/d19/l25 0
2026-03-09T15:01:21.071 INFO:tasks.workunit.client.0.vm05.stdout:6/122: mknod da/d17/c26 0
2026-03-09T15:01:21.072 INFO:tasks.workunit.client.0.vm05.stdout:6/123: write f5 [8237003,82941] 0
2026-03-09T15:01:21.073 INFO:tasks.workunit.client.0.vm05.stdout:4/124: symlink d2/d4/d7/l22 0
2026-03-09T15:01:21.082 INFO:tasks.workunit.client.0.vm05.stdout:8/144: sync
2026-03-09T15:01:21.085 INFO:tasks.workunit.client.0.vm05.stdout:6/124: rename c4 to da/d17/c27 0
2026-03-09T15:01:21.089 INFO:tasks.workunit.client.0.vm05.stdout:8/145: creat d0/d2a/f2e x:0 0 0
2026-03-09T15:01:21.089 INFO:tasks.workunit.client.0.vm05.stdout:6/125: symlink da/l28 0
2026-03-09T15:01:21.092 INFO:tasks.workunit.client.0.vm05.stdout:6/126: creat da/d17/f29 x:0 0 0
2026-03-09T15:01:21.096 INFO:tasks.workunit.client.0.vm05.stdout:8/146: rename d0/d1/d12/l25 to d0/d1/d12/d1b/d21/l2f 0
2026-03-09T15:01:21.097 INFO:tasks.workunit.client.0.vm05.stdout:4/125: link d2/d4/ce d2/d4/d7/c23 0
2026-03-09T15:01:21.097 INFO:tasks.workunit.client.0.vm05.stdout:8/147: creat d0/d24/f30 x:0 0 0
2026-03-09T15:01:21.101 INFO:tasks.workunit.client.0.vm05.stdout:6/127: dwrite da/f10 [0,4194304] 0
2026-03-09T15:01:21.106 INFO:tasks.workunit.client.0.vm05.stdout:6/128: dread da/d17/f1d [0,4194304] 0
2026-03-09T15:01:21.107 INFO:tasks.workunit.client.0.vm05.stdout:8/148: link d0/d1/d12/d1b/d21/c23 d0/d1/d12/c31 0
2026-03-09T15:01:21.107 INFO:tasks.workunit.client.0.vm05.stdout:6/129: creat da/d17/f2a x:0 0 0
2026-03-09T15:01:21.108 INFO:tasks.workunit.client.0.vm05.stdout:8/149: creat d0/f32 x:0 0 0
2026-03-09T15:01:21.109 INFO:tasks.workunit.client.0.vm05.stdout:6/130: symlink da/l2b 0
2026-03-09T15:01:21.109 INFO:tasks.workunit.client.0.vm05.stdout:6/131: chown da/d17/c26 7 1
2026-03-09T15:01:21.110 INFO:tasks.workunit.client.0.vm05.stdout:8/150: unlink d0/d1/c9 0
2026-03-09T15:01:21.112 INFO:tasks.workunit.client.0.vm05.stdout:8/151: link d0/d24/f2c d0/d7/f33 0
2026-03-09T15:01:21.114 INFO:tasks.workunit.client.0.vm05.stdout:8/152: creat d0/d1/d12/d1b/f34 x:0 0 0
2026-03-09T15:01:21.145 INFO:tasks.workunit.client.0.vm05.stdout:2/176: dread da/f10 [0,4194304] 0
2026-03-09T15:01:21.146 INFO:tasks.workunit.client.0.vm05.stdout:2/177: dread da/dd/f25 [0,4194304] 0
2026-03-09T15:01:21.148 INFO:tasks.workunit.client.0.vm05.stdout:5/154: getdents d1/d4/d19/d23 0
2026-03-09T15:01:21.148 INFO:tasks.workunit.client.0.vm05.stdout:1/109: rmdir d9/d24 39
2026-03-09T15:01:21.149 INFO:tasks.workunit.client.0.vm05.stdout:9/158: write d2/d10/d15/f18 [354246,44977] 0
2026-03-09T15:01:21.153 INFO:tasks.workunit.client.0.vm05.stdout:7/135: write d1/fa [1068386,82038] 0
2026-03-09T15:01:21.159 INFO:tasks.workunit.client.0.vm05.stdout:3/134: truncate d3/df/d10/f1c 1314706 0
2026-03-09T15:01:21.159 INFO:tasks.workunit.client.0.vm05.stdout:0/91: read d9/de/d12/d15/f1c [524634,92244] 0
2026-03-09T15:01:21.165 INFO:tasks.workunit.client.0.vm05.stdout:5/155: mknod d1/da/c39 0
2026-03-09T15:01:21.167 INFO:tasks.workunit.client.0.vm05.stdout:5/156: dread d1/d4/f11 [0,4194304] 0
2026-03-09T15:01:21.168 INFO:tasks.workunit.client.0.vm05.stdout:5/157: truncate d1/d4/f11 1190621 0
2026-03-09T15:01:21.168 INFO:tasks.workunit.client.0.vm05.stdout:5/158: stat d1/d4/d27/l2c 0
2026-03-09T15:01:21.168 INFO:tasks.workunit.client.0.vm05.stdout:1/110: creat d9/d17/f26 x:0 0 0
2026-03-09T15:01:21.169 INFO:tasks.workunit.client.0.vm05.stdout:1/111: dread - d9/f23 zero size
2026-03-09T15:01:21.169 INFO:tasks.workunit.client.0.vm05.stdout:5/159: dread - d1/f30 zero size
2026-03-09T15:01:21.170 INFO:tasks.workunit.client.0.vm05.stdout:5/160: chown d1/c33 6518 1
2026-03-09T15:01:21.174 INFO:tasks.workunit.client.0.vm05.stdout:5/161: dread d1/f26 [0,4194304] 0
2026-03-09T15:01:21.175 INFO:tasks.workunit.client.0.vm05.stdout:5/162: chown d1/f9 24 1
2026-03-09T15:01:21.177 INFO:tasks.workunit.client.0.vm05.stdout:3/135: creat d3/d29/d2d/f33 x:0 0 0
2026-03-09T15:01:21.178 INFO:tasks.workunit.client.0.vm05.stdout:3/136: readlink d3/l9 0
2026-03-09T15:01:21.180 INFO:tasks.workunit.client.0.vm05.stdout:0/92: unlink d9/de/f13 0
2026-03-09T15:01:21.186 INFO:tasks.workunit.client.0.vm05.stdout:1/112: symlink d9/d17/l27 0
2026-03-09T15:01:21.188 INFO:tasks.workunit.client.0.vm05.stdout:4/126: fdatasync d2/d4/d7/f9 0
2026-03-09T15:01:21.195 INFO:tasks.workunit.client.0.vm05.stdout:3/137: dread d3/f7 [0,4194304] 0
2026-03-09T15:01:21.197 INFO:tasks.workunit.client.0.vm05.stdout:5/163: creat d1/d4/d27/f3a x:0 0 0
2026-03-09T15:01:21.200 INFO:tasks.workunit.client.0.vm05.stdout:0/93: creat d9/de/f20 x:0 0 0
2026-03-09T15:01:21.200 INFO:tasks.workunit.client.0.vm05.stdout:0/94: fsync d9/de/f10 0
2026-03-09T15:01:21.201 INFO:tasks.workunit.client.0.vm05.stdout:6/132: getdents da/d19 0
2026-03-09T15:01:21.203 INFO:tasks.workunit.client.0.vm05.stdout:2/178: rename da/dd/c26 to da/c33 0
2026-03-09T15:01:21.204 INFO:tasks.workunit.client.0.vm05.stdout:2/179: dread f4 [8388608,4194304] 0
2026-03-09T15:01:21.205 INFO:tasks.workunit.client.0.vm05.stdout:1/113: chown d9/cf 148826 1
2026-03-09T15:01:21.213 INFO:tasks.workunit.client.0.vm05.stdout:5/164: rmdir d1/d4/d19/d23 39 2026-03-09T15:01:21.216 INFO:tasks.workunit.client.0.vm05.stdout:0/95: creat d9/de/f21 x:0 0 0 2026-03-09T15:01:21.220 INFO:tasks.workunit.client.0.vm05.stdout:6/133: rename f5 to da/d17/f2c 0 2026-03-09T15:01:21.222 INFO:tasks.workunit.client.0.vm05.stdout:2/180: rmdir da/d29 39 2026-03-09T15:01:21.224 INFO:tasks.workunit.client.0.vm05.stdout:8/153: rmdir d0 39 2026-03-09T15:01:21.225 INFO:tasks.workunit.client.0.vm05.stdout:1/114: mknod d9/d17/c28 0 2026-03-09T15:01:21.229 INFO:tasks.workunit.client.0.vm05.stdout:5/165: write d1/d4/d19/d23/f32 [824403,111365] 0 2026-03-09T15:01:21.231 INFO:tasks.workunit.client.0.vm05.stdout:7/136: getdents d1 0 2026-03-09T15:01:21.232 INFO:tasks.workunit.client.0.vm05.stdout:6/134: creat da/d17/f2d x:0 0 0 2026-03-09T15:01:21.235 INFO:tasks.workunit.client.0.vm05.stdout:5/166: write d1/d4/f20 [365250,122484] 0 2026-03-09T15:01:21.237 INFO:tasks.workunit.client.0.vm05.stdout:7/137: readlink d1/l4 0 2026-03-09T15:01:21.237 INFO:tasks.workunit.client.0.vm05.stdout:6/135: mknod da/c2e 0 2026-03-09T15:01:21.238 INFO:tasks.workunit.client.0.vm05.stdout:2/181: creat da/d13/d30/f34 x:0 0 0 2026-03-09T15:01:21.260 INFO:tasks.workunit.client.0.vm05.stdout:1/115: mknod d9/d24/c29 0 2026-03-09T15:01:21.260 INFO:tasks.workunit.client.0.vm05.stdout:5/167: symlink d1/d4/d27/l3b 0 2026-03-09T15:01:21.260 INFO:tasks.workunit.client.0.vm05.stdout:6/136: mknod da/d17/c2f 0 2026-03-09T15:01:21.260 INFO:tasks.workunit.client.0.vm05.stdout:6/137: chown da/d19/l25 28071756 1 2026-03-09T15:01:21.260 INFO:tasks.workunit.client.0.vm05.stdout:6/138: truncate da/d17/f2a 900795 0 2026-03-09T15:01:21.260 INFO:tasks.workunit.client.0.vm05.stdout:6/139: fsync da/f14 0 2026-03-09T15:01:21.260 INFO:tasks.workunit.client.0.vm05.stdout:7/138: mknod d1/d22/c26 0 2026-03-09T15:01:21.260 INFO:tasks.workunit.client.0.vm05.stdout:2/182: mkdir da/d13/d2f/d35 0 
2026-03-09T15:01:21.260 INFO:tasks.workunit.client.0.vm05.stdout:6/140: creat da/d17/f30 x:0 0 0 2026-03-09T15:01:21.260 INFO:tasks.workunit.client.0.vm05.stdout:6/141: write da/f16 [5515424,326] 0 2026-03-09T15:01:21.260 INFO:tasks.workunit.client.0.vm05.stdout:6/142: dread - da/d17/f2d zero size 2026-03-09T15:01:21.260 INFO:tasks.workunit.client.0.vm05.stdout:2/183: dwrite da/dd/ff [0,4194304] 0 2026-03-09T15:01:21.261 INFO:tasks.workunit.client.0.vm05.stdout:2/184: chown da/dd/c27 0 1 2026-03-09T15:01:21.261 INFO:tasks.workunit.client.0.vm05.stdout:7/139: dread d1/d9/fd [0,4194304] 0 2026-03-09T15:01:21.261 INFO:tasks.workunit.client.0.vm05.stdout:7/140: fdatasync d1/f15 0 2026-03-09T15:01:21.261 INFO:tasks.workunit.client.0.vm05.stdout:6/143: rename da/d19/c24 to da/d19/c31 0 2026-03-09T15:01:21.261 INFO:tasks.workunit.client.0.vm05.stdout:2/185: dread da/dd/ff [0,4194304] 0 2026-03-09T15:01:21.261 INFO:tasks.workunit.client.0.vm05.stdout:5/168: getdents d1/d4/d34 0 2026-03-09T15:01:21.263 INFO:tasks.workunit.client.0.vm05.stdout:7/141: symlink d1/d9/l27 0 2026-03-09T15:01:21.289 INFO:tasks.workunit.client.0.vm05.stdout:7/142: write d1/f15 [671418,54278] 0 2026-03-09T15:01:21.289 INFO:tasks.workunit.client.0.vm05.stdout:2/186: mknod da/d13/d30/c36 0 2026-03-09T15:01:21.289 INFO:tasks.workunit.client.0.vm05.stdout:9/159: dwrite d2/f5 [0,4194304] 0 2026-03-09T15:01:21.289 INFO:tasks.workunit.client.0.vm05.stdout:9/160: stat d2/l34 0 2026-03-09T15:01:21.289 INFO:tasks.workunit.client.0.vm05.stdout:7/143: creat d1/f28 x:0 0 0 2026-03-09T15:01:21.289 INFO:tasks.workunit.client.0.vm05.stdout:9/161: dread d2/f17 [0,4194304] 0 2026-03-09T15:01:21.289 INFO:tasks.workunit.client.0.vm05.stdout:7/144: creat d1/d25/f29 x:0 0 0 2026-03-09T15:01:21.289 INFO:tasks.workunit.client.0.vm05.stdout:5/169: link d1/d4/c1b d1/d4/d19/c3c 0 2026-03-09T15:01:21.290 INFO:tasks.workunit.client.0.vm05.stdout:2/187: dwrite da/d29/f2d [4194304,4194304] 0 2026-03-09T15:01:21.297 
INFO:tasks.workunit.client.0.vm05.stdout:7/145: creat d1/d9/d23/f2a x:0 0 0 2026-03-09T15:01:21.302 INFO:tasks.workunit.client.0.vm05.stdout:5/170: rename d1/d4/d19/d23 to d1/d4/d34/d35/d3d 0 2026-03-09T15:01:21.306 INFO:tasks.workunit.client.0.vm05.stdout:5/171: write d1/da/f31 [734157,96882] 0 2026-03-09T15:01:21.307 INFO:tasks.workunit.client.0.vm05.stdout:5/172: dread d1/d4/f20 [0,4194304] 0 2026-03-09T15:01:21.307 INFO:tasks.workunit.client.0.vm05.stdout:5/173: chown d1/d4/d27 320781979 1 2026-03-09T15:01:21.307 INFO:tasks.workunit.client.0.vm05.stdout:2/188: symlink da/d13/l37 0 2026-03-09T15:01:21.307 INFO:tasks.workunit.client.0.vm05.stdout:2/189: chown da/d13/d2f/d35 0 1 2026-03-09T15:01:21.307 INFO:tasks.workunit.client.0.vm05.stdout:2/190: chown da/le 1516927 1 2026-03-09T15:01:21.307 INFO:tasks.workunit.client.0.vm05.stdout:7/146: rename d1/l4 to d1/d9/d23/l2b 0 2026-03-09T15:01:21.309 INFO:tasks.workunit.client.0.vm05.stdout:2/191: unlink da/d13/l37 0 2026-03-09T15:01:21.310 INFO:tasks.workunit.client.0.vm05.stdout:2/192: fsync da/d16/f20 0 2026-03-09T15:01:21.311 INFO:tasks.workunit.client.0.vm05.stdout:7/147: dwrite d1/d9/f13 [0,4194304] 0 2026-03-09T15:01:21.321 INFO:tasks.workunit.client.0.vm05.stdout:7/148: mknod d1/d9/c2c 0 2026-03-09T15:01:21.321 INFO:tasks.workunit.client.0.vm05.stdout:2/193: symlink da/d13/d2f/d35/l38 0 2026-03-09T15:01:21.322 INFO:tasks.workunit.client.0.vm05.stdout:2/194: rmdir da 39 2026-03-09T15:01:21.323 INFO:tasks.workunit.client.0.vm05.stdout:7/149: dread d1/d9/f13 [0,4194304] 0 2026-03-09T15:01:21.324 INFO:tasks.workunit.client.0.vm05.stdout:2/195: readlink da/d13/d2f/d35/l38 0 2026-03-09T15:01:21.324 INFO:tasks.workunit.client.0.vm05.stdout:7/150: symlink d1/d12/l2d 0 2026-03-09T15:01:21.325 INFO:tasks.workunit.client.0.vm05.stdout:7/151: truncate d1/f15 918737 0 2026-03-09T15:01:21.326 INFO:tasks.workunit.client.0.vm05.stdout:2/196: creat da/d29/f39 x:0 0 0 2026-03-09T15:01:21.326 
INFO:tasks.workunit.client.0.vm05.stdout:7/152: read d1/d9/fd [51620,64339] 0 2026-03-09T15:01:21.326 INFO:tasks.workunit.client.0.vm05.stdout:2/197: truncate da/f2c 765812 0 2026-03-09T15:01:21.328 INFO:tasks.workunit.client.0.vm05.stdout:7/153: rename d1/d9/d23/l24 to d1/l2e 0 2026-03-09T15:01:21.329 INFO:tasks.workunit.client.0.vm05.stdout:7/154: mkdir d1/d12/d2f 0 2026-03-09T15:01:21.330 INFO:tasks.workunit.client.0.vm05.stdout:7/155: chown d1/d12/cf 41525756 1 2026-03-09T15:01:21.343 INFO:tasks.workunit.client.0.vm05.stdout:0/96: sync 2026-03-09T15:01:21.344 INFO:tasks.workunit.client.0.vm05.stdout:0/97: unlink f0 0 2026-03-09T15:01:21.345 INFO:tasks.workunit.client.0.vm05.stdout:0/98: truncate d9/de/f21 934855 0 2026-03-09T15:01:21.346 INFO:tasks.workunit.client.0.vm05.stdout:3/138: write d3/df/d10/d19/f26 [78334,42094] 0 2026-03-09T15:01:21.346 INFO:tasks.workunit.client.0.vm05.stdout:4/127: write d2/d4/f17 [4994262,48606] 0 2026-03-09T15:01:21.350 INFO:tasks.workunit.client.0.vm05.stdout:3/139: mkdir d3/df/d10/d34 0 2026-03-09T15:01:21.351 INFO:tasks.workunit.client.0.vm05.stdout:3/140: dread - d3/d29/d2d/f33 zero size 2026-03-09T15:01:21.351 INFO:tasks.workunit.client.0.vm05.stdout:3/141: stat d3/f1f 0 2026-03-09T15:01:21.355 INFO:tasks.workunit.client.0.vm05.stdout:0/99: creat d9/f22 x:0 0 0 2026-03-09T15:01:21.362 INFO:tasks.workunit.client.0.vm05.stdout:0/100: write d9/f22 [328548,90132] 0 2026-03-09T15:01:21.396 INFO:tasks.workunit.client.0.vm05.stdout:3/142: link d3/d29/f30 d3/df/d1e/d24/f35 0 2026-03-09T15:01:21.396 INFO:tasks.workunit.client.0.vm05.stdout:0/101: rename d9/de/f10 to d9/de/d12/f23 0 2026-03-09T15:01:21.396 INFO:tasks.workunit.client.0.vm05.stdout:0/102: write d9/de/f20 [818254,24196] 0 2026-03-09T15:01:21.397 INFO:tasks.workunit.client.0.vm05.stdout:0/103: mknod d9/de/d12/d15/c24 0 2026-03-09T15:01:21.397 INFO:tasks.workunit.client.0.vm05.stdout:0/104: mkdir d9/de/d25 0 2026-03-09T15:01:21.397 
INFO:tasks.workunit.client.0.vm05.stdout:0/105: write d9/de/d12/d15/f1c [1191479,77696] 0 2026-03-09T15:01:21.397 INFO:tasks.workunit.client.0.vm05.stdout:8/154: write d0/d7/f33 [426937,41382] 0 2026-03-09T15:01:21.397 INFO:tasks.workunit.client.0.vm05.stdout:8/155: dread - d0/d1/d12/d1b/f27 zero size 2026-03-09T15:01:21.397 INFO:tasks.workunit.client.0.vm05.stdout:8/156: readlink d0/d1/d12/d1b/l28 0 2026-03-09T15:01:21.397 INFO:tasks.workunit.client.0.vm05.stdout:8/157: mkdir d0/d7/d35 0 2026-03-09T15:01:21.397 INFO:tasks.workunit.client.0.vm05.stdout:8/158: chown d0/d7/cd 5432 1 2026-03-09T15:01:21.397 INFO:tasks.workunit.client.0.vm05.stdout:1/116: write d9/f15 [1653360,28816] 0 2026-03-09T15:01:21.397 INFO:tasks.workunit.client.0.vm05.stdout:8/159: chown d0/f4 804 1 2026-03-09T15:01:21.397 INFO:tasks.workunit.client.0.vm05.stdout:8/160: write d0/d24/f2c [117476,21490] 0 2026-03-09T15:01:21.401 INFO:tasks.workunit.client.0.vm05.stdout:1/117: mkdir d9/d2a 0 2026-03-09T15:01:21.406 INFO:tasks.workunit.client.0.vm05.stdout:6/144: dwrite da/d19/f22 [0,4194304] 0 2026-03-09T15:01:21.409 INFO:tasks.workunit.client.0.vm05.stdout:7/156: fsync d1/f28 0 2026-03-09T15:01:21.417 INFO:tasks.workunit.client.0.vm05.stdout:6/145: rename da/l1b to da/d19/l32 0 2026-03-09T15:01:21.418 INFO:tasks.workunit.client.0.vm05.stdout:7/157: creat d1/d25/f30 x:0 0 0 2026-03-09T15:01:21.420 INFO:tasks.workunit.client.0.vm05.stdout:8/161: link d0/d1/d12/c31 d0/d24/c36 0 2026-03-09T15:01:21.420 INFO:tasks.workunit.client.0.vm05.stdout:6/146: creat da/d17/f33 x:0 0 0 2026-03-09T15:01:21.421 INFO:tasks.workunit.client.0.vm05.stdout:1/118: link d9/d17/l27 d9/d2a/l2b 0 2026-03-09T15:01:21.423 INFO:tasks.workunit.client.0.vm05.stdout:7/158: dwrite d1/f15 [0,4194304] 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:8/162: dwrite d0/d7/f8 [0,4194304] 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:8/163: fdatasync d0/f10 0 2026-03-09T15:01:21.461 
INFO:tasks.workunit.client.0.vm05.stdout:1/119: rename d9/d17/f1e to d9/d1a/f2c 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:6/147: creat da/d19/f34 x:0 0 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:1/120: dwrite d9/f21 [0,4194304] 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:7/159: unlink d1/l19 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:7/160: readlink d1/d9/l27 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:6/148: creat da/d19/f35 x:0 0 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:6/149: dread da/fe [0,4194304] 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:6/150: read - da/d19/f34 zero size 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:6/151: fdatasync da/f1f 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:6/152: write da/fb [1806812,24325] 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:8/164: mknod d0/c37 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:6/153: symlink da/d19/l36 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:1/121: creat d9/d2a/f2d x:0 0 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:8/165: symlink d0/d1/d12/d1b/l38 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:8/166: mknod d0/d24/c39 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:8/167: dread - d0/d24/f30 zero size 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:8/168: truncate d0/d1/d12/d1b/f27 187975 0 2026-03-09T15:01:21.461 INFO:tasks.workunit.client.0.vm05.stdout:8/169: truncate d0/d7/f20 397026 0 2026-03-09T15:01:21.526 INFO:tasks.workunit.client.0.vm05.stdout:8/170: symlink d0/dc/l3a 0 2026-03-09T15:01:21.528 INFO:tasks.workunit.client.0.vm05.stdout:8/171: rename d0/f32 to d0/f3b 0 2026-03-09T15:01:21.529 INFO:tasks.workunit.client.0.vm05.stdout:7/161: fdatasync 
d1/d9/f13 0 2026-03-09T15:01:21.530 INFO:tasks.workunit.client.0.vm05.stdout:7/162: chown d1/d12/f18 532 1 2026-03-09T15:01:21.531 INFO:tasks.workunit.client.0.vm05.stdout:8/172: mkdir d0/d1/d12/d3c 0 2026-03-09T15:01:21.531 INFO:tasks.workunit.client.0.vm05.stdout:7/163: dread d1/f16 [0,4194304] 0 2026-03-09T15:01:21.532 INFO:tasks.workunit.client.0.vm05.stdout:7/164: fdatasync d1/d12/f18 0 2026-03-09T15:01:21.532 INFO:tasks.workunit.client.0.vm05.stdout:8/173: dread d0/d1/d12/d1b/f27 [0,4194304] 0 2026-03-09T15:01:21.536 INFO:tasks.workunit.client.0.vm05.stdout:7/165: rename d1/d12/d2f to d1/d9/d23/d31 0 2026-03-09T15:01:21.542 INFO:tasks.workunit.client.0.vm05.stdout:8/174: rename d0/d1/cb to d0/dc/c3d 0 2026-03-09T15:01:21.542 INFO:tasks.workunit.client.0.vm05.stdout:8/175: fdatasync d0/d24/f2c 0 2026-03-09T15:01:21.542 INFO:tasks.workunit.client.0.vm05.stdout:8/176: rename d0/d1/d12/f1c to d0/d2a/d2d/f3e 0 2026-03-09T15:01:21.543 INFO:tasks.workunit.client.0.vm05.stdout:8/177: mknod d0/dc/c3f 0 2026-03-09T15:01:21.544 INFO:tasks.workunit.client.0.vm05.stdout:8/178: mknod d0/dc/c40 0 2026-03-09T15:01:21.546 INFO:tasks.workunit.client.0.vm05.stdout:5/174: sync 2026-03-09T15:01:21.546 INFO:tasks.workunit.client.0.vm05.stdout:8/179: readlink d0/d1/d12/d1b/d21/l2f 0 2026-03-09T15:01:21.547 INFO:tasks.workunit.client.0.vm05.stdout:3/143: sync 2026-03-09T15:01:21.550 INFO:tasks.workunit.client.0.vm05.stdout:3/144: chown d3/d29 3780577 1 2026-03-09T15:01:21.551 INFO:tasks.workunit.client.0.vm05.stdout:6/154: sync 2026-03-09T15:01:21.552 INFO:tasks.workunit.client.0.vm05.stdout:6/155: chown da/d17/c26 957 1 2026-03-09T15:01:21.552 INFO:tasks.workunit.client.0.vm05.stdout:6/156: readlink l2 0 2026-03-09T15:01:21.553 INFO:tasks.workunit.client.0.vm05.stdout:8/180: fdatasync d0/f4 0 2026-03-09T15:01:21.554 INFO:tasks.workunit.client.0.vm05.stdout:8/181: chown d0/dc/c40 338 1 2026-03-09T15:01:21.554 INFO:tasks.workunit.client.0.vm05.stdout:5/175: dwrite d1/f3 [0,4194304] 0 
2026-03-09T15:01:21.560 INFO:tasks.workunit.client.0.vm05.stdout:9/162: dwrite d2/f8 [0,4194304] 0 2026-03-09T15:01:21.571 INFO:tasks.workunit.client.0.vm05.stdout:2/198: truncate da/f21 3952315 0 2026-03-09T15:01:21.571 INFO:tasks.workunit.client.0.vm05.stdout:2/199: write da/d16/f1f [210641,127416] 0 2026-03-09T15:01:21.572 INFO:tasks.workunit.client.0.vm05.stdout:2/200: write da/d13/d30/f34 [49985,81965] 0 2026-03-09T15:01:21.574 INFO:tasks.workunit.client.0.vm05.stdout:4/128: truncate d2/d4/d7/dc/f18 3698548 0 2026-03-09T15:01:21.579 INFO:tasks.workunit.client.0.vm05.stdout:0/106: truncate d9/fd 18108 0 2026-03-09T15:01:21.596 INFO:tasks.workunit.client.0.vm05.stdout:7/166: getdents d1 0 2026-03-09T15:01:21.596 INFO:tasks.workunit.client.0.vm05.stdout:7/167: read d1/d9/fd [197692,38563] 0 2026-03-09T15:01:21.599 INFO:tasks.workunit.client.0.vm05.stdout:0/107: dread d9/de/d12/d15/f1c [0,4194304] 0 2026-03-09T15:01:21.599 INFO:tasks.workunit.client.0.vm05.stdout:0/108: chown d9 261482 1 2026-03-09T15:01:21.600 INFO:tasks.workunit.client.0.vm05.stdout:0/109: stat d9/de 0 2026-03-09T15:01:21.605 INFO:tasks.workunit.client.0.vm05.stdout:1/122: dwrite f5 [0,4194304] 0 2026-03-09T15:01:21.628 INFO:tasks.workunit.client.0.vm05.stdout:6/157: mknod da/d19/c37 0 2026-03-09T15:01:21.630 INFO:tasks.workunit.client.0.vm05.stdout:8/182: dread d0/d1/d12/d1b/f27 [0,4194304] 0 2026-03-09T15:01:21.630 INFO:tasks.workunit.client.0.vm05.stdout:5/176: symlink d1/da/l3e 0 2026-03-09T15:01:21.630 INFO:tasks.workunit.client.0.vm05.stdout:5/177: readlink d1/l2 0 2026-03-09T15:01:21.638 INFO:tasks.workunit.client.0.vm05.stdout:9/163: rmdir d2/d10/d22 39 2026-03-09T15:01:21.639 INFO:tasks.workunit.client.0.vm05.stdout:2/201: creat da/d13/d2f/d35/f3a x:0 0 0 2026-03-09T15:01:21.641 INFO:tasks.workunit.client.0.vm05.stdout:2/202: dread da/d16/f1e [0,4194304] 0 2026-03-09T15:01:21.644 INFO:tasks.workunit.client.0.vm05.stdout:4/129: mknod d2/d4/d1e/c24 0 2026-03-09T15:01:21.644 
INFO:tasks.workunit.client.0.vm05.stdout:2/203: dwrite da/d16/f1f [0,4194304] 0 2026-03-09T15:01:21.649 INFO:tasks.workunit.client.0.vm05.stdout:5/178: sync 2026-03-09T15:01:21.649 INFO:tasks.workunit.client.0.vm05.stdout:2/204: sync 2026-03-09T15:01:21.649 INFO:tasks.workunit.client.0.vm05.stdout:5/179: chown d1/d4/d27/f3a 449269 1 2026-03-09T15:01:21.650 INFO:tasks.workunit.client.0.vm05.stdout:5/180: write d1/da/f31 [586120,96498] 0 2026-03-09T15:01:21.651 INFO:tasks.workunit.client.0.vm05.stdout:5/181: fdatasync d1/d4/d19/f29 0 2026-03-09T15:01:21.651 INFO:tasks.workunit.client.0.vm05.stdout:5/182: readlink d1/da/l22 0 2026-03-09T15:01:21.652 INFO:tasks.workunit.client.0.vm05.stdout:5/183: stat d1/d4/d34/d35/f36 0 2026-03-09T15:01:21.662 INFO:tasks.workunit.client.0.vm05.stdout:7/168: mkdir d1/d9/d23/d31/d32 0 2026-03-09T15:01:21.662 INFO:tasks.workunit.client.0.vm05.stdout:7/169: chown d1/d9/c2c 43532 1 2026-03-09T15:01:21.663 INFO:tasks.workunit.client.0.vm05.stdout:7/170: read - d1/d9/d23/f2a zero size 2026-03-09T15:01:21.663 INFO:tasks.workunit.client.0.vm05.stdout:7/171: stat d1/d25/f29 0 2026-03-09T15:01:21.664 INFO:tasks.workunit.client.0.vm05.stdout:7/172: dread - d1/d9/d23/f2a zero size 2026-03-09T15:01:21.664 INFO:tasks.workunit.client.0.vm05.stdout:7/173: dread - d1/d12/f20 zero size 2026-03-09T15:01:21.667 INFO:tasks.workunit.client.0.vm05.stdout:7/174: read d1/fa [771886,98984] 0 2026-03-09T15:01:21.668 INFO:tasks.workunit.client.0.vm05.stdout:7/175: fsync d1/f15 0 2026-03-09T15:01:21.671 INFO:tasks.workunit.client.0.vm05.stdout:0/110: creat d9/de/f26 x:0 0 0 2026-03-09T15:01:21.674 INFO:tasks.workunit.client.0.vm05.stdout:1/123: rmdir d9/d24 39 2026-03-09T15:01:21.675 INFO:tasks.workunit.client.0.vm05.stdout:1/124: write d9/f23 [545436,37894] 0 2026-03-09T15:01:21.682 INFO:tasks.workunit.client.0.vm05.stdout:3/145: truncate d3/df/f11 3007867 0 2026-03-09T15:01:21.685 INFO:tasks.workunit.client.0.vm05.stdout:6/158: creat da/f38 x:0 0 0 
2026-03-09T15:01:21.689 INFO:tasks.workunit.client.0.vm05.stdout:9/164: creat d2/d19/f36 x:0 0 0 2026-03-09T15:01:21.696 INFO:tasks.workunit.client.0.vm05.stdout:2/205: symlink da/d13/d2f/l3b 0 2026-03-09T15:01:21.697 INFO:tasks.workunit.client.0.vm05.stdout:2/206: read da/dd/f22 [2381615,67087] 0 2026-03-09T15:01:21.705 INFO:tasks.workunit.client.0.vm05.stdout:7/176: creat d1/d9/d23/d31/f33 x:0 0 0 2026-03-09T15:01:21.709 INFO:tasks.workunit.client.0.vm05.stdout:5/184: dwrite d1/f26 [0,4194304] 0 2026-03-09T15:01:21.714 INFO:tasks.workunit.client.0.vm05.stdout:5/185: chown d1/d4/f11 30 1 2026-03-09T15:01:21.714 INFO:tasks.workunit.client.0.vm05.stdout:3/146: stat d3/lc 0 2026-03-09T15:01:21.720 INFO:tasks.workunit.client.0.vm05.stdout:4/130: symlink d2/d4/l25 0 2026-03-09T15:01:21.724 INFO:tasks.workunit.client.0.vm05.stdout:0/111: unlink d9/de/f26 0 2026-03-09T15:01:21.733 INFO:tasks.workunit.client.0.vm05.stdout:1/125: symlink d9/d24/l2e 0 2026-03-09T15:01:21.733 INFO:tasks.workunit.client.0.vm05.stdout:5/186: symlink d1/l3f 0 2026-03-09T15:01:21.733 INFO:tasks.workunit.client.0.vm05.stdout:8/183: rmdir d0/d7/d35 0 2026-03-09T15:01:21.733 INFO:tasks.workunit.client.0.vm05.stdout:4/131: symlink d2/d4/d8/l26 0 2026-03-09T15:01:21.733 INFO:tasks.workunit.client.0.vm05.stdout:8/184: fdatasync d0/d24/f30 0 2026-03-09T15:01:21.733 INFO:tasks.workunit.client.0.vm05.stdout:4/132: stat d2/d4/d7/dc 0 2026-03-09T15:01:21.733 INFO:tasks.workunit.client.0.vm05.stdout:1/126: readlink d9/le 0 2026-03-09T15:01:21.733 INFO:tasks.workunit.client.0.vm05.stdout:2/207: dread da/dd/f12 [0,4194304] 0 2026-03-09T15:01:21.736 INFO:tasks.workunit.client.0.vm05.stdout:3/147: dread d3/df/d10/f1c [0,4194304] 0 2026-03-09T15:01:21.743 INFO:tasks.workunit.client.0.vm05.stdout:8/185: creat d0/d2a/d2d/f41 x:0 0 0 2026-03-09T15:01:21.744 INFO:tasks.workunit.client.0.vm05.stdout:8/186: dread - d0/d24/f30 zero size 2026-03-09T15:01:21.746 INFO:tasks.workunit.client.0.vm05.stdout:4/133: read 
d2/d4/d7/dc/f18 [1265196,77576] 0 2026-03-09T15:01:21.747 INFO:tasks.workunit.client.0.vm05.stdout:4/134: write d2/f14 [190045,123301] 0 2026-03-09T15:01:21.748 INFO:tasks.workunit.client.0.vm05.stdout:3/148: sync 2026-03-09T15:01:21.754 INFO:tasks.workunit.client.0.vm05.stdout:3/149: dwrite d3/df/f23 [0,4194304] 0 2026-03-09T15:01:21.755 INFO:tasks.workunit.client.0.vm05.stdout:4/135: dwrite d2/f1b [0,4194304] 0 2026-03-09T15:01:21.757 INFO:tasks.workunit.client.0.vm05.stdout:3/150: stat d3/df/d10/d19 0 2026-03-09T15:01:21.760 INFO:tasks.workunit.client.0.vm05.stdout:7/177: link d1/d9/c2c d1/d9/d23/d31/c34 0 2026-03-09T15:01:21.760 INFO:tasks.workunit.client.0.vm05.stdout:7/178: chown d1/d9/d23 10 1 2026-03-09T15:01:21.763 INFO:tasks.workunit.client.0.vm05.stdout:0/112: symlink d9/de/d25/l27 0 2026-03-09T15:01:21.763 INFO:tasks.workunit.client.0.vm05.stdout:6/159: getdents da/d19 0 2026-03-09T15:01:21.763 INFO:tasks.workunit.client.0.vm05.stdout:0/113: chown d9/de/d1f 8029 1 2026-03-09T15:01:21.764 INFO:tasks.workunit.client.0.vm05.stdout:6/160: fsync da/d17/f2d 0 2026-03-09T15:01:21.764 INFO:tasks.workunit.client.0.vm05.stdout:0/114: readlink l8 0 2026-03-09T15:01:21.765 INFO:tasks.workunit.client.0.vm05.stdout:6/161: write da/f1f [940186,15449] 0 2026-03-09T15:01:21.766 INFO:tasks.workunit.client.0.vm05.stdout:7/179: dread d1/d9/f10 [0,4194304] 0 2026-03-09T15:01:21.774 INFO:tasks.workunit.client.0.vm05.stdout:5/187: creat d1/d4/d34/d35/d3d/d38/f40 x:0 0 0 2026-03-09T15:01:21.774 INFO:tasks.workunit.client.0.vm05.stdout:8/187: mkdir d0/d2a/d2d/d42 0 2026-03-09T15:01:21.775 INFO:tasks.workunit.client.0.vm05.stdout:5/188: write d1/d4/d19/f24 [3240871,115186] 0 2026-03-09T15:01:21.775 INFO:tasks.workunit.client.0.vm05.stdout:8/188: readlink d0/d1/d12/d1b/d21/l2f 0 2026-03-09T15:01:21.776 INFO:tasks.workunit.client.0.vm05.stdout:8/189: chown d0/d2a/d2d 15233 1 2026-03-09T15:01:21.780 INFO:tasks.workunit.client.0.vm05.stdout:5/189: dwrite d1/d4/d19/f24 [0,4194304] 0 
2026-03-09T15:01:21.782 INFO:tasks.workunit.client.0.vm05.stdout:5/190: fsync d1/f2a 0 2026-03-09T15:01:21.788 INFO:tasks.workunit.client.0.vm05.stdout:5/191: dread d1/f26 [0,4194304] 0 2026-03-09T15:01:21.789 INFO:tasks.workunit.client.0.vm05.stdout:5/192: write d1/d4/f11 [4081476,118050] 0 2026-03-09T15:01:21.795 INFO:tasks.workunit.client.0.vm05.stdout:1/127: mkdir d9/d2f 0 2026-03-09T15:01:21.797 INFO:tasks.workunit.client.0.vm05.stdout:0/115: unlink d9/de/d12/d15/f1a 0 2026-03-09T15:01:21.798 INFO:tasks.workunit.client.0.vm05.stdout:7/180: symlink d1/d9/l35 0 2026-03-09T15:01:21.798 INFO:tasks.workunit.client.0.vm05.stdout:7/181: write d1/d9/fc [8121966,73641] 0 2026-03-09T15:01:21.799 INFO:tasks.workunit.client.0.vm05.stdout:9/165: getdents d2/d10/d22 0 2026-03-09T15:01:21.805 INFO:tasks.workunit.client.0.vm05.stdout:8/190: unlink d0/d1/d12/l1e 0 2026-03-09T15:01:21.805 INFO:tasks.workunit.client.0.vm05.stdout:5/193: mknod d1/d4/d34/c41 0 2026-03-09T15:01:21.808 INFO:tasks.workunit.client.0.vm05.stdout:5/194: read d1/da/f31 [57989,29365] 0 2026-03-09T15:01:21.811 INFO:tasks.workunit.client.0.vm05.stdout:4/136: link d2/d4/f17 d2/d4/d7/dc/f27 0 2026-03-09T15:01:21.811 INFO:tasks.workunit.client.0.vm05.stdout:3/151: symlink d3/df/d10/d34/l36 0 2026-03-09T15:01:21.811 INFO:tasks.workunit.client.0.vm05.stdout:8/191: dwrite d0/f10 [0,4194304] 0 2026-03-09T15:01:21.812 INFO:tasks.workunit.client.0.vm05.stdout:8/192: stat d0/c18 0 2026-03-09T15:01:21.812 INFO:tasks.workunit.client.0.vm05.stdout:1/128: chown d9/l11 7 1 2026-03-09T15:01:21.815 INFO:tasks.workunit.client.0.vm05.stdout:1/129: write d9/f15 [2073122,16835] 0 2026-03-09T15:01:21.817 INFO:tasks.workunit.client.0.vm05.stdout:6/162: getdents da/d19 0 2026-03-09T15:01:21.824 INFO:tasks.workunit.client.0.vm05.stdout:1/130: sync 2026-03-09T15:01:21.825 INFO:tasks.workunit.client.0.vm05.stdout:9/166: rename d2/d19 to d2/d1a/d1b/d23/d37 0 2026-03-09T15:01:21.829 INFO:tasks.workunit.client.0.vm05.stdout:5/195: 
unlink d1/d4/f11 0 2026-03-09T15:01:21.835 INFO:tasks.workunit.client.0.vm05.stdout:8/193: symlink d0/d1/d12/d1b/l43 0 2026-03-09T15:01:21.835 INFO:tasks.workunit.client.0.vm05.stdout:1/131: sync 2026-03-09T15:01:21.839 INFO:tasks.workunit.client.0.vm05.stdout:7/182: mknod d1/d9/d23/d31/d32/c36 0 2026-03-09T15:01:21.839 INFO:tasks.workunit.client.0.vm05.stdout:7/183: stat d1/d9 0 2026-03-09T15:01:21.840 INFO:tasks.workunit.client.0.vm05.stdout:7/184: chown d1/d9/d23/d31/f33 16296616 1 2026-03-09T15:01:21.840 INFO:tasks.workunit.client.0.vm05.stdout:7/185: read d1/d9/f10 [273032,95605] 0 2026-03-09T15:01:21.844 INFO:tasks.workunit.client.0.vm05.stdout:0/116: symlink d9/l28 0 2026-03-09T15:01:21.845 INFO:tasks.workunit.client.0.vm05.stdout:6/163: unlink da/f38 0 2026-03-09T15:01:21.846 INFO:tasks.workunit.client.0.vm05.stdout:6/164: read - da/d17/f33 zero size 2026-03-09T15:01:21.846 INFO:tasks.workunit.client.0.vm05.stdout:6/165: chown da/d17 525395436 1 2026-03-09T15:01:21.846 INFO:tasks.workunit.client.0.vm05.stdout:6/166: readlink l3 0 2026-03-09T15:01:21.849 INFO:tasks.workunit.client.0.vm05.stdout:2/208: write da/f21 [3949451,38884] 0 2026-03-09T15:01:21.849 INFO:tasks.workunit.client.0.vm05.stdout:2/209: write da/f21 [4091643,67857] 0 2026-03-09T15:01:21.853 INFO:tasks.workunit.client.0.vm05.stdout:9/167: mknod d2/d1a/d1b/c38 0 2026-03-09T15:01:21.855 INFO:tasks.workunit.client.0.vm05.stdout:5/196: mkdir d1/d4/d34/d35/d3d/d38/d42 0 2026-03-09T15:01:21.859 INFO:tasks.workunit.client.0.vm05.stdout:1/132: chown d9/d17/l27 1995 1 2026-03-09T15:01:21.862 INFO:tasks.workunit.client.0.vm05.stdout:3/152: rmdir d3/df/d10 39 2026-03-09T15:01:21.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:21 vm09.local ceph-mon[59673]: pgmap v167: 65 pgs: 65 active+clean; 1.7 GiB data, 6.2 GiB used, 114 GiB / 120 GiB avail; 6.9 MiB/s rd, 48 MiB/s wr, 293 op/s 2026-03-09T15:01:21.867 INFO:tasks.workunit.client.0.vm05.stdout:6/167: symlink da/d19/l39 0 
2026-03-09T15:01:21.868 INFO:tasks.workunit.client.0.vm05.stdout:6/168: write da/d19/f22 [1862014,127277] 0 2026-03-09T15:01:21.878 INFO:tasks.workunit.client.0.vm05.stdout:8/194: write d0/d7/f14 [628675,110937] 0 2026-03-09T15:01:21.879 INFO:tasks.workunit.client.0.vm05.stdout:2/210: rmdir da/d13/d30 39 2026-03-09T15:01:21.884 INFO:tasks.workunit.client.0.vm05.stdout:5/197: rename d1/da/f31 to d1/d4/f43 0 2026-03-09T15:01:21.886 INFO:tasks.workunit.client.0.vm05.stdout:1/133: creat d9/d1a/f30 x:0 0 0 2026-03-09T15:01:21.890 INFO:tasks.workunit.client.0.vm05.stdout:1/134: dwrite d9/d17/f26 [0,4194304] 0 2026-03-09T15:01:21.892 INFO:tasks.workunit.client.0.vm05.stdout:1/135: write d9/f15 [151783,58400] 0 2026-03-09T15:01:21.894 INFO:tasks.workunit.client.0.vm05.stdout:1/136: write f5 [2114955,113926] 0 2026-03-09T15:01:21.905 INFO:tasks.workunit.client.0.vm05.stdout:8/195: rmdir d0/d2a 39 2026-03-09T15:01:21.906 INFO:tasks.workunit.client.0.vm05.stdout:6/169: dwrite da/d17/f2c [0,4194304] 0 2026-03-09T15:01:21.921 INFO:tasks.workunit.client.0.vm05.stdout:3/153: rename d3/df/d1e/d24/l2e to d3/d29/l37 0 2026-03-09T15:01:21.922 INFO:tasks.workunit.client.0.vm05.stdout:3/154: stat d3/df/c1d 0 2026-03-09T15:01:21.924 INFO:tasks.workunit.client.0.vm05.stdout:4/137: rename d2/d4/ce to d2/d4/c28 0 2026-03-09T15:01:21.927 INFO:tasks.workunit.client.0.vm05.stdout:4/138: dread d2/d4/d8/f13 [0,4194304] 0 2026-03-09T15:01:21.927 INFO:tasks.workunit.client.0.vm05.stdout:4/139: stat d2/d4/d8/ld 0 2026-03-09T15:01:21.931 INFO:tasks.workunit.client.0.vm05.stdout:0/117: rmdir d9/de/d1f 0 2026-03-09T15:01:21.938 INFO:tasks.workunit.client.0.vm05.stdout:1/137: rmdir d9/d24 39 2026-03-09T15:01:21.938 INFO:tasks.workunit.client.0.vm05.stdout:1/138: chown d9/d17/c28 1894514427 1 2026-03-09T15:01:21.940 INFO:tasks.workunit.client.0.vm05.stdout:8/196: symlink d0/d24/l44 0 2026-03-09T15:01:21.941 INFO:tasks.workunit.client.0.vm05.stdout:8/197: chown d0/dc/l1d 8 1 2026-03-09T15:01:21.945 
INFO:tasks.workunit.client.0.vm05.stdout:6/170: stat da/c13 0 2026-03-09T15:01:21.945 INFO:tasks.workunit.client.0.vm05.stdout:6/171: chown da/c11 215502701 1 2026-03-09T15:01:21.948 INFO:tasks.workunit.client.0.vm05.stdout:6/172: dread da/fe [4194304,4194304] 0 2026-03-09T15:01:21.948 INFO:tasks.workunit.client.0.vm05.stdout:9/168: link d2/f1f d2/d10/f39 0 2026-03-09T15:01:21.953 INFO:tasks.workunit.client.0.vm05.stdout:3/155: creat d3/d29/f38 x:0 0 0 2026-03-09T15:01:21.959 INFO:tasks.workunit.client.0.vm05.stdout:7/186: getdents d1/d9/d23 0 2026-03-09T15:01:21.960 INFO:tasks.workunit.client.0.vm05.stdout:4/140: rename d2 to d2/d4/d29 22 2026-03-09T15:01:21.963 INFO:tasks.workunit.client.0.vm05.stdout:9/169: sync 2026-03-09T15:01:21.963 INFO:tasks.workunit.client.0.vm05.stdout:1/139: mkdir d9/d2a/d31 0 2026-03-09T15:01:21.968 INFO:tasks.workunit.client.0.vm05.stdout:1/140: dwrite f7 [0,4194304] 0 2026-03-09T15:01:21.989 INFO:tasks.workunit.client.0.vm05.stdout:2/211: creat da/f3c x:0 0 0 2026-03-09T15:01:21.991 INFO:tasks.workunit.client.0.vm05.stdout:2/212: dread da/f10 [0,4194304] 0 2026-03-09T15:01:22.000 INFO:tasks.workunit.client.0.vm05.stdout:6/173: mknod da/c3a 0 2026-03-09T15:01:22.001 INFO:tasks.workunit.client.0.vm05.stdout:6/174: truncate da/d17/f30 203746 0 2026-03-09T15:01:22.002 INFO:tasks.workunit.client.0.vm05.stdout:3/156: unlink d3/d29/f38 0 2026-03-09T15:01:22.007 INFO:tasks.workunit.client.0.vm05.stdout:0/118: symlink d9/l29 0 2026-03-09T15:01:22.007 INFO:tasks.workunit.client.0.vm05.stdout:0/119: chown d9 1302289521 1 2026-03-09T15:01:22.007 INFO:tasks.workunit.client.0.vm05.stdout:0/120: stat d9/f22 0 2026-03-09T15:01:22.008 INFO:tasks.workunit.client.0.vm05.stdout:0/121: write d9/de/f19 [548405,54541] 0 2026-03-09T15:01:22.009 INFO:tasks.workunit.client.0.vm05.stdout:9/170: creat d2/d10/d22/d2c/f3a x:0 0 0 2026-03-09T15:01:22.009 INFO:tasks.workunit.client.0.vm05.stdout:0/122: read d9/de/f21 [754049,103257] 0 2026-03-09T15:01:22.016 
INFO:tasks.workunit.client.0.vm05.stdout:8/198: symlink d0/d2a/l45 0 2026-03-09T15:01:22.016 INFO:tasks.workunit.client.0.vm05.stdout:9/171: dwrite d2/d1a/d1b/d23/d37/f36 [0,4194304] 0 2026-03-09T15:01:22.018 INFO:tasks.workunit.client.0.vm05.stdout:8/199: dread - d0/f3b zero size 2026-03-09T15:01:22.019 INFO:tasks.workunit.client.0.vm05.stdout:8/200: readlink d0/dc/l22 0 2026-03-09T15:01:22.020 INFO:tasks.workunit.client.0.vm05.stdout:2/213: mknod da/d29/c3d 0 2026-03-09T15:01:22.021 INFO:tasks.workunit.client.0.vm05.stdout:2/214: readlink l3 0 2026-03-09T15:01:22.022 INFO:tasks.workunit.client.0.vm05.stdout:2/215: read - da/d29/f39 zero size 2026-03-09T15:01:22.022 INFO:tasks.workunit.client.0.vm05.stdout:8/201: fdatasync d0/d24/f30 0 2026-03-09T15:01:22.022 INFO:tasks.workunit.client.0.vm05.stdout:2/216: chown da/f1b 8023 1 2026-03-09T15:01:22.023 INFO:tasks.workunit.client.0.vm05.stdout:6/175: mkdir da/d17/d3b 0 2026-03-09T15:01:22.023 INFO:tasks.workunit.client.0.vm05.stdout:3/157: mknod d3/df/d1e/d24/c39 0 2026-03-09T15:01:22.026 INFO:tasks.workunit.client.0.vm05.stdout:1/141: creat d9/d2a/d31/f32 x:0 0 0 2026-03-09T15:01:22.027 INFO:tasks.workunit.client.0.vm05.stdout:9/172: mknod d2/d1a/c3b 0 2026-03-09T15:01:22.028 INFO:tasks.workunit.client.0.vm05.stdout:5/198: rename d1/d4/d19/f24 to d1/d4/d34/d35/f44 0 2026-03-09T15:01:22.031 INFO:tasks.workunit.client.0.vm05.stdout:8/202: rmdir d0/d1/d12/d1b 39 2026-03-09T15:01:22.035 INFO:tasks.workunit.client.0.vm05.stdout:9/173: dread d2/d1a/d1b/f2a [0,4194304] 0 2026-03-09T15:01:22.037 INFO:tasks.workunit.client.0.vm05.stdout:9/174: truncate d2/d1a/f1c 4863709 0 2026-03-09T15:01:22.041 INFO:tasks.workunit.client.0.vm05.stdout:2/217: dread f5 [4194304,4194304] 0 2026-03-09T15:01:22.042 INFO:tasks.workunit.client.0.vm05.stdout:2/218: chown da/f2c 100424060 1 2026-03-09T15:01:22.049 INFO:tasks.workunit.client.0.vm05.stdout:1/142: mkdir d9/d2a/d31/d33 0 2026-03-09T15:01:22.053 
INFO:tasks.workunit.client.0.vm05.stdout:3/158: symlink d3/df/d1e/d2c/l3a 0 2026-03-09T15:01:22.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:21 vm05.local ceph-mon[50611]: pgmap v167: 65 pgs: 65 active+clean; 1.7 GiB data, 6.2 GiB used, 114 GiB / 120 GiB avail; 6.9 MiB/s rd, 48 MiB/s wr, 293 op/s 2026-03-09T15:01:22.055 INFO:tasks.workunit.client.0.vm05.stdout:5/199: symlink d1/d4/d19/l45 0 2026-03-09T15:01:22.058 INFO:tasks.workunit.client.0.vm05.stdout:8/203: symlink d0/d2a/d2d/d42/l46 0 2026-03-09T15:01:22.058 INFO:tasks.workunit.client.0.vm05.stdout:5/200: dread d1/d4/d34/d35/d3d/f32 [0,4194304] 0 2026-03-09T15:01:22.060 INFO:tasks.workunit.client.0.vm05.stdout:9/175: sync 2026-03-09T15:01:22.062 INFO:tasks.workunit.client.0.vm05.stdout:8/204: dread d0/f10 [0,4194304] 0 2026-03-09T15:01:22.063 INFO:tasks.workunit.client.0.vm05.stdout:8/205: write d0/d7/f14 [948745,46564] 0 2026-03-09T15:01:22.067 INFO:tasks.workunit.client.0.vm05.stdout:7/187: write d1/fa [142895,56685] 0 2026-03-09T15:01:22.073 INFO:tasks.workunit.client.0.vm05.stdout:4/141: dwrite d2/d4/d7/dc/f18 [0,4194304] 0 2026-03-09T15:01:22.075 INFO:tasks.workunit.client.0.vm05.stdout:9/176: dwrite d2/f11 [8388608,4194304] 0 2026-03-09T15:01:22.081 INFO:tasks.workunit.client.0.vm05.stdout:3/159: mknod d3/d29/d2d/c3b 0 2026-03-09T15:01:22.083 INFO:tasks.workunit.client.0.vm05.stdout:4/142: dwrite d2/f1b [0,4194304] 0 2026-03-09T15:01:22.084 INFO:tasks.workunit.client.0.vm05.stdout:8/206: dwrite d0/f4 [0,4194304] 0 2026-03-09T15:01:22.094 INFO:tasks.workunit.client.0.vm05.stdout:4/143: dwrite d2/d4/d7/f9 [4194304,4194304] 0 2026-03-09T15:01:22.122 INFO:tasks.workunit.client.0.vm05.stdout:0/123: write d9/fd [866732,1093] 0 2026-03-09T15:01:22.137 INFO:tasks.workunit.client.0.vm05.stdout:5/201: creat d1/d4/d34/d35/f46 x:0 0 0 2026-03-09T15:01:22.139 INFO:tasks.workunit.client.0.vm05.stdout:6/176: getdents da/d17 0 2026-03-09T15:01:22.140 INFO:tasks.workunit.client.0.vm05.stdout:2/219: truncate 
da/d16/f20 194150 0 2026-03-09T15:01:22.145 INFO:tasks.workunit.client.0.vm05.stdout:7/188: rmdir d1/d12 39 2026-03-09T15:01:22.147 INFO:tasks.workunit.client.0.vm05.stdout:9/177: mkdir d2/d10/d22/d2c/d3c 0 2026-03-09T15:01:22.152 INFO:tasks.workunit.client.0.vm05.stdout:8/207: dwrite d0/f3b [0,4194304] 0 2026-03-09T15:01:22.155 INFO:tasks.workunit.client.0.vm05.stdout:8/208: write d0/d7/f14 [287121,92007] 0 2026-03-09T15:01:22.156 INFO:tasks.workunit.client.0.vm05.stdout:0/124: fdatasync d9/de/f1e 0 2026-03-09T15:01:22.156 INFO:tasks.workunit.client.0.vm05.stdout:5/202: creat d1/d4/d34/d35/f47 x:0 0 0 2026-03-09T15:01:22.156 INFO:tasks.workunit.client.0.vm05.stdout:7/189: creat d1/d9/d23/d31/f37 x:0 0 0 2026-03-09T15:01:22.157 INFO:tasks.workunit.client.0.vm05.stdout:3/160: mknod d3/df/d10/d34/c3c 0 2026-03-09T15:01:22.158 INFO:tasks.workunit.client.0.vm05.stdout:3/161: write d3/df/d10/f28 [548618,39882] 0 2026-03-09T15:01:22.160 INFO:tasks.workunit.client.0.vm05.stdout:1/143: link d9/d24/c29 d9/d1a/c34 0 2026-03-09T15:01:22.162 INFO:tasks.workunit.client.0.vm05.stdout:6/177: getdents da/d17/d3b 0 2026-03-09T15:01:22.170 INFO:tasks.workunit.client.0.vm05.stdout:5/203: mknod d1/d4/d34/c48 0 2026-03-09T15:01:22.178 INFO:tasks.workunit.client.0.vm05.stdout:0/125: rename d9/lf to d9/de/l2a 0 2026-03-09T15:01:22.178 INFO:tasks.workunit.client.0.vm05.stdout:7/190: creat d1/d9/d23/d31/d32/f38 x:0 0 0 2026-03-09T15:01:22.178 INFO:tasks.workunit.client.0.vm05.stdout:1/144: mknod d9/d17/c35 0 2026-03-09T15:01:22.178 INFO:tasks.workunit.client.0.vm05.stdout:1/145: dread d9/d1a/f1d [0,4194304] 0 2026-03-09T15:01:22.178 INFO:tasks.workunit.client.0.vm05.stdout:5/204: rename d1/d4/d34/c48 to d1/d4/d27/c49 0 2026-03-09T15:01:22.178 INFO:tasks.workunit.client.0.vm05.stdout:7/191: creat d1/d25/f39 x:0 0 0 2026-03-09T15:01:22.179 INFO:tasks.workunit.client.0.vm05.stdout:3/162: sync 2026-03-09T15:01:22.181 INFO:tasks.workunit.client.0.vm05.stdout:8/209: creat d0/f47 x:0 0 0 
2026-03-09T15:01:22.181 INFO:tasks.workunit.client.0.vm05.stdout:8/210: chown d0/c18 3087 1 2026-03-09T15:01:22.184 INFO:tasks.workunit.client.0.vm05.stdout:8/211: dread d0/f3b [0,4194304] 0 2026-03-09T15:01:22.185 INFO:tasks.workunit.client.0.vm05.stdout:8/212: readlink d0/d24/l44 0 2026-03-09T15:01:22.185 INFO:tasks.workunit.client.0.vm05.stdout:8/213: stat d0/d24/f30 0 2026-03-09T15:01:22.187 INFO:tasks.workunit.client.0.vm05.stdout:3/163: symlink d3/df/d1e/l3d 0 2026-03-09T15:01:22.188 INFO:tasks.workunit.client.0.vm05.stdout:1/146: dread d9/f23 [0,4194304] 0 2026-03-09T15:01:22.189 INFO:tasks.workunit.client.0.vm05.stdout:8/214: dread d0/f4 [0,4194304] 0 2026-03-09T15:01:22.192 INFO:tasks.workunit.client.0.vm05.stdout:3/164: read d3/f17 [376872,18010] 0 2026-03-09T15:01:22.193 INFO:tasks.workunit.client.0.vm05.stdout:3/165: dread d3/df/d1e/d2c/f32 [0,4194304] 0 2026-03-09T15:01:22.194 INFO:tasks.workunit.client.0.vm05.stdout:3/166: write d3/f17 [758668,3646] 0 2026-03-09T15:01:22.203 INFO:tasks.workunit.client.0.vm05.stdout:4/144: dwrite d2/d4/d8/f13 [0,4194304] 0 2026-03-09T15:01:22.204 INFO:tasks.workunit.client.0.vm05.stdout:4/145: stat d2/d4/d1e 0 2026-03-09T15:01:22.206 INFO:tasks.workunit.client.0.vm05.stdout:0/126: creat d9/f2b x:0 0 0 2026-03-09T15:01:22.210 INFO:tasks.workunit.client.0.vm05.stdout:4/146: dwrite d2/d4/f15 [0,4194304] 0 2026-03-09T15:01:22.212 INFO:tasks.workunit.client.0.vm05.stdout:1/147: symlink d9/d2a/d31/l36 0 2026-03-09T15:01:22.216 INFO:tasks.workunit.client.0.vm05.stdout:3/167: symlink d3/df/d10/d19/l3e 0 2026-03-09T15:01:22.230 INFO:tasks.workunit.client.0.vm05.stdout:3/168: creat d3/df/d10/f3f x:0 0 0 2026-03-09T15:01:22.230 INFO:tasks.workunit.client.0.vm05.stdout:3/169: dread - d3/df/d1e/d24/f35 zero size 2026-03-09T15:01:22.231 INFO:tasks.workunit.client.0.vm05.stdout:3/170: dread - d3/d29/d2d/f33 zero size 2026-03-09T15:01:22.231 INFO:tasks.workunit.client.0.vm05.stdout:3/171: dread - d3/d29/d2d/f31 zero size 
2026-03-09T15:01:22.234 INFO:tasks.workunit.client.0.vm05.stdout:0/127: mkdir d9/de/d12/d2c 0 2026-03-09T15:01:22.237 INFO:tasks.workunit.client.0.vm05.stdout:3/172: dwrite d3/f1f [0,4194304] 0 2026-03-09T15:01:22.238 INFO:tasks.workunit.client.0.vm05.stdout:3/173: write d3/df/d10/d19/f25 [402100,115662] 0 2026-03-09T15:01:22.239 INFO:tasks.workunit.client.0.vm05.stdout:3/174: chown d3/df/d1e 135693 1 2026-03-09T15:01:22.239 INFO:tasks.workunit.client.0.vm05.stdout:7/192: getdents d1/d22 0 2026-03-09T15:01:22.241 INFO:tasks.workunit.client.0.vm05.stdout:3/175: chown d3/df/d10/d19/f26 1272552782 1 2026-03-09T15:01:22.243 INFO:tasks.workunit.client.0.vm05.stdout:3/176: chown d3/df/d10/c15 1 1 2026-03-09T15:01:22.244 INFO:tasks.workunit.client.0.vm05.stdout:3/177: dread - d3/d29/f30 zero size 2026-03-09T15:01:22.244 INFO:tasks.workunit.client.0.vm05.stdout:8/215: dread d0/d7/f20 [0,4194304] 0 2026-03-09T15:01:22.245 INFO:tasks.workunit.client.0.vm05.stdout:3/178: write d3/f1f [3029144,43068] 0 2026-03-09T15:01:22.249 INFO:tasks.workunit.client.0.vm05.stdout:7/193: dwrite d1/d9/d23/f2a [0,4194304] 0 2026-03-09T15:01:22.253 INFO:tasks.workunit.client.0.vm05.stdout:8/216: dread d0/f3b [0,4194304] 0 2026-03-09T15:01:22.253 INFO:tasks.workunit.client.0.vm05.stdout:8/217: dread - d0/dc/f15 zero size 2026-03-09T15:01:22.255 INFO:tasks.workunit.client.0.vm05.stdout:8/218: read d0/d7/f8 [4858343,44778] 0 2026-03-09T15:01:22.269 INFO:tasks.workunit.client.0.vm05.stdout:4/147: symlink d2/l2a 0 2026-03-09T15:01:22.269 INFO:tasks.workunit.client.0.vm05.stdout:4/148: readlink d2/d4/l25 0 2026-03-09T15:01:22.269 INFO:tasks.workunit.client.0.vm05.stdout:1/148: mkdir d9/d2f/d37 0 2026-03-09T15:01:22.269 INFO:tasks.workunit.client.0.vm05.stdout:1/149: write d9/d17/f22 [1028441,42889] 0 2026-03-09T15:01:22.269 INFO:tasks.workunit.client.0.vm05.stdout:2/220: truncate da/dd/f25 2965803 0 2026-03-09T15:01:22.269 INFO:tasks.workunit.client.0.vm05.stdout:2/221: fdatasync da/d29/f2d 0 
2026-03-09T15:01:22.269 INFO:tasks.workunit.client.0.vm05.stdout:2/222: fdatasync da/f3c 0 2026-03-09T15:01:22.269 INFO:tasks.workunit.client.0.vm05.stdout:0/128: creat d9/de/d25/f2d x:0 0 0 2026-03-09T15:01:22.269 INFO:tasks.workunit.client.0.vm05.stdout:0/129: chown d9/de/d25/f2d 102905261 1 2026-03-09T15:01:22.271 INFO:tasks.workunit.client.0.vm05.stdout:3/179: sync 2026-03-09T15:01:22.274 INFO:tasks.workunit.client.0.vm05.stdout:3/180: dwrite d3/df/d10/f2a [0,4194304] 0 2026-03-09T15:01:22.283 INFO:tasks.workunit.client.0.vm05.stdout:3/181: dwrite d3/df/f1b [0,4194304] 0 2026-03-09T15:01:22.286 INFO:tasks.workunit.client.0.vm05.stdout:8/219: creat d0/d2a/d2d/f48 x:0 0 0 2026-03-09T15:01:22.287 INFO:tasks.workunit.client.0.vm05.stdout:3/182: write d3/df/d10/d19/f25 [1144334,88423] 0 2026-03-09T15:01:22.287 INFO:tasks.workunit.client.0.vm05.stdout:8/220: write d0/d2a/d2d/f41 [355173,62477] 0 2026-03-09T15:01:22.289 INFO:tasks.workunit.client.0.vm05.stdout:4/149: mkdir d2/d4/d7/dc/d2b 0 2026-03-09T15:01:22.289 INFO:tasks.workunit.client.0.vm05.stdout:3/183: chown d3/df/f1b 837 1 2026-03-09T15:01:22.295 INFO:tasks.workunit.client.0.vm05.stdout:8/221: read d0/d24/f2c [369833,27870] 0 2026-03-09T15:01:22.296 INFO:tasks.workunit.client.0.vm05.stdout:0/130: fsync d9/de/d12/f23 0 2026-03-09T15:01:22.296 INFO:tasks.workunit.client.0.vm05.stdout:0/131: fsync d9/de/f1e 0 2026-03-09T15:01:22.302 INFO:tasks.workunit.client.0.vm05.stdout:3/184: rename d3/df/d1e/d24/c39 to d3/df/d1e/d2f/c40 0 2026-03-09T15:01:22.307 INFO:tasks.workunit.client.0.vm05.stdout:8/222: dread d0/d2a/d2d/f3e [0,4194304] 0 2026-03-09T15:01:22.307 INFO:tasks.workunit.client.0.vm05.stdout:8/223: fsync d0/d24/f30 0 2026-03-09T15:01:22.308 INFO:tasks.workunit.client.0.vm05.stdout:8/224: chown d0/dc/l1d 0 1 2026-03-09T15:01:22.313 INFO:tasks.workunit.client.0.vm05.stdout:9/178: truncate d2/d1a/f1c 2027283 0 2026-03-09T15:01:22.314 INFO:tasks.workunit.client.0.vm05.stdout:1/150: link d9/d17/l27 d9/d17/l38 0 
2026-03-09T15:01:22.316 INFO:tasks.workunit.client.0.vm05.stdout:9/179: dread d2/f11 [8388608,4194304] 0 2026-03-09T15:01:22.318 INFO:tasks.workunit.client.0.vm05.stdout:4/150: mknod d2/d4/c2c 0 2026-03-09T15:01:22.321 INFO:tasks.workunit.client.0.vm05.stdout:3/185: write d3/f17 [2803614,18447] 0 2026-03-09T15:01:22.323 INFO:tasks.workunit.client.0.vm05.stdout:0/132: mkdir d9/de/d12/d15/d2e 0 2026-03-09T15:01:22.325 INFO:tasks.workunit.client.0.vm05.stdout:1/151: creat d9/d2a/f39 x:0 0 0 2026-03-09T15:01:22.327 INFO:tasks.workunit.client.0.vm05.stdout:3/186: unlink d3/df/d10/f1c 0 2026-03-09T15:01:22.328 INFO:tasks.workunit.client.0.vm05.stdout:2/223: getdents da/d13/d2f 0 2026-03-09T15:01:22.328 INFO:tasks.workunit.client.0.vm05.stdout:9/180: sync 2026-03-09T15:01:22.329 INFO:tasks.workunit.client.0.vm05.stdout:9/181: readlink d2/d10/d15/l25 0 2026-03-09T15:01:22.332 INFO:tasks.workunit.client.0.vm05.stdout:1/152: unlink d9/d1a/f30 0 2026-03-09T15:01:22.333 INFO:tasks.workunit.client.0.vm05.stdout:3/187: creat d3/d29/f41 x:0 0 0 2026-03-09T15:01:22.333 INFO:tasks.workunit.client.0.vm05.stdout:1/153: fdatasync d9/f21 0 2026-03-09T15:01:22.334 INFO:tasks.workunit.client.0.vm05.stdout:2/224: symlink da/d13/d2f/d35/l3e 0 2026-03-09T15:01:22.336 INFO:tasks.workunit.client.0.vm05.stdout:9/182: rmdir d2/d10/d15 39 2026-03-09T15:01:22.341 INFO:tasks.workunit.client.0.vm05.stdout:0/133: creat d9/de/d12/d2c/f2f x:0 0 0 2026-03-09T15:01:22.342 INFO:tasks.workunit.client.0.vm05.stdout:0/134: write d9/de/d12/d2c/f2f [368961,71284] 0 2026-03-09T15:01:22.346 INFO:tasks.workunit.client.0.vm05.stdout:4/151: creat d2/d4/d7/f2d x:0 0 0 2026-03-09T15:01:22.347 INFO:tasks.workunit.client.0.vm05.stdout:4/152: write d2/d4/f15 [8867597,122735] 0 2026-03-09T15:01:22.348 INFO:tasks.workunit.client.0.vm05.stdout:6/178: write da/fe [3976922,116758] 0 2026-03-09T15:01:22.353 INFO:tasks.workunit.client.0.vm05.stdout:3/188: dread d3/df/d10/d19/f26 [0,4194304] 0 2026-03-09T15:01:22.355 
INFO:tasks.workunit.client.0.vm05.stdout:1/154: unlink d9/d17/c1b 0 2026-03-09T15:01:22.355 INFO:tasks.workunit.client.0.vm05.stdout:1/155: read d9/f12 [893440,3185] 0 2026-03-09T15:01:22.359 INFO:tasks.workunit.client.0.vm05.stdout:9/183: symlink d2/d1a/l3d 0 2026-03-09T15:01:22.364 INFO:tasks.workunit.client.0.vm05.stdout:6/179: creat da/d17/f3c x:0 0 0 2026-03-09T15:01:22.367 INFO:tasks.workunit.client.0.vm05.stdout:6/180: dwrite da/d17/f30 [0,4194304] 0 2026-03-09T15:01:22.367 INFO:tasks.workunit.client.0.vm05.stdout:6/181: fdatasync da/f16 0 2026-03-09T15:01:22.376 INFO:tasks.workunit.client.0.vm05.stdout:5/205: dwrite d1/ff [4194304,4194304] 0 2026-03-09T15:01:22.385 INFO:tasks.workunit.client.0.vm05.stdout:1/156: creat d9/d2f/f3a x:0 0 0 2026-03-09T15:01:22.391 INFO:tasks.workunit.client.0.vm05.stdout:0/135: mknod d9/c30 0 2026-03-09T15:01:22.399 INFO:tasks.workunit.client.0.vm05.stdout:5/206: dwrite d1/f6 [0,4194304] 0 2026-03-09T15:01:22.400 INFO:tasks.workunit.client.0.vm05.stdout:1/157: rename d9/d17/c35 to d9/d1a/c3b 0 2026-03-09T15:01:22.406 INFO:tasks.workunit.client.0.vm05.stdout:5/207: dwrite d1/f6 [0,4194304] 0 2026-03-09T15:01:22.424 INFO:tasks.workunit.client.0.vm05.stdout:6/182: link da/d19/f22 da/f3d 0 2026-03-09T15:01:22.428 INFO:tasks.workunit.client.0.vm05.stdout:3/189: creat d3/f42 x:0 0 0 2026-03-09T15:01:22.433 INFO:tasks.workunit.client.0.vm05.stdout:3/190: mknod d3/df/d10/c43 0 2026-03-09T15:01:22.438 INFO:tasks.workunit.client.0.vm05.stdout:6/183: symlink da/d17/d3b/l3e 0 2026-03-09T15:01:22.439 INFO:tasks.workunit.client.0.vm05.stdout:6/184: chown da/d17/f20 8 1 2026-03-09T15:01:22.440 INFO:tasks.workunit.client.0.vm05.stdout:6/185: dread da/d17/f2a [0,4194304] 0 2026-03-09T15:01:22.441 INFO:tasks.workunit.client.0.vm05.stdout:5/208: sync 2026-03-09T15:01:22.442 INFO:tasks.workunit.client.0.vm05.stdout:5/209: chown d1/d4/d34/c41 246 1 2026-03-09T15:01:22.443 INFO:tasks.workunit.client.0.vm05.stdout:3/191: mkdir d3/df/d10/d19/d44 0 
2026-03-09T15:01:22.446 INFO:tasks.workunit.client.0.vm05.stdout:5/210: chown d1/da/l3e 5868674 1 2026-03-09T15:01:22.447 INFO:tasks.workunit.client.0.vm05.stdout:3/192: unlink d3/df/d1e/d2c/f32 0 2026-03-09T15:01:22.452 INFO:tasks.workunit.client.0.vm05.stdout:5/211: creat d1/da/f4a x:0 0 0 2026-03-09T15:01:22.452 INFO:tasks.workunit.client.0.vm05.stdout:5/212: dread - d1/d4/d34/d35/f46 zero size 2026-03-09T15:01:22.455 INFO:tasks.workunit.client.0.vm05.stdout:7/194: write d1/f16 [22124,3375] 0 2026-03-09T15:01:22.459 INFO:tasks.workunit.client.0.vm05.stdout:3/193: mknod d3/df/d1e/d24/c45 0 2026-03-09T15:01:22.459 INFO:tasks.workunit.client.0.vm05.stdout:3/194: dread - d3/df/d10/f3f zero size 2026-03-09T15:01:22.464 INFO:tasks.workunit.client.0.vm05.stdout:5/213: creat d1/d4/d34/d35/d3d/d38/f4b x:0 0 0 2026-03-09T15:01:22.467 INFO:tasks.workunit.client.0.vm05.stdout:7/195: creat d1/d9/d23/d31/d32/f3a x:0 0 0 2026-03-09T15:01:22.470 INFO:tasks.workunit.client.0.vm05.stdout:5/214: sync 2026-03-09T15:01:22.472 INFO:tasks.workunit.client.0.vm05.stdout:3/195: mknod d3/df/c46 0 2026-03-09T15:01:22.475 INFO:tasks.workunit.client.0.vm05.stdout:5/215: chown d1/c33 1 1 2026-03-09T15:01:22.476 INFO:tasks.workunit.client.0.vm05.stdout:5/216: chown d1/da/l1c 0 1 2026-03-09T15:01:22.483 INFO:tasks.workunit.client.0.vm05.stdout:7/196: getdents d1/d9/d23 0 2026-03-09T15:01:22.485 INFO:tasks.workunit.client.0.vm05.stdout:7/197: dread d1/f15 [0,4194304] 0 2026-03-09T15:01:22.487 INFO:tasks.workunit.client.0.vm05.stdout:3/196: symlink d3/df/d10/d19/d44/l47 0 2026-03-09T15:01:22.489 INFO:tasks.workunit.client.0.vm05.stdout:5/217: unlink d1/c15 0 2026-03-09T15:01:22.490 INFO:tasks.workunit.client.0.vm05.stdout:5/218: fsync d1/d4/d34/d35/f47 0 2026-03-09T15:01:22.497 INFO:tasks.workunit.client.0.vm05.stdout:8/225: write d0/d1/d12/d1b/f27 [1007912,114101] 0 2026-03-09T15:01:22.501 INFO:tasks.workunit.client.0.vm05.stdout:8/226: creat d0/d1/f49 x:0 0 0 2026-03-09T15:01:22.502 
INFO:tasks.workunit.client.0.vm05.stdout:7/198: truncate d1/d9/f10 4019166 0 2026-03-09T15:01:22.504 INFO:tasks.workunit.client.0.vm05.stdout:5/219: rmdir d1/d4/d34/d35/d3d/d38/d42 0 2026-03-09T15:01:22.506 INFO:tasks.workunit.client.0.vm05.stdout:8/227: creat d0/dc/f4a x:0 0 0 2026-03-09T15:01:22.506 INFO:tasks.workunit.client.0.vm05.stdout:8/228: write d0/d24/f30 [835792,123837] 0 2026-03-09T15:01:22.508 INFO:tasks.workunit.client.0.vm05.stdout:7/199: creat d1/d25/f3b x:0 0 0 2026-03-09T15:01:22.510 INFO:tasks.workunit.client.0.vm05.stdout:2/225: write da/dd/f12 [1436471,62688] 0 2026-03-09T15:01:22.513 INFO:tasks.workunit.client.0.vm05.stdout:8/229: mkdir d0/d2a/d2d/d4b 0 2026-03-09T15:01:22.515 INFO:tasks.workunit.client.0.vm05.stdout:7/200: mkdir d1/d22/d3c 0 2026-03-09T15:01:22.519 INFO:tasks.workunit.client.0.vm05.stdout:7/201: dwrite d1/d25/f39 [0,4194304] 0 2026-03-09T15:01:22.525 INFO:tasks.workunit.client.0.vm05.stdout:2/226: mkdir da/d29/d3f 0 2026-03-09T15:01:22.527 INFO:tasks.workunit.client.0.vm05.stdout:5/220: truncate d1/d4/f43 1727475 0 2026-03-09T15:01:22.538 INFO:tasks.workunit.client.0.vm05.stdout:9/184: dwrite d2/d10/f39 [0,4194304] 0 2026-03-09T15:01:22.546 INFO:tasks.workunit.client.0.vm05.stdout:9/185: truncate d2/f17 364931 0 2026-03-09T15:01:22.548 INFO:tasks.workunit.client.0.vm05.stdout:7/202: creat d1/d22/d3c/f3d x:0 0 0 2026-03-09T15:01:22.549 INFO:tasks.workunit.client.0.vm05.stdout:2/227: mknod da/d13/c40 0 2026-03-09T15:01:22.551 INFO:tasks.workunit.client.0.vm05.stdout:2/228: chown f4 2076253205 1 2026-03-09T15:01:22.552 INFO:tasks.workunit.client.0.vm05.stdout:8/230: dread d0/d24/f30 [0,4194304] 0 2026-03-09T15:01:22.556 INFO:tasks.workunit.client.0.vm05.stdout:9/186: creat d2/d1a/f3e x:0 0 0 2026-03-09T15:01:22.556 INFO:tasks.workunit.client.0.vm05.stdout:2/229: dwrite da/d16/f1f [0,4194304] 0 2026-03-09T15:01:22.558 INFO:tasks.workunit.client.0.vm05.stdout:8/231: dwrite d0/d1/d12/d1b/f27 [0,4194304] 0 2026-03-09T15:01:22.564 
INFO:tasks.workunit.client.0.vm05.stdout:5/221: creat d1/f4c x:0 0 0 2026-03-09T15:01:22.565 INFO:tasks.workunit.client.0.vm05.stdout:2/230: unlink da/dd/f22 0 2026-03-09T15:01:22.568 INFO:tasks.workunit.client.0.vm05.stdout:9/187: dwrite d2/d10/d15/f18 [0,4194304] 0 2026-03-09T15:01:22.571 INFO:tasks.workunit.client.0.vm05.stdout:9/188: write d2/fe [4637455,63601] 0 2026-03-09T15:01:22.584 INFO:tasks.workunit.client.0.vm05.stdout:8/232: creat d0/d1/d12/d3c/f4c x:0 0 0 2026-03-09T15:01:22.588 INFO:tasks.workunit.client.0.vm05.stdout:8/233: dwrite d0/d7/f33 [0,4194304] 0 2026-03-09T15:01:22.590 INFO:tasks.workunit.client.0.vm05.stdout:8/234: write d0/d24/f2c [972855,129935] 0 2026-03-09T15:01:22.600 INFO:tasks.workunit.client.0.vm05.stdout:9/189: rename d2/d1a/l3d to d2/d10/d22/d2c/d3c/l3f 0 2026-03-09T15:01:22.600 INFO:tasks.workunit.client.0.vm05.stdout:5/222: unlink d1/l3f 0 2026-03-09T15:01:22.604 INFO:tasks.workunit.client.0.vm05.stdout:5/223: dread d1/da/fe [0,4194304] 0 2026-03-09T15:01:22.604 INFO:tasks.workunit.client.0.vm05.stdout:5/224: dread - d1/da/f4a zero size 2026-03-09T15:01:22.608 INFO:tasks.workunit.client.0.vm05.stdout:5/225: chown d1/d4/l2e 18241795 1 2026-03-09T15:01:22.610 INFO:tasks.workunit.client.0.vm05.stdout:5/226: creat d1/d4/d34/d35/f4d x:0 0 0 2026-03-09T15:01:22.680 INFO:tasks.workunit.client.0.vm05.stdout:4/153: truncate d2/d4/d8/f13 3633129 0 2026-03-09T15:01:22.681 INFO:tasks.workunit.client.0.vm05.stdout:1/158: getdents d9/d1a 0 2026-03-09T15:01:22.684 INFO:tasks.workunit.client.0.vm05.stdout:0/136: dwrite d9/de/f21 [0,4194304] 0 2026-03-09T15:01:22.687 INFO:tasks.workunit.client.0.vm05.stdout:2/231: fdatasync da/d16/f20 0 2026-03-09T15:01:22.699 INFO:tasks.workunit.client.0.vm05.stdout:6/186: getdents da/d17/d3b 0 2026-03-09T15:01:22.700 INFO:tasks.workunit.client.0.vm05.stdout:3/197: getdents d3/df 0 2026-03-09T15:01:22.701 INFO:tasks.workunit.client.0.vm05.stdout:1/159: symlink d9/d2a/d31/l3c 0 2026-03-09T15:01:22.701 
INFO:tasks.workunit.client.0.vm05.stdout:5/227: rmdir d1/da 39 2026-03-09T15:01:22.702 INFO:tasks.workunit.client.0.vm05.stdout:5/228: readlink d1/d4/d19/l45 0 2026-03-09T15:01:22.703 INFO:tasks.workunit.client.0.vm05.stdout:5/229: readlink d1/d4/d27/l3b 0 2026-03-09T15:01:22.704 INFO:tasks.workunit.client.0.vm05.stdout:5/230: write d1/d4/d34/d35/f47 [467638,36804] 0 2026-03-09T15:01:22.708 INFO:tasks.workunit.client.0.vm05.stdout:3/198: dwrite d3/f7 [0,4194304] 0 2026-03-09T15:01:22.719 INFO:tasks.workunit.client.0.vm05.stdout:0/137: dread d9/de/d12/f23 [0,4194304] 0 2026-03-09T15:01:22.723 INFO:tasks.workunit.client.0.vm05.stdout:1/160: fdatasync d9/d1a/f1d 0 2026-03-09T15:01:22.724 INFO:tasks.workunit.client.0.vm05.stdout:1/161: chown d9/l11 446572286 1 2026-03-09T15:01:22.725 INFO:tasks.workunit.client.0.vm05.stdout:1/162: write d9/d17/f26 [2830320,120962] 0 2026-03-09T15:01:22.727 INFO:tasks.workunit.client.0.vm05.stdout:3/199: creat d3/df/d10/d34/f48 x:0 0 0 2026-03-09T15:01:22.729 INFO:tasks.workunit.client.0.vm05.stdout:6/187: truncate da/f3d 1331237 0 2026-03-09T15:01:22.731 INFO:tasks.workunit.client.0.vm05.stdout:3/200: mknod d3/df/d1e/d2f/c49 0 2026-03-09T15:01:22.732 INFO:tasks.workunit.client.0.vm05.stdout:3/201: write d3/df/f1b [4393789,79796] 0 2026-03-09T15:01:22.734 INFO:tasks.workunit.client.0.vm05.stdout:4/154: link d2/d4/c19 d2/d4/c2e 0 2026-03-09T15:01:22.734 INFO:tasks.workunit.client.0.vm05.stdout:4/155: fsync d2/d4/d7/f2d 0 2026-03-09T15:01:22.736 INFO:tasks.workunit.client.0.vm05.stdout:0/138: link l8 d9/de/l31 0 2026-03-09T15:01:22.737 INFO:tasks.workunit.client.0.vm05.stdout:0/139: read d9/fd [463369,13486] 0 2026-03-09T15:01:22.738 INFO:tasks.workunit.client.0.vm05.stdout:3/202: dread d3/df/f11 [0,4194304] 0 2026-03-09T15:01:22.742 INFO:tasks.workunit.client.0.vm05.stdout:4/156: symlink d2/d4/d7/d21/l2f 0 2026-03-09T15:01:22.742 INFO:tasks.workunit.client.0.vm05.stdout:4/157: fdatasync d2/d4/d7/f9 0 2026-03-09T15:01:22.831 
INFO:tasks.workunit.client.0.vm05.stdout:2/232: sync 2026-03-09T15:01:22.831 INFO:tasks.workunit.client.0.vm05.stdout:2/233: chown da/f10 91864 1 2026-03-09T15:01:22.835 INFO:tasks.workunit.client.0.vm05.stdout:9/190: dread d2/f17 [0,4194304] 0 2026-03-09T15:01:22.837 INFO:tasks.workunit.client.0.vm05.stdout:7/203: truncate d1/d9/fc 4811540 0 2026-03-09T15:01:22.839 INFO:tasks.workunit.client.0.vm05.stdout:9/191: creat d2/d1a/f40 x:0 0 0 2026-03-09T15:01:22.839 INFO:tasks.workunit.client.0.vm05.stdout:9/192: fdatasync d2/d1a/d1b/d23/d37/f36 0 2026-03-09T15:01:22.841 INFO:tasks.workunit.client.0.vm05.stdout:7/204: mknod d1/d22/d3c/c3e 0 2026-03-09T15:01:22.856 INFO:tasks.workunit.client.0.vm05.stdout:8/235: dwrite d0/d7/f20 [0,4194304] 0 2026-03-09T15:01:22.869 INFO:tasks.workunit.client.0.vm05.stdout:8/236: creat d0/d2a/d2d/f4d x:0 0 0 2026-03-09T15:01:22.870 INFO:tasks.workunit.client.0.vm05.stdout:8/237: creat d0/d2a/d2d/d42/f4e x:0 0 0 2026-03-09T15:01:22.870 INFO:tasks.workunit.client.0.vm05.stdout:8/238: creat d0/d1/d12/f4f x:0 0 0 2026-03-09T15:01:22.870 INFO:tasks.workunit.client.0.vm05.stdout:8/239: mknod d0/d7/c50 0 2026-03-09T15:01:22.870 INFO:tasks.workunit.client.0.vm05.stdout:8/240: write d0/d7/f33 [2929818,17314] 0 2026-03-09T15:01:22.870 INFO:tasks.workunit.client.0.vm05.stdout:8/241: dread - d0/d2a/d2d/f4d zero size 2026-03-09T15:01:22.870 INFO:tasks.workunit.client.0.vm05.stdout:8/242: creat d0/d1/d12/d3c/f51 x:0 0 0 2026-03-09T15:01:22.870 INFO:tasks.workunit.client.0.vm05.stdout:8/243: stat d0/d2a 0 2026-03-09T15:01:22.870 INFO:tasks.workunit.client.0.vm05.stdout:8/244: read d0/d2a/d2d/f41 [370585,38788] 0 2026-03-09T15:01:22.871 INFO:tasks.workunit.client.0.vm05.stdout:8/245: symlink d0/d1/d12/d1b/l52 0 2026-03-09T15:01:22.872 INFO:tasks.workunit.client.0.vm05.stdout:8/246: mkdir d0/d7/d53 0 2026-03-09T15:01:22.873 INFO:tasks.workunit.client.0.vm05.stdout:8/247: mkdir d0/d2a/d2d/d54 0 2026-03-09T15:01:22.874 
INFO:tasks.workunit.client.0.vm05.stdout:8/248: mkdir d0/d1/d55 0 2026-03-09T15:01:22.874 INFO:tasks.workunit.client.0.vm05.stdout:8/249: chown d0 3111672 1 2026-03-09T15:01:22.876 INFO:tasks.workunit.client.0.vm05.stdout:8/250: creat d0/d7/d53/f56 x:0 0 0 2026-03-09T15:01:22.878 INFO:tasks.workunit.client.0.vm05.stdout:8/251: creat d0/d1/d12/f57 x:0 0 0 2026-03-09T15:01:22.879 INFO:tasks.workunit.client.0.vm05.stdout:8/252: symlink d0/d2a/d2d/l58 0 2026-03-09T15:01:22.880 INFO:tasks.workunit.client.0.vm05.stdout:8/253: write d0/d2a/d2d/d42/f4e [369313,26018] 0 2026-03-09T15:01:22.882 INFO:tasks.workunit.client.0.vm05.stdout:8/254: mknod d0/d1/d55/c59 0 2026-03-09T15:01:22.885 INFO:tasks.workunit.client.0.vm05.stdout:8/255: creat d0/d1/d12/f5a x:0 0 0 2026-03-09T15:01:22.887 INFO:tasks.workunit.client.0.vm05.stdout:8/256: creat d0/d2a/d2d/d54/f5b x:0 0 0 2026-03-09T15:01:22.918 INFO:tasks.workunit.client.0.vm05.stdout:1/163: rename d9/d2a/d31 to d9/d3d 0 2026-03-09T15:01:22.921 INFO:tasks.workunit.client.0.vm05.stdout:9/193: sync 2026-03-09T15:01:22.924 INFO:tasks.workunit.client.0.vm05.stdout:1/164: dwrite d9/f23 [0,4194304] 0 2026-03-09T15:01:22.927 INFO:tasks.workunit.client.0.vm05.stdout:9/194: dread d2/d10/f39 [0,4194304] 0 2026-03-09T15:01:22.933 INFO:tasks.workunit.client.0.vm05.stdout:8/257: fsync d0/d7/f33 0 2026-03-09T15:01:22.933 INFO:tasks.workunit.client.0.vm05.stdout:8/258: fsync d0/d24/f2c 0 2026-03-09T15:01:22.934 INFO:tasks.workunit.client.0.vm05.stdout:1/165: dwrite d9/d24/f25 [0,4194304] 0 2026-03-09T15:01:22.946 INFO:tasks.workunit.client.0.vm05.stdout:5/231: dwrite d1/d4/d34/d35/d3d/f32 [0,4194304] 0 2026-03-09T15:01:22.951 INFO:tasks.workunit.client.0.vm05.stdout:9/195: stat d2/d1a/f1c 0 2026-03-09T15:01:22.954 INFO:tasks.workunit.client.0.vm05.stdout:4/158: rename d2/d4/d8/la to d2/d4/d7/l30 0 2026-03-09T15:01:22.956 INFO:tasks.workunit.client.0.vm05.stdout:8/259: fdatasync d0/d2a/d2d/f41 0 2026-03-09T15:01:22.964 
INFO:tasks.workunit.client.0.vm05.stdout:6/188: truncate da/d17/f2c 5930102 0 2026-03-09T15:01:22.965 INFO:tasks.workunit.client.0.vm05.stdout:6/189: chown da/f1f 121345941 1 2026-03-09T15:01:22.966 INFO:tasks.workunit.client.0.vm05.stdout:5/232: mkdir d1/d4/d34/d35/d4e 0 2026-03-09T15:01:22.967 INFO:tasks.workunit.client.0.vm05.stdout:0/140: write d9/de/d12/d15/f1c [2092869,54372] 0 2026-03-09T15:01:22.968 INFO:tasks.workunit.client.0.vm05.stdout:9/196: creat d2/d10/d22/d2c/d3c/f41 x:0 0 0 2026-03-09T15:01:22.969 INFO:tasks.workunit.client.0.vm05.stdout:9/197: dread - d2/d10/d22/d2c/d3c/f41 zero size 2026-03-09T15:01:22.973 INFO:tasks.workunit.client.0.vm05.stdout:9/198: dwrite d2/d1a/f40 [0,4194304] 0 2026-03-09T15:01:22.978 INFO:tasks.workunit.client.0.vm05.stdout:9/199: dread d2/d10/d15/f18 [0,4194304] 0 2026-03-09T15:01:22.987 INFO:tasks.workunit.client.0.vm05.stdout:2/234: rename da/dd/f12 to da/d13/d30/f41 0 2026-03-09T15:01:22.987 INFO:tasks.workunit.client.0.vm05.stdout:8/260: symlink d0/d2a/d2d/d42/l5c 0 2026-03-09T15:01:22.987 INFO:tasks.workunit.client.0.vm05.stdout:3/203: write d3/df/d10/d19/f26 [294168,63335] 0 2026-03-09T15:01:22.988 INFO:tasks.workunit.client.0.vm05.stdout:3/204: chown d3/la 6517503 1 2026-03-09T15:01:22.989 INFO:tasks.workunit.client.0.vm05.stdout:5/233: creat d1/d4/d27/f4f x:0 0 0 2026-03-09T15:01:22.993 INFO:tasks.workunit.client.0.vm05.stdout:0/141: dread d9/fd [0,4194304] 0 2026-03-09T15:01:22.994 INFO:tasks.workunit.client.0.vm05.stdout:0/142: chown c5 48 1 2026-03-09T15:01:22.994 INFO:tasks.workunit.client.0.vm05.stdout:0/143: fdatasync d9/f22 0 2026-03-09T15:01:22.994 INFO:tasks.workunit.client.0.vm05.stdout:0/144: chown c5 144451 1 2026-03-09T15:01:22.995 INFO:tasks.workunit.client.0.vm05.stdout:0/145: write d9/f2b [87791,18686] 0 2026-03-09T15:01:22.996 INFO:tasks.workunit.client.0.vm05.stdout:9/200: chown d2/d10/d22/d2c/d3c/l3f 997 1 2026-03-09T15:01:22.997 INFO:tasks.workunit.client.0.vm05.stdout:9/201: chown 
d2/d10/d22/c29 492 1 2026-03-09T15:01:23.003 INFO:tasks.workunit.client.0.vm05.stdout:3/205: creat d3/df/f4a x:0 0 0 2026-03-09T15:01:23.004 INFO:tasks.workunit.client.0.vm05.stdout:9/202: dwrite d2/d10/f28 [0,4194304] 0 2026-03-09T15:01:23.006 INFO:tasks.workunit.client.0.vm05.stdout:5/234: write d1/f1d [681817,104801] 0 2026-03-09T15:01:23.025 INFO:tasks.workunit.client.0.vm05.stdout:5/235: creat d1/d4/f50 x:0 0 0 2026-03-09T15:01:23.026 INFO:tasks.workunit.client.0.vm05.stdout:5/236: write d1/d4/d27/f3a [195081,3367] 0 2026-03-09T15:01:23.027 INFO:tasks.workunit.client.0.vm05.stdout:1/166: rename d9/d17/l38 to d9/d2a/l3e 0 2026-03-09T15:01:23.028 INFO:tasks.workunit.client.0.vm05.stdout:3/206: unlink d3/d29/l37 0 2026-03-09T15:01:23.029 INFO:tasks.workunit.client.0.vm05.stdout:5/237: symlink d1/d4/d34/l51 0 2026-03-09T15:01:23.030 INFO:tasks.workunit.client.0.vm05.stdout:2/235: rename da/d16/f18 to da/d13/d2f/f42 0 2026-03-09T15:01:23.049 INFO:tasks.workunit.client.0.vm05.stdout:1/167: dwrite d9/d1a/f2c [0,4194304] 0 2026-03-09T15:01:23.049 INFO:tasks.workunit.client.0.vm05.stdout:1/168: write d9/d24/f25 [1236137,108078] 0 2026-03-09T15:01:23.049 INFO:tasks.workunit.client.0.vm05.stdout:1/169: chown d9/d2f/d37 491 1 2026-03-09T15:01:23.049 INFO:tasks.workunit.client.0.vm05.stdout:1/170: stat d9/d3d/l3c 0 2026-03-09T15:01:23.049 INFO:tasks.workunit.client.0.vm05.stdout:5/238: fdatasync d1/d4/d34/d35/f44 0 2026-03-09T15:01:23.049 INFO:tasks.workunit.client.0.vm05.stdout:9/203: rename d2/d1a/f1c to d2/f42 0 2026-03-09T15:01:23.049 INFO:tasks.workunit.client.0.vm05.stdout:2/236: creat da/d29/d3f/f43 x:0 0 0 2026-03-09T15:01:23.049 INFO:tasks.workunit.client.0.vm05.stdout:9/204: creat d2/d1a/d1b/f43 x:0 0 0 2026-03-09T15:01:23.049 INFO:tasks.workunit.client.0.vm05.stdout:5/239: rename d1/d4/f50 to d1/d4/d34/d35/f52 0 2026-03-09T15:01:23.050 INFO:tasks.workunit.client.0.vm05.stdout:5/240: write d1/d4/d19/f29 [805885,113954] 0 2026-03-09T15:01:23.050 
INFO:tasks.workunit.client.0.vm05.stdout:2/237: write da/f3c [663365,71975] 0 2026-03-09T15:01:23.056 INFO:tasks.workunit.client.0.vm05.stdout:9/205: creat d2/d10/d22/d2c/f44 x:0 0 0 2026-03-09T15:01:23.058 INFO:tasks.workunit.client.0.vm05.stdout:9/206: mkdir d2/d1a/d1b/d23/d37/d45 0 2026-03-09T15:01:23.148 INFO:tasks.workunit.client.0.vm05.stdout:0/146: dread d9/de/d12/d15/f1c [0,4194304] 0 2026-03-09T15:01:23.148 INFO:tasks.workunit.client.0.vm05.stdout:0/147: fsync d9/de/d12/f23 0 2026-03-09T15:01:23.149 INFO:tasks.workunit.client.0.vm05.stdout:0/148: stat d9/c1b 0 2026-03-09T15:01:23.203 INFO:tasks.workunit.client.0.vm05.stdout:4/159: write d2/d4/f17 [3522334,124844] 0 2026-03-09T15:01:23.205 INFO:tasks.workunit.client.0.vm05.stdout:4/160: rename d2/d4/c28 to d2/d4/d8/c31 0 2026-03-09T15:01:23.206 INFO:tasks.workunit.client.0.vm05.stdout:4/161: stat d2/d4/d7/d21/l2f 0 2026-03-09T15:01:23.208 INFO:tasks.workunit.client.0.vm05.stdout:4/162: symlink d2/d4/d7/d21/l32 0 2026-03-09T15:01:23.210 INFO:tasks.workunit.client.0.vm05.stdout:4/163: creat d2/f33 x:0 0 0 2026-03-09T15:01:23.213 INFO:tasks.workunit.client.0.vm05.stdout:7/205: read d1/d9/fc [4614292,71822] 0 2026-03-09T15:01:23.214 INFO:tasks.workunit.client.0.vm05.stdout:4/164: dwrite d2/f33 [0,4194304] 0 2026-03-09T15:01:23.220 INFO:tasks.workunit.client.0.vm05.stdout:4/165: dread d2/d4/d7/dc/f18 [0,4194304] 0 2026-03-09T15:01:23.226 INFO:tasks.workunit.client.0.vm05.stdout:7/206: dwrite d1/d12/f1f [0,4194304] 0 2026-03-09T15:01:23.228 INFO:tasks.workunit.client.0.vm05.stdout:4/166: unlink d2/c6 0 2026-03-09T15:01:23.229 INFO:tasks.workunit.client.0.vm05.stdout:7/207: write d1/d9/d23/d31/d32/f38 [293701,3826] 0 2026-03-09T15:01:23.230 INFO:tasks.workunit.client.0.vm05.stdout:7/208: stat d1/f15 0 2026-03-09T15:01:23.236 INFO:tasks.workunit.client.0.vm05.stdout:7/209: rename d1/d25 to d1/d9/d3f 0 2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:7/210: symlink d1/d9/d23/d31/l40 0 
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:7/211: read - d1/f28 zero size
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:3/207: rmdir d3 39
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:3/208: write d3/f18 [709480,98430] 0
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:7/212: rename d1/d9/d23/f2a to d1/d9/d3f/f41 0
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:7/213: read d1/f16 [669558,39656] 0
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:7/214: truncate d1/f16 1567286 0
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:3/209: write d3/df/d10/f3f [599927,20649] 0
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:3/210: read d3/df/d10/d19/f25 [731637,117994] 0
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:5/241: getdents d1/d4 0
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:5/242: write d1/d4/d34/d35/f36 [906112,75124] 0
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:7/215: rmdir d1/d9/d23 39
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:1/171: truncate d9/f23 2686726 0
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:5/243: write d1/f1d [2212099,108897] 0
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:2/238: truncate da/f2c 392718 0
2026-03-09T15:01:23.260 INFO:tasks.workunit.client.0.vm05.stdout:9/207: truncate d2/f8 737506 0
2026-03-09T15:01:23.262 INFO:tasks.workunit.client.0.vm05.stdout:5/244: mkdir d1/d4/d34/d35/d53 0
2026-03-09T15:01:23.266 INFO:tasks.workunit.client.0.vm05.stdout:9/208: readlink d2/d1a/d1b/l2b 0
2026-03-09T15:01:23.267 INFO:tasks.workunit.client.0.vm05.stdout:7/216: mknod d1/c42 0
2026-03-09T15:01:23.268 INFO:tasks.workunit.client.0.vm05.stdout:5/245: mknod d1/d4/c54 0
2026-03-09T15:01:23.278 INFO:tasks.workunit.client.0.vm05.stdout:3/211: creat d3/df/f4b x:0 0 0
2026-03-09T15:01:23.278 INFO:tasks.workunit.client.0.vm05.stdout:9/209: creat d2/f46 x:0 0 0
2026-03-09T15:01:23.316 INFO:tasks.workunit.client.0.vm05.stdout:9/210: chown d2/d10/d15/l25 50884 1
2026-03-09T15:01:23.316 INFO:tasks.workunit.client.0.vm05.stdout:5/246: creat d1/d4/f55 x:0 0 0
2026-03-09T15:01:23.316 INFO:tasks.workunit.client.0.vm05.stdout:1/172: truncate d9/f21 1390614 0
2026-03-09T15:01:23.316 INFO:tasks.workunit.client.0.vm05.stdout:7/217: mknod d1/d9/d23/c43 0
2026-03-09T15:01:23.316 INFO:tasks.workunit.client.0.vm05.stdout:7/218: chown d1/d9/d23/d31/l40 288659736 1
2026-03-09T15:01:23.316 INFO:tasks.workunit.client.0.vm05.stdout:9/211: rmdir d2/d1a/d1b/d23/d37/d45 0
2026-03-09T15:01:23.316 INFO:tasks.workunit.client.0.vm05.stdout:9/212: chown d2/f1f 986 1
2026-03-09T15:01:23.316 INFO:tasks.workunit.client.0.vm05.stdout:9/213: mkdir d2/d10/d22/d47 0
2026-03-09T15:01:23.316 INFO:tasks.workunit.client.0.vm05.stdout:9/214: creat d2/d10/f48 x:0 0 0
2026-03-09T15:01:23.316 INFO:tasks.workunit.client.0.vm05.stdout:9/215: getdents d2/d1a 0
2026-03-09T15:01:23.317 INFO:tasks.workunit.client.0.vm05.stdout:4/167: read d2/d4/f17 [841492,110729] 0
2026-03-09T15:01:23.318 INFO:tasks.workunit.client.0.vm05.stdout:4/168: creat d2/d4/d7/d21/f34 x:0 0 0
2026-03-09T15:01:23.321 INFO:tasks.workunit.client.0.vm05.stdout:4/169: link d2/d4/d7/l22 d2/d4/d7/l35 0
2026-03-09T15:01:23.392 INFO:tasks.workunit.client.0.vm05.stdout:8/261: sync
2026-03-09T15:01:23.392 INFO:tasks.workunit.client.0.vm05.stdout:8/262: stat d0/d7/f8 0
2026-03-09T15:01:23.393 INFO:tasks.workunit.client.0.vm05.stdout:8/263: write d0/d7/f20 [4379493,102536] 0
2026-03-09T15:01:23.394 INFO:tasks.workunit.client.0.vm05.stdout:8/264: fsync d0/dc/f4a 0
2026-03-09T15:01:23.397 INFO:tasks.workunit.client.0.vm05.stdout:8/265: mknod d0/d1/c5d 0
2026-03-09T15:01:23.397 INFO:tasks.workunit.client.0.vm05.stdout:6/190: link da/fe da/d17/d3b/f3f 0
2026-03-09T15:01:23.398 INFO:tasks.workunit.client.0.vm05.stdout:6/191: write da/d17/f20 [163654,22247] 0
2026-03-09T15:01:23.401 INFO:tasks.workunit.client.0.vm05.stdout:8/266: rmdir d0/d7/d53 39
2026-03-09T15:01:23.401 INFO:tasks.workunit.client.0.vm05.stdout:8/267: chown d0/fa 513364988 1
2026-03-09T15:01:23.402 INFO:tasks.workunit.client.0.vm05.stdout:8/268: write d0/d1/d12/d1b/f27 [3199054,46679] 0
2026-03-09T15:01:23.409 INFO:tasks.workunit.client.0.vm05.stdout:4/170: dwrite d2/f33 [4194304,4194304] 0
2026-03-09T15:01:23.409 INFO:tasks.workunit.client.0.vm05.stdout:4/171: write d2/d4/d7/f2d [519566,75642] 0
2026-03-09T15:01:23.412 INFO:tasks.workunit.client.0.vm05.stdout:4/172: chown d2/d4/c2e 318977331 1
2026-03-09T15:01:23.419 INFO:tasks.workunit.client.0.vm05.stdout:8/269: creat d0/d7/f5e x:0 0 0
2026-03-09T15:01:23.420 INFO:tasks.workunit.client.0.vm05.stdout:4/173: creat d2/d1d/f36 x:0 0 0
2026-03-09T15:01:23.421 INFO:tasks.workunit.client.0.vm05.stdout:8/270: creat d0/d2a/d2d/f5f x:0 0 0
2026-03-09T15:01:23.422 INFO:tasks.workunit.client.0.vm05.stdout:8/271: chown d0/d2a/d2d/d54/f5b 20 1
2026-03-09T15:01:23.423 INFO:tasks.workunit.client.0.vm05.stdout:4/174: mknod d2/d4/d7/d21/c37 0
2026-03-09T15:01:23.459 INFO:tasks.workunit.client.0.vm05.stdout:8/272: unlink d0/dc/l1d 0
2026-03-09T15:01:23.459 INFO:tasks.workunit.client.0.vm05.stdout:4/175: rmdir d2/d4/d7/dc 39
2026-03-09T15:01:23.459 INFO:tasks.workunit.client.0.vm05.stdout:8/273: mkdir d0/d2a/d2d/d42/d60 0
2026-03-09T15:01:23.459 INFO:tasks.workunit.client.0.vm05.stdout:8/274: rmdir d0/d1/d12/d1b 39
2026-03-09T15:01:23.474 INFO:tasks.workunit.client.0.vm05.stdout:7/219: getdents d1/d9/d3f 0
2026-03-09T15:01:23.487 INFO:tasks.workunit.client.0.vm05.stdout:7/220: dread d1/fa [0,4194304] 0
2026-03-09T15:01:23.493 INFO:tasks.workunit.client.0.vm05.stdout:7/221: truncate d1/d12/f18 3420287 0
2026-03-09T15:01:23.495 INFO:tasks.workunit.client.0.vm05.stdout:7/222: getdents d1/d9 0
2026-03-09T15:01:23.497 INFO:tasks.workunit.client.0.vm05.stdout:7/223: unlink d1/d22/d3c/f3d 0
2026-03-09T15:01:23.499 INFO:tasks.workunit.client.0.vm05.stdout:7/224: creat d1/d22/d3c/f44 x:0 0 0
2026-03-09T15:01:23.500 INFO:tasks.workunit.client.0.vm05.stdout:7/225: write d1/d12/f11 [103270,49111] 0
2026-03-09T15:01:23.502 INFO:tasks.workunit.client.0.vm05.stdout:7/226: chown d1/d9/d23/l2b 1491069021 1
2026-03-09T15:01:23.502 INFO:tasks.workunit.client.0.vm05.stdout:7/227: chown d1/d9/fc 107914 1
2026-03-09T15:01:23.503 INFO:tasks.workunit.client.0.vm05.stdout:7/228: dread - d1/d22/d3c/f44 zero size
2026-03-09T15:01:23.504 INFO:tasks.workunit.client.0.vm05.stdout:7/229: truncate d1/d9/d23/d31/f37 70361 0
2026-03-09T15:01:23.505 INFO:tasks.workunit.client.0.vm05.stdout:7/230: chown d1/d9/d23/d31/f33 91090 1
2026-03-09T15:01:23.505 INFO:tasks.workunit.client.0.vm05.stdout:7/231: fdatasync d1/d9/d3f/f3b 0
2026-03-09T15:01:23.510 INFO:tasks.workunit.client.0.vm05.stdout:7/232: dwrite d1/d9/d3f/f39 [0,4194304] 0
2026-03-09T15:01:23.520 INFO:tasks.workunit.client.0.vm05.stdout:7/233: dwrite d1/d9/d23/d31/d32/f3a [0,4194304] 0
2026-03-09T15:01:23.523 INFO:tasks.workunit.client.0.vm05.stdout:7/234: write d1/d12/f11 [513196,54215] 0
2026-03-09T15:01:23.524 INFO:tasks.workunit.client.0.vm05.stdout:7/235: truncate d1/f28 508291 0
2026-03-09T15:01:23.524 INFO:tasks.workunit.client.0.vm05.stdout:7/236: stat d1/d22/c26 0
2026-03-09T15:01:23.525 INFO:tasks.workunit.client.0.vm05.stdout:0/149: sync
2026-03-09T15:01:23.531 INFO:tasks.workunit.client.0.vm05.stdout:0/150: dwrite d9/f22 [0,4194304] 0
2026-03-09T15:01:23.549 INFO:tasks.workunit.client.0.vm05.stdout:1/173: sync
2026-03-09T15:01:23.549 INFO:tasks.workunit.client.0.vm05.stdout:3/212: sync
2026-03-09T15:01:23.549 INFO:tasks.workunit.client.0.vm05.stdout:9/216: sync
2026-03-09T15:01:23.549 INFO:tasks.workunit.client.0.vm05.stdout:9/217: readlink d2/l34 0
2026-03-09T15:01:23.549 INFO:tasks.workunit.client.0.vm05.stdout:3/213: write d3/f1f [1085667,129027] 0
2026-03-09T15:01:23.549 INFO:tasks.workunit.client.0.vm05.stdout:9/218: dwrite d2/d10/f26 [0,4194304] 0
2026-03-09T15:01:23.549 INFO:tasks.workunit.client.0.vm05.stdout:1/174: getdents d9/d2f/d37 0
2026-03-09T15:01:23.549 INFO:tasks.workunit.client.0.vm05.stdout:0/151: mkdir d9/de/d12/d15/d2e/d32 0
2026-03-09T15:01:23.561 INFO:tasks.workunit.client.0.vm05.stdout:7/237: link d1/d12/f1f d1/f45 0
2026-03-09T15:01:23.562 INFO:tasks.workunit.client.0.vm05.stdout:1/175: fdatasync d9/f12 0
2026-03-09T15:01:23.563 INFO:tasks.workunit.client.0.vm05.stdout:0/152: dwrite d9/de/f20 [0,4194304] 0
2026-03-09T15:01:23.570 INFO:tasks.workunit.client.0.vm05.stdout:1/176: creat d9/d2a/f3f x:0 0 0
2026-03-09T15:01:23.570 INFO:tasks.workunit.client.0.vm05.stdout:3/214: getdents d3/df/d10/d19/d44 0
2026-03-09T15:01:23.571 INFO:tasks.workunit.client.0.vm05.stdout:3/215: readlink d3/df/d10/d34/l36 0
2026-03-09T15:01:23.572 INFO:tasks.workunit.client.0.vm05.stdout:3/216: read - d3/f42 zero size
2026-03-09T15:01:23.572 INFO:tasks.workunit.client.0.vm05.stdout:3/217: dread - d3/d29/d2d/f31 zero size
2026-03-09T15:01:23.574 INFO:tasks.workunit.client.0.vm05.stdout:7/238: dread d1/f16 [0,4194304] 0
2026-03-09T15:01:23.575 INFO:tasks.workunit.client.0.vm05.stdout:7/239: dread - d1/d9/d3f/f3b zero size
2026-03-09T15:01:23.575 INFO:tasks.workunit.client.0.vm05.stdout:7/240: write d1/d9/d3f/f29 [840415,99921] 0
2026-03-09T15:01:23.581 INFO:tasks.workunit.client.0.vm05.stdout:3/218: stat d3/df/c46 0
2026-03-09T15:01:23.581 INFO:tasks.workunit.client.0.vm05.stdout:1/177: symlink d9/l40 0
2026-03-09T15:01:23.583 INFO:tasks.workunit.client.0.vm05.stdout:1/178: dread d9/f12 [0,4194304] 0
2026-03-09T15:01:23.585 INFO:tasks.workunit.client.0.vm05.stdout:0/153: dread d9/de/f1e [0,4194304] 0
2026-03-09T15:01:23.595 INFO:tasks.workunit.client.0.vm05.stdout:3/219: creat d3/df/d10/d34/f4c x:0 0 0
2026-03-09T15:01:23.625 INFO:tasks.workunit.client.0.vm05.stdout:1/179: symlink d9/d17/l41 0
2026-03-09T15:01:23.625 INFO:tasks.workunit.client.0.vm05.stdout:1/180: stat d9/d3d/d33 0
2026-03-09T15:01:23.625 INFO:tasks.workunit.client.0.vm05.stdout:1/181: creat d9/d3d/f42 x:0 0 0
2026-03-09T15:01:23.625 INFO:tasks.workunit.client.0.vm05.stdout:0/154: rename d9/c17 to d9/c33 0
2026-03-09T15:01:23.625 INFO:tasks.workunit.client.0.vm05.stdout:0/155: read d9/fd [700457,69133] 0
2026-03-09T15:01:23.625 INFO:tasks.workunit.client.0.vm05.stdout:0/156: chown d9/de/d12/f23 15573682 1
2026-03-09T15:01:23.626 INFO:tasks.workunit.client.0.vm05.stdout:1/182: creat d9/d2f/f43 x:0 0 0
2026-03-09T15:01:23.626 INFO:tasks.workunit.client.0.vm05.stdout:1/183: chown d9/d2a 136361435 1
2026-03-09T15:01:23.626 INFO:tasks.workunit.client.0.vm05.stdout:1/184: chown d9/d1a/f2c 15970580 1
2026-03-09T15:01:23.626 INFO:tasks.workunit.client.0.vm05.stdout:0/157: dwrite d9/de/d12/d15/f1c [0,4194304] 0
2026-03-09T15:01:23.626 INFO:tasks.workunit.client.0.vm05.stdout:1/185: link d9/d2a/f2d d9/d2a/f44 0
2026-03-09T15:01:23.626 INFO:tasks.workunit.client.0.vm05.stdout:1/186: dread - d9/d3d/f32 zero size
2026-03-09T15:01:23.626 INFO:tasks.workunit.client.0.vm05.stdout:0/158: dread d9/de/d12/f23 [0,4194304] 0
2026-03-09T15:01:23.626 INFO:tasks.workunit.client.0.vm05.stdout:0/159: stat d9/de/f20 0
2026-03-09T15:01:23.626 INFO:tasks.workunit.client.0.vm05.stdout:0/160: truncate d9/de/f19 1451446 0
2026-03-09T15:01:23.626 INFO:tasks.workunit.client.0.vm05.stdout:1/187: dwrite d9/d17/f26 [0,4194304] 0
2026-03-09T15:01:23.629 INFO:tasks.workunit.client.0.vm05.stdout:1/188: dread f7 [0,4194304] 0
2026-03-09T15:01:23.707 INFO:tasks.workunit.client.0.vm05.stdout:0/161: read d9/f2b [41193,87210] 0
2026-03-09T15:01:23.714 INFO:tasks.workunit.client.0.vm05.stdout:2/239: dwrite f5 [4194304,4194304] 0
2026-03-09T15:01:23.716 INFO:tasks.workunit.client.0.vm05.stdout:2/240: write da/d16/f1f [10692,129272] 0
2026-03-09T15:01:23.716 INFO:tasks.workunit.client.0.vm05.stdout:2/241: write da/d29/f39 [973118,125997] 0
2026-03-09T15:01:23.722 INFO:tasks.workunit.client.0.vm05.stdout:5/247: write d1/f14 [1359271,112219] 0
2026-03-09T15:01:23.732 INFO:tasks.workunit.client.0.vm05.stdout:5/248: mkdir d1/d4/d34/d56 0
2026-03-09T15:01:23.732 INFO:tasks.workunit.client.0.vm05.stdout:5/249: stat d1/f14 0
2026-03-09T15:01:23.732 INFO:tasks.workunit.client.0.vm05.stdout:5/250: write d1/f1d [4426440,9827] 0
2026-03-09T15:01:23.734 INFO:tasks.workunit.client.0.vm05.stdout:0/162: symlink d9/l34 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:2/242: unlink l7 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:2/243: stat da/d13/c14 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:5/251: creat d1/d4/d27/f57 x:0 0 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:6/192: dwrite da/f1a [0,4194304] 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:6/193: truncate da/d17/f29 396460 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:2/244: creat da/d13/d2f/d35/f44 x:0 0 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:5/252: symlink d1/d4/d27/l58 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:4/176: truncate d2/d4/f15 6479215 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:8/275: write d0/d2a/d2d/f3e [840867,96109] 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:8/276: truncate d0/d2a/d2d/f41 1126810 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:8/277: fsync d0/d1/d12/f5a 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:6/194: unlink da/d19/c31 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:4/177: rename d2/d4/d7/d21/l2f to d2/d4/d1e/l38 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:8/278: symlink d0/dc/l61 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:0/163: creat d9/f35 x:0 0 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:0/164: dwrite d9/de/f21 [0,4194304] 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:0/165: read d9/de/f1e [474931,72657] 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:0/166: chown d9/de/d12/d15/c24 835 1
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:0/167: write d9/de/d25/f2d [258311,32330] 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:0/168: write d9/de/f1e [1472132,33967] 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:0/169: write d9/de/d12/d15/f1c [4994858,83317] 0
2026-03-09T15:01:23.772 INFO:tasks.workunit.client.0.vm05.stdout:6/195: dread da/f14 [0,4194304] 0
2026-03-09T15:01:23.776 INFO:tasks.workunit.client.0.vm05.stdout:6/196: dwrite da/d17/f2d [0,4194304] 0
2026-03-09T15:01:23.783 INFO:tasks.workunit.client.0.vm05.stdout:6/197: dwrite da/f10 [0,4194304] 0
2026-03-09T15:01:23.787 INFO:tasks.workunit.client.0.vm05.stdout:6/198: dread da/d17/f29 [0,4194304] 0
2026-03-09T15:01:23.790 INFO:tasks.workunit.client.0.vm05.stdout:8/279: rmdir d0/d24 39
2026-03-09T15:01:23.803 INFO:tasks.workunit.client.0.vm05.stdout:8/280: chown d0/d1/d12/d1b/l52 0 1
2026-03-09T15:01:23.803 INFO:tasks.workunit.client.0.vm05.stdout:8/281: write d0/d2a/d2d/f5f [534506,82968] 0
2026-03-09T15:01:23.804 INFO:tasks.workunit.client.0.vm05.stdout:8/282: dread - d0/d1/d12/d3c/f51 zero size
2026-03-09T15:01:23.807 INFO:tasks.workunit.client.0.vm05.stdout:4/178: symlink d2/d4/d7/dc/d2b/l39 0
2026-03-09T15:01:23.808 INFO:tasks.workunit.client.0.vm05.stdout:4/179: mknod d2/d4/d7/dc/c3a 0
2026-03-09T15:01:23.809 INFO:tasks.workunit.client.0.vm05.stdout:4/180: chown d2/d4/c2e 38939421 1
2026-03-09T15:01:23.813 INFO:tasks.workunit.client.0.vm05.stdout:4/181: rename d2/d4/c19 to d2/d4/d7/c3b 0
2026-03-09T15:01:23.816 INFO:tasks.workunit.client.0.vm05.stdout:4/182: mknod d2/d4/d7/dc/d2b/c3c 0
2026-03-09T15:01:23.817 INFO:tasks.workunit.client.0.vm05.stdout:4/183: mkdir d2/d4/d7/d21/d3d 0
2026-03-09T15:01:23.818 INFO:tasks.workunit.client.0.vm05.stdout:4/184: creat d2/f3e x:0 0 0
2026-03-09T15:01:23.852 INFO:tasks.workunit.client.0.vm05.stdout:7/241: sync
2026-03-09T15:01:23.852 INFO:tasks.workunit.client.0.vm05.stdout:1/189: sync
2026-03-09T15:01:23.853 INFO:tasks.workunit.client.0.vm05.stdout:1/190: fdatasync d9/d1a/f2c 0
2026-03-09T15:01:23.853 INFO:tasks.workunit.client.0.vm05.stdout:1/191: fdatasync d9/d2f/f3a 0
2026-03-09T15:01:23.854 INFO:tasks.workunit.client.0.vm05.stdout:1/192: dread - d9/d2a/f2d zero size
2026-03-09T15:01:23.855 INFO:tasks.workunit.client.0.vm05.stdout:7/242: unlink d1/d9/d3f/f30 0
2026-03-09T15:01:23.858 INFO:tasks.workunit.client.0.vm05.stdout:7/243: dread d1/d9/d23/d31/d32/f3a [0,4194304] 0
2026-03-09T15:01:23.860 INFO:tasks.workunit.client.0.vm05.stdout:7/244: symlink d1/d9/l46 0
2026-03-09T15:01:23.861 INFO:tasks.workunit.client.0.vm05.stdout:7/245: write d1/f21 [5022209,117134] 0
2026-03-09T15:01:23.869 INFO:tasks.workunit.client.0.vm05.stdout:1/193: link d9/le d9/l45 0
2026-03-09T15:01:23.869 INFO:tasks.workunit.client.0.vm05.stdout:7/246: creat d1/d22/f47 x:0 0 0
2026-03-09T15:01:23.870 INFO:tasks.workunit.client.0.vm05.stdout:1/194: write d9/d1a/f2c [2307263,130837] 0
2026-03-09T15:01:23.870 INFO:tasks.workunit.client.0.vm05.stdout:7/247: chown d1/f15 401623 1
2026-03-09T15:01:23.873 INFO:tasks.workunit.client.0.vm05.stdout:1/195: rename d9/ld to d9/l46 0
2026-03-09T15:01:23.874 INFO:tasks.workunit.client.0.vm05.stdout:1/196: read d9/f12 [3621616,85086] 0
2026-03-09T15:01:23.875 INFO:tasks.workunit.client.0.vm05.stdout:1/197: symlink d9/d2f/d37/l47 0
2026-03-09T15:01:23.877 INFO:tasks.workunit.client.0.vm05.stdout:1/198: rename d9/d24 to d9/d1a/d48 0
2026-03-09T15:01:23.965 INFO:tasks.workunit.client.0.vm05.stdout:8/283: fdatasync d0/d2a/d2d/f3e 0
2026-03-09T15:01:23.967 INFO:tasks.workunit.client.0.vm05.stdout:8/284: mknod d0/d1/d12/c62 0
2026-03-09T15:01:23.969 INFO:tasks.workunit.client.0.vm05.stdout:8/285: getdents d0/d2a/d2d 0
2026-03-09T15:01:23.971 INFO:tasks.workunit.client.0.vm05.stdout:8/286: mknod d0/d1/d12/c63 0
2026-03-09T15:01:23.973 INFO:tasks.workunit.client.0.vm05.stdout:8/287: link d0/f4 d0/d2a/d2d/d54/f64 0
2026-03-09T15:01:24.065 INFO:tasks.workunit.client.0.vm05.stdout:0/170: dwrite d9/f22 [4194304,4194304] 0
2026-03-09T15:01:24.076 INFO:tasks.workunit.client.0.vm05.stdout:0/171: creat d9/de/d12/d15/f36 x:0 0 0
2026-03-09T15:01:24.077 INFO:tasks.workunit.client.0.vm05.stdout:0/172: symlink d9/l37 0
2026-03-09T15:01:24.081 INFO:tasks.workunit.client.0.vm05.stdout:0/173: dread d9/de/d25/f2d [0,4194304] 0
2026-03-09T15:01:24.135 INFO:tasks.workunit.client.0.vm05.stdout:9/219: dwrite d2/f1f [0,4194304] 0
2026-03-09T15:01:24.171 INFO:tasks.workunit.client.0.vm05.stdout:9/220: sync
2026-03-09T15:01:24.173 INFO:tasks.workunit.client.0.vm05.stdout:9/221: mknod d2/d10/d22/d2c/d3c/c49 0
2026-03-09T15:01:24.174 INFO:tasks.workunit.client.0.vm05.stdout:9/222: read d2/f1f [3712131,15822] 0
2026-03-09T15:01:24.178 INFO:tasks.workunit.client.0.vm05.stdout:9/223: mknod d2/d10/c4a 0
2026-03-09T15:01:24.179 INFO:tasks.workunit.client.0.vm05.stdout:9/224: fsync d2/d10/d22/d2c/f44 0
2026-03-09T15:01:24.184 INFO:tasks.workunit.client.0.vm05.stdout:9/225: dwrite d2/d10/f39 [4194304,4194304] 0
2026-03-09T15:01:24.187 INFO:tasks.workunit.client.0.vm05.stdout:9/226: symlink d2/d10/d22/d2c/l4b 0
2026-03-09T15:01:24.189 INFO:tasks.workunit.client.0.vm05.stdout:9/227: symlink d2/d10/d22/d2c/l4c 0
2026-03-09T15:01:24.190 INFO:tasks.workunit.client.0.vm05.stdout:9/228: readlink d2/l34 0
2026-03-09T15:01:24.190 INFO:tasks.workunit.client.0.vm05.stdout:9/229: write d2/f5 [2462869,90740] 0
2026-03-09T15:01:24.194 INFO:tasks.workunit.client.0.vm05.stdout:9/230: getdents d2/d1a 0
2026-03-09T15:01:24.199 INFO:tasks.workunit.client.0.vm05.stdout:9/231: unlink d2/f42 0
2026-03-09T15:01:24.202 INFO:tasks.workunit.client.0.vm05.stdout:9/232: dread d2/fd [0,4194304] 0
2026-03-09T15:01:24.207 INFO:tasks.workunit.client.0.vm05.stdout:3/220: truncate d3/df/d10/d19/f26 87060 0
2026-03-09T15:01:24.207 INFO:tasks.workunit.client.0.vm05.stdout:3/221: chown d3/df/f4a 10 1
2026-03-09T15:01:24.207 INFO:tasks.workunit.client.0.vm05.stdout:3/222: stat d3/df 0
2026-03-09T15:01:24.239 INFO:tasks.workunit.client.0.vm05.stdout:4/185: dwrite d2/d4/f15 [4194304,4194304] 0
2026-03-09T15:01:24.240 INFO:tasks.workunit.client.0.vm05.stdout:4/186: fdatasync d2/f14 0
2026-03-09T15:01:24.240 INFO:tasks.workunit.client.0.vm05.stdout:4/187: chown d2/d4/d8/ld 1788 1
2026-03-09T15:01:24.250 INFO:tasks.workunit.client.0.vm05.stdout:4/188: symlink d2/d4/d8/l3f 0
2026-03-09T15:01:24.251 INFO:tasks.workunit.client.0.vm05.stdout:4/189: write d2/d4/d7/f9 [1837935,98296] 0
2026-03-09T15:01:24.254 INFO:tasks.workunit.client.0.vm05.stdout:2/245: dwrite da/dd/f25 [0,4194304] 0
2026-03-09T15:01:24.259 INFO:tasks.workunit.client.0.vm05.stdout:2/246: mkdir da/d29/d45 0
2026-03-09T15:01:24.264 INFO:tasks.workunit.client.0.vm05.stdout:2/247: mkdir da/d16/d46 0
2026-03-09T15:01:24.266 INFO:tasks.workunit.client.0.vm05.stdout:4/190: sync
2026-03-09T15:01:24.268 INFO:tasks.workunit.client.0.vm05.stdout:2/248: dread da/dd/ff [0,4194304] 0
2026-03-09T15:01:24.273 INFO:tasks.workunit.client.0.vm05.stdout:5/253: dwrite d1/f9 [0,4194304] 0
2026-03-09T15:01:24.279 INFO:tasks.workunit.client.0.vm05.stdout:4/191: dread d2/d4/d7/dc/f18 [0,4194304] 0
2026-03-09T15:01:24.286 INFO:tasks.workunit.client.0.vm05.stdout:2/249: mknod da/d16/d46/c47 0
2026-03-09T15:01:24.288 INFO:tasks.workunit.client.0.vm05.stdout:4/192: creat d2/d4/d1e/f40 x:0 0 0
2026-03-09T15:01:24.292 INFO:tasks.workunit.client.0.vm05.stdout:2/250: dwrite da/d16/f1f [4194304,4194304] 0
2026-03-09T15:01:24.300 INFO:tasks.workunit.client.0.vm05.stdout:4/193: dread d2/d4/d8/f13 [0,4194304] 0
2026-03-09T15:01:24.301 INFO:tasks.workunit.client.0.vm05.stdout:2/251: unlink l3 0
2026-03-09T15:01:24.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:24 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:01:24.308 INFO:tasks.workunit.client.0.vm05.stdout:2/252: mknod da/d16/c48 0
2026-03-09T15:01:24.315 INFO:tasks.workunit.client.0.vm05.stdout:4/194: mknod d2/d4/d7/c41 0
2026-03-09T15:01:24.315 INFO:tasks.workunit.client.0.vm05.stdout:4/195: write d2/d4/d1e/f40 [772459,6643] 0
2026-03-09T15:01:24.320 INFO:tasks.workunit.client.0.vm05.stdout:4/196: rename d2/d4/d8/l1f to d2/d4/d8/l42 0
2026-03-09T15:01:24.323 INFO:tasks.workunit.client.0.vm05.stdout:4/197: dread d2/f14 [0,4194304] 0
2026-03-09T15:01:24.323 INFO:tasks.workunit.client.0.vm05.stdout:4/198: write d2/f1b [2304961,117444] 0
2026-03-09T15:01:24.324 INFO:tasks.workunit.client.0.vm05.stdout:4/199: write d2/d1d/f36 [48645,38728] 0
2026-03-09T15:01:24.329 INFO:tasks.workunit.client.0.vm05.stdout:4/200: truncate d2/d4/d8/f13 1877802 0
2026-03-09T15:01:24.329 INFO:tasks.workunit.client.0.vm05.stdout:4/201: write d2/f33 [8157129,109078] 0
2026-03-09T15:01:24.330 INFO:tasks.workunit.client.0.vm05.stdout:4/202: fsync d2/d4/d7/f2d 0
2026-03-09T15:01:24.347 INFO:tasks.workunit.client.0.vm05.stdout:4/203: dread d2/d4/f17 [0,4194304] 0
2026-03-09T15:01:24.348 INFO:tasks.workunit.client.0.vm05.stdout:4/204: dread d2/d4/f17 [4194304,4194304] 0
2026-03-09T15:01:24.351 INFO:tasks.workunit.client.0.vm05.stdout:4/205: dread d2/d4/d7/f2d [0,4194304] 0
2026-03-09T15:01:24.356 INFO:tasks.workunit.client.0.vm05.stdout:4/206: dwrite d2/d4/f15 [0,4194304] 0
2026-03-09T15:01:24.356 INFO:tasks.workunit.client.0.vm05.stdout:6/199: truncate da/f10 2221392 0
2026-03-09T15:01:24.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:24 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:01:24.367 INFO:tasks.workunit.client.0.vm05.stdout:6/200: truncate da/f16 4385700 0
2026-03-09T15:01:24.367 INFO:tasks.workunit.client.0.vm05.stdout:6/201: truncate da/d17/f3c 533654 0
2026-03-09T15:01:24.402 INFO:tasks.workunit.client.0.vm05.stdout:7/248: symlink d1/d9/l48 0
2026-03-09T15:01:24.403 INFO:tasks.workunit.client.0.vm05.stdout:7/249: fsync d1/d9/d3f/f3b 0
2026-03-09T15:01:24.403 INFO:tasks.workunit.client.0.vm05.stdout:7/250: chown d1/c3 7541 1
2026-03-09T15:01:24.407 INFO:tasks.workunit.client.0.vm05.stdout:7/251: mkdir d1/d49 0
2026-03-09T15:01:24.417 INFO:tasks.workunit.client.0.vm05.stdout:7/252: dread d1/d9/d3f/f41 [0,4194304] 0
2026-03-09T15:01:24.418 INFO:tasks.workunit.client.0.vm05.stdout:1/199: rename d9/d1a to d9/d3d/d49 0
2026-03-09T15:01:24.423 INFO:tasks.workunit.client.0.vm05.stdout:8/288: rename d0/d7/f5e to d0/d1/d12/d1b/d21/f65 0
2026-03-09T15:01:24.428 INFO:tasks.workunit.client.0.vm05.stdout:8/289: dwrite d0/d2a/d2d/d54/f5b [0,4194304] 0
2026-03-09T15:01:24.430 INFO:tasks.workunit.client.0.vm05.stdout:8/290: dread - d0/d1/d12/d1b/d21/f65 zero size
2026-03-09T15:01:24.435 INFO:tasks.workunit.client.0.vm05.stdout:7/253: truncate d1/d12/f18 4307587 0
2026-03-09T15:01:24.437 INFO:tasks.workunit.client.0.vm05.stdout:1/200: symlink d9/d3d/d33/l4a 0
2026-03-09T15:01:24.440 INFO:tasks.workunit.client.0.vm05.stdout:0/174: rename d9/de/d12/d2c to d9/de/d25/d38 0
2026-03-09T15:01:24.444 INFO:tasks.workunit.client.0.vm05.stdout:1/201: mkdir d9/d3d/d49/d4b 0
2026-03-09T15:01:24.444 INFO:tasks.workunit.client.0.vm05.stdout:1/202: chown d9/d2f/f43 19198381 1
2026-03-09T15:01:24.445 INFO:tasks.workunit.client.0.vm05.stdout:1/203: fdatasync d9/d3d/f42 0
2026-03-09T15:01:24.447 INFO:tasks.workunit.client.0.vm05.stdout:5/254: rename d1/d4/d34/d35/f46 to d1/d4/d34/d56/f59 0
2026-03-09T15:01:24.449 INFO:tasks.workunit.client.0.vm05.stdout:5/255: write d1/d4/d34/d35/f4d [664330,31397] 0
2026-03-09T15:01:24.459 INFO:tasks.workunit.client.0.vm05.stdout:7/254: mkdir d1/d49/d4a 0
2026-03-09T15:01:24.460 INFO:tasks.workunit.client.0.vm05.stdout:7/255: chown d1/l2e 14724772 1
2026-03-09T15:01:24.460 INFO:tasks.workunit.client.0.vm05.stdout:7/256: read d1/f28 [301088,4753] 0
2026-03-09T15:01:24.465 INFO:tasks.workunit.client.0.vm05.stdout:9/233: rmdir d2 39
2026-03-09T15:01:24.489 INFO:tasks.workunit.client.0.vm05.stdout:3/223: dwrite d3/d29/f30 [0,4194304] 0
2026-03-09T15:01:24.492 INFO:tasks.workunit.client.0.vm05.stdout:3/224: chown d3/df/d10/d19/d44/l47 536230 1
2026-03-09T15:01:24.499 INFO:tasks.workunit.client.0.vm05.stdout:0/175: symlink d9/de/d12/d15/d2e/d32/l39 0
2026-03-09T15:01:24.499 INFO:tasks.workunit.client.0.vm05.stdout:0/176: fsync d9/de/d25/d38/f2f 0
2026-03-09T15:01:24.510 INFO:tasks.workunit.client.0.vm05.stdout:8/291: rename d0/d7/d53 to d0/d1/d12/d1b/d66 0
2026-03-09T15:01:24.514 INFO:tasks.workunit.client.0.vm05.stdout:8/292: dwrite d0/d1/d12/d1b/f34 [0,4194304] 0
2026-03-09T15:01:24.518 INFO:tasks.workunit.client.0.vm05.stdout:2/253: truncate da/f21 2921362 0
2026-03-09T15:01:24.519 INFO:tasks.workunit.client.0.vm05.stdout:2/254: truncate da/d13/d30/f34 923275 0
2026-03-09T15:01:24.520 INFO:tasks.workunit.client.0.vm05.stdout:5/256: creat d1/d4/d34/d35/d53/f5a x:0 0 0
2026-03-09T15:01:24.521 INFO:tasks.workunit.client.0.vm05.stdout:5/257: dread - d1/d4/d34/d35/d3d/f37 zero size
2026-03-09T15:01:24.525 INFO:tasks.workunit.client.0.vm05.stdout:4/207: getdents d2/d4/d8 0
2026-03-09T15:01:24.527 INFO:tasks.workunit.client.0.vm05.stdout:0/177: unlink d9/de/d25/l27 0
2026-03-09T15:01:24.535 INFO:tasks.workunit.client.0.vm05.stdout:9/234: dread d2/f8 [0,4194304] 0
2026-03-09T15:01:24.538 INFO:tasks.workunit.client.0.vm05.stdout:2/255: write da/d16/f1e [3091813,87283] 0
2026-03-09T15:01:24.541 INFO:tasks.workunit.client.0.vm05.stdout:5/258: mkdir d1/d4/d27/d5b 0
2026-03-09T15:01:24.541 INFO:tasks.workunit.client.0.vm05.stdout:5/259: stat d1/c33 0
2026-03-09T15:01:24.547 INFO:tasks.workunit.client.0.vm05.stdout:8/293: dread d0/f10 [0,4194304] 0
2026-03-09T15:01:24.547 INFO:tasks.workunit.client.0.vm05.stdout:3/225: mkdir d3/df/d1e/d24/d4d 0
2026-03-09T15:01:24.548 INFO:tasks.workunit.client.0.vm05.stdout:8/294: dread - d0/d1/d12/d3c/f51 zero size
2026-03-09T15:01:24.548 INFO:tasks.workunit.client.0.vm05.stdout:3/226: write d3/d29/f41 [996397,119400] 0
2026-03-09T15:01:24.550 INFO:tasks.workunit.client.0.vm05.stdout:3/227: dread d3/f13 [0,4194304] 0
2026-03-09T15:01:24.555 INFO:tasks.workunit.client.0.vm05.stdout:6/202: dwrite da/d17/f2a [0,4194304] 0
2026-03-09T15:01:24.559 INFO:tasks.workunit.client.0.vm05.stdout:9/235: truncate d2/fd 5064588 0
2026-03-09T15:01:24.559 INFO:tasks.workunit.client.0.vm05.stdout:2/256: mknod da/d13/d2f/c49 0
2026-03-09T15:01:24.564 INFO:tasks.workunit.client.0.vm05.stdout:0/178: dwrite d9/de/d25/f2d [0,4194304] 0
2026-03-09T15:01:24.565 INFO:tasks.workunit.client.0.vm05.stdout:0/179: readlink d9/de/d12/d15/d2e/d32/l39 0
2026-03-09T15:01:24.565 INFO:tasks.workunit.client.0.vm05.stdout:0/180: chown d9/de/d12/d15/f36 7303 1
2026-03-09T15:01:24.569 INFO:tasks.workunit.client.0.vm05.stdout:0/181: dwrite d9/de/d12/d15/f36 [0,4194304] 0
2026-03-09T15:01:24.571 INFO:tasks.workunit.client.0.vm05.stdout:0/182: write d9/de/f19 [1180506,12871] 0
2026-03-09T15:01:24.585 INFO:tasks.workunit.client.0.vm05.stdout:6/203: rmdir da/d17 39
2026-03-09T15:01:24.599 INFO:tasks.workunit.client.0.vm05.stdout:7/257: getdents d1/d9/d23/d31 0
2026-03-09T15:01:24.600 INFO:tasks.workunit.client.0.vm05.stdout:6/204: symlink da/d17/l40 0
2026-03-09T15:01:24.659 INFO:tasks.workunit.client.0.vm05.stdout:6/205: dread da/d17/f1d [0,4194304] 0
2026-03-09T15:01:24.660 INFO:tasks.workunit.client.0.vm05.stdout:6/206: dread - da/f12 zero size
2026-03-09T15:01:24.660 INFO:tasks.workunit.client.0.vm05.stdout:6/207: write da/d19/f22 [2137422,98153] 0
2026-03-09T15:01:24.665 INFO:tasks.workunit.client.0.vm05.stdout:6/208: creat da/f41 x:0 0 0
2026-03-09T15:01:24.665 INFO:tasks.workunit.client.0.vm05.stdout:0/183: fdatasync d9/de/f19 0
2026-03-09T15:01:24.667 INFO:tasks.workunit.client.0.vm05.stdout:6/209: creat da/d17/f42 x:0 0 0
2026-03-09T15:01:24.668 INFO:tasks.workunit.client.0.vm05.stdout:0/184: creat d9/de/d12/d15/d2e/f3a x:0 0 0
2026-03-09T15:01:24.670 INFO:tasks.workunit.client.0.vm05.stdout:6/210: mkdir da/d43 0
2026-03-09T15:01:24.676 INFO:tasks.workunit.client.0.vm05.stdout:6/211: unlink da/d19/l1e 0
2026-03-09T15:01:24.721 INFO:tasks.workunit.client.0.vm05.stdout:9/236: sync
2026-03-09T15:01:24.722 INFO:tasks.workunit.client.0.vm05.stdout:7/258: sync
2026-03-09T15:01:24.722 INFO:tasks.workunit.client.0.vm05.stdout:6/212: sync
2026-03-09T15:01:24.722 INFO:tasks.workunit.client.0.vm05.stdout:6/213: readlink da/d17/l40 0
2026-03-09T15:01:24.723 INFO:tasks.workunit.client.0.vm05.stdout:6/214: dread - da/d17/f42 zero size
2026-03-09T15:01:24.726 INFO:tasks.workunit.client.0.vm05.stdout:9/237: fsync d2/f8 0
2026-03-09T15:01:24.728 INFO:tasks.workunit.client.0.vm05.stdout:6/215: link da/d19/f35 da/d17/f44 0
2026-03-09T15:01:24.728 INFO:tasks.workunit.client.0.vm05.stdout:6/216: write da/d17/f33 [591709,32680] 0
2026-03-09T15:01:24.730 INFO:tasks.workunit.client.0.vm05.stdout:6/217: unlink da/l15 0
2026-03-09T15:01:24.733 INFO:tasks.workunit.client.0.vm05.stdout:6/218: dwrite da/f3d [0,4194304] 0
2026-03-09T15:01:24.736 INFO:tasks.workunit.client.0.vm05.stdout:6/219: dwrite da/d17/f30 [0,4194304] 0
2026-03-09T15:01:24.736 INFO:tasks.workunit.client.0.vm05.stdout:9/238: sync
2026-03-09T15:01:24.742 INFO:tasks.workunit.client.0.vm05.stdout:6/220: creat da/d19/f45 x:0 0 0
2026-03-09T15:01:24.744 INFO:tasks.workunit.client.0.vm05.stdout:6/221: creat da/d43/f46 x:0 0 0
2026-03-09T15:01:24.749 INFO:tasks.workunit.client.0.vm05.stdout:6/222: dwrite da/d17/f2d [0,4194304] 0
2026-03-09T15:01:24.756 INFO:tasks.workunit.client.0.vm05.stdout:6/223: write da/d19/f35 [545405,31341] 0
2026-03-09T15:01:24.757 INFO:tasks.workunit.client.0.vm05.stdout:6/224: read da/d17/f30 [1366727,46844] 0
2026-03-09T15:01:24.757 INFO:tasks.workunit.client.0.vm05.stdout:6/225: fdatasync da/fb 0
2026-03-09T15:01:24.869 INFO:tasks.workunit.client.0.vm05.stdout:1/204: truncate d9/d17/f26 1130467 0
2026-03-09T15:01:24.871 INFO:tasks.workunit.client.0.vm05.stdout:1/205: creat d9/d3d/d33/f4c x:0 0 0
2026-03-09T15:01:24.872 INFO:tasks.workunit.client.0.vm05.stdout:1/206: truncate d9/d17/f22 1805425 0
2026-03-09T15:01:24.873 INFO:tasks.workunit.client.0.vm05.stdout:1/207: readlink d9/l46 0
2026-03-09T15:01:24.874 INFO:tasks.workunit.client.0.vm05.stdout:1/208: creat d9/d3d/d49/f4d x:0 0 0
2026-03-09T15:01:24.878 INFO:tasks.workunit.client.0.vm05.stdout:1/209: dwrite d9/d3d/d49/f2c [0,4194304] 0
2026-03-09T15:01:24.976 INFO:tasks.workunit.client.0.vm05.stdout:3/228: truncate d3/f1f 3626786 0
2026-03-09T15:01:24.978 INFO:tasks.workunit.client.0.vm05.stdout:3/229: getdents d3/df 0
2026-03-09T15:01:24.979 INFO:tasks.workunit.client.0.vm05.stdout:3/230: truncate d3/df/d10/f28 1135024 0
2026-03-09T15:01:24.980 INFO:tasks.workunit.client.0.vm05.stdout:3/231: chown d3/df/d10/d34/f48 28645328 1
2026-03-09T15:01:24.989 INFO:tasks.workunit.client.0.vm05.stdout:2/257: dwrite da/f10 [0,4194304] 0
2026-03-09T15:01:24.990 INFO:tasks.workunit.client.0.vm05.stdout:2/258: write da/d29/d3f/f43 [635786,121350] 0
2026-03-09T15:01:24.992 INFO:tasks.workunit.client.0.vm05.stdout:2/259: mkdir da/dd/d4a 0
2026-03-09T15:01:24.999 INFO:tasks.workunit.client.0.vm05.stdout:2/260: read da/f3c [237647,61783] 0
2026-03-09T15:01:25.000 INFO:tasks.workunit.client.0.vm05.stdout:2/261: fdatasync da/d13/d2f/d35/f3a 0
2026-03-09T15:01:25.069 INFO:tasks.workunit.client.0.vm05.stdout:9/239: dwrite d2/f8 [0,4194304] 0
2026-03-09T15:01:25.073 INFO:tasks.workunit.client.0.vm05.stdout:9/240: dwrite d2/d10/f26 [0,4194304] 0
2026-03-09T15:01:25.123 INFO:tasks.workunit.client.0.vm05.stdout:9/241: sync
2026-03-09T15:01:25.124 INFO:tasks.workunit.client.0.vm05.stdout:9/242: read d2/f13 [1226828,34204] 0
2026-03-09T15:01:25.125 INFO:tasks.workunit.client.0.vm05.stdout:9/243: fdatasync d2/d1a/d1b/f43 0
2026-03-09T15:01:25.126 INFO:tasks.workunit.client.0.vm05.stdout:9/244: write d2/f46 [274651,13634] 0
2026-03-09T15:01:25.126 INFO:tasks.workunit.client.0.vm05.stdout:9/245: fdatasync d2/d1a/f3e 0
2026-03-09T15:01:25.131 INFO:tasks.workunit.client.0.vm05.stdout:9/246: dread d2/d1a/d1b/f2a [0,4194304] 0
2026-03-09T15:01:25.132 INFO:tasks.workunit.client.0.vm05.stdout:9/247: write d2/d1a/d1b/d23/d37/f36 [2488665,63087] 0
2026-03-09T15:01:25.158 INFO:tasks.workunit.client.0.vm05.stdout:6/226: truncate da/f3d 822129 0
2026-03-09T15:01:25.159 INFO:tasks.workunit.client.0.vm05.stdout:6/227: rmdir da/d43 39
2026-03-09T15:01:25.160 INFO:tasks.workunit.client.0.vm05.stdout:6/228: creat da/d17/d3b/f47 x:0 0 0
2026-03-09T15:01:25.161 INFO:tasks.workunit.client.0.vm05.stdout:6/229: symlink da/d17/d3b/l48 0
2026-03-09T15:01:25.166 INFO:tasks.workunit.client.0.vm05.stdout:6/230: sync
2026-03-09T15:01:25.168 INFO:tasks.workunit.client.0.vm05.stdout:6/231: sync
2026-03-09T15:01:25.168 INFO:tasks.workunit.client.0.vm05.stdout:6/232: truncate da/f1f 1693161 0
2026-03-09T15:01:25.173 INFO:tasks.workunit.client.0.vm05.stdout:6/233: getdents da/d43 0
2026-03-09T15:01:25.173 INFO:tasks.workunit.client.0.vm05.stdout:6/234: readlink l3 0
2026-03-09T15:01:25.174 INFO:tasks.workunit.client.0.vm05.stdout:6/235: write da/d19/f34 [104683,48973] 0
2026-03-09T15:01:25.174 INFO:tasks.workunit.client.0.vm05.stdout:6/236: dread - da/d17/d3b/f47 zero size
2026-03-09T15:01:25.176 INFO:tasks.workunit.client.0.vm05.stdout:6/237: fdatasync da/f3d 0
2026-03-09T15:01:25.209 INFO:tasks.workunit.client.0.vm05.stdout:5/260: rmdir d1/d4/d19 39
2026-03-09T15:01:25.211 INFO:tasks.workunit.client.0.vm05.stdout:5/261: creat d1/d4/d34/f5c x:0 0 0
2026-03-09T15:01:25.211 INFO:tasks.workunit.client.0.vm05.stdout:5/262: dread - d1/d4/d34/f5c zero size
2026-03-09T15:01:25.212 INFO:tasks.workunit.client.0.vm05.stdout:5/263: dread - d1/d4/d34/d35/d53/f5a zero size
2026-03-09T15:01:25.216 INFO:tasks.workunit.client.0.vm05.stdout:5/264: dwrite d1/d4/d27/f4f [0,4194304] 0
2026-03-09T15:01:25.229 INFO:tasks.workunit.client.0.vm05.stdout:0/185: symlink d9/l3b 0
2026-03-09T15:01:25.231 INFO:tasks.workunit.client.0.vm05.stdout:0/186: creat d9/de/d12/f3c x:0 0 0
2026-03-09T15:01:25.233 INFO:tasks.workunit.client.0.vm05.stdout:0/187: dread d9/de/d25/f2d [0,4194304] 0
2026-03-09T15:01:25.234 INFO:tasks.workunit.client.0.vm05.stdout:0/188: chown d9/l28 306041633 1
2026-03-09T15:01:25.234 INFO:tasks.workunit.client.0.vm05.stdout:0/189: truncate d9/de/d12/f3c 995384 0
2026-03-09T15:01:25.235 INFO:tasks.workunit.client.0.vm05.stdout:0/190: write d9/de/f21 [4160546,96962] 0
2026-03-09T15:01:25.239 INFO:tasks.workunit.client.0.vm05.stdout:0/191: rename d9/de/f21 to d9/de/f3d 0
2026-03-09T15:01:25.240 INFO:tasks.workunit.client.0.vm05.stdout:0/192: rmdir d9 39
2026-03-09T15:01:25.253 INFO:tasks.workunit.client.0.vm05.stdout:5/265: mkdir d1/d5d 0
2026-03-09T15:01:25.255 INFO:tasks.workunit.client.0.vm05.stdout:5/266: creat d1/f5e x:0 0 0
2026-03-09T15:01:25.258 INFO:tasks.workunit.client.0.vm05.stdout:5/267: creat d1/d4/f5f x:0 0 0
2026-03-09T15:01:25.259 INFO:tasks.workunit.client.0.vm05.stdout:5/268: mknod d1/d5d/c60 0
2026-03-09T15:01:25.259 INFO:tasks.workunit.client.0.vm05.stdout:5/269: write d1/f6 [4027414,122586] 0
2026-03-09T15:01:25.260 INFO:tasks.workunit.client.0.vm05.stdout:5/270: chown d1/d4/d34/d35/f36 6821098 1
2026-03-09T15:01:25.262 INFO:tasks.workunit.client.0.vm05.stdout:5/271: creat d1/d4/d34/d35/d3d/f61 x:0 0 0
2026-03-09T15:01:25.274 INFO:tasks.workunit.client.0.vm05.stdout:5/272: dread d1/d4/d34/d35/f44 [0,4194304] 0
2026-03-09T15:01:25.277 INFO:tasks.workunit.client.0.vm05.stdout:5/273: link d1/d5d/c60 d1/c62 0
2026-03-09T15:01:25.281 INFO:tasks.workunit.client.0.vm05.stdout:5/274: dwrite d1/d4/d34/d35/d3d/d38/f40 [0,4194304] 0
2026-03-09T15:01:25.299 INFO:tasks.workunit.client.0.vm05.stdout:2/262: dread da/f21 [0,4194304] 0
2026-03-09T15:01:25.299 INFO:tasks.workunit.client.0.vm05.stdout:2/263: dread - da/d13/d2f/d35/f3a zero size
2026-03-09T15:01:25.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:25 vm05.local ceph-mon[50611]: pgmap v168: 65 pgs: 65 active+clean; 1.7 GiB data, 6.2 GiB used, 114 GiB / 120 GiB avail; 1.8 MiB/s rd, 29 MiB/s wr, 189 op/s
2026-03-09T15:01:25.306 INFO:tasks.workunit.client.0.vm05.stdout:5/275: sync
2026-03-09T15:01:25.309 INFO:tasks.workunit.client.0.vm05.stdout:5/276: dwrite d1/f5e [0,4194304] 0
2026-03-09T15:01:25.312 INFO:tasks.workunit.client.0.vm05.stdout:5/277: chown d1/l2 7875 1
2026-03-09T15:01:25.312 INFO:tasks.workunit.client.0.vm05.stdout:5/278: write d1/d4/f5f [275401,42663] 0
2026-03-09T15:01:25.314 INFO:tasks.workunit.client.0.vm05.stdout:5/279: write d1/f3 [4364373,127887] 0
2026-03-09T15:01:25.315 INFO:tasks.workunit.client.0.vm05.stdout:4/208: mkdir d2/d43 0
2026-03-09T15:01:25.320 INFO:tasks.workunit.client.0.vm05.stdout:5/280: mkdir d1/d4/d34/d35/d3d/d38/d63 0
2026-03-09T15:01:25.320 INFO:tasks.workunit.client.0.vm05.stdout:4/209: dread d2/f33 [4194304,4194304] 0
2026-03-09T15:01:25.330 INFO:tasks.workunit.client.0.vm05.stdout:5/281: symlink d1/d4/l64 0
2026-03-09T15:01:25.331 INFO:tasks.workunit.client.0.vm05.stdout:4/210: link d2/d4/d7/d21/l32 d2/d4/d1e/l44 0
2026-03-09T15:01:25.333 INFO:tasks.workunit.client.0.vm05.stdout:4/211: creat d2/d4/d7/dc/f45 x:0 0 0
2026-03-09T15:01:25.335 INFO:tasks.workunit.client.0.vm05.stdout:5/282: sync
2026-03-09T15:01:25.336 INFO:tasks.workunit.client.0.vm05.stdout:5/283: dread - d1/d4/d34/f5c zero size
2026-03-09T15:01:25.337 INFO:tasks.workunit.client.0.vm05.stdout:4/212: symlink d2/d43/l46 0
2026-03-09T15:01:25.340 
INFO:tasks.workunit.client.0.vm05.stdout:5/284: link d1/d4/d34/d35/f44 d1/d4/d34/f65 0 2026-03-09T15:01:25.341 INFO:tasks.workunit.client.0.vm05.stdout:4/213: dwrite d2/d1d/f36 [0,4194304] 0 2026-03-09T15:01:25.341 INFO:tasks.workunit.client.0.vm05.stdout:5/285: truncate d1/d4/d34/d35/f4d 919844 0 2026-03-09T15:01:25.346 INFO:tasks.workunit.client.0.vm05.stdout:4/214: creat d2/d43/f47 x:0 0 0 2026-03-09T15:01:25.349 INFO:tasks.workunit.client.0.vm05.stdout:4/215: sync 2026-03-09T15:01:25.350 INFO:tasks.workunit.client.0.vm05.stdout:5/286: dread d1/d4/d34/d35/d3d/d38/f40 [0,4194304] 0 2026-03-09T15:01:25.354 INFO:tasks.workunit.client.0.vm05.stdout:5/287: write d1/d4/f20 [965225,92042] 0 2026-03-09T15:01:25.356 INFO:tasks.workunit.client.0.vm05.stdout:4/216: mkdir d2/d4/d7/d48 0 2026-03-09T15:01:25.357 INFO:tasks.workunit.client.0.vm05.stdout:4/217: write d2/d4/f15 [3323742,2857] 0 2026-03-09T15:01:25.358 INFO:tasks.workunit.client.0.vm05.stdout:4/218: dread d2/f14 [0,4194304] 0 2026-03-09T15:01:25.363 INFO:tasks.workunit.client.0.vm05.stdout:4/219: mkdir d2/d49 0 2026-03-09T15:01:25.364 INFO:tasks.workunit.client.0.vm05.stdout:5/288: dwrite d1/d4/d34/f65 [0,4194304] 0 2026-03-09T15:01:25.368 INFO:tasks.workunit.client.0.vm05.stdout:4/220: mkdir d2/d4/d8/d4a 0 2026-03-09T15:01:25.372 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:25 vm09.local ceph-mon[59673]: pgmap v168: 65 pgs: 65 active+clean; 1.7 GiB data, 6.2 GiB used, 114 GiB / 120 GiB avail; 1.8 MiB/s rd, 29 MiB/s wr, 189 op/s 2026-03-09T15:01:25.376 INFO:tasks.workunit.client.0.vm05.stdout:4/221: dwrite d2/f33 [0,4194304] 0 2026-03-09T15:01:25.382 INFO:tasks.workunit.client.0.vm05.stdout:5/289: creat d1/f66 x:0 0 0 2026-03-09T15:01:25.382 INFO:tasks.workunit.client.0.vm05.stdout:5/290: fdatasync d1/f9 0 2026-03-09T15:01:25.383 INFO:tasks.workunit.client.0.vm05.stdout:4/222: mknod d2/d4/d7/d21/c4b 0 2026-03-09T15:01:25.387 INFO:tasks.workunit.client.0.vm05.stdout:5/291: read - d1/da/f4a zero size 
2026-03-09T15:01:25.398 INFO:tasks.workunit.client.0.vm05.stdout:5/292: mkdir d1/d4/d67 0 2026-03-09T15:01:25.400 INFO:tasks.workunit.client.0.vm05.stdout:4/223: dwrite d2/d4/d7/dc/f27 [0,4194304] 0 2026-03-09T15:01:25.410 INFO:tasks.workunit.client.0.vm05.stdout:5/293: getdents d1/d4/d19 0 2026-03-09T15:01:25.411 INFO:tasks.workunit.client.0.vm05.stdout:5/294: mkdir d1/d4/d34/d56/d68 0 2026-03-09T15:01:25.414 INFO:tasks.workunit.client.0.vm05.stdout:5/295: mkdir d1/d4/d34/d35/d3d/d38/d69 0 2026-03-09T15:01:25.415 INFO:tasks.workunit.client.0.vm05.stdout:5/296: read - d1/d4/d34/d35/f52 zero size 2026-03-09T15:01:25.424 INFO:tasks.workunit.client.0.vm05.stdout:4/224: dread d2/d4/d7/f9 [0,4194304] 0 2026-03-09T15:01:25.425 INFO:tasks.workunit.client.0.vm05.stdout:4/225: write d2/d4/d1e/f40 [1673476,11014] 0 2026-03-09T15:01:25.430 INFO:tasks.workunit.client.0.vm05.stdout:4/226: unlink d2/d4/d7/d21/l32 0 2026-03-09T15:01:25.431 INFO:tasks.workunit.client.0.vm05.stdout:4/227: creat d2/d4/d7/d48/f4c x:0 0 0 2026-03-09T15:01:25.432 INFO:tasks.workunit.client.0.vm05.stdout:4/228: creat d2/d49/f4d x:0 0 0 2026-03-09T15:01:25.434 INFO:tasks.workunit.client.0.vm05.stdout:4/229: creat d2/d4/f4e x:0 0 0 2026-03-09T15:01:25.438 INFO:tasks.workunit.client.0.vm05.stdout:4/230: dwrite d2/f33 [4194304,4194304] 0 2026-03-09T15:01:25.440 INFO:tasks.workunit.client.0.vm05.stdout:4/231: chown d2/l2a 121233 1 2026-03-09T15:01:25.441 INFO:tasks.workunit.client.0.vm05.stdout:4/232: creat d2/d43/f4f x:0 0 0 2026-03-09T15:01:25.445 INFO:tasks.workunit.client.0.vm05.stdout:4/233: mkdir d2/d4/d50 0 2026-03-09T15:01:25.446 INFO:tasks.workunit.client.0.vm05.stdout:4/234: chown d2/d4/c2c 7 1 2026-03-09T15:01:25.448 INFO:tasks.workunit.client.0.vm05.stdout:4/235: fsync d2/d4/d8/f13 0 2026-03-09T15:01:25.468 INFO:tasks.workunit.client.0.vm05.stdout:1/210: unlink d9/c19 0 2026-03-09T15:01:25.468 INFO:tasks.workunit.client.0.vm05.stdout:1/211: dread - d9/d2a/f2d zero size 2026-03-09T15:01:25.469 
INFO:tasks.workunit.client.0.vm05.stdout:1/212: rename d9/d2a/f2d to d9/d2a/f4e 0 2026-03-09T15:01:25.470 INFO:tasks.workunit.client.0.vm05.stdout:1/213: fdatasync d9/f15 0 2026-03-09T15:01:25.475 INFO:tasks.workunit.client.0.vm05.stdout:8/295: rename d0/d1/d12/f5a to d0/d1/d12/d1b/f67 0 2026-03-09T15:01:25.477 INFO:tasks.workunit.client.0.vm05.stdout:7/259: rename d1/d12/l1e to d1/d9/d23/l4b 0 2026-03-09T15:01:25.480 INFO:tasks.workunit.client.0.vm05.stdout:7/260: creat d1/d9/d23/f4c x:0 0 0 2026-03-09T15:01:25.483 INFO:tasks.workunit.client.0.vm05.stdout:8/296: truncate d0/d7/f33 565449 0 2026-03-09T15:01:25.485 INFO:tasks.workunit.client.0.vm05.stdout:3/232: rename l2 to d3/l4e 0 2026-03-09T15:01:25.485 INFO:tasks.workunit.client.0.vm05.stdout:3/233: stat d3/df/d10/c15 0 2026-03-09T15:01:25.491 INFO:tasks.workunit.client.0.vm05.stdout:7/261: dwrite d1/d9/fc [4194304,4194304] 0 2026-03-09T15:01:25.492 INFO:tasks.workunit.client.0.vm05.stdout:4/236: rename d2/d4/f17 to d2/d43/f51 0 2026-03-09T15:01:25.499 INFO:tasks.workunit.client.0.vm05.stdout:4/237: mknod d2/d4/d7/d21/d3d/c52 0 2026-03-09T15:01:25.499 INFO:tasks.workunit.client.0.vm05.stdout:4/238: chown d2/d1d 152 1 2026-03-09T15:01:25.499 INFO:tasks.workunit.client.0.vm05.stdout:4/239: write d2/f3e [435033,32687] 0 2026-03-09T15:01:25.501 INFO:tasks.workunit.client.0.vm05.stdout:7/262: rename d1/d9/d23/d31/f33 to d1/d9/f4d 0 2026-03-09T15:01:25.503 INFO:tasks.workunit.client.0.vm05.stdout:7/263: dread d1/fa [0,4194304] 0 2026-03-09T15:01:25.504 INFO:tasks.workunit.client.0.vm05.stdout:4/240: creat d2/d4/d7/f53 x:0 0 0 2026-03-09T15:01:25.505 INFO:tasks.workunit.client.0.vm05.stdout:4/241: truncate d2/d4/d1e/f40 1952843 0 2026-03-09T15:01:25.506 INFO:tasks.workunit.client.0.vm05.stdout:3/234: dread d3/df/f1b [0,4194304] 0 2026-03-09T15:01:25.506 INFO:tasks.workunit.client.0.vm05.stdout:4/242: dread - d2/d4/d7/d48/f4c zero size 2026-03-09T15:01:25.507 INFO:tasks.workunit.client.0.vm05.stdout:3/235: write 
d3/d29/d2d/f33 [786696,25501] 0 2026-03-09T15:01:25.508 INFO:tasks.workunit.client.0.vm05.stdout:3/236: write d3/d29/f41 [1348260,56962] 0 2026-03-09T15:01:25.511 INFO:tasks.workunit.client.0.vm05.stdout:4/243: chown d2/d4/d1e/c20 222 1 2026-03-09T15:01:25.512 INFO:tasks.workunit.client.0.vm05.stdout:3/237: rmdir d3/d29/d2d 39 2026-03-09T15:01:25.520 INFO:tasks.workunit.client.0.vm05.stdout:8/297: dread d0/f3b [0,4194304] 0 2026-03-09T15:01:25.521 INFO:tasks.workunit.client.0.vm05.stdout:7/264: mknod d1/c4e 0 2026-03-09T15:01:25.523 INFO:tasks.workunit.client.0.vm05.stdout:3/238: creat d3/df/d1e/d2c/f4f x:0 0 0 2026-03-09T15:01:25.524 INFO:tasks.workunit.client.0.vm05.stdout:3/239: write d3/df/d10/d34/f4c [596946,49972] 0 2026-03-09T15:01:25.525 INFO:tasks.workunit.client.0.vm05.stdout:8/298: symlink d0/d1/d12/d3c/l68 0 2026-03-09T15:01:25.526 INFO:tasks.workunit.client.0.vm05.stdout:3/240: mkdir d3/df/d10/d19/d44/d50 0 2026-03-09T15:01:25.528 INFO:tasks.workunit.client.0.vm05.stdout:3/241: symlink d3/df/d1e/d2f/l51 0 2026-03-09T15:01:25.529 INFO:tasks.workunit.client.0.vm05.stdout:8/299: mkdir d0/d2a/d2d/d4b/d69 0 2026-03-09T15:01:25.529 INFO:tasks.workunit.client.0.vm05.stdout:8/300: chown d0/dc 90261 1 2026-03-09T15:01:25.530 INFO:tasks.workunit.client.0.vm05.stdout:3/242: mkdir d3/df/d1e/d2f/d52 0 2026-03-09T15:01:25.533 INFO:tasks.workunit.client.0.vm05.stdout:8/301: creat d0/d1/d55/f6a x:0 0 0 2026-03-09T15:01:25.537 INFO:tasks.workunit.client.0.vm05.stdout:8/302: dwrite d0/d2a/d2d/f41 [0,4194304] 0 2026-03-09T15:01:25.537 INFO:tasks.workunit.client.0.vm05.stdout:8/303: chown d0/d1/d12/d1b/f27 267386420 1 2026-03-09T15:01:25.539 INFO:tasks.workunit.client.0.vm05.stdout:9/248: write d2/fd [1756207,81873] 0 2026-03-09T15:01:25.539 INFO:tasks.workunit.client.0.vm05.stdout:9/249: write d2/f46 [77796,3906] 0 2026-03-09T15:01:25.545 INFO:tasks.workunit.client.0.vm05.stdout:9/250: dread d2/f12 [0,4194304] 0 2026-03-09T15:01:25.548 
INFO:tasks.workunit.client.0.vm05.stdout:3/243: creat d3/df/d10/f53 x:0 0 0 2026-03-09T15:01:25.555 INFO:tasks.workunit.client.0.vm05.stdout:8/304: rename d0/c16 to d0/d2a/d2d/d42/c6b 0 2026-03-09T15:01:25.559 INFO:tasks.workunit.client.0.vm05.stdout:3/244: dwrite d3/df/f1b [0,4194304] 0 2026-03-09T15:01:25.561 INFO:tasks.workunit.client.0.vm05.stdout:3/245: chown d3/df/f14 22513916 1 2026-03-09T15:01:25.566 INFO:tasks.workunit.client.0.vm05.stdout:3/246: dwrite d3/df/d1e/d24/f35 [0,4194304] 0 2026-03-09T15:01:25.579 INFO:tasks.workunit.client.0.vm05.stdout:3/247: readlink d3/lc 0 2026-03-09T15:01:25.579 INFO:tasks.workunit.client.0.vm05.stdout:3/248: write d3/df/f14 [848350,106570] 0 2026-03-09T15:01:25.579 INFO:tasks.workunit.client.0.vm05.stdout:8/305: mknod d0/d1/d12/c6c 0 2026-03-09T15:01:25.579 INFO:tasks.workunit.client.0.vm05.stdout:3/249: unlink d3/l9 0 2026-03-09T15:01:25.580 INFO:tasks.workunit.client.0.vm05.stdout:3/250: dread d3/d29/f30 [0,4194304] 0 2026-03-09T15:01:25.582 INFO:tasks.workunit.client.0.vm05.stdout:3/251: dwrite d3/df/d10/f3f [0,4194304] 0 2026-03-09T15:01:25.583 INFO:tasks.workunit.client.0.vm05.stdout:9/251: sync 2026-03-09T15:01:25.584 INFO:tasks.workunit.client.0.vm05.stdout:9/252: fdatasync d2/d10/f39 0 2026-03-09T15:01:25.584 INFO:tasks.workunit.client.0.vm05.stdout:9/253: fdatasync d2/d10/d22/d2c/f44 0 2026-03-09T15:01:25.585 INFO:tasks.workunit.client.0.vm05.stdout:9/254: write d2/d1a/d1b/f43 [823005,63070] 0 2026-03-09T15:01:25.587 INFO:tasks.workunit.client.0.vm05.stdout:9/255: write d2/d1a/f3e [396030,58701] 0 2026-03-09T15:01:25.590 INFO:tasks.workunit.client.0.vm05.stdout:9/256: read d2/d1a/f40 [3235593,88956] 0 2026-03-09T15:01:25.590 INFO:tasks.workunit.client.0.vm05.stdout:3/252: dwrite d3/df/f4a [0,4194304] 0 2026-03-09T15:01:25.596 INFO:tasks.workunit.client.0.vm05.stdout:9/257: write d2/d10/f26 [4480085,23919] 0 2026-03-09T15:01:25.597 INFO:tasks.workunit.client.0.vm05.stdout:6/238: write da/f1a [548232,34314] 0 
2026-03-09T15:01:25.598 INFO:tasks.workunit.client.0.vm05.stdout:9/258: chown d2/d10/d22/d2c 15092750 1 2026-03-09T15:01:25.599 INFO:tasks.workunit.client.0.vm05.stdout:9/259: read d2/f12 [162057,26231] 0 2026-03-09T15:01:25.602 INFO:tasks.workunit.client.0.vm05.stdout:8/306: fsync d0/d2a/d2d/d54/f64 0 2026-03-09T15:01:25.612 INFO:tasks.workunit.client.0.vm05.stdout:9/260: unlink d2/d10/d22/d2c/d3c/f41 0 2026-03-09T15:01:25.613 INFO:tasks.workunit.client.0.vm05.stdout:6/239: dread da/d17/f20 [0,4194304] 0 2026-03-09T15:01:25.614 INFO:tasks.workunit.client.0.vm05.stdout:6/240: read da/d17/f1d [2123984,72170] 0 2026-03-09T15:01:25.616 INFO:tasks.workunit.client.0.vm05.stdout:9/261: rename d2/d10/d22/d2c/c33 to d2/d10/d22/d2c/c4d 0 2026-03-09T15:01:25.619 INFO:tasks.workunit.client.0.vm05.stdout:9/262: dwrite d2/f12 [0,4194304] 0 2026-03-09T15:01:25.622 INFO:tasks.workunit.client.0.vm05.stdout:6/241: rename da/c2e to da/d43/c49 0 2026-03-09T15:01:25.629 INFO:tasks.workunit.client.0.vm05.stdout:3/253: creat d3/d29/f54 x:0 0 0 2026-03-09T15:01:25.630 INFO:tasks.workunit.client.0.vm05.stdout:9/263: rename d2/d1a to d2/d4e 0 2026-03-09T15:01:25.630 INFO:tasks.workunit.client.0.vm05.stdout:6/242: creat da/d17/d3b/f4a x:0 0 0 2026-03-09T15:01:25.633 INFO:tasks.workunit.client.0.vm05.stdout:3/254: symlink d3/df/l55 0 2026-03-09T15:01:25.636 INFO:tasks.workunit.client.0.vm05.stdout:8/307: dwrite d0/d24/f30 [0,4194304] 0 2026-03-09T15:01:25.651 INFO:tasks.workunit.client.0.vm05.stdout:8/308: dread d0/f4 [0,4194304] 0 2026-03-09T15:01:25.651 INFO:tasks.workunit.client.0.vm05.stdout:8/309: chown d0/d1/d12/d1b/d66 123963689 1 2026-03-09T15:01:25.653 INFO:tasks.workunit.client.0.vm05.stdout:8/310: symlink d0/d2a/d2d/d4b/l6d 0 2026-03-09T15:01:25.663 INFO:tasks.workunit.client.0.vm05.stdout:6/243: dread da/d17/f2c [0,4194304] 0 2026-03-09T15:01:25.669 INFO:tasks.workunit.client.0.vm05.stdout:6/244: dwrite da/f10 [0,4194304] 0 2026-03-09T15:01:25.670 
INFO:tasks.workunit.client.0.vm05.stdout:6/245: mknod da/d43/c4b 0 2026-03-09T15:01:25.721 INFO:tasks.workunit.client.0.vm05.stdout:0/193: getdents d9 0 2026-03-09T15:01:25.724 INFO:tasks.workunit.client.0.vm05.stdout:0/194: creat d9/de/f3e x:0 0 0 2026-03-09T15:01:25.728 INFO:tasks.workunit.client.0.vm05.stdout:0/195: rename d9/l28 to d9/de/d12/d15/d2e/l3f 0 2026-03-09T15:01:25.732 INFO:tasks.workunit.client.0.vm05.stdout:1/214: stat d9/d17/f26 0 2026-03-09T15:01:25.735 INFO:tasks.workunit.client.0.vm05.stdout:1/215: rename d9/d3d/d49/f4d to d9/d2f/f4f 0 2026-03-09T15:01:25.736 INFO:tasks.workunit.client.0.vm05.stdout:1/216: truncate d9/d2f/f43 76769 0 2026-03-09T15:01:25.737 INFO:tasks.workunit.client.0.vm05.stdout:1/217: truncate d9/d3d/f42 752533 0 2026-03-09T15:01:25.737 INFO:tasks.workunit.client.0.vm05.stdout:1/218: dread - d9/d3d/f32 zero size 2026-03-09T15:01:25.741 INFO:tasks.workunit.client.0.vm05.stdout:1/219: truncate f7 3897053 0 2026-03-09T15:01:25.741 INFO:tasks.workunit.client.0.vm05.stdout:1/220: chown d9/f23 92 1 2026-03-09T15:01:25.742 INFO:tasks.workunit.client.0.vm05.stdout:1/221: write d9/d2f/f3a [802195,79796] 0 2026-03-09T15:01:25.746 INFO:tasks.workunit.client.0.vm05.stdout:2/264: dwrite da/d16/f20 [0,4194304] 0 2026-03-09T15:01:25.750 INFO:tasks.workunit.client.0.vm05.stdout:1/222: creat d9/d2a/f50 x:0 0 0 2026-03-09T15:01:25.750 INFO:tasks.workunit.client.0.vm05.stdout:1/223: chown d9/d3d/l3c 998054664 1 2026-03-09T15:01:25.753 INFO:tasks.workunit.client.0.vm05.stdout:1/224: truncate d9/d3d/f32 917290 0 2026-03-09T15:01:25.758 INFO:tasks.workunit.client.0.vm05.stdout:1/225: dread - d9/d2a/f44 zero size 2026-03-09T15:01:25.758 INFO:tasks.workunit.client.0.vm05.stdout:1/226: stat d9/d3d 0 2026-03-09T15:01:25.760 INFO:tasks.workunit.client.0.vm05.stdout:2/265: creat da/d13/f4b x:0 0 0 2026-03-09T15:01:25.763 INFO:tasks.workunit.client.0.vm05.stdout:2/266: dwrite da/d13/f4b [0,4194304] 0 2026-03-09T15:01:25.772 
INFO:tasks.workunit.client.0.vm05.stdout:2/267: symlink da/d29/d45/l4c 0 2026-03-09T15:01:25.774 INFO:tasks.workunit.client.0.vm05.stdout:2/268: dread da/f21 [0,4194304] 0 2026-03-09T15:01:25.775 INFO:tasks.workunit.client.0.vm05.stdout:2/269: symlink da/d16/l4d 0 2026-03-09T15:01:25.777 INFO:tasks.workunit.client.0.vm05.stdout:2/270: creat da/f4e x:0 0 0 2026-03-09T15:01:25.807 INFO:tasks.workunit.client.0.vm05.stdout:2/271: sync 2026-03-09T15:01:25.808 INFO:tasks.workunit.client.0.vm05.stdout:2/272: sync 2026-03-09T15:01:25.812 INFO:tasks.workunit.client.0.vm05.stdout:2/273: dwrite da/d13/d30/f34 [0,4194304] 0 2026-03-09T15:01:25.819 INFO:tasks.workunit.client.0.vm05.stdout:2/274: dwrite da/d13/d30/f34 [0,4194304] 0 2026-03-09T15:01:25.826 INFO:tasks.workunit.client.0.vm05.stdout:5/297: write d1/d4/f43 [2762470,96202] 0 2026-03-09T15:01:25.826 INFO:tasks.workunit.client.0.vm05.stdout:5/298: readlink d1/da/l22 0 2026-03-09T15:01:25.827 INFO:tasks.workunit.client.0.vm05.stdout:5/299: fdatasync d1/da/fe 0 2026-03-09T15:01:25.831 INFO:tasks.workunit.client.0.vm05.stdout:5/300: write d1/d4/d34/d35/f52 [90444,65309] 0 2026-03-09T15:01:25.831 INFO:tasks.workunit.client.0.vm05.stdout:5/301: stat d1/d4/f20 0 2026-03-09T15:01:25.833 INFO:tasks.workunit.client.0.vm05.stdout:5/302: read d1/f3 [9449,74886] 0 2026-03-09T15:01:25.843 INFO:tasks.workunit.client.0.vm05.stdout:5/303: rmdir d1/d4/d67 0 2026-03-09T15:01:25.857 INFO:tasks.workunit.client.0.vm05.stdout:1/227: stat d9/d2a/f4e 0 2026-03-09T15:01:25.857 INFO:tasks.workunit.client.0.vm05.stdout:1/228: stat d9/d2f/f43 0 2026-03-09T15:01:25.860 INFO:tasks.workunit.client.0.vm05.stdout:1/229: dwrite d9/d3d/f32 [0,4194304] 0 2026-03-09T15:01:25.871 INFO:tasks.workunit.client.0.vm05.stdout:7/265: readlink d1/d9/d23/l4b 0 2026-03-09T15:01:25.876 INFO:tasks.workunit.client.0.vm05.stdout:7/266: rename d1/d9/d23/d31/l40 to d1/d12/l4f 0 2026-03-09T15:01:25.877 INFO:tasks.workunit.client.0.vm05.stdout:7/267: creat d1/d49/f50 x:0 0 0 
2026-03-09T15:01:25.895 INFO:tasks.workunit.client.0.vm05.stdout:4/244: write d2/d4/d7/dc/f18 [725540,86828] 0 2026-03-09T15:01:25.896 INFO:tasks.workunit.client.0.vm05.stdout:4/245: read d2/f1b [401823,104089] 0 2026-03-09T15:01:25.897 INFO:tasks.workunit.client.0.vm05.stdout:4/246: write d2/d4/f4e [985310,48151] 0 2026-03-09T15:01:25.903 INFO:tasks.workunit.client.0.vm05.stdout:4/247: creat d2/d4/d7/dc/f54 x:0 0 0 2026-03-09T15:01:25.906 INFO:tasks.workunit.client.0.vm05.stdout:7/268: dread d1/d9/fd [0,4194304] 0 2026-03-09T15:01:25.906 INFO:tasks.workunit.client.0.vm05.stdout:7/269: truncate d1/d9/f4d 761864 0 2026-03-09T15:01:25.913 INFO:tasks.workunit.client.0.vm05.stdout:7/270: rename d1/d9/d3f to d1/d9/d23/d31/d51 0 2026-03-09T15:01:25.915 INFO:tasks.workunit.client.0.vm05.stdout:4/248: mknod d2/c55 0 2026-03-09T15:01:25.922 INFO:tasks.workunit.client.0.vm05.stdout:4/249: creat d2/d49/f56 x:0 0 0 2026-03-09T15:01:25.923 INFO:tasks.workunit.client.0.vm05.stdout:7/271: fdatasync d1/d9/f10 0 2026-03-09T15:01:25.923 INFO:tasks.workunit.client.0.vm05.stdout:7/272: write d1/d12/f11 [838820,62187] 0 2026-03-09T15:01:25.924 INFO:tasks.workunit.client.0.vm05.stdout:7/273: truncate d1/d22/f47 517684 0 2026-03-09T15:01:25.925 INFO:tasks.workunit.client.0.vm05.stdout:7/274: fdatasync d1/d49/f50 0 2026-03-09T15:01:25.934 INFO:tasks.workunit.client.0.vm05.stdout:4/250: rename d2/d4/d8/l26 to d2/d4/d7/l57 0 2026-03-09T15:01:26.037 INFO:tasks.workunit.client.0.vm05.stdout:7/275: dread d1/f45 [0,4194304] 0 2026-03-09T15:01:26.046 INFO:tasks.workunit.client.0.vm05.stdout:7/276: creat d1/d9/f52 x:0 0 0 2026-03-09T15:01:26.047 INFO:tasks.workunit.client.0.vm05.stdout:9/264: truncate d2/d4e/d1b/d23/d37/f36 4081384 0 2026-03-09T15:01:26.051 INFO:tasks.workunit.client.0.vm05.stdout:8/311: write d0/d1/d12/d1b/f67 [136176,31414] 0 2026-03-09T15:01:26.053 INFO:tasks.workunit.client.0.vm05.stdout:8/312: dread d0/d1/d12/d1b/f34 [0,4194304] 0 2026-03-09T15:01:26.054 
INFO:tasks.workunit.client.0.vm05.stdout:6/246: truncate da/d17/f1d 966154 0 2026-03-09T15:01:26.055 INFO:tasks.workunit.client.0.vm05.stdout:7/277: mknod d1/d49/c53 0 2026-03-09T15:01:26.058 INFO:tasks.workunit.client.0.vm05.stdout:9/265: creat d2/d4e/d1b/f4f x:0 0 0 2026-03-09T15:01:26.058 INFO:tasks.workunit.client.0.vm05.stdout:9/266: chown d2/d10/d22 86 1 2026-03-09T15:01:26.059 INFO:tasks.workunit.client.0.vm05.stdout:9/267: write d2/d10/d22/d2c/f44 [928667,57606] 0 2026-03-09T15:01:26.065 INFO:tasks.workunit.client.0.vm05.stdout:8/313: fdatasync d0/f3b 0 2026-03-09T15:01:26.065 INFO:tasks.workunit.client.0.vm05.stdout:6/247: mknod da/d19/c4c 0 2026-03-09T15:01:26.065 INFO:tasks.workunit.client.0.vm05.stdout:8/314: stat d0/dc/l61 0 2026-03-09T15:01:26.066 INFO:tasks.workunit.client.0.vm05.stdout:6/248: readlink da/d17/l21 0 2026-03-09T15:01:26.066 INFO:tasks.workunit.client.0.vm05.stdout:6/249: truncate da/f41 512852 0 2026-03-09T15:01:26.068 INFO:tasks.workunit.client.0.vm05.stdout:8/315: dwrite d0/d1/f49 [0,4194304] 0 2026-03-09T15:01:26.072 INFO:tasks.workunit.client.0.vm05.stdout:7/278: unlink d1/fa 0 2026-03-09T15:01:26.078 INFO:tasks.workunit.client.0.vm05.stdout:9/268: symlink d2/d10/d22/l50 0 2026-03-09T15:01:26.078 INFO:tasks.workunit.client.0.vm05.stdout:9/269: readlink d2/d10/d22/l50 0 2026-03-09T15:01:26.081 INFO:tasks.workunit.client.0.vm05.stdout:9/270: dwrite d2/d10/d22/d2c/f3a [0,4194304] 0 2026-03-09T15:01:26.084 INFO:tasks.workunit.client.0.vm05.stdout:0/196: dwrite d9/f35 [0,4194304] 0 2026-03-09T15:01:26.093 INFO:tasks.workunit.client.0.vm05.stdout:9/271: dwrite d2/d10/d15/f18 [0,4194304] 0 2026-03-09T15:01:26.099 INFO:tasks.workunit.client.0.vm05.stdout:9/272: dread d2/f13 [0,4194304] 0 2026-03-09T15:01:26.100 INFO:tasks.workunit.client.0.vm05.stdout:9/273: write d2/d10/f26 [3271228,130117] 0 2026-03-09T15:01:26.100 INFO:tasks.workunit.client.0.vm05.stdout:9/274: readlink d2/d10/d22/d2c/d3c/l3f 0 2026-03-09T15:01:26.104 
INFO:tasks.workunit.client.0.vm05.stdout:2/275: write da/f2c [737116,48740] 0 2026-03-09T15:01:26.108 INFO:tasks.workunit.client.0.vm05.stdout:6/250: dwrite da/d17/f20 [0,4194304] 0 2026-03-09T15:01:26.111 INFO:tasks.workunit.client.0.vm05.stdout:6/251: fsync da/d19/f35 0 2026-03-09T15:01:26.120 INFO:tasks.workunit.client.0.vm05.stdout:6/252: rename da/d17 to da/d17/d4d 22 2026-03-09T15:01:26.123 INFO:tasks.workunit.client.0.vm05.stdout:8/316: dread d0/fa [0,4194304] 0 2026-03-09T15:01:26.125 INFO:tasks.workunit.client.0.vm05.stdout:5/304: getdents d1/d4 0 2026-03-09T15:01:26.125 INFO:tasks.workunit.client.0.vm05.stdout:5/305: dread - d1/d4/d34/d35/d3d/f61 zero size 2026-03-09T15:01:26.132 INFO:tasks.workunit.client.0.vm05.stdout:0/197: write d9/de/d12/d15/d2e/f3a [523908,24444] 0 2026-03-09T15:01:26.137 INFO:tasks.workunit.client.0.vm05.stdout:9/275: creat d2/d4e/f51 x:0 0 0 2026-03-09T15:01:26.140 INFO:tasks.workunit.client.0.vm05.stdout:2/276: mknod da/d16/d46/c4f 0 2026-03-09T15:01:26.143 INFO:tasks.workunit.client.0.vm05.stdout:2/277: dwrite da/d29/d3f/f43 [0,4194304] 0 2026-03-09T15:01:26.150 INFO:tasks.workunit.client.0.vm05.stdout:1/230: dwrite d9/f21 [0,4194304] 0 2026-03-09T15:01:26.158 INFO:tasks.workunit.client.0.vm05.stdout:1/231: fdatasync d9/d2a/f3f 0 2026-03-09T15:01:26.158 INFO:tasks.workunit.client.0.vm05.stdout:6/253: mknod da/d17/d3b/c4e 0 2026-03-09T15:01:26.160 INFO:tasks.workunit.client.0.vm05.stdout:5/306: creat d1/d4/d34/f6a x:0 0 0 2026-03-09T15:01:26.166 INFO:tasks.workunit.client.0.vm05.stdout:9/276: mkdir d2/d10/d22/d52 0 2026-03-09T15:01:26.167 INFO:tasks.workunit.client.0.vm05.stdout:9/277: readlink d2/d10/d22/l50 0 2026-03-09T15:01:26.171 INFO:tasks.workunit.client.0.vm05.stdout:2/278: rename da/f1b to da/d16/d46/f50 0 2026-03-09T15:01:26.184 INFO:tasks.workunit.client.0.vm05.stdout:4/251: write d2/d4/d8/f13 [1147564,83617] 0 2026-03-09T15:01:26.192 INFO:tasks.workunit.client.0.vm05.stdout:7/279: dread d1/d9/d23/d31/d51/f39 
[0,4194304] 0 2026-03-09T15:01:26.197 INFO:tasks.workunit.client.0.vm05.stdout:1/232: creat d9/d3d/d49/f51 x:0 0 0 2026-03-09T15:01:26.199 INFO:tasks.workunit.client.0.vm05.stdout:6/254: fsync da/d17/f29 0 2026-03-09T15:01:26.201 INFO:tasks.workunit.client.0.vm05.stdout:6/255: write da/d19/f22 [25502,10879] 0 2026-03-09T15:01:26.202 INFO:tasks.workunit.client.0.vm05.stdout:4/252: dread d2/d4/f4e [0,4194304] 0 2026-03-09T15:01:26.213 INFO:tasks.workunit.client.0.vm05.stdout:4/253: dwrite d2/d49/f4d [0,4194304] 0 2026-03-09T15:01:26.223 INFO:tasks.workunit.client.0.vm05.stdout:2/279: symlink da/d29/d45/l51 0 2026-03-09T15:01:26.238 INFO:tasks.workunit.client.0.vm05.stdout:5/307: creat d1/f6b x:0 0 0 2026-03-09T15:01:26.239 INFO:tasks.workunit.client.0.vm05.stdout:4/254: chown d2/d4/d1e/l38 5906 1 2026-03-09T15:01:26.240 INFO:tasks.workunit.client.0.vm05.stdout:7/280: truncate d1/f15 1685571 0 2026-03-09T15:01:26.240 INFO:tasks.workunit.client.0.vm05.stdout:5/308: mkdir d1/d4/d34/d6c 0 2026-03-09T15:01:26.243 INFO:tasks.workunit.client.0.vm05.stdout:4/255: rmdir d2/d4/d1e 39 2026-03-09T15:01:26.258 INFO:tasks.workunit.client.0.vm05.stdout:7/281: mkdir d1/d9/d23/d54 0 2026-03-09T15:01:26.258 INFO:tasks.workunit.client.0.vm05.stdout:5/309: creat d1/d4/d34/d56/f6d x:0 0 0 2026-03-09T15:01:26.258 INFO:tasks.workunit.client.0.vm05.stdout:7/282: stat d1/d9/l27 0 2026-03-09T15:01:26.258 INFO:tasks.workunit.client.0.vm05.stdout:5/310: dwrite d1/d4/d34/f5c [0,4194304] 0 2026-03-09T15:01:26.258 INFO:tasks.workunit.client.0.vm05.stdout:4/256: mknod d2/d4/d8/d4a/c58 0 2026-03-09T15:01:26.258 INFO:tasks.workunit.client.0.vm05.stdout:5/311: chown d1/d4/f43 249470 1 2026-03-09T15:01:26.258 INFO:tasks.workunit.client.0.vm05.stdout:7/283: dwrite d1/d22/f47 [0,4194304] 0 2026-03-09T15:01:26.260 INFO:tasks.workunit.client.0.vm05.stdout:5/312: dwrite d1/d4/d34/d35/f52 [0,4194304] 0 2026-03-09T15:01:26.266 INFO:tasks.workunit.client.0.vm05.stdout:5/313: dwrite d1/d4/d19/f29 [0,4194304] 0 
2026-03-09T15:01:26.299 INFO:tasks.workunit.client.0.vm05.stdout:4/257: dwrite d2/d4/d7/dc/f27 [4194304,4194304] 0 2026-03-09T15:01:26.299 INFO:tasks.workunit.client.0.vm05.stdout:7/284: creat d1/d9/d23/d31/f55 x:0 0 0 2026-03-09T15:01:26.299 INFO:tasks.workunit.client.0.vm05.stdout:5/314: stat d1/d4/d19/c3c 0 2026-03-09T15:01:26.299 INFO:tasks.workunit.client.0.vm05.stdout:4/258: dwrite d2/d43/f47 [0,4194304] 0 2026-03-09T15:01:26.299 INFO:tasks.workunit.client.0.vm05.stdout:5/315: truncate d1/d4/d34/d35/f36 1061434 0 2026-03-09T15:01:26.299 INFO:tasks.workunit.client.0.vm05.stdout:7/285: rmdir d1/d22 39 2026-03-09T15:01:26.304 INFO:tasks.workunit.client.0.vm05.stdout:5/316: dwrite d1/f1d [0,4194304] 0 2026-03-09T15:01:26.306 INFO:tasks.workunit.client.0.vm05.stdout:5/317: read - d1/f66 zero size 2026-03-09T15:01:26.314 INFO:tasks.workunit.client.0.vm05.stdout:4/259: symlink d2/d4/d50/l59 0 2026-03-09T15:01:26.328 INFO:tasks.workunit.client.0.vm05.stdout:4/260: write d2/d43/f51 [5529635,121507] 0 2026-03-09T15:01:26.328 INFO:tasks.workunit.client.0.vm05.stdout:7/286: fsync d1/f28 0 2026-03-09T15:01:26.328 INFO:tasks.workunit.client.0.vm05.stdout:7/287: chown d1/d9/d23/d31/f37 619515 1 2026-03-09T15:01:26.328 INFO:tasks.workunit.client.0.vm05.stdout:5/318: rename d1/d4/d34/d35/d3d/d38/f40 to d1/d4/d34/d35/d3d/d38/f6e 0 2026-03-09T15:01:26.328 INFO:tasks.workunit.client.0.vm05.stdout:5/319: read d1/d4/d27/f4f [481353,12797] 0 2026-03-09T15:01:26.328 INFO:tasks.workunit.client.0.vm05.stdout:5/320: read - d1/d4/f55 zero size 2026-03-09T15:01:26.328 INFO:tasks.workunit.client.0.vm05.stdout:4/261: creat d2/d4/d7/d48/f5a x:0 0 0 2026-03-09T15:01:26.328 INFO:tasks.workunit.client.0.vm05.stdout:4/262: write d2/d4/d7/dc/f54 [239401,104981] 0 2026-03-09T15:01:26.328 INFO:tasks.workunit.client.0.vm05.stdout:4/263: write d2/d43/f4f [223952,61414] 0 2026-03-09T15:01:26.328 INFO:tasks.workunit.client.0.vm05.stdout:2/280: sync 2026-03-09T15:01:26.328 
INFO:tasks.workunit.client.0.vm05.stdout:1/233: sync
2026-03-09T15:01:26.333 INFO:tasks.workunit.client.0.vm05.stdout:4/264: dwrite d2/d4/d8/f13 [0,4194304] 0
2026-03-09T15:01:26.342 INFO:tasks.workunit.client.0.vm05.stdout:7/288: rename d1/d9/f4d to d1/d12/f56 0
2026-03-09T15:01:26.348 INFO:tasks.workunit.client.0.vm05.stdout:3/255: truncate d3/df/d1e/d24/f35 1946368 0
2026-03-09T15:01:26.349 INFO:tasks.workunit.client.0.vm05.stdout:1/234: symlink d9/d2f/l52 0
2026-03-09T15:01:26.349 INFO:tasks.workunit.client.0.vm05.stdout:1/235: readlink d9/d3d/d49/d48/l2e 0
2026-03-09T15:01:26.359 INFO:tasks.workunit.client.0.vm05.stdout:9/278: dread d2/d4e/f3e [0,4194304] 0
2026-03-09T15:01:26.364 INFO:tasks.workunit.client.0.vm05.stdout:9/279: dwrite d2/d4e/d1b/f43 [0,4194304] 0
2026-03-09T15:01:26.384 INFO:tasks.workunit.client.0.vm05.stdout:4/265: dread d2/f1b [0,4194304] 0
2026-03-09T15:01:26.385 INFO:tasks.workunit.client.0.vm05.stdout:4/266: chown d2/d4 10651968 1
2026-03-09T15:01:26.385 INFO:tasks.workunit.client.0.vm05.stdout:4/267: dread d2/f14 [0,4194304] 0
2026-03-09T15:01:26.388 INFO:tasks.workunit.client.0.vm05.stdout:3/256: readlink d3/df/d10/d34/l36 0
2026-03-09T15:01:26.393 INFO:tasks.workunit.client.0.vm05.stdout:2/281: symlink da/l52 0
2026-03-09T15:01:26.400 INFO:tasks.workunit.client.0.vm05.stdout:1/236: readlink d9/d17/l1c 0
2026-03-09T15:01:26.406 INFO:tasks.workunit.client.0.vm05.stdout:2/282: dread da/d16/f1e [0,4194304] 0
2026-03-09T15:01:26.411 INFO:tasks.workunit.client.0.vm05.stdout:8/317: write d0/f10 [3879525,116493] 0
2026-03-09T15:01:26.417 INFO:tasks.workunit.client.0.vm05.stdout:0/198: dread d9/de/d12/d15/d2e/f3a [0,4194304] 0
2026-03-09T15:01:26.424 INFO:tasks.workunit.client.0.vm05.stdout:0/199: write d9/de/f20 [3520653,50183] 0
2026-03-09T15:01:26.424 INFO:tasks.workunit.client.0.vm05.stdout:2/283: dread f4 [4194304,4194304] 0
2026-03-09T15:01:26.424 INFO:tasks.workunit.client.0.vm05.stdout:9/280: unlink d2/d10/f26 0
2026-03-09T15:01:26.426 INFO:tasks.workunit.client.0.vm05.stdout:4/268: mknod d2/d4/d7/dc/c5b 0
2026-03-09T15:01:26.430 INFO:tasks.workunit.client.0.vm05.stdout:6/256: write da/d17/f29 [196630,89729] 0
2026-03-09T15:01:26.433 INFO:tasks.workunit.client.0.vm05.stdout:5/321: truncate d1/d4/d34/d35/f44 2346968 0
2026-03-09T15:01:26.442 INFO:tasks.workunit.client.0.vm05.stdout:8/318: mkdir d0/d1/d12/d1b/d6e 0
2026-03-09T15:01:26.452 INFO:tasks.workunit.client.0.vm05.stdout:0/200: readlink l8 0
2026-03-09T15:01:26.453 INFO:tasks.workunit.client.0.vm05.stdout:2/284: fsync da/dd/ff 0
2026-03-09T15:01:26.462 INFO:tasks.workunit.client.0.vm05.stdout:4/269: dwrite d2/d4/d7/f9 [0,4194304] 0
2026-03-09T15:01:26.466 INFO:tasks.workunit.client.0.vm05.stdout:0/201: read d9/de/f19 [1358110,90486] 0
2026-03-09T15:01:26.468 INFO:tasks.workunit.client.0.vm05.stdout:4/270: stat d2/d4/d8/d4a 0
2026-03-09T15:01:26.473 INFO:tasks.workunit.client.0.vm05.stdout:6/257: creat da/d17/d3b/f4f x:0 0 0
2026-03-09T15:01:26.474 INFO:tasks.workunit.client.0.vm05.stdout:1/237: unlink d9/l45 0
2026-03-09T15:01:26.474 INFO:tasks.workunit.client.0.vm05.stdout:6/258: fdatasync da/d17/f29 0
2026-03-09T15:01:26.482 INFO:tasks.workunit.client.0.vm05.stdout:4/271: creat d2/d1d/f5c x:0 0 0
2026-03-09T15:01:26.504 INFO:tasks.workunit.client.0.vm05.stdout:4/272: chown d2/c55 13 1
2026-03-09T15:01:26.504 INFO:tasks.workunit.client.0.vm05.stdout:4/273: read - d2/d1d/f5c zero size
2026-03-09T15:01:26.504 INFO:tasks.workunit.client.0.vm05.stdout:5/322: getdents d1/d4/d34/d35/d3d/d38/d69 0
2026-03-09T15:01:26.504 INFO:tasks.workunit.client.0.vm05.stdout:4/274: rmdir d2/d4/d7/dc/d2b 39
2026-03-09T15:01:26.504 INFO:tasks.workunit.client.0.vm05.stdout:3/257: getdents d3/df/d10/d34 0
2026-03-09T15:01:26.504 INFO:tasks.workunit.client.0.vm05.stdout:5/323: dread d1/d4/d34/d35/f4d [0,4194304] 0
2026-03-09T15:01:26.504 INFO:tasks.workunit.client.0.vm05.stdout:5/324: chown d1/d4/d27/f57 1 1
2026-03-09T15:01:26.504 INFO:tasks.workunit.client.0.vm05.stdout:2/285: link da/d13/d2f/d35/l3e da/dd/l53 0
2026-03-09T15:01:26.505 INFO:tasks.workunit.client.0.vm05.stdout:4/275: readlink d2/d4/d7/l22 0
2026-03-09T15:01:26.505 INFO:tasks.workunit.client.0.vm05.stdout:4/276: write d2/d1d/f36 [3145841,80789] 0
2026-03-09T15:01:26.507 INFO:tasks.workunit.client.0.vm05.stdout:3/258: unlink d3/df/d10/d19/d44/l47 0
2026-03-09T15:01:26.511 INFO:tasks.workunit.client.0.vm05.stdout:1/238: dread f7 [0,4194304] 0
2026-03-09T15:01:26.511 INFO:tasks.workunit.client.0.vm05.stdout:1/239: read - d9/d2a/f39 zero size
2026-03-09T15:01:26.512 INFO:tasks.workunit.client.0.vm05.stdout:1/240: truncate d9/d2f/f3a 1376732 0
2026-03-09T15:01:26.512 INFO:tasks.workunit.client.0.vm05.stdout:8/319: getdents d0/d1/d55 0
2026-03-09T15:01:26.517 INFO:tasks.workunit.client.0.vm05.stdout:3/259: creat d3/df/d10/d19/d44/f56 x:0 0 0
2026-03-09T15:01:26.517 INFO:tasks.workunit.client.0.vm05.stdout:5/325: mkdir d1/d4/d34/d35/d4e/d6f 0
2026-03-09T15:01:26.519 INFO:tasks.workunit.client.0.vm05.stdout:1/241: dread d9/d17/f26 [0,4194304] 0
2026-03-09T15:01:26.521 INFO:tasks.workunit.client.0.vm05.stdout:4/277: mkdir d2/d4/d7/dc/d2b/d5d 0
2026-03-09T15:01:26.522 INFO:tasks.workunit.client.0.vm05.stdout:3/260: chown d3/df/c46 288180895 1
2026-03-09T15:01:26.523 INFO:tasks.workunit.client.0.vm05.stdout:5/326: symlink d1/d4/d34/d35/d3d/d38/l70 0
2026-03-09T15:01:26.527 INFO:tasks.workunit.client.0.vm05.stdout:2/286: truncate da/d29/f2d 6299866 0
2026-03-09T15:01:26.528 INFO:tasks.workunit.client.0.vm05.stdout:0/202: sync
2026-03-09T15:01:26.532 INFO:tasks.workunit.client.0.vm05.stdout:2/287: dwrite da/dd/f25 [0,4194304] 0
2026-03-09T15:01:26.536 INFO:tasks.workunit.client.0.vm05.stdout:1/242: rename d9/le to d9/d17/l53 0
2026-03-09T15:01:26.539 INFO:tasks.workunit.client.0.vm05.stdout:0/203: creat d9/de/d12/d15/d2e/f40 x:0 0 0
2026-03-09T15:01:26.540 INFO:tasks.workunit.client.0.vm05.stdout:0/204: truncate d9/de/d12/d15/d2e/f40 321577 0
2026-03-09T15:01:26.542 INFO:tasks.workunit.client.0.vm05.stdout:4/278: fdatasync d2/d1d/f36 0
2026-03-09T15:01:26.543 INFO:tasks.workunit.client.0.vm05.stdout:4/279: chown d2 480807 1
2026-03-09T15:01:26.544 INFO:tasks.workunit.client.0.vm05.stdout:3/261: creat d3/df/d1e/d2f/d52/f57 x:0 0 0
2026-03-09T15:01:26.545 INFO:tasks.workunit.client.0.vm05.stdout:3/262: chown d3/d29 21 1
2026-03-09T15:01:26.550 INFO:tasks.workunit.client.0.vm05.stdout:4/280: dwrite d2/d4/d7/f53 [0,4194304] 0
2026-03-09T15:01:26.554 INFO:tasks.workunit.client.0.vm05.stdout:7/289: write d1/d9/d23/d31/d32/f3a [4418032,113828] 0
2026-03-09T15:01:26.556 INFO:tasks.workunit.client.0.vm05.stdout:3/263: sync
2026-03-09T15:01:26.564 INFO:tasks.workunit.client.0.vm05.stdout:5/327: dread d1/ff [4194304,4194304] 0
2026-03-09T15:01:26.567 INFO:tasks.workunit.client.0.vm05.stdout:5/328: dread - d1/d4/d34/d35/d3d/f61 zero size
2026-03-09T15:01:26.570 INFO:tasks.workunit.client.0.vm05.stdout:1/243: unlink d9/d3d/d33/f4c 0
2026-03-09T15:01:26.573 INFO:tasks.workunit.client.0.vm05.stdout:2/288: rename f4 to da/d16/d46/f54 0
2026-03-09T15:01:26.576 INFO:tasks.workunit.client.0.vm05.stdout:0/205: write d9/f22 [1541229,97563] 0
2026-03-09T15:01:26.577 INFO:tasks.workunit.client.0.vm05.stdout:0/206: chown d9/de/d12/d15/d2e 839 1
2026-03-09T15:01:26.581 INFO:tasks.workunit.client.0.vm05.stdout:6/259: dwrite da/d17/d3b/f3f [0,4194304] 0
2026-03-09T15:01:26.585 INFO:tasks.workunit.client.0.vm05.stdout:6/260: dwrite da/fb [4194304,4194304] 0
2026-03-09T15:01:26.586 INFO:tasks.workunit.client.0.vm05.stdout:6/261: chown da/d17/d3b/f4a 4 1
2026-03-09T15:01:26.586 INFO:tasks.workunit.client.0.vm05.stdout:6/262: fsync da/d17/d3b/f47 0
2026-03-09T15:01:26.588 INFO:tasks.workunit.client.0.vm05.stdout:4/281: unlink d2/d4/d7/d48/f4c 0
2026-03-09T15:01:26.588 INFO:tasks.workunit.client.0.vm05.stdout:4/282: fsync d2/d4/d7/f2d 0
2026-03-09T15:01:26.591 INFO:tasks.workunit.client.0.vm05.stdout:7/290: symlink d1/d9/d23/d31/d51/l57 0
2026-03-09T15:01:26.595 INFO:tasks.workunit.client.0.vm05.stdout:9/281: write d2/d4e/d1b/d23/d37/f36 [3142960,121532] 0
2026-03-09T15:01:26.612 INFO:tasks.workunit.client.0.vm05.stdout:8/320: dwrite d0/fa [0,4194304] 0
2026-03-09T15:01:26.615 INFO:tasks.workunit.client.0.vm05.stdout:1/244: mknod d9/d2f/d37/c54 0
2026-03-09T15:01:26.618 INFO:tasks.workunit.client.0.vm05.stdout:3/264: rename d3/f13 to d3/df/d10/d19/f58 0
2026-03-09T15:01:26.619 INFO:tasks.workunit.client.0.vm05.stdout:3/265: chown d3/df/d10/f2a 28 1
2026-03-09T15:01:26.633 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:26 vm09.local ceph-mon[59673]: pgmap v169: 65 pgs: 65 active+clean; 1.8 GiB data, 6.5 GiB used, 113 GiB / 120 GiB avail; 4.6 MiB/s rd, 54 MiB/s wr, 244 op/s
2026-03-09T15:01:26.634 INFO:tasks.workunit.client.0.vm05.stdout:0/207: mkdir d9/de/d25/d38/d41 0
2026-03-09T15:01:26.640 INFO:tasks.workunit.client.0.vm05.stdout:7/291: creat d1/d9/d23/d31/d32/f58 x:0 0 0
2026-03-09T15:01:26.642 INFO:tasks.workunit.client.0.vm05.stdout:9/282: mkdir d2/d4e/d1b/d23/d53 0
2026-03-09T15:01:26.643 INFO:tasks.workunit.client.0.vm05.stdout:9/283: write d2/f8 [1146094,19690] 0
2026-03-09T15:01:26.647 INFO:tasks.workunit.client.0.vm05.stdout:9/284: dwrite d2/d4e/d1b/f43 [0,4194304] 0
2026-03-09T15:01:26.652 INFO:tasks.workunit.client.0.vm05.stdout:8/321: write d0/d2a/f2e [363330,10824] 0
2026-03-09T15:01:26.656 INFO:tasks.workunit.client.0.vm05.stdout:8/322: dwrite d0/fa [0,4194304] 0
2026-03-09T15:01:26.658 INFO:tasks.workunit.client.0.vm05.stdout:5/329: mknod d1/d4/d34/d56/d68/c71 0
2026-03-09T15:01:26.661 INFO:tasks.workunit.client.0.vm05.stdout:1/245: unlink d9/d3d/l3c 0
2026-03-09T15:01:26.661 INFO:tasks.workunit.client.0.vm05.stdout:1/246: chown d9/d17/c28 193691 1
2026-03-09T15:01:26.669 INFO:tasks.workunit.client.0.vm05.stdout:6/263: mknod da/c50 0
2026-03-09T15:01:26.670 INFO:tasks.workunit.client.0.vm05.stdout:6/264: write da/d17/d3b/f47 [30237,26410] 0
2026-03-09T15:01:26.672 INFO:tasks.workunit.client.0.vm05.stdout:4/283: mknod d2/d4/d7/dc/c5e 0
2026-03-09T15:01:26.674 INFO:tasks.workunit.client.0.vm05.stdout:6/265: dwrite da/d17/f2a [0,4194304] 0
2026-03-09T15:01:26.677 INFO:tasks.workunit.client.0.vm05.stdout:8/323: dread d0/d2a/d2d/d54/f5b [0,4194304] 0
2026-03-09T15:01:26.677 INFO:tasks.workunit.client.0.vm05.stdout:8/324: dread - d0/dc/f15 zero size
2026-03-09T15:01:26.688 INFO:tasks.workunit.client.0.vm05.stdout:9/285: rename d2/d4e/d1b/d23/c30 to d2/d10/d22/d2c/c54 0
2026-03-09T15:01:26.688 INFO:tasks.workunit.client.0.vm05.stdout:6/266: rename da/d17 to da/d17/d51 22
2026-03-09T15:01:26.689 INFO:tasks.workunit.client.0.vm05.stdout:6/267: chown da/d17/d3b/f3f 822604360 1
2026-03-09T15:01:26.691 INFO:tasks.workunit.client.0.vm05.stdout:9/286: dwrite d2/d4e/d1b/d23/d37/f36 [0,4194304] 0
2026-03-09T15:01:26.698 INFO:tasks.workunit.client.0.vm05.stdout:5/330: dwrite d1/da/f2f [0,4194304] 0
2026-03-09T15:01:26.706 INFO:tasks.workunit.client.0.vm05.stdout:3/266: mkdir d3/df/d59 0
2026-03-09T15:01:26.706 INFO:tasks.workunit.client.0.vm05.stdout:2/289: link da/c1d da/d16/d46/c55 0
2026-03-09T15:01:26.707 INFO:tasks.workunit.client.0.vm05.stdout:2/290: read da/d16/f20 [1731093,55287] 0
2026-03-09T15:01:26.707 INFO:tasks.workunit.client.0.vm05.stdout:2/291: fsync da/f4e 0
2026-03-09T15:01:26.709 INFO:tasks.workunit.client.0.vm05.stdout:4/284: rmdir d2/d4/d50 39
2026-03-09T15:01:26.709 INFO:tasks.workunit.client.0.vm05.stdout:4/285: read - d2/d49/f56 zero size
2026-03-09T15:01:26.709 INFO:tasks.workunit.client.0.vm05.stdout:4/286: fsync d2/d1d/f36 0
2026-03-09T15:01:26.711 INFO:tasks.workunit.client.0.vm05.stdout:8/325: mkdir d0/d1/d12/d1b/d66/d6f 0
2026-03-09T15:01:26.714 INFO:tasks.workunit.client.0.vm05.stdout:6/268: creat da/d19/f52 x:0 0 0
2026-03-09T15:01:26.715 INFO:tasks.workunit.client.0.vm05.stdout:6/269: write da/d17/f2a [2979992,12204] 0
2026-03-09T15:01:26.715 INFO:tasks.workunit.client.0.vm05.stdout:6/270: fdatasync da/d17/f29 0
2026-03-09T15:01:26.716 INFO:tasks.workunit.client.0.vm05.stdout:6/271: chown da/d17/f2d 1 1
2026-03-09T15:01:26.719 INFO:tasks.workunit.client.0.vm05.stdout:5/331: rename d1/d4/d34/c41 to d1/d4/d34/d6c/c72 0
2026-03-09T15:01:26.720 INFO:tasks.workunit.client.0.vm05.stdout:5/332: write d1/f6 [1577051,127138] 0
2026-03-09T15:01:26.722 INFO:tasks.workunit.client.0.vm05.stdout:3/267: read d3/df/f4a [3814986,75870] 0
2026-03-09T15:01:26.731 INFO:tasks.workunit.client.0.vm05.stdout:9/287: dread d2/f6 [4194304,4194304] 0
2026-03-09T15:01:26.735 INFO:tasks.workunit.client.0.vm05.stdout:9/288: dwrite d2/d10/d22/d2c/f44 [0,4194304] 0
2026-03-09T15:01:26.739 INFO:tasks.workunit.client.0.vm05.stdout:0/208: creat d9/f42 x:0 0 0
2026-03-09T15:01:26.739 INFO:tasks.workunit.client.0.vm05.stdout:2/292: rmdir da/d13/d2f/d35 39
2026-03-09T15:01:26.740 INFO:tasks.workunit.client.0.vm05.stdout:4/287: mknod d2/d1d/c5f 0
2026-03-09T15:01:26.751 INFO:tasks.workunit.client.0.vm05.stdout:7/292: creat d1/d9/f59 x:0 0 0
2026-03-09T15:01:26.753 INFO:tasks.workunit.client.0.vm05.stdout:8/326: truncate d0/f4 3509104 0
2026-03-09T15:01:26.753 INFO:tasks.workunit.client.0.vm05.stdout:8/327: read d0/d7/f14 [950321,259] 0
2026-03-09T15:01:26.758 INFO:tasks.workunit.client.0.vm05.stdout:8/328: dwrite d0/fa [0,4194304] 0
2026-03-09T15:01:26.763 INFO:tasks.workunit.client.0.vm05.stdout:6/272: write da/d17/f30 [966061,112462] 0
2026-03-09T15:01:26.763 INFO:tasks.workunit.client.0.vm05.stdout:6/273: write da/d19/f22 [1860221,32812] 0
2026-03-09T15:01:26.770 INFO:tasks.workunit.client.0.vm05.stdout:5/333: write d1/ff [4024967,83434] 0
2026-03-09T15:01:26.777 INFO:tasks.workunit.client.0.vm05.stdout:9/289: rename d2/fd to d2/d10/d22/d2c/d3c/f55 0
2026-03-09T15:01:26.778 INFO:tasks.workunit.client.0.vm05.stdout:0/209: stat d9/de/d12/d15/f36 0
2026-03-09T15:01:26.783 INFO:tasks.workunit.client.0.vm05.stdout:7/293: dwrite d1/d9/d23/d31/d51/f39 [0,4194304] 0
2026-03-09T15:01:26.785 INFO:tasks.workunit.client.0.vm05.stdout:7/294: readlink d1/d12/l1d 0
2026-03-09T15:01:26.788 INFO:tasks.workunit.client.0.vm05.stdout:7/295: dwrite d1/d9/fc [4194304,4194304] 0
2026-03-09T15:01:26.789 INFO:tasks.workunit.client.0.vm05.stdout:7/296: read - d1/d9/f52 zero size
2026-03-09T15:01:26.797 INFO:tasks.workunit.client.0.vm05.stdout:7/297: dwrite d1/d12/f20 [0,4194304] 0
2026-03-09T15:01:26.798 INFO:tasks.workunit.client.0.vm05.stdout:7/298: chown d1/d9/l46 0 1
2026-03-09T15:01:26.798 INFO:tasks.workunit.client.0.vm05.stdout:7/299: stat d1/d9 0
2026-03-09T15:01:26.801 INFO:tasks.workunit.client.0.vm05.stdout:7/300: dwrite d1/d9/d23/d31/f55 [0,4194304] 0
2026-03-09T15:01:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:26 vm05.local ceph-mon[50611]: pgmap v169: 65 pgs: 65 active+clean; 1.8 GiB data, 6.5 GiB used, 113 GiB / 120 GiB avail; 4.6 MiB/s rd, 54 MiB/s wr, 244 op/s
2026-03-09T15:01:26.820 INFO:tasks.workunit.client.0.vm05.stdout:6/274: creat da/d17/f53 x:0 0 0
2026-03-09T15:01:26.820 INFO:tasks.workunit.client.0.vm05.stdout:6/275: readlink da/d19/l39 0
2026-03-09T15:01:26.824 INFO:tasks.workunit.client.0.vm05.stdout:6/276: dread da/d17/d3b/f3f [0,4194304] 0
2026-03-09T15:01:26.824 INFO:tasks.workunit.client.0.vm05.stdout:6/277: write da/d19/f35 [188830,12667] 0
2026-03-09T15:01:26.831 INFO:tasks.workunit.client.0.vm05.stdout:5/334: unlink d1/d4/f43 0
2026-03-09T15:01:26.835 INFO:tasks.workunit.client.0.vm05.stdout:5/335: dwrite d1/f9 [4194304,4194304] 0
2026-03-09T15:01:26.854 INFO:tasks.workunit.client.0.vm05.stdout:2/293: mknod da/d13/c56 0
2026-03-09T15:01:26.854 INFO:tasks.workunit.client.0.vm05.stdout:6/278: dread da/d17/d3b/f47 [0,4194304] 0
2026-03-09T15:01:26.854 INFO:tasks.workunit.client.0.vm05.stdout:6/279: fsync da/f14 0
2026-03-09T15:01:26.854 INFO:tasks.workunit.client.0.vm05.stdout:6/280: fsync da/f1a 0
2026-03-09T15:01:26.858 INFO:tasks.workunit.client.0.vm05.stdout:4/288: read d2/d4/d1e/f40 [1500108,96569] 0
2026-03-09T15:01:26.867 INFO:tasks.workunit.client.0.vm05.stdout:7/301: chown d1/d22 2150130 1
2026-03-09T15:01:26.868 INFO:tasks.workunit.client.0.vm05.stdout:7/302: write d1/d9/d23/d31/d32/f58 [810705,42154] 0
2026-03-09T15:01:26.875 INFO:tasks.workunit.client.0.vm05.stdout:8/329: write d0/d2a/f2e [946079,101836] 0
2026-03-09T15:01:26.875 INFO:tasks.workunit.client.0.vm05.stdout:8/330: dread - d0/dc/f15 zero size
2026-03-09T15:01:26.875 INFO:tasks.workunit.client.0.vm05.stdout:8/331: chown d0/fa 51 1
2026-03-09T15:01:26.878 INFO:tasks.workunit.client.0.vm05.stdout:9/290: rename d2/d4e/d1b/d23 to d2/d4e/d56 0
2026-03-09T15:01:26.882 INFO:tasks.workunit.client.0.vm05.stdout:1/247: dwrite d9/d2a/f4e [0,4194304] 0
2026-03-09T15:01:26.887 INFO:tasks.workunit.client.0.vm05.stdout:5/336: dread - d1/d4/f55 zero size
2026-03-09T15:01:26.889 INFO:tasks.workunit.client.0.vm05.stdout:0/210: rename c5 to d9/de/d25/c43 0
2026-03-09T15:01:26.889 INFO:tasks.workunit.client.0.vm05.stdout:6/281: sync
2026-03-09T15:01:26.889 INFO:tasks.workunit.client.0.vm05.stdout:4/289: sync
2026-03-09T15:01:26.890 INFO:tasks.workunit.client.0.vm05.stdout:4/290: readlink d2/d4/d8/ld 0
2026-03-09T15:01:26.892 INFO:tasks.workunit.client.0.vm05.stdout:4/291: chown d2/d43/f4f 719929 1
2026-03-09T15:01:26.894 INFO:tasks.workunit.client.0.vm05.stdout:9/291: creat d2/d4e/d56/f57 x:0 0 0
2026-03-09T15:01:26.894 INFO:tasks.workunit.client.0.vm05.stdout:1/248: mkdir d9/d2f/d55 0
2026-03-09T15:01:26.899 INFO:tasks.workunit.client.0.vm05.stdout:8/332: read d0/f3b [711411,88740] 0
2026-03-09T15:01:26.908 INFO:tasks.workunit.client.0.vm05.stdout:2/294: creat da/d13/d2f/d35/f57 x:0 0 0
2026-03-09T15:01:26.916 INFO:tasks.workunit.client.0.vm05.stdout:4/292: sync
2026-03-09T15:01:26.919 INFO:tasks.workunit.client.0.vm05.stdout:0/211: chown d9/lb 0 1
2026-03-09T15:01:26.923 INFO:tasks.workunit.client.0.vm05.stdout:8/333: dread d0/d2a/f2e [0,4194304] 0
2026-03-09T15:01:26.924 INFO:tasks.workunit.client.0.vm05.stdout:6/282: unlink da/l28 0
2026-03-09T15:01:26.925 INFO:tasks.workunit.client.0.vm05.stdout:6/283: chown da/d17/c27 4660892 1
2026-03-09T15:01:26.925 INFO:tasks.workunit.client.0.vm05.stdout:2/295: dread da/f3c [0,4194304] 0
2026-03-09T15:01:26.928 INFO:tasks.workunit.client.0.vm05.stdout:6/284: dread da/d17/f20 [0,4194304] 0
2026-03-09T15:01:26.928 INFO:tasks.workunit.client.0.vm05.stdout:6/285: write da/d19/f45 [373980,45555] 0
2026-03-09T15:01:26.929 INFO:tasks.workunit.client.0.vm05.stdout:6/286: stat da/d19/c37 0
2026-03-09T15:01:26.929 INFO:tasks.workunit.client.0.vm05.stdout:6/287: write da/d19/f45 [494703,130057] 0
2026-03-09T15:01:26.933 INFO:tasks.workunit.client.0.vm05.stdout:8/334: sync
2026-03-09T15:01:26.935 INFO:tasks.workunit.client.0.vm05.stdout:1/249: creat d9/d2a/f56 x:0 0 0
2026-03-09T15:01:26.939 INFO:tasks.workunit.client.0.vm05.stdout:3/268: link d3/l4e d3/df/d1e/d2c/l5a 0
2026-03-09T15:01:26.942 INFO:tasks.workunit.client.0.vm05.stdout:2/296: dread da/d16/f1f [4194304,4194304] 0
2026-03-09T15:01:26.945 INFO:tasks.workunit.client.0.vm05.stdout:5/337: mknod d1/d4/d34/d6c/c73 0
2026-03-09T15:01:26.949 INFO:tasks.workunit.client.0.vm05.stdout:5/338: dread d1/d4/d34/d35/f36 [0,4194304] 0
2026-03-09T15:01:26.949 INFO:tasks.workunit.client.0.vm05.stdout:5/339: write d1/f4c [446600,5872] 0
2026-03-09T15:01:26.951 INFO:tasks.workunit.client.0.vm05.stdout:7/303: rename d1/d9/f13 to d1/d9/d23/f5a 0
2026-03-09T15:01:26.954 INFO:tasks.workunit.client.0.vm05.stdout:4/293: rmdir d2/d4/d1e 39
2026-03-09T15:01:26.955 INFO:tasks.workunit.client.0.vm05.stdout:4/294: read d2/f33 [7373004,102817] 0
2026-03-09T15:01:26.966 INFO:tasks.workunit.client.0.vm05.stdout:1/250: dread d9/f12 [0,4194304] 0
2026-03-09T15:01:26.967 INFO:tasks.workunit.client.0.vm05.stdout:1/251: rename d9 to d9/d3d/d49/d48/d57 22
2026-03-09T15:01:26.983 INFO:tasks.workunit.client.0.vm05.stdout:2/297: symlink da/d13/d30/l58 0
2026-03-09T15:01:26.984 INFO:tasks.workunit.client.0.vm05.stdout:5/340: creat d1/d4/d34/d56/d68/f74 x:0 0 0
2026-03-09T15:01:26.987 INFO:tasks.workunit.client.0.vm05.stdout:7/304: creat d1/d9/d23/d31/d32/f5b x:0 0 0
2026-03-09T15:01:26.988 INFO:tasks.workunit.client.0.vm05.stdout:7/305: truncate d1/d9/d23/d31/f37 828192 0
2026-03-09T15:01:26.990 INFO:tasks.workunit.client.0.vm05.stdout:2/298: sync
2026-03-09T15:01:27.010 INFO:tasks.workunit.client.0.vm05.stdout:1/252: creat d9/d2f/f58 x:0 0 0
2026-03-09T15:01:27.010 INFO:tasks.workunit.client.0.vm05.stdout:1/253: fsync d9/d3d/f42 0
2026-03-09T15:01:27.018 INFO:tasks.workunit.client.0.vm05.stdout:7/306: mknod d1/d49/c5c 0
2026-03-09T15:01:27.019 INFO:tasks.workunit.client.0.vm05.stdout:7/307: write d1/d9/d23/d31/d51/f39 [1636640,105636] 0
2026-03-09T15:01:27.020 INFO:tasks.workunit.client.0.vm05.stdout:2/299: write da/d16/f1f [4050945,124509] 0
2026-03-09T15:01:27.022 INFO:tasks.workunit.client.0.vm05.stdout:9/292: dwrite d2/f17 [0,4194304] 0
2026-03-09T15:01:27.028 INFO:tasks.workunit.client.0.vm05.stdout:9/293: dwrite d2/d10/f39 [4194304,4194304] 0
2026-03-09T15:01:27.029 INFO:tasks.workunit.client.0.vm05.stdout:4/295: symlink d2/d4/l60 0
2026-03-09T15:01:27.040 INFO:tasks.workunit.client.0.vm05.stdout:1/254: rename d9/d3d to d9/d2a/d59 0
2026-03-09T15:01:27.043 INFO:tasks.workunit.client.0.vm05.stdout:1/255: chown d9/d2a/d59/d49/d48/l2e 14 1
2026-03-09T15:01:27.043 INFO:tasks.workunit.client.0.vm05.stdout:1/256: read d9/d2f/f43 [46538,71799] 0
2026-03-09T15:01:27.043 INFO:tasks.workunit.client.0.vm05.stdout:1/257: chown d9/d2f 166244173 1
2026-03-09T15:01:27.047 INFO:tasks.workunit.client.0.vm05.stdout:0/212: write d9/fd [1223228,64527] 0
2026-03-09T15:01:27.054 INFO:tasks.workunit.client.0.vm05.stdout:5/341: mkdir d1/d4/d27/d75 0
2026-03-09T15:01:27.056 INFO:tasks.workunit.client.0.vm05.stdout:6/288: truncate da/d17/f30 704068 0
2026-03-09T15:01:27.058 INFO:tasks.workunit.client.0.vm05.stdout:7/308: rmdir d1/d22 39
2026-03-09T15:01:27.060 INFO:tasks.workunit.client.0.vm05.stdout:9/294: creat d2/d4e/d1b/f58 x:0 0 0
2026-03-09T15:01:27.061 INFO:tasks.workunit.client.0.vm05.stdout:1/258: sync
2026-03-09T15:01:27.064 INFO:tasks.workunit.client.0.vm05.stdout:8/335: write d0/d24/f2c [1059275,129366] 0
2026-03-09T15:01:27.068 INFO:tasks.workunit.client.0.vm05.stdout:4/296: creat d2/d4/d7/d21/f61 x:0 0 0
2026-03-09T15:01:27.069 INFO:tasks.workunit.client.0.vm05.stdout:4/297: read d2/d43/f4f [90376,122709] 0
2026-03-09T15:01:27.074 INFO:tasks.workunit.client.0.vm05.stdout:3/269: dwrite d3/df/f11 [0,4194304] 0
2026-03-09T15:01:27.079 INFO:tasks.workunit.client.0.vm05.stdout:3/270: dwrite d3/d29/f54 [0,4194304] 0
2026-03-09T15:01:27.087 INFO:tasks.workunit.client.0.vm05.stdout:0/213: mknod d9/de/d25/c44 0
2026-03-09T15:01:27.091 INFO:tasks.workunit.client.0.vm05.stdout:5/342: rmdir d1/d4/d34 39
2026-03-09T15:01:27.096 INFO:tasks.workunit.client.0.vm05.stdout:9/295: write d2/d10/f2e [4517948,19316] 0
2026-03-09T15:01:27.098 INFO:tasks.workunit.client.0.vm05.stdout:1/259: mkdir d9/d2f/d37/d5a 0
2026-03-09T15:01:27.098 INFO:tasks.workunit.client.0.vm05.stdout:1/260: fsync d9/d2a/d59/f32 0
2026-03-09T15:01:27.099 INFO:tasks.workunit.client.0.vm05.stdout:8/336: chown d0/d1/c1f 38203 1
2026-03-09T15:01:27.099 INFO:tasks.workunit.client.0.vm05.stdout:8/337: chown d0/dc/c3f 24095 1
2026-03-09T15:01:27.105 INFO:tasks.workunit.client.0.vm05.stdout:4/298: unlink d2/d4/d7/c11 0
2026-03-09T15:01:27.109 INFO:tasks.workunit.client.0.vm05.stdout:4/299: read d2/d4/d7/dc/f54 [56491,64379] 0
2026-03-09T15:01:27.109 INFO:tasks.workunit.client.0.vm05.stdout:0/214: sync
2026-03-09T15:01:27.113 INFO:tasks.workunit.client.0.vm05.stdout:3/271: creat d3/df/d1e/d2f/d52/f5b x:0 0 0
2026-03-09T15:01:27.116 INFO:tasks.workunit.client.0.vm05.stdout:5/343: unlink d1/f6b 0
2026-03-09T15:01:27.124 INFO:tasks.workunit.client.0.vm05.stdout:2/300: rename da/d29/d3f/f43 to da/d13/f59 0
2026-03-09T15:01:27.127 INFO:tasks.workunit.client.0.vm05.stdout:9/296: dread d2/d4e/f40 [0,4194304] 0
2026-03-09T15:01:27.128 INFO:tasks.workunit.client.0.vm05.stdout:0/215: mknod d9/de/d25/d38/c45 0
2026-03-09T15:01:27.131 INFO:tasks.workunit.client.0.vm05.stdout:1/261: creat d9/d2f/d37/d5a/f5b x:0 0 0
2026-03-09T15:01:27.134 INFO:tasks.workunit.client.0.vm05.stdout:3/272: dread d3/df/d10/f28 [0,4194304] 0
2026-03-09T15:01:27.135 INFO:tasks.workunit.client.0.vm05.stdout:0/216: sync
2026-03-09T15:01:27.140 INFO:tasks.workunit.client.0.vm05.stdout:8/338: mknod d0/d1/d12/d1b/d66/d6f/c70 0
2026-03-09T15:01:27.144 INFO:tasks.workunit.client.0.vm05.stdout:7/309: rename d1/d49/c5c to d1/d9/d23/d31/d51/c5d 0
2026-03-09T15:01:27.145 INFO:tasks.workunit.client.0.vm05.stdout:2/301: creat da/d13/d30/f5a x:0 0 0
2026-03-09T15:01:27.146 INFO:tasks.workunit.client.0.vm05.stdout:9/297: rmdir d2/d4e/d56 39
2026-03-09T15:01:27.160 INFO:tasks.workunit.client.0.vm05.stdout:6/289: getdents da/d19 0
2026-03-09T15:01:27.163 INFO:tasks.workunit.client.0.vm05.stdout:1/262: mkdir d9/d2f/d37/d5a/d5c 0
2026-03-09T15:01:27.166 INFO:tasks.workunit.client.0.vm05.stdout:5/344: dread d1/d4/d34/d35/f44 [0,4194304] 0
2026-03-09T15:01:27.167 INFO:tasks.workunit.client.0.vm05.stdout:5/345: chown d1/d4/f55 314 1
2026-03-09T15:01:27.174 INFO:tasks.workunit.client.0.vm05.stdout:3/273: creat d3/df/d1e/f5c x:0 0 0
2026-03-09T15:01:27.176 INFO:tasks.workunit.client.0.vm05.stdout:0/217: truncate d9/f22 9074232 0
2026-03-09T15:01:27.178 INFO:tasks.workunit.client.0.vm05.stdout:8/339: creat d0/d1/d12/d3c/f71 x:0 0 0
2026-03-09T15:01:27.180 INFO:tasks.workunit.client.0.vm05.stdout:7/310: dread - d1/d22/d3c/f44 zero size
2026-03-09T15:01:27.182 INFO:tasks.workunit.client.0.vm05.stdout:2/302: creat da/d29/d3f/f5b x:0 0 0
2026-03-09T15:01:27.185 INFO:tasks.workunit.client.0.vm05.stdout:7/311: dread d1/d9/d23/d31/f55 [0,4194304] 0
2026-03-09T15:01:27.190 INFO:tasks.workunit.client.0.vm05.stdout:9/298: rename d2/d10/d15 to d2/d10/d22/d52/d59 0
2026-03-09T15:01:27.194 INFO:tasks.workunit.client.0.vm05.stdout:9/299: dread d2/d10/d22/d2c/f44 [0,4194304] 0
2026-03-09T15:01:27.197 INFO:tasks.workunit.client.0.vm05.stdout:4/300: creat d2/d4/d7/dc/f62 x:0 0 0
2026-03-09T15:01:27.198 INFO:tasks.workunit.client.0.vm05.stdout:7/312: dread d1/f21 [0,4194304] 0
2026-03-09T15:01:27.205 INFO:tasks.workunit.client.0.vm05.stdout:6/290: dread da/d17/d3b/f3f [0,4194304] 0
2026-03-09T15:01:27.206 INFO:tasks.workunit.client.0.vm05.stdout:7/313: dread d1/d12/f11 [0,4194304] 0
2026-03-09T15:01:27.208 INFO:tasks.workunit.client.0.vm05.stdout:5/346: fsync d1/d4/f20 0
2026-03-09T15:01:27.208 INFO:tasks.workunit.client.0.vm05.stdout:0/218: rmdir d9/de/d12/d15/d2e/d32 39
2026-03-09T15:01:27.216 INFO:tasks.workunit.client.0.vm05.stdout:9/300: truncate d2/f11 6193540 0
2026-03-09T15:01:27.216 INFO:tasks.workunit.client.0.vm05.stdout:4/301: mknod d2/d1d/c63 0
2026-03-09T15:01:27.217 INFO:tasks.workunit.client.0.vm05.stdout:7/314: dwrite d1/d9/d23/d31/f55 [4194304,4194304] 0
2026-03-09T15:01:27.221 INFO:tasks.workunit.client.0.vm05.stdout:6/291: creat da/d43/f54 x:0 0 0
2026-03-09T15:01:27.222 INFO:tasks.workunit.client.0.vm05.stdout:3/274: mknod d3/df/c5d 0
2026-03-09T15:01:27.223 INFO:tasks.workunit.client.0.vm05.stdout:3/275: chown d3/df/l55 94 1
2026-03-09T15:01:27.224 INFO:tasks.workunit.client.0.vm05.stdout:3/276: write d3/f18 [239713,129737] 0
2026-03-09T15:01:27.225 INFO:tasks.workunit.client.0.vm05.stdout:4/302: read d2/d4/f15 [825803,85492] 0
2026-03-09T15:01:27.228 INFO:tasks.workunit.client.0.vm05.stdout:7/315: dwrite d1/d9/d23/d31/d32/f58 [0,4194304] 0
2026-03-09T15:01:27.247 INFO:tasks.workunit.client.0.vm05.stdout:8/340: write d0/d1/d12/d1b/f34 [1931322,329] 0
2026-03-09T15:01:27.248 INFO:tasks.workunit.client.0.vm05.stdout:8/341: chown d0/d1/d12/d1b/l43 1400 1
2026-03-09T15:01:27.254 INFO:tasks.workunit.client.0.vm05.stdout:9/301: dread d2/f1f [0,4194304] 0
2026-03-09T15:01:27.258 INFO:tasks.workunit.client.0.vm05.stdout:6/292: creat da/d17/f55 x:0 0 0
2026-03-09T15:01:27.259 INFO:tasks.workunit.client.0.vm05.stdout:1/263: rmdir d9/d2f/d37/d5a/d5c 0
2026-03-09T15:01:27.264 INFO:tasks.workunit.client.0.vm05.stdout:3/277: symlink d3/df/d10/d19/l5e 0
2026-03-09T15:01:27.265 INFO:tasks.workunit.client.0.vm05.stdout:3/278: write d3/df/d10/f2a [651759,28590] 0
2026-03-09T15:01:27.266 INFO:tasks.workunit.client.0.vm05.stdout:7/316: rmdir d1/d9 39
2026-03-09T15:01:27.268 INFO:tasks.workunit.client.0.vm05.stdout:9/302: mkdir d2/d4e/d1b/d5a 0
2026-03-09T15:01:27.273 INFO:tasks.workunit.client.0.vm05.stdout:3/279: dread - d3/df/f4b zero size
2026-03-09T15:01:27.276 INFO:tasks.workunit.client.0.vm05.stdout:7/317: dread - d1/d9/d23/d31/d51/f3b zero size
2026-03-09T15:01:27.278 INFO:tasks.workunit.client.0.vm05.stdout:8/342: mkdir d0/d72 0
2026-03-09T15:01:27.278 INFO:tasks.workunit.client.0.vm05.stdout:8/343: dread - d0/d2a/d2d/f4d zero size
2026-03-09T15:01:27.282 INFO:tasks.workunit.client.0.vm05.stdout:0/219: rename d9/c1b to d9/de/d12/d15/d2e/c46 0
2026-03-09T15:01:27.283 INFO:tasks.workunit.client.0.vm05.stdout:4/303: creat d2/d4/d7/dc/f64 x:0 0 0
2026-03-09T15:01:27.286 INFO:tasks.workunit.client.0.vm05.stdout:0/220: dwrite d9/de/d12/f23 [0,4194304] 0
2026-03-09T15:01:27.288 INFO:tasks.workunit.client.0.vm05.stdout:3/280: creat d3/df/d10/d34/f5f x:0 0 0
2026-03-09T15:01:27.288 INFO:tasks.workunit.client.0.vm05.stdout:3/281: read - d3/df/d1e/d2c/f4f zero size
2026-03-09T15:01:27.289 INFO:tasks.workunit.client.0.vm05.stdout:7/318: truncate d1/d9/fd 2344646 0
2026-03-09T15:01:27.290 INFO:tasks.workunit.client.0.vm05.stdout:7/319: dread d1/d12/f56 [0,4194304] 0
2026-03-09T15:01:27.291 INFO:tasks.workunit.client.0.vm05.stdout:7/320: dread - d1/d9/f59 zero size
2026-03-09T15:01:27.293 INFO:tasks.workunit.client.0.vm05.stdout:9/303: mkdir d2/d10/d22/d47/d5b 0
2026-03-09T15:01:27.293 INFO:tasks.workunit.client.0.vm05.stdout:9/304: chown d2/f46 5 1
2026-03-09T15:01:27.301 INFO:tasks.workunit.client.0.vm05.stdout:6/293: rename da/d19/f34 to da/d43/f56 0
2026-03-09T15:01:27.304 INFO:tasks.workunit.client.0.vm05.stdout:8/344: mkdir d0/d2a/d2d/d42/d60/d73 0
2026-03-09T15:01:27.305 INFO:tasks.workunit.client.0.vm05.stdout:7/321: unlink d1/d49/f50 0
2026-03-09T15:01:27.308 INFO:tasks.workunit.client.0.vm05.stdout:3/282: dwrite d3/d29/d2d/f31 [0,4194304] 0
2026-03-09T15:01:27.309 INFO:tasks.workunit.client.0.vm05.stdout:9/305: mknod d2/d4e/d1b/c5c 0
2026-03-09T15:01:27.310 INFO:tasks.workunit.client.0.vm05.stdout:3/283: fdatasync d3/df/d10/f53 0
2026-03-09T15:01:27.318 INFO:tasks.workunit.client.0.vm05.stdout:4/304: dread d2/f33 [4194304,4194304] 0
2026-03-09T15:01:27.328 INFO:tasks.workunit.client.0.vm05.stdout:6/294: sync
2026-03-09T15:01:27.336 INFO:tasks.workunit.client.0.vm05.stdout:2/303: truncate da/d13/d30/f41 1007044 0
2026-03-09T15:01:27.341 INFO:tasks.workunit.client.0.vm05.stdout:5/347: dwrite d1/f30 [0,4194304] 0
2026-03-09T15:01:27.346 INFO:tasks.workunit.client.0.vm05.stdout:9/306: mknod d2/d10/d22/d52/d59/c5d 0
2026-03-09T15:01:27.350 INFO:tasks.workunit.client.0.vm05.stdout:3/284: creat d3/df/d10/d19/d44/f60 x:0 0 0
2026-03-09T15:01:27.350 INFO:tasks.workunit.client.0.vm05.stdout:1/264: rename d9/c18 to d9/d2a/d59/d33/c5d 0
2026-03-09T15:01:27.351 INFO:tasks.workunit.client.0.vm05.stdout:2/304: sync
2026-03-09T15:01:27.352 INFO:tasks.workunit.client.0.vm05.stdout:2/305: truncate da/f4e 798022 0
2026-03-09T15:01:27.352 INFO:tasks.workunit.client.0.vm05.stdout:2/306: stat da/dd/ff 0
2026-03-09T15:01:27.353 INFO:tasks.workunit.client.0.vm05.stdout:2/307: read da/f4e [751530,53462] 0
2026-03-09T15:01:27.355 INFO:tasks.workunit.client.0.vm05.stdout:1/265: dwrite d9/d2a/d59/d49/f2c [0,4194304] 0
2026-03-09T15:01:27.358 INFO:tasks.workunit.client.0.vm05.stdout:4/305: unlink d2/d4/d8/l3f 0
2026-03-09T15:01:27.358 INFO:tasks.workunit.client.0.vm05.stdout:8/345: creat d0/d2a/d2d/d42/d60/d73/f74 x:0 0 0
2026-03-09T15:01:27.358 INFO:tasks.workunit.client.0.vm05.stdout:7/322: creat d1/d49/d4a/f5e x:0 0 0
2026-03-09T15:01:27.362 INFO:tasks.workunit.client.0.vm05.stdout:9/307: mknod d2/d10/d22/d52/c5e 0
2026-03-09T15:01:27.364 INFO:tasks.workunit.client.0.vm05.stdout:8/346: dread d0/d2a/d2d/f5f [0,4194304] 0
2026-03-09T15:01:27.372 INFO:tasks.workunit.client.0.vm05.stdout:3/285: read d3/df/d10/d19/f58 [51331,79787] 0
2026-03-09T15:01:27.378 INFO:tasks.workunit.client.0.vm05.stdout:7/323: truncate d1/f45 3943324 0
2026-03-09T15:01:27.380 INFO:tasks.workunit.client.0.vm05.stdout:7/324: dread - d1/d9/d23/f4c zero size
2026-03-09T15:01:27.380 INFO:tasks.workunit.client.0.vm05.stdout:8/347: mkdir d0/d2a/d2d/d42/d60/d75 0
2026-03-09T15:01:27.381 INFO:tasks.workunit.client.0.vm05.stdout:3/286: creat d3/df/d1e/d2f/d52/f61 x:0 0 0
2026-03-09T15:01:27.382 INFO:tasks.workunit.client.0.vm05.stdout:8/348: write d0/d1/d12/d3c/f51 [76425,76422] 0
2026-03-09T15:01:27.386 INFO:tasks.workunit.client.0.vm05.stdout:6/295: link da/d17/f29 da/f57 0
2026-03-09T15:01:27.388 INFO:tasks.workunit.client.0.vm05.stdout:5/348: dwrite d1/f26 [0,4194304] 0
2026-03-09T15:01:27.388 INFO:tasks.workunit.client.0.vm05.stdout:8/349: dread d0/d2a/d2d/d42/f4e [0,4194304] 0
2026-03-09T15:01:27.389 INFO:tasks.workunit.client.0.vm05.stdout:3/287: sync
2026-03-09T15:01:27.392 INFO:tasks.workunit.client.0.vm05.stdout:4/306: link d2/d4/d7/f2d d2/d4/d7/d21/d3d/f65 0
2026-03-09T15:01:27.394 INFO:tasks.workunit.client.0.vm05.stdout:5/349: write d1/f66 [575349,101311] 0
2026-03-09T15:01:27.396 INFO:tasks.workunit.client.0.vm05.stdout:7/325: rename d1/d9/d23/d31/c34 to d1/d22/d3c/c5f 0
2026-03-09T15:01:27.398 INFO:tasks.workunit.client.0.vm05.stdout:5/350: dread - d1/d4/d34/d56/f6d zero size
2026-03-09T15:01:27.399 INFO:tasks.workunit.client.0.vm05.stdout:8/350: dwrite d0/d1/d12/f57 [0,4194304] 0
2026-03-09T15:01:27.400 INFO:tasks.workunit.client.0.vm05.stdout:6/296: dwrite da/d17/f55 [0,4194304] 0
2026-03-09T15:01:27.415 INFO:tasks.workunit.client.0.vm05.stdout:3/288: write d3/df/f14 [1642380,16144] 0
2026-03-09T15:01:27.421 INFO:tasks.workunit.client.0.vm05.stdout:4/307: mkdir d2/d43/d66 0
2026-03-09T15:01:27.427 INFO:tasks.workunit.client.0.vm05.stdout:7/326: rename d1/d22/f47 to d1/d9/d23/d54/f60 0
2026-03-09T15:01:27.428 INFO:tasks.workunit.client.0.vm05.stdout:6/297: creat da/d17/f58 x:0 0 0
2026-03-09T15:01:27.432 INFO:tasks.workunit.client.0.vm05.stdout:0/221: rmdir d9 39
2026-03-09T15:01:27.434 INFO:tasks.workunit.client.0.vm05.stdout:2/308: link da/d16/d46/c55 da/d16/c5c 0
2026-03-09T15:01:27.453 INFO:tasks.workunit.client.0.vm05.stdout:7/327: creat d1/d12/f61 x:0 0 0
2026-03-09T15:01:27.455 INFO:tasks.workunit.client.0.vm05.stdout:6/298: truncate da/d17/d3b/f47 974836 0
2026-03-09T15:01:27.456 INFO:tasks.workunit.client.0.vm05.stdout:6/299: read - da/d17/f53 zero size
2026-03-09T15:01:27.457 INFO:tasks.workunit.client.0.vm05.stdout:1/266: rmdir d9 39
2026-03-09T15:01:27.461 INFO:tasks.workunit.client.0.vm05.stdout:0/222: write d9/de/d25/d38/f2f [242624,121243] 0
2026-03-09T15:01:27.462 INFO:tasks.workunit.client.0.vm05.stdout:2/309: creat da/dd/f5d x:0 0 0
2026-03-09T15:01:27.464 INFO:tasks.workunit.client.0.vm05.stdout:6/300: dread da/f1a [0,4194304] 0
2026-03-09T15:01:27.465 INFO:tasks.workunit.client.0.vm05.stdout:5/351: rename d1/c62 to d1/d4/d34/d35/d3d/d38/d63/c76 0
2026-03-09T15:01:27.468 INFO:tasks.workunit.client.0.vm05.stdout:0/223: creat d9/de/d25/f47 x:0 0 0
2026-03-09T15:01:27.468 INFO:tasks.workunit.client.0.vm05.stdout:2/310: creat da/d16/d46/f5e x:0 0 0
2026-03-09T15:01:27.470 INFO:tasks.workunit.client.0.vm05.stdout:4/308: creat d2/f67 x:0 0 0
2026-03-09T15:01:27.470 INFO:tasks.workunit.client.0.vm05.stdout:6/301: creat da/d43/f59 x:0 0 0
2026-03-09T15:01:27.472 INFO:tasks.workunit.client.0.vm05.stdout:6/302: fdatasync da/d17/f53 0
2026-03-09T15:01:27.474 INFO:tasks.workunit.client.0.vm05.stdout:8/351: rename d0/dc/c29 to d0/d1/d12/d1b/d66/c76 0
2026-03-09T15:01:27.474 INFO:tasks.workunit.client.0.vm05.stdout:0/224: creat d9/de/d25/f48 x:0 0 0
2026-03-09T15:01:27.474 INFO:tasks.workunit.client.0.vm05.stdout:2/311: creat da/d29/d3f/f5f x:0 0 0
2026-03-09T15:01:27.476 INFO:tasks.workunit.client.0.vm05.stdout:8/352: write d0/d2a/d2d/f48 [77737,84234] 0
2026-03-09T15:01:27.477 INFO:tasks.workunit.client.0.vm05.stdout:4/309: dwrite d2/d4/d7/d21/f61 [0,4194304] 0
2026-03-09T15:01:27.482 INFO:tasks.workunit.client.0.vm05.stdout:6/303: unlink da/d17/f53 0
2026-03-09T15:01:27.482 INFO:tasks.workunit.client.0.vm05.stdout:3/289: getdents d3/df/d1e/d2f 0
2026-03-09T15:01:27.482 INFO:tasks.workunit.client.0.vm05.stdout:4/310: truncate d2/d4/d7/d21/f34 93973 0
2026-03-09T15:01:27.483 INFO:tasks.workunit.client.0.vm05.stdout:6/304: truncate da/f18 343969 0
2026-03-09T15:01:27.484 INFO:tasks.workunit.client.0.vm05.stdout:6/305: fsync da/d19/f22 0
2026-03-09T15:01:27.484 INFO:tasks.workunit.client.0.vm05.stdout:4/311: read d2/d4/d7/f2d [555641,54566] 0
2026-03-09T15:01:27.485 INFO:tasks.workunit.client.0.vm05.stdout:6/306: dread - da/d43/f54 zero size
2026-03-09T15:01:27.485 INFO:tasks.workunit.client.0.vm05.stdout:6/307: readlink da/l2b 0
2026-03-09T15:01:27.489 INFO:tasks.workunit.client.0.vm05.stdout:3/290: dwrite d3/df/d10/d34/f4c [0,4194304] 0
2026-03-09T15:01:27.492 INFO:tasks.workunit.client.0.vm05.stdout:1/267: creat d9/d2f/d55/f5e x:0 0 0
2026-03-09T15:01:27.493 INFO:tasks.workunit.client.0.vm05.stdout:3/291: fsync d3/d29/d2d/f33 0
2026-03-09T15:01:27.499 INFO:tasks.workunit.client.0.vm05.stdout:6/308: dwrite da/d43/f59 [0,4194304] 0
2026-03-09T15:01:27.536 INFO:tasks.workunit.client.0.vm05.stdout:9/308: dwrite d2/d4e/f40 [0,4194304] 0
2026-03-09T15:01:27.536 INFO:tasks.workunit.client.0.vm05.stdout:3/292: dwrite d3/df/f11 [0,4194304] 0
2026-03-09T15:01:27.536 INFO:tasks.workunit.client.0.vm05.stdout:2/312: mknod da/d16/d46/c60 0
2026-03-09T15:01:27.536 INFO:tasks.workunit.client.0.vm05.stdout:4/312: chown d2/d4/d50/l59 10 1
2026-03-09T15:01:27.536 INFO:tasks.workunit.client.0.vm05.stdout:0/225: mkdir d9/de/d12/d15/d49 0
2026-03-09T15:01:27.536 INFO:tasks.workunit.client.0.vm05.stdout:9/309: chown d2/d10/l35 63 1
2026-03-09T15:01:27.541 INFO:tasks.workunit.client.0.vm05.stdout:4/313: creat d2/d4/d7/d21/f68 x:0 0 0
2026-03-09T15:01:27.541 INFO:tasks.workunit.client.0.vm05.stdout:1/268: getdents d9/d2a/d59/d49/d4b 0
2026-03-09T15:01:27.541 INFO:tasks.workunit.client.0.vm05.stdout:1/269: chown d9/d2f/f4f 55833092 1
2026-03-09T15:01:27.544 INFO:tasks.workunit.client.0.vm05.stdout:9/310: unlink d2/fe 0
2026-03-09T15:01:27.545 INFO:tasks.workunit.client.0.vm05.stdout:7/328: rename d1/d9/d23/d31/d51/f41 to d1/f62 0
2026-03-09T15:01:27.546 INFO:tasks.workunit.client.0.vm05.stdout:9/311: write d2/d4e/d1b/f58 [539104,27706] 0
2026-03-09T15:01:27.546 INFO:tasks.workunit.client.0.vm05.stdout:0/226: dread d9/f22 [0,4194304] 0
2026-03-09T15:01:27.547 INFO:tasks.workunit.client.0.vm05.stdout:0/227: write d9/de/d12/d15/f36 [3738654,84655] 0
2026-03-09T15:01:27.549 INFO:tasks.workunit.client.0.vm05.stdout:2/313: dread da/d13/d30/f34 [0,4194304] 0
2026-03-09T15:01:27.554 INFO:tasks.workunit.client.0.vm05.stdout:1/270: mkdir d9/d2f/d37/d5f 0
2026-03-09T15:01:27.555 INFO:tasks.workunit.client.0.vm05.stdout:5/352: rename d1/d4/d27/l58 to d1/d5d/l77 0
2026-03-09T15:01:27.556 INFO:tasks.workunit.client.0.vm05.stdout:2/314: unlink da/d13/d2f/d35/f44 0
2026-03-09T15:01:27.562 INFO:tasks.workunit.client.0.vm05.stdout:1/271: chown d9/d2a/d59/d49/c3b 1 1
2026-03-09T15:01:27.584 INFO:tasks.workunit.client.0.vm05.stdout:3/293: rename d3/df/d10/d34/l36 to d3/df/d1e/d24/l62 0
2026-03-09T15:01:27.584 INFO:tasks.workunit.client.0.vm05.stdout:9/312: link d2/d4e/d1b/f4f d2/d10/d22/d47/d5b/f5f 0
2026-03-09T15:01:27.584 INFO:tasks.workunit.client.0.vm05.stdout:2/315: stat da/dd/l53 0
2026-03-09T15:01:27.584 INFO:tasks.workunit.client.0.vm05.stdout:3/294: dread - d3/df/d1e/d2f/d52/f61 zero size
2026-03-09T15:01:27.584 INFO:tasks.workunit.client.0.vm05.stdout:7/329: dwrite d1/d9/d23/d54/f60 [0,4194304] 0
2026-03-09T15:01:27.584 INFO:tasks.workunit.client.0.vm05.stdout:2/316: chown da/d16/f1f 99262 1
2026-03-09T15:01:27.584 INFO:tasks.workunit.client.0.vm05.stdout:5/353: dwrite d1/d4/d34/d35/d3d/d38/f4b [0,4194304] 0
2026-03-09T15:01:27.584 INFO:tasks.workunit.client.0.vm05.stdout:5/354: dwrite d1/d4/d34/d35/f52 [0,4194304] 0
2026-03-09T15:01:27.584 INFO:tasks.workunit.client.0.vm05.stdout:2/317: symlink da/dd/l61 0
2026-03-09T15:01:27.584 INFO:tasks.workunit.client.0.vm05.stdout:3/295: symlink d3/df/d1e/d2f/l63 0
2026-03-09T15:01:27.584 INFO:tasks.workunit.client.0.vm05.stdout:7/330: dwrite d1/d9/f52 [0,4194304] 0
2026-03-09T15:01:27.586 INFO:tasks.workunit.client.0.vm05.stdout:3/296: write d3/df/d1e/d2f/d52/f61 [162308,58954] 0
2026-03-09T15:01:27.586 INFO:tasks.workunit.client.0.vm05.stdout:2/318: chown da/dd/ff 24 1
2026-03-09T15:01:27.590 INFO:tasks.workunit.client.0.vm05.stdout:3/297: dwrite d3/df/d1e/d2f/d52/f61 [0,4194304] 0
2026-03-09T15:01:27.622 INFO:tasks.workunit.client.0.vm05.stdout:3/298: fdatasync d3/df/d1e/d2c/f4f 0
2026-03-09T15:01:27.623 INFO:tasks.workunit.client.0.vm05.stdout:4/314: getdents d2/d4/d7 0
2026-03-09T15:01:27.623 INFO:tasks.workunit.client.0.vm05.stdout:1/272: mknod d9/d2a/d59/d49/d4b/c60 0
2026-03-09T15:01:27.623 INFO:tasks.workunit.client.0.vm05.stdout:3/299: dwrite d3/d29/f41 [0,4194304] 0
2026-03-09T15:01:27.623 INFO:tasks.workunit.client.0.vm05.stdout:3/300: dwrite d3/df/d10/d19/d44/f56 [0,4194304] 0
2026-03-09T15:01:27.623 INFO:tasks.workunit.client.0.vm05.stdout:5/355: write d1/d4/d34/d56/f59 [644094,122240] 0
2026-03-09T15:01:27.623 INFO:tasks.workunit.client.0.vm05.stdout:9/313: rename d2/d4e/d1b/f58 to d2/d4e/d56/d53/f60 0
2026-03-09T15:01:27.623 INFO:tasks.workunit.client.0.vm05.stdout:9/314: chown d2/d4e/d56/d53/f60 1 1 2026-03-09T15:01:27.623 INFO:tasks.workunit.client.0.vm05.stdout:7/331: creat d1/d9/d23/d31/d32/f63 x:0 0 0 2026-03-09T15:01:27.623 INFO:tasks.workunit.client.0.vm05.stdout:9/315: dwrite d2/d10/f2e [0,4194304] 0 2026-03-09T15:01:27.629 INFO:tasks.workunit.client.0.vm05.stdout:1/273: symlink d9/d17/l61 0 2026-03-09T15:01:27.630 INFO:tasks.workunit.client.0.vm05.stdout:1/274: dread d9/d2a/d59/f42 [0,4194304] 0 2026-03-09T15:01:27.631 INFO:tasks.workunit.client.0.vm05.stdout:1/275: dread - d9/d2a/d59/d49/f51 zero size 2026-03-09T15:01:27.633 INFO:tasks.workunit.client.0.vm05.stdout:3/301: symlink d3/df/d10/d34/l64 0 2026-03-09T15:01:27.636 INFO:tasks.workunit.client.0.vm05.stdout:1/276: dwrite d9/d2a/d59/d49/f2c [4194304,4194304] 0 2026-03-09T15:01:27.645 INFO:tasks.workunit.client.0.vm05.stdout:2/319: symlink da/d13/d2f/l62 0 2026-03-09T15:01:27.645 INFO:tasks.workunit.client.0.vm05.stdout:9/316: creat d2/f61 x:0 0 0 2026-03-09T15:01:27.646 INFO:tasks.workunit.client.0.vm05.stdout:1/277: dread d9/d2f/f43 [0,4194304] 0 2026-03-09T15:01:27.646 INFO:tasks.workunit.client.0.vm05.stdout:2/320: read f5 [8171536,103520] 0 2026-03-09T15:01:27.646 INFO:tasks.workunit.client.0.vm05.stdout:1/278: fsync d9/d2f/f58 0 2026-03-09T15:01:27.648 INFO:tasks.workunit.client.0.vm05.stdout:1/279: chown d9/d2f/f3a 9176668 1 2026-03-09T15:01:27.648 INFO:tasks.workunit.client.0.vm05.stdout:1/280: rename d9/d2a to d9/d2a/d59/d49/d4b/d62 22 2026-03-09T15:01:27.655 INFO:tasks.workunit.client.0.vm05.stdout:3/302: getdents d3 0 2026-03-09T15:01:27.698 INFO:tasks.workunit.client.0.vm05.stdout:6/309: sync 2026-03-09T15:01:27.698 INFO:tasks.workunit.client.0.vm05.stdout:0/228: sync 2026-03-09T15:01:27.699 INFO:tasks.workunit.client.0.vm05.stdout:6/310: chown da/d17/f42 2028093 1 2026-03-09T15:01:27.704 INFO:tasks.workunit.client.0.vm05.stdout:0/229: dwrite d9/fd [0,4194304] 0 2026-03-09T15:01:27.710 
INFO:tasks.workunit.client.0.vm05.stdout:6/311: dwrite da/d17/f42 [0,4194304] 0 2026-03-09T15:01:27.715 INFO:tasks.workunit.client.0.vm05.stdout:6/312: readlink da/d19/l36 0 2026-03-09T15:01:27.717 INFO:tasks.workunit.client.0.vm05.stdout:6/313: chown da/d43/f56 32100 1 2026-03-09T15:01:27.724 INFO:tasks.workunit.client.0.vm05.stdout:6/314: getdents da/d19 0 2026-03-09T15:01:27.732 INFO:tasks.workunit.client.0.vm05.stdout:0/230: dread d9/de/f1e [0,4194304] 0 2026-03-09T15:01:27.733 INFO:tasks.workunit.client.0.vm05.stdout:0/231: chown d9/de/d12/d15/d2e/f40 900 1 2026-03-09T15:01:27.733 INFO:tasks.workunit.client.0.vm05.stdout:0/232: stat d9/de/f3d 0 2026-03-09T15:01:27.734 INFO:tasks.workunit.client.0.vm05.stdout:0/233: mknod d9/de/d12/d15/d2e/c4a 0 2026-03-09T15:01:27.735 INFO:tasks.workunit.client.0.vm05.stdout:0/234: truncate d9/de/f20 4825241 0 2026-03-09T15:01:27.739 INFO:tasks.workunit.client.0.vm05.stdout:0/235: dwrite d9/de/d12/f23 [0,4194304] 0 2026-03-09T15:01:27.744 INFO:tasks.workunit.client.0.vm05.stdout:0/236: unlink d9/c33 0 2026-03-09T15:01:27.750 INFO:tasks.workunit.client.0.vm05.stdout:0/237: mknod d9/de/d12/d15/c4b 0 2026-03-09T15:01:27.771 INFO:tasks.workunit.client.0.vm05.stdout:5/356: dread d1/d4/f5f [0,4194304] 0 2026-03-09T15:01:27.773 INFO:tasks.workunit.client.0.vm05.stdout:5/357: link d1/d4/d27/l3b d1/l78 0 2026-03-09T15:01:27.778 INFO:tasks.workunit.client.0.vm05.stdout:5/358: dwrite d1/d4/d34/d35/f47 [0,4194304] 0 2026-03-09T15:01:27.779 INFO:tasks.workunit.client.0.vm05.stdout:5/359: stat d1/da/c39 0 2026-03-09T15:01:27.805 INFO:tasks.workunit.client.0.vm05.stdout:3/303: fsync d3/df/d10/d19/d44/f56 0 2026-03-09T15:01:27.805 INFO:tasks.workunit.client.0.vm05.stdout:3/304: chown d3/f1f 101497 1 2026-03-09T15:01:27.807 INFO:tasks.workunit.client.0.vm05.stdout:3/305: creat d3/df/d10/d19/d44/d50/f65 x:0 0 0 2026-03-09T15:01:27.808 INFO:tasks.workunit.client.0.vm05.stdout:3/306: unlink d3/df/f16 0 2026-03-09T15:01:27.809 
INFO:tasks.workunit.client.0.vm05.stdout:3/307: readlink d3/la 0 2026-03-09T15:01:27.809 INFO:tasks.workunit.client.0.vm05.stdout:3/308: dread - d3/df/d10/d34/f5f zero size 2026-03-09T15:01:27.811 INFO:tasks.workunit.client.0.vm05.stdout:3/309: mkdir d3/d66 0 2026-03-09T15:01:27.811 INFO:tasks.workunit.client.0.vm05.stdout:3/310: fdatasync d3/d29/d2d/f33 0 2026-03-09T15:01:27.816 INFO:tasks.workunit.client.0.vm05.stdout:3/311: dwrite d3/df/d1e/d2c/f4f [0,4194304] 0 2026-03-09T15:01:27.819 INFO:tasks.workunit.client.0.vm05.stdout:3/312: truncate d3/f1f 4430662 0 2026-03-09T15:01:27.819 INFO:tasks.workunit.client.0.vm05.stdout:3/313: chown d3/l4e 410169 1 2026-03-09T15:01:27.823 INFO:tasks.workunit.client.0.vm05.stdout:5/360: dread d1/d4/d34/f5c [0,4194304] 0 2026-03-09T15:01:27.827 INFO:tasks.workunit.client.0.vm05.stdout:3/314: rename d3/df/d1e/d2f/c40 to d3/df/d10/c67 0 2026-03-09T15:01:27.831 INFO:tasks.workunit.client.0.vm05.stdout:5/361: unlink d1/f3 0 2026-03-09T15:01:27.831 INFO:tasks.workunit.client.0.vm05.stdout:5/362: chown d1/d4/d27 907416 1 2026-03-09T15:01:27.832 INFO:tasks.workunit.client.0.vm05.stdout:5/363: write d1/d4/d34/d35/d3d/f32 [3980382,958] 0 2026-03-09T15:01:27.849 INFO:tasks.workunit.client.0.vm05.stdout:5/364: chown d1/d4/d34/d35/d3d/d38/d63/c76 668742 1 2026-03-09T15:01:27.852 INFO:tasks.workunit.client.0.vm05.stdout:5/365: dread d1/d4/d34/d35/f44 [0,4194304] 0 2026-03-09T15:01:27.864 INFO:tasks.workunit.client.0.vm05.stdout:3/315: dread d3/df/d10/f2a [0,4194304] 0 2026-03-09T15:01:27.870 INFO:tasks.workunit.client.0.vm05.stdout:3/316: rename d3/df/d10/d19/l27 to d3/df/d10/l68 0 2026-03-09T15:01:27.871 INFO:tasks.workunit.client.0.vm05.stdout:3/317: symlink d3/df/d59/l69 0 2026-03-09T15:01:27.876 INFO:tasks.workunit.client.0.vm05.stdout:3/318: dwrite d3/df/d1e/d2f/d52/f57 [0,4194304] 0 2026-03-09T15:01:27.881 INFO:tasks.workunit.client.0.vm05.stdout:3/319: rename d3/d29/f30 to d3/df/d1e/d2c/f6a 0 2026-03-09T15:01:27.882 
INFO:tasks.workunit.client.0.vm05.stdout:3/320: chown d3/df/d1e/d2f/l51 5943 1 2026-03-09T15:01:27.886 INFO:tasks.workunit.client.0.vm05.stdout:3/321: symlink d3/df/d1e/d2f/d52/l6b 0 2026-03-09T15:01:27.887 INFO:tasks.workunit.client.0.vm05.stdout:3/322: mknod d3/df/d1e/d2f/d52/c6c 0 2026-03-09T15:01:27.902 INFO:tasks.workunit.client.0.vm05.stdout:3/323: rename d3/df/d10/f53 to d3/d66/f6d 0 2026-03-09T15:01:27.902 INFO:tasks.workunit.client.0.vm05.stdout:3/324: write d3/df/f11 [717499,71985] 0 2026-03-09T15:01:27.904 INFO:tasks.workunit.client.0.vm05.stdout:3/325: truncate d3/df/d10/d34/f5f 223082 0 2026-03-09T15:01:27.904 INFO:tasks.workunit.client.0.vm05.stdout:3/326: dread - d3/df/d1e/f5c zero size 2026-03-09T15:01:27.908 INFO:tasks.workunit.client.0.vm05.stdout:3/327: rename d3/df/d10/f3f to d3/df/d1e/d24/f6e 0 2026-03-09T15:01:27.914 INFO:tasks.workunit.client.0.vm05.stdout:3/328: readlink d3/l4e 0 2026-03-09T15:01:27.915 INFO:tasks.workunit.client.0.vm05.stdout:8/353: dread d0/f4 [0,4194304] 0 2026-03-09T15:01:27.919 INFO:tasks.workunit.client.0.vm05.stdout:3/329: dwrite d3/d29/f54 [0,4194304] 0 2026-03-09T15:01:27.929 INFO:tasks.workunit.client.0.vm05.stdout:3/330: mknod d3/d29/c6f 0 2026-03-09T15:01:27.941 INFO:tasks.workunit.client.0.vm05.stdout:3/331: creat d3/d29/f70 x:0 0 0 2026-03-09T15:01:27.953 INFO:tasks.workunit.client.0.vm05.stdout:3/332: creat d3/d66/f71 x:0 0 0 2026-03-09T15:01:27.953 INFO:tasks.workunit.client.0.vm05.stdout:3/333: write d3/d66/f6d [164224,8509] 0 2026-03-09T15:01:27.954 INFO:tasks.workunit.client.0.vm05.stdout:3/334: rmdir d3/df/d59 39 2026-03-09T15:01:27.954 INFO:tasks.workunit.client.0.vm05.stdout:3/335: fdatasync d3/df/d10/d34/f4c 0 2026-03-09T15:01:27.954 INFO:tasks.workunit.client.0.vm05.stdout:3/336: rename d3/df/d1e/d2c/l3a to d3/df/l72 0 2026-03-09T15:01:27.957 INFO:tasks.workunit.client.0.vm05.stdout:3/337: dwrite d3/df/d1e/d2f/d52/f5b [0,4194304] 0 2026-03-09T15:01:27.964 
INFO:tasks.workunit.client.0.vm05.stdout:3/338: read d3/f17 [932226,58322] 0 2026-03-09T15:01:27.965 INFO:tasks.workunit.client.0.vm05.stdout:8/354: sync 2026-03-09T15:01:27.966 INFO:tasks.workunit.client.0.vm05.stdout:3/339: symlink d3/df/d10/d34/l73 0 2026-03-09T15:01:27.967 INFO:tasks.workunit.client.0.vm05.stdout:3/340: chown d3/d29/d2d 647672 1 2026-03-09T15:01:27.968 INFO:tasks.workunit.client.0.vm05.stdout:3/341: mkdir d3/df/d1e/d2c/d74 0 2026-03-09T15:01:27.970 INFO:tasks.workunit.client.0.vm05.stdout:3/342: creat d3/df/d59/f75 x:0 0 0 2026-03-09T15:01:27.974 INFO:tasks.workunit.client.0.vm05.stdout:3/343: dwrite d3/d29/f70 [0,4194304] 0 2026-03-09T15:01:27.978 INFO:tasks.workunit.client.0.vm05.stdout:4/315: read d2/d4/d7/f9 [7409173,26053] 0 2026-03-09T15:01:27.985 INFO:tasks.workunit.client.0.vm05.stdout:3/344: unlink d3/d29/f54 0 2026-03-09T15:01:27.989 INFO:tasks.workunit.client.0.vm05.stdout:4/316: mkdir d2/d49/d69 0 2026-03-09T15:01:27.990 INFO:tasks.workunit.client.0.vm05.stdout:4/317: readlink d2/d4/d7/l35 0 2026-03-09T15:01:27.990 INFO:tasks.workunit.client.0.vm05.stdout:3/345: rename d3/df/d1e/d24/c45 to d3/df/d59/c76 0 2026-03-09T15:01:27.995 INFO:tasks.workunit.client.0.vm05.stdout:1/281: dread d9/d2f/f3a [0,4194304] 0 2026-03-09T15:01:28.002 INFO:tasks.workunit.client.0.vm05.stdout:4/318: getdents d2/d4/d1e 0 2026-03-09T15:01:28.004 INFO:tasks.workunit.client.0.vm05.stdout:4/319: creat d2/d49/f6a x:0 0 0 2026-03-09T15:01:28.004 INFO:tasks.workunit.client.0.vm05.stdout:4/320: write d2/f67 [48206,51410] 0 2026-03-09T15:01:28.005 INFO:tasks.workunit.client.0.vm05.stdout:4/321: read d2/d4/d7/d21/f34 [50384,117967] 0 2026-03-09T15:01:28.006 INFO:tasks.workunit.client.0.vm05.stdout:4/322: write d2/d1d/f5c [697932,7447] 0 2026-03-09T15:01:28.042 INFO:tasks.workunit.client.0.vm05.stdout:4/323: getdents d2/d4/d8/d4a 0 2026-03-09T15:01:28.046 INFO:tasks.workunit.client.0.vm05.stdout:4/324: mkdir d2/d4/d7/d48/d6b 0 2026-03-09T15:01:28.049 
INFO:tasks.workunit.client.0.vm05.stdout:7/332: write d1/d12/f11 [1666036,114074] 0 2026-03-09T15:01:28.049 INFO:tasks.workunit.client.0.vm05.stdout:4/325: symlink d2/d4/l6c 0 2026-03-09T15:01:28.052 INFO:tasks.workunit.client.0.vm05.stdout:4/326: dread d2/f33 [4194304,4194304] 0 2026-03-09T15:01:28.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:27 vm05.local ceph-mon[50611]: pgmap v170: 65 pgs: 65 active+clean; 1.9 GiB data, 6.6 GiB used, 113 GiB / 120 GiB avail; 6.3 MiB/s rd, 70 MiB/s wr, 180 op/s 2026-03-09T15:01:28.055 INFO:tasks.workunit.client.0.vm05.stdout:9/317: truncate d2/f5 844017 0 2026-03-09T15:01:28.057 INFO:tasks.workunit.client.0.vm05.stdout:2/321: dwrite da/f21 [0,4194304] 0 2026-03-09T15:01:28.066 INFO:tasks.workunit.client.0.vm05.stdout:6/315: truncate da/d17/f33 401069 0 2026-03-09T15:01:28.067 INFO:tasks.workunit.client.0.vm05.stdout:7/333: link d1/d12/l1d d1/d12/l64 0 2026-03-09T15:01:28.067 INFO:tasks.workunit.client.0.vm05.stdout:6/316: write da/f41 [62472,17800] 0 2026-03-09T15:01:28.069 INFO:tasks.workunit.client.0.vm05.stdout:6/317: dread da/d17/f3c [0,4194304] 0 2026-03-09T15:01:28.071 INFO:tasks.workunit.client.0.vm05.stdout:9/318: creat d2/d10/d22/d47/f62 x:0 0 0 2026-03-09T15:01:28.073 INFO:tasks.workunit.client.0.vm05.stdout:4/327: sync 2026-03-09T15:01:28.074 INFO:tasks.workunit.client.0.vm05.stdout:0/238: dwrite d9/de/d25/f2d [0,4194304] 0 2026-03-09T15:01:28.079 INFO:tasks.workunit.client.0.vm05.stdout:0/239: dwrite d9/de/f19 [0,4194304] 0 2026-03-09T15:01:28.094 INFO:tasks.workunit.client.0.vm05.stdout:7/334: creat d1/d22/f65 x:0 0 0 2026-03-09T15:01:28.098 INFO:tasks.workunit.client.0.vm05.stdout:9/319: creat d2/d10/d22/d47/f63 x:0 0 0 2026-03-09T15:01:28.099 INFO:tasks.workunit.client.0.vm05.stdout:9/320: chown d2/d10/d22/d52/d59/c5d 1200233182 1 2026-03-09T15:01:28.100 INFO:tasks.workunit.client.0.vm05.stdout:4/328: fdatasync d2/d43/f4f 0 2026-03-09T15:01:28.100 INFO:tasks.workunit.client.0.vm05.stdout:4/329: dread 
- d2/d4/d7/dc/f64 zero size 2026-03-09T15:01:28.107 INFO:tasks.workunit.client.0.vm05.stdout:5/366: dwrite d1/d4/d34/f65 [0,4194304] 0 2026-03-09T15:01:28.110 INFO:tasks.workunit.client.0.vm05.stdout:7/335: fsync d1/f16 0 2026-03-09T15:01:28.111 INFO:tasks.workunit.client.0.vm05.stdout:5/367: write d1/d4/d34/d35/f52 [779613,109613] 0 2026-03-09T15:01:28.111 INFO:tasks.workunit.client.0.vm05.stdout:7/336: dread - d1/d9/d23/d31/d32/f63 zero size 2026-03-09T15:01:28.118 INFO:tasks.workunit.client.0.vm05.stdout:7/337: dwrite d1/d9/f59 [0,4194304] 0 2026-03-09T15:01:28.133 INFO:tasks.workunit.client.0.vm05.stdout:9/321: dread d2/f1f [0,4194304] 0 2026-03-09T15:01:28.134 INFO:tasks.workunit.client.0.vm05.stdout:4/330: mknod d2/d4/d7/d48/c6d 0 2026-03-09T15:01:28.134 INFO:tasks.workunit.client.0.vm05.stdout:2/322: link f5 da/d13/d30/f63 0 2026-03-09T15:01:28.137 INFO:tasks.workunit.client.0.vm05.stdout:2/323: dwrite da/dd/f25 [0,4194304] 0 2026-03-09T15:01:28.146 INFO:tasks.workunit.client.0.vm05.stdout:5/368: mknod d1/d4/d34/d35/d53/c79 0 2026-03-09T15:01:28.146 INFO:tasks.workunit.client.0.vm05.stdout:8/355: truncate d0/f10 2117348 0 2026-03-09T15:01:28.146 INFO:tasks.workunit.client.0.vm05.stdout:8/356: stat d0/d1/d12/d1b/f67 0 2026-03-09T15:01:28.146 INFO:tasks.workunit.client.0.vm05.stdout:9/322: mkdir d2/d4e/d56/d53/d64 0 2026-03-09T15:01:28.146 INFO:tasks.workunit.client.0.vm05.stdout:4/331: mkdir d2/d4/d8/d4a/d6e 0 2026-03-09T15:01:28.148 INFO:tasks.workunit.client.0.vm05.stdout:9/323: dwrite d2/d10/f28 [0,4194304] 0 2026-03-09T15:01:28.155 INFO:tasks.workunit.client.0.vm05.stdout:9/324: dread d2/d4e/f40 [0,4194304] 0 2026-03-09T15:01:28.160 INFO:tasks.workunit.client.0.vm05.stdout:2/324: mkdir da/d29/d64 0 2026-03-09T15:01:28.167 INFO:tasks.workunit.client.0.vm05.stdout:2/325: write da/dd/f5d [82022,51339] 0 2026-03-09T15:01:28.167 INFO:tasks.workunit.client.0.vm05.stdout:3/346: rename d3/df/d1e/d24 to d3/d29/d2d/d77 0 2026-03-09T15:01:28.167 
INFO:tasks.workunit.client.0.vm05.stdout:5/369: symlink d1/da/l7a 0 2026-03-09T15:01:28.167 INFO:tasks.workunit.client.0.vm05.stdout:8/357: creat d0/d1/d12/d3c/f77 x:0 0 0 2026-03-09T15:01:28.168 INFO:tasks.workunit.client.0.vm05.stdout:7/338: symlink d1/d9/l66 0 2026-03-09T15:01:28.172 INFO:tasks.workunit.client.0.vm05.stdout:7/339: dwrite d1/d9/fc [0,4194304] 0 2026-03-09T15:01:28.184 INFO:tasks.workunit.client.0.vm05.stdout:4/332: symlink d2/d4/d7/d21/l6f 0 2026-03-09T15:01:28.186 INFO:tasks.workunit.client.0.vm05.stdout:6/318: dwrite da/d17/f33 [0,4194304] 0 2026-03-09T15:01:28.193 INFO:tasks.workunit.client.0.vm05.stdout:1/282: rename d9/d2a/f3f to d9/d2a/d59/d49/d48/f63 0 2026-03-09T15:01:28.197 INFO:tasks.workunit.client.0.vm05.stdout:8/358: mkdir d0/d2a/d2d/d78 0 2026-03-09T15:01:28.200 INFO:tasks.workunit.client.0.vm05.stdout:9/325: dwrite d2/d4e/f3e [0,4194304] 0 2026-03-09T15:01:28.202 INFO:tasks.workunit.client.0.vm05.stdout:3/347: dwrite d3/df/f1b [4194304,4194304] 0 2026-03-09T15:01:28.212 INFO:tasks.workunit.client.0.vm05.stdout:4/333: dread d2/d1d/f5c [0,4194304] 0 2026-03-09T15:01:28.226 INFO:tasks.workunit.client.0.vm05.stdout:5/370: mknod d1/d4/d27/d5b/c7b 0 2026-03-09T15:01:28.234 INFO:tasks.workunit.client.0.vm05.stdout:8/359: rmdir d0/d1 39 2026-03-09T15:01:28.236 INFO:tasks.workunit.client.0.vm05.stdout:7/340: unlink d1/d12/l64 0 2026-03-09T15:01:28.238 INFO:tasks.workunit.client.0.vm05.stdout:4/334: write d2/d4/d7/f9 [4877472,19671] 0 2026-03-09T15:01:28.240 INFO:tasks.workunit.client.0.vm05.stdout:5/371: symlink d1/d4/d34/d35/d3d/d38/d63/l7c 0 2026-03-09T15:01:28.242 INFO:tasks.workunit.client.0.vm05.stdout:8/360: fsync d0/d2a/d2d/d54/f64 0 2026-03-09T15:01:28.246 INFO:tasks.workunit.client.0.vm05.stdout:5/372: dwrite d1/d4/d34/f65 [4194304,4194304] 0 2026-03-09T15:01:28.260 INFO:tasks.workunit.client.0.vm05.stdout:4/335: chown d2/d4/d1e/l44 231352019 1 2026-03-09T15:01:28.263 INFO:tasks.workunit.client.0.vm05.stdout:7/341: dread d1/f45 
[0,4194304] 0 2026-03-09T15:01:28.268 INFO:tasks.workunit.client.0.vm05.stdout:8/361: write d0/d1/d55/f6a [691692,92402] 0 2026-03-09T15:01:28.268 INFO:tasks.workunit.client.0.vm05.stdout:8/362: chown d0/d2a/d2d/d42/l5c 23 1 2026-03-09T15:01:28.272 INFO:tasks.workunit.client.0.vm05.stdout:9/326: creat d2/d10/f65 x:0 0 0 2026-03-09T15:01:28.280 INFO:tasks.workunit.client.0.vm05.stdout:5/373: unlink d1/f1d 0 2026-03-09T15:01:28.281 INFO:tasks.workunit.client.0.vm05.stdout:6/319: getdents da 0 2026-03-09T15:01:28.287 INFO:tasks.workunit.client.0.vm05.stdout:3/348: truncate d3/df/f11 3916648 0 2026-03-09T15:01:28.289 INFO:tasks.workunit.client.0.vm05.stdout:0/240: rename d9/f35 to d9/de/d12/f4c 0 2026-03-09T15:01:28.291 INFO:tasks.workunit.client.0.vm05.stdout:1/283: getdents d9/d2f/d55 0 2026-03-09T15:01:28.292 INFO:tasks.workunit.client.0.vm05.stdout:1/284: read - d9/d2f/f58 zero size 2026-03-09T15:01:28.292 INFO:tasks.workunit.client.0.vm05.stdout:1/285: stat d9/l46 0 2026-03-09T15:01:28.293 INFO:tasks.workunit.client.0.vm05.stdout:8/363: fdatasync d0/f4 0 2026-03-09T15:01:28.299 INFO:tasks.workunit.client.0.vm05.stdout:5/374: fdatasync d1/d4/d34/d35/f4d 0 2026-03-09T15:01:28.300 INFO:tasks.workunit.client.0.vm05.stdout:6/320: write da/f18 [662240,69701] 0 2026-03-09T15:01:28.303 INFO:tasks.workunit.client.0.vm05.stdout:5/375: dwrite d1/ff [4194304,4194304] 0 2026-03-09T15:01:28.308 INFO:tasks.workunit.client.0.vm05.stdout:2/326: rename da/d13/d2f/l62 to da/d13/d30/l65 0 2026-03-09T15:01:28.309 INFO:tasks.workunit.client.0.vm05.stdout:2/327: dread - da/d29/d3f/f5f zero size 2026-03-09T15:01:28.310 INFO:tasks.workunit.client.0.vm05.stdout:1/286: rmdir d9/d2a 39 2026-03-09T15:01:28.314 INFO:tasks.workunit.client.0.vm05.stdout:8/364: mknod d0/d2a/d2d/d54/c79 0 2026-03-09T15:01:28.316 INFO:tasks.workunit.client.0.vm05.stdout:1/287: dwrite d9/d2f/d37/d5a/f5b [0,4194304] 0 2026-03-09T15:01:28.325 INFO:tasks.workunit.client.0.vm05.stdout:9/327: link d2/d10/d22/d52/d59/f18 
d2/d4e/d56/d53/f66 0 2026-03-09T15:01:28.326 INFO:tasks.workunit.client.0.vm05.stdout:9/328: dread - d2/d10/d22/d47/d5b/f5f zero size 2026-03-09T15:01:28.331 INFO:tasks.workunit.client.0.vm05.stdout:4/336: link d2/d4/d7/l30 d2/d43/l70 0 2026-03-09T15:01:28.332 INFO:tasks.workunit.client.0.vm05.stdout:5/376: creat d1/d4/d27/d5b/f7d x:0 0 0 2026-03-09T15:01:28.334 INFO:tasks.workunit.client.0.vm05.stdout:3/349: mkdir d3/df/d1e/d2c/d74/d78 0 2026-03-09T15:01:28.338 INFO:tasks.workunit.client.0.vm05.stdout:1/288: creat d9/d2f/d55/f64 x:0 0 0 2026-03-09T15:01:28.342 INFO:tasks.workunit.client.0.vm05.stdout:1/289: dwrite d9/d2f/d55/f5e [0,4194304] 0 2026-03-09T15:01:28.354 INFO:tasks.workunit.client.0.vm05.stdout:9/329: creat d2/d4e/d1b/f67 x:0 0 0 2026-03-09T15:01:28.355 INFO:tasks.workunit.client.0.vm05.stdout:4/337: mkdir d2/d4/d1e/d71 0 2026-03-09T15:01:28.359 INFO:tasks.workunit.client.0.vm05.stdout:0/241: link d9/fd d9/de/d12/d15/d2e/f4d 0 2026-03-09T15:01:28.363 INFO:tasks.workunit.client.0.vm05.stdout:1/290: symlink d9/d2f/l65 0 2026-03-09T15:01:28.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:27 vm09.local ceph-mon[59673]: pgmap v170: 65 pgs: 65 active+clean; 1.9 GiB data, 6.6 GiB used, 113 GiB / 120 GiB avail; 6.3 MiB/s rd, 70 MiB/s wr, 180 op/s 2026-03-09T15:01:28.369 INFO:tasks.workunit.client.0.vm05.stdout:4/338: dread d2/d4/d7/f9 [4194304,4194304] 0 2026-03-09T15:01:28.370 INFO:tasks.workunit.client.0.vm05.stdout:4/339: read - d2/d4/d7/dc/f62 zero size 2026-03-09T15:01:28.374 INFO:tasks.workunit.client.0.vm05.stdout:5/377: mkdir d1/d4/d34/d35/d4e/d6f/d7e 0 2026-03-09T15:01:28.376 INFO:tasks.workunit.client.0.vm05.stdout:5/378: write d1/d4/d34/d35/f44 [5918877,71880] 0 2026-03-09T15:01:28.376 INFO:tasks.workunit.client.0.vm05.stdout:5/379: write d1/f4c [645826,106153] 0 2026-03-09T15:01:28.383 INFO:tasks.workunit.client.0.vm05.stdout:0/242: chown d9/f2b 1091036 1 2026-03-09T15:01:28.389 INFO:tasks.workunit.client.0.vm05.stdout:1/291: rmdir 
d9/d2f/d37 39 2026-03-09T15:01:28.393 INFO:tasks.workunit.client.0.vm05.stdout:1/292: dwrite d9/d2f/d55/f5e [0,4194304] 0 2026-03-09T15:01:28.402 INFO:tasks.workunit.client.0.vm05.stdout:9/330: creat d2/d10/d22/d47/d5b/f68 x:0 0 0 2026-03-09T15:01:28.403 INFO:tasks.workunit.client.0.vm05.stdout:9/331: chown d2/d4e/d1b 24085 1 2026-03-09T15:01:28.404 INFO:tasks.workunit.client.0.vm05.stdout:9/332: dread - d2/d10/d22/d47/d5b/f5f zero size 2026-03-09T15:01:28.404 INFO:tasks.workunit.client.0.vm05.stdout:7/342: truncate d1/d12/f56 124622 0 2026-03-09T15:01:28.405 INFO:tasks.workunit.client.0.vm05.stdout:7/343: dread - d1/d12/f61 zero size 2026-03-09T15:01:28.408 INFO:tasks.workunit.client.0.vm05.stdout:6/321: dwrite da/d17/f2c [0,4194304] 0 2026-03-09T15:01:28.412 INFO:tasks.workunit.client.0.vm05.stdout:8/365: write d0/f10 [1408207,102531] 0 2026-03-09T15:01:28.414 INFO:tasks.workunit.client.0.vm05.stdout:3/350: write d3/f1f [3951155,36867] 0 2026-03-09T15:01:28.415 INFO:tasks.workunit.client.0.vm05.stdout:6/322: dwrite da/f14 [0,4194304] 0 2026-03-09T15:01:28.417 INFO:tasks.workunit.client.0.vm05.stdout:4/340: truncate d2/d4/d7/dc/f54 1083044 0 2026-03-09T15:01:28.425 INFO:tasks.workunit.client.0.vm05.stdout:5/380: mkdir d1/d5d/d7f 0 2026-03-09T15:01:28.427 INFO:tasks.workunit.client.0.vm05.stdout:2/328: getdents da/d29 0 2026-03-09T15:01:28.428 INFO:tasks.workunit.client.0.vm05.stdout:7/344: creat d1/d22/f67 x:0 0 0 2026-03-09T15:01:28.429 INFO:tasks.workunit.client.0.vm05.stdout:7/345: readlink d1/d9/l48 0 2026-03-09T15:01:28.430 INFO:tasks.workunit.client.0.vm05.stdout:3/351: readlink d3/df/l72 0 2026-03-09T15:01:28.440 INFO:tasks.workunit.client.0.vm05.stdout:7/346: mkdir d1/d49/d68 0 2026-03-09T15:01:28.468 INFO:tasks.workunit.client.0.vm05.stdout:7/347: write d1/d9/d23/d31/d32/f3a [5370075,118016] 0 2026-03-09T15:01:28.468 INFO:tasks.workunit.client.0.vm05.stdout:3/352: mkdir d3/df/d59/d79 0 2026-03-09T15:01:28.468 
INFO:tasks.workunit.client.0.vm05.stdout:5/381: rename d1/d4/c54 to d1/d4/d27/d75/c80 0 2026-03-09T15:01:28.468 INFO:tasks.workunit.client.0.vm05.stdout:7/348: symlink d1/d49/d4a/l69 0 2026-03-09T15:01:28.468 INFO:tasks.workunit.client.0.vm05.stdout:5/382: creat d1/d5d/f81 x:0 0 0 2026-03-09T15:01:28.468 INFO:tasks.workunit.client.0.vm05.stdout:5/383: creat d1/d5d/f82 x:0 0 0 2026-03-09T15:01:28.468 INFO:tasks.workunit.client.0.vm05.stdout:5/384: truncate d1/d4/d34/d56/f6d 668148 0 2026-03-09T15:01:28.468 INFO:tasks.workunit.client.0.vm05.stdout:5/385: dread d1/d4/d34/d35/f36 [0,4194304] 0 2026-03-09T15:01:28.468 INFO:tasks.workunit.client.0.vm05.stdout:5/386: rename d1/d4/l64 to d1/d4/d34/d35/d53/l83 0 2026-03-09T15:01:28.468 INFO:tasks.workunit.client.0.vm05.stdout:5/387: dwrite d1/d4/d34/d35/f47 [0,4194304] 0 2026-03-09T15:01:28.474 INFO:tasks.workunit.client.0.vm05.stdout:5/388: truncate d1/d4/f20 321030 0 2026-03-09T15:01:28.492 INFO:tasks.workunit.client.0.vm05.stdout:4/341: dread d2/d4/d7/f53 [0,4194304] 0 2026-03-09T15:01:28.494 INFO:tasks.workunit.client.0.vm05.stdout:4/342: mkdir d2/d4/d72 0 2026-03-09T15:01:28.496 INFO:tasks.workunit.client.0.vm05.stdout:4/343: rename d2/d4/d7/l22 to d2/d4/d8/l73 0 2026-03-09T15:01:28.503 INFO:tasks.workunit.client.0.vm05.stdout:1/293: sync 2026-03-09T15:01:28.507 INFO:tasks.workunit.client.0.vm05.stdout:9/333: sync 2026-03-09T15:01:28.507 INFO:tasks.workunit.client.0.vm05.stdout:8/366: sync 2026-03-09T15:01:28.517 INFO:tasks.workunit.client.0.vm05.stdout:8/367: creat d0/dc/f7a x:0 0 0 2026-03-09T15:01:28.518 INFO:tasks.workunit.client.0.vm05.stdout:1/294: dread d9/d2a/d59/f32 [0,4194304] 0 2026-03-09T15:01:28.519 INFO:tasks.workunit.client.0.vm05.stdout:9/334: truncate d2/f1f 8969283 0 2026-03-09T15:01:28.524 INFO:tasks.workunit.client.0.vm05.stdout:6/323: write da/f16 [2779778,27501] 0 2026-03-09T15:01:28.526 INFO:tasks.workunit.client.0.vm05.stdout:0/243: dwrite d9/f22 [4194304,4194304] 0 2026-03-09T15:01:28.527 
INFO:tasks.workunit.client.0.vm05.stdout:2/329: truncate da/dd/f25 788090 0 2026-03-09T15:01:28.531 INFO:tasks.workunit.client.0.vm05.stdout:7/349: write d1/f15 [2049108,71532] 0 2026-03-09T15:01:28.531 INFO:tasks.workunit.client.0.vm05.stdout:3/353: write d3/df/f23 [373649,75344] 0 2026-03-09T15:01:28.532 INFO:tasks.workunit.client.0.vm05.stdout:8/368: creat d0/d1/d55/f7b x:0 0 0 2026-03-09T15:01:28.532 INFO:tasks.workunit.client.0.vm05.stdout:2/330: dread da/dd/f5d [0,4194304] 0 2026-03-09T15:01:28.541 INFO:tasks.workunit.client.0.vm05.stdout:7/350: dwrite d1/d9/d23/d31/d32/f63 [0,4194304] 0 2026-03-09T15:01:28.564 INFO:tasks.workunit.client.0.vm05.stdout:9/335: rename d2/d4e/d1b to d2/d10/d22/d2c/d69 0 2026-03-09T15:01:28.564 INFO:tasks.workunit.client.0.vm05.stdout:9/336: chown d2/d10/d22/d2c/l4b 17 1 2026-03-09T15:01:28.569 INFO:tasks.workunit.client.0.vm05.stdout:4/344: dread d2/d43/f51 [0,4194304] 0 2026-03-09T15:01:28.572 INFO:tasks.workunit.client.0.vm05.stdout:4/345: stat d2/d43/f51 0 2026-03-09T15:01:28.578 INFO:tasks.workunit.client.0.vm05.stdout:6/324: write da/d43/f56 [1004488,39964] 0 2026-03-09T15:01:28.579 INFO:tasks.workunit.client.0.vm05.stdout:0/244: symlink d9/de/l4e 0 2026-03-09T15:01:28.579 INFO:tasks.workunit.client.0.vm05.stdout:6/325: write da/d19/f35 [422281,12292] 0 2026-03-09T15:01:28.580 INFO:tasks.workunit.client.0.vm05.stdout:0/245: write d9/de/f19 [4289971,8767] 0 2026-03-09T15:01:28.580 INFO:tasks.workunit.client.0.vm05.stdout:0/246: write d9/f22 [5230208,40508] 0 2026-03-09T15:01:28.582 INFO:tasks.workunit.client.0.vm05.stdout:0/247: stat d9/de/d25/d38/f2f 0 2026-03-09T15:01:28.587 INFO:tasks.workunit.client.0.vm05.stdout:1/295: creat d9/d2f/d37/f66 x:0 0 0 2026-03-09T15:01:28.596 INFO:tasks.workunit.client.0.vm05.stdout:5/389: truncate d1/f14 3142372 0 2026-03-09T15:01:28.601 INFO:tasks.workunit.client.0.vm05.stdout:9/337: write d2/d10/f48 [534423,103401] 0 2026-03-09T15:01:28.604 INFO:tasks.workunit.client.0.vm05.stdout:6/326: 
creat da/d17/d3b/f5a x:0 0 0 2026-03-09T15:01:28.606 INFO:tasks.workunit.client.0.vm05.stdout:1/296: creat d9/d2a/f67 x:0 0 0 2026-03-09T15:01:28.607 INFO:tasks.workunit.client.0.vm05.stdout:5/390: symlink d1/d4/d34/d35/d3d/d38/d63/l84 0 2026-03-09T15:01:28.608 INFO:tasks.workunit.client.0.vm05.stdout:5/391: truncate d1/d4/d34/f65 8694882 0 2026-03-09T15:01:28.610 INFO:tasks.workunit.client.0.vm05.stdout:4/346: symlink d2/d49/d69/l74 0 2026-03-09T15:01:28.611 INFO:tasks.workunit.client.0.vm05.stdout:6/327: rmdir da/d43 39 2026-03-09T15:01:28.613 INFO:tasks.workunit.client.0.vm05.stdout:0/248: mknod d9/de/d12/d15/d2e/c4f 0 2026-03-09T15:01:28.615 INFO:tasks.workunit.client.0.vm05.stdout:1/297: creat d9/d2f/d55/f68 x:0 0 0 2026-03-09T15:01:28.617 INFO:tasks.workunit.client.0.vm05.stdout:4/347: write d2/d1d/f5c [1538290,29334] 0 2026-03-09T15:01:28.625 INFO:tasks.workunit.client.0.vm05.stdout:2/331: link da/c1d da/d29/d64/c66 0 2026-03-09T15:01:28.627 INFO:tasks.workunit.client.0.vm05.stdout:1/298: creat d9/d2a/d59/d49/f69 x:0 0 0 2026-03-09T15:01:28.629 INFO:tasks.workunit.client.0.vm05.stdout:3/354: rename d3/df/d10/l68 to d3/d29/l7a 0 2026-03-09T15:01:28.630 INFO:tasks.workunit.client.0.vm05.stdout:3/355: chown d3/df/d10/c43 3333 1 2026-03-09T15:01:28.636 INFO:tasks.workunit.client.0.vm05.stdout:3/356: dwrite d3/df/f4b [0,4194304] 0 2026-03-09T15:01:28.659 INFO:tasks.workunit.client.0.vm05.stdout:9/338: truncate d2/d10/d22/d2c/f3a 3171046 0 2026-03-09T15:01:28.659 INFO:tasks.workunit.client.0.vm05.stdout:1/299: mkdir d9/d2a/d6a 0 2026-03-09T15:01:28.660 INFO:tasks.workunit.client.0.vm05.stdout:1/300: chown d9/d2f/d37/d5f 375732922 1 2026-03-09T15:01:28.666 INFO:tasks.workunit.client.0.vm05.stdout:9/339: dwrite d2/d10/d22/d47/d5b/f68 [0,4194304] 0 2026-03-09T15:01:28.667 INFO:tasks.workunit.client.0.vm05.stdout:1/301: dwrite d9/d2a/d59/d49/f51 [0,4194304] 0 2026-03-09T15:01:28.669 INFO:tasks.workunit.client.0.vm05.stdout:8/369: rename d0/d2a/d2d/d42/l5c to 
d0/d2a/d2d/d42/d60/l7c 0 2026-03-09T15:01:28.670 INFO:tasks.workunit.client.0.vm05.stdout:3/357: mkdir d3/d29/d2d/d7b 0 2026-03-09T15:01:28.671 INFO:tasks.workunit.client.0.vm05.stdout:4/348: getdents d2/d43/d66 0 2026-03-09T15:01:28.671 INFO:tasks.workunit.client.0.vm05.stdout:0/249: creat d9/de/d12/d15/f50 x:0 0 0 2026-03-09T15:01:28.671 INFO:tasks.workunit.client.0.vm05.stdout:2/332: truncate f5 221609 0 2026-03-09T15:01:28.685 INFO:tasks.workunit.client.0.vm05.stdout:3/358: dwrite d3/df/d59/f75 [0,4194304] 0 2026-03-09T15:01:28.686 INFO:tasks.workunit.client.0.vm05.stdout:3/359: fsync d3/f42 0 2026-03-09T15:01:28.686 INFO:tasks.workunit.client.0.vm05.stdout:8/370: dwrite d0/d1/d12/d1b/f67 [0,4194304] 0 2026-03-09T15:01:28.694 INFO:tasks.workunit.client.0.vm05.stdout:7/351: rename d1/d12/f1f to d1/d9/d23/d31/d51/f6a 0 2026-03-09T15:01:28.702 INFO:tasks.workunit.client.0.vm05.stdout:8/371: dwrite d0/f3b [0,4194304] 0 2026-03-09T15:01:28.706 INFO:tasks.workunit.client.0.vm05.stdout:1/302: dwrite d9/f23 [0,4194304] 0 2026-03-09T15:01:28.709 INFO:tasks.workunit.client.0.vm05.stdout:3/360: mkdir d3/df/d10/d7c 0 2026-03-09T15:01:28.715 INFO:tasks.workunit.client.0.vm05.stdout:0/250: mknod d9/de/d12/d15/d49/c51 0 2026-03-09T15:01:28.717 INFO:tasks.workunit.client.0.vm05.stdout:0/251: fsync d9/de/f20 0 2026-03-09T15:01:28.719 INFO:tasks.workunit.client.0.vm05.stdout:1/303: symlink d9/d2a/d59/d33/l6b 0 2026-03-09T15:01:28.721 INFO:tasks.workunit.client.0.vm05.stdout:9/340: creat d2/d4e/f6a x:0 0 0 2026-03-09T15:01:28.723 INFO:tasks.workunit.client.0.vm05.stdout:5/392: rename d1/d4/d27/l3b to d1/d4/d34/d35/d4e/d6f/d7e/l85 0 2026-03-09T15:01:28.723 INFO:tasks.workunit.client.0.vm05.stdout:3/361: symlink d3/df/d59/l7d 0 2026-03-09T15:01:28.725 INFO:tasks.workunit.client.0.vm05.stdout:4/349: link d2/d1d/c5f d2/d4/d8/c75 0 2026-03-09T15:01:28.725 INFO:tasks.workunit.client.0.vm05.stdout:0/252: creat d9/de/d25/f52 x:0 0 0 2026-03-09T15:01:28.733 
INFO:tasks.workunit.client.0.vm05.stdout:1/304: unlink d9/d2a/d59/f32 0 2026-03-09T15:01:28.735 INFO:tasks.workunit.client.0.vm05.stdout:6/328: rename da/d17/f55 to da/d19/f5b 0 2026-03-09T15:01:28.736 INFO:tasks.workunit.client.0.vm05.stdout:3/362: fsync d3/df/d10/d34/f48 0 2026-03-09T15:01:28.739 INFO:tasks.workunit.client.0.vm05.stdout:0/253: unlink d9/de/d12/d15/f1c 0 2026-03-09T15:01:28.742 INFO:tasks.workunit.client.0.vm05.stdout:7/352: rename d1/f28 to d1/d49/d4a/f6b 0 2026-03-09T15:01:28.744 INFO:tasks.workunit.client.0.vm05.stdout:7/353: truncate d1/d12/f61 573621 0 2026-03-09T15:01:28.748 INFO:tasks.workunit.client.0.vm05.stdout:0/254: write d9/de/f1e [1949536,89459] 0 2026-03-09T15:01:28.760 INFO:tasks.workunit.client.0.vm05.stdout:8/372: rename d0/d24/c39 to d0/dc/c7d 0 2026-03-09T15:01:28.760 INFO:tasks.workunit.client.0.vm05.stdout:7/354: symlink d1/d9/d23/d31/d32/l6c 0 2026-03-09T15:01:28.760 INFO:tasks.workunit.client.0.vm05.stdout:5/393: rename d1/da/l7a to d1/d4/d34/d35/d4e/l86 0 2026-03-09T15:01:28.760 INFO:tasks.workunit.client.0.vm05.stdout:5/394: dread - d1/da/f4a zero size 2026-03-09T15:01:28.760 INFO:tasks.workunit.client.0.vm05.stdout:0/255: mkdir d9/de/d12/d15/d2e/d32/d53 0 2026-03-09T15:01:28.760 INFO:tasks.workunit.client.0.vm05.stdout:4/350: getdents d2/d49 0 2026-03-09T15:01:28.760 INFO:tasks.workunit.client.0.vm05.stdout:0/256: write d9/de/d12/d15/d2e/f40 [118354,48467] 0 2026-03-09T15:01:28.763 INFO:tasks.workunit.client.0.vm05.stdout:4/351: dread d2/f33 [4194304,4194304] 0 2026-03-09T15:01:28.765 INFO:tasks.workunit.client.0.vm05.stdout:6/329: rename da/d17/f29 to da/d43/f5c 0 2026-03-09T15:01:28.765 INFO:tasks.workunit.client.0.vm05.stdout:0/257: dwrite d9/de/d12/f3c [0,4194304] 0 2026-03-09T15:01:28.770 INFO:tasks.workunit.client.0.vm05.stdout:7/355: getdents d1/d9/d23/d31 0 2026-03-09T15:01:28.771 INFO:tasks.workunit.client.0.vm05.stdout:7/356: rename d1 to d1/d9/d23/d54/d6d 22 2026-03-09T15:01:28.773 
INFO:tasks.workunit.client.0.vm05.stdout:4/352: fsync d2/d4/d7/f9 0 2026-03-09T15:01:28.774 INFO:tasks.workunit.client.0.vm05.stdout:4/353: mknod d2/d49/c76 0 2026-03-09T15:01:28.784 INFO:tasks.workunit.client.0.vm05.stdout:0/258: dread d9/de/d12/d15/f36 [0,4194304] 0 2026-03-09T15:01:28.785 INFO:tasks.workunit.client.0.vm05.stdout:0/259: write d9/de/d25/d38/f2f [1277583,28764] 0 2026-03-09T15:01:28.821 INFO:tasks.workunit.client.0.vm05.stdout:9/341: sync 2026-03-09T15:01:28.824 INFO:tasks.workunit.client.0.vm05.stdout:9/342: creat d2/d10/f6b x:0 0 0 2026-03-09T15:01:28.826 INFO:tasks.workunit.client.0.vm05.stdout:9/343: symlink d2/d10/d22/d2c/d69/l6c 0 2026-03-09T15:01:28.826 INFO:tasks.workunit.client.0.vm05.stdout:9/344: readlink d2/d10/d22/d52/d59/l21 0 2026-03-09T15:01:28.829 INFO:tasks.workunit.client.0.vm05.stdout:9/345: symlink d2/d10/d22/d52/l6d 0 2026-03-09T15:01:28.830 INFO:tasks.workunit.client.0.vm05.stdout:9/346: creat d2/d10/d22/d52/d59/f6e x:0 0 0 2026-03-09T15:01:28.831 INFO:tasks.workunit.client.0.vm05.stdout:9/347: creat d2/d10/d22/d52/d59/f6f x:0 0 0 2026-03-09T15:01:28.832 INFO:tasks.workunit.client.0.vm05.stdout:9/348: readlink d2/d10/d22/d2c/l4c 0 2026-03-09T15:01:28.832 INFO:tasks.workunit.client.0.vm05.stdout:9/349: stat d2/d4e/f40 0 2026-03-09T15:01:28.833 INFO:tasks.workunit.client.0.vm05.stdout:9/350: mkdir d2/d70 0 2026-03-09T15:01:28.834 INFO:tasks.workunit.client.0.vm05.stdout:9/351: creat d2/d10/f71 x:0 0 0 2026-03-09T15:01:28.836 INFO:tasks.workunit.client.0.vm05.stdout:9/352: creat d2/d4e/f72 x:0 0 0 2026-03-09T15:01:28.837 INFO:tasks.workunit.client.0.vm05.stdout:9/353: truncate d2/fc 178893 0 2026-03-09T15:01:28.838 INFO:tasks.workunit.client.0.vm05.stdout:9/354: mkdir d2/d10/d22/d47/d73 0 2026-03-09T15:01:28.851 INFO:tasks.workunit.client.0.vm05.stdout:6/330: sync 2026-03-09T15:01:28.851 INFO:tasks.workunit.client.0.vm05.stdout:8/373: sync 2026-03-09T15:01:28.853 INFO:tasks.workunit.client.0.vm05.stdout:1/305: read 
d9/d2a/d59/d49/d48/f25 [2330809,51123] 0 2026-03-09T15:01:28.853 INFO:tasks.workunit.client.0.vm05.stdout:2/333: write da/d16/f1e [3916521,29153] 0 2026-03-09T15:01:28.861 INFO:tasks.workunit.client.0.vm05.stdout:8/374: creat d0/dc/f7e x:0 0 0 2026-03-09T15:01:28.862 INFO:tasks.workunit.client.0.vm05.stdout:8/375: dread - d0/d2a/d2d/f4d zero size 2026-03-09T15:01:28.869 INFO:tasks.workunit.client.0.vm05.stdout:8/376: dwrite d0/d1/d12/d3c/f71 [0,4194304] 0 2026-03-09T15:01:28.878 INFO:tasks.workunit.client.0.vm05.stdout:1/306: symlink d9/d2a/l6c 0 2026-03-09T15:01:28.885 INFO:tasks.workunit.client.0.vm05.stdout:7/357: getdents d1/d49/d4a 0 2026-03-09T15:01:28.886 INFO:tasks.workunit.client.0.vm05.stdout:1/307: readlink d9/d2a/l3e 0 2026-03-09T15:01:28.891 INFO:tasks.workunit.client.0.vm05.stdout:3/363: dwrite d3/df/f4a [4194304,4194304] 0 2026-03-09T15:01:28.894 INFO:tasks.workunit.client.0.vm05.stdout:6/331: creat da/f5d x:0 0 0 2026-03-09T15:01:28.898 INFO:tasks.workunit.client.0.vm05.stdout:6/332: dread da/d43/f56 [0,4194304] 0 2026-03-09T15:01:28.903 INFO:tasks.workunit.client.0.vm05.stdout:6/333: dwrite da/d17/d3b/f3f [0,4194304] 0 2026-03-09T15:01:28.908 INFO:tasks.workunit.client.0.vm05.stdout:0/260: getdents d9/de/d12/d15/d2e/d32 0 2026-03-09T15:01:28.920 INFO:tasks.workunit.client.0.vm05.stdout:3/364: unlink d3/d29/c6f 0 2026-03-09T15:01:28.920 INFO:tasks.workunit.client.0.vm05.stdout:5/395: link d1/d4/d34/d35/d4e/d6f/d7e/l85 d1/d4/d34/d35/d4e/d6f/d7e/l87 0 2026-03-09T15:01:28.920 INFO:tasks.workunit.client.0.vm05.stdout:2/334: rename da/d13/l2b to da/d13/d2f/l67 0 2026-03-09T15:01:28.920 INFO:tasks.workunit.client.0.vm05.stdout:6/334: symlink da/d19/l5e 0 2026-03-09T15:01:28.921 INFO:tasks.workunit.client.0.vm05.stdout:2/335: dwrite da/d13/f4b [0,4194304] 0 2026-03-09T15:01:28.922 INFO:tasks.workunit.client.0.vm05.stdout:2/336: chown da/d16/f1f 8049 1 2026-03-09T15:01:28.926 INFO:tasks.workunit.client.0.vm05.stdout:3/365: creat d3/df/d10/d19/d44/f7e x:0 0 
0 2026-03-09T15:01:28.928 INFO:tasks.workunit.client.0.vm05.stdout:4/354: dwrite d2/d4/d7/d21/f34 [0,4194304] 0 2026-03-09T15:01:28.939 INFO:tasks.workunit.client.0.vm05.stdout:7/358: creat d1/f6e x:0 0 0 2026-03-09T15:01:28.945 INFO:tasks.workunit.client.0.vm05.stdout:8/377: rename d0/d1/d12/d1b/f34 to d0/d1/f7f 0 2026-03-09T15:01:28.948 INFO:tasks.workunit.client.0.vm05.stdout:6/335: creat da/d17/d3b/f5f x:0 0 0 2026-03-09T15:01:28.950 INFO:tasks.workunit.client.0.vm05.stdout:6/336: fdatasync da/d19/f45 0 2026-03-09T15:01:28.950 INFO:tasks.workunit.client.0.vm05.stdout:9/355: write d2/d10/d22/d52/d59/f18 [5456800,9234] 0 2026-03-09T15:01:28.959 INFO:tasks.workunit.client.0.vm05.stdout:4/355: dread d2/f1b [0,4194304] 0 2026-03-09T15:01:28.959 INFO:tasks.workunit.client.0.vm05.stdout:4/356: chown d2/d1d/f5c 48513 1 2026-03-09T15:01:28.959 INFO:tasks.workunit.client.0.vm05.stdout:4/357: write d2/f67 [194456,68063] 0 2026-03-09T15:01:28.964 INFO:tasks.workunit.client.0.vm05.stdout:3/366: unlink d3/f18 0 2026-03-09T15:01:28.979 INFO:tasks.workunit.client.0.vm05.stdout:7/359: creat d1/d9/d23/d54/f6f x:0 0 0 2026-03-09T15:01:28.980 INFO:tasks.workunit.client.0.vm05.stdout:1/308: rename d9/d2a/d59/l36 to d9/d2f/d37/l6d 0 2026-03-09T15:01:28.981 INFO:tasks.workunit.client.0.vm05.stdout:1/309: dread - d9/d2f/d37/f66 zero size 2026-03-09T15:01:28.981 INFO:tasks.workunit.client.0.vm05.stdout:7/360: dwrite d1/d9/d23/d31/d51/f29 [0,4194304] 0 2026-03-09T15:01:28.981 INFO:tasks.workunit.client.0.vm05.stdout:7/361: read - d1/d9/d23/d31/d51/f3b zero size 2026-03-09T15:01:28.981 INFO:tasks.workunit.client.0.vm05.stdout:6/337: dwrite da/fb [4194304,4194304] 0 2026-03-09T15:01:29.004 INFO:tasks.workunit.client.0.vm05.stdout:3/367: dread d3/df/d10/f28 [0,4194304] 0 2026-03-09T15:01:29.010 INFO:tasks.workunit.client.0.vm05.stdout:8/378: rename d0/d2a/d2d/d4b/d69 to d0/d1/d12/d1b/d66/d6f/d80 0 2026-03-09T15:01:29.011 INFO:tasks.workunit.client.0.vm05.stdout:9/356: rename 
d2/d10/d22/d2c/d69 to d2/d10/d22/d2c/d69/d5a/d74 22 2026-03-09T15:01:29.012 INFO:tasks.workunit.client.0.vm05.stdout:8/379: write d0/d1/d12/d1b/f67 [1179332,114960] 0 2026-03-09T15:01:29.013 INFO:tasks.workunit.client.0.vm05.stdout:5/396: sync 2026-03-09T15:01:29.023 INFO:tasks.workunit.client.0.vm05.stdout:4/358: sync 2026-03-09T15:01:29.027 INFO:tasks.workunit.client.0.vm05.stdout:0/261: getdents d9/de/d12/d15/d49 0 2026-03-09T15:01:29.027 INFO:tasks.workunit.client.0.vm05.stdout:0/262: dread - d9/de/d25/f48 zero size 2026-03-09T15:01:29.033 INFO:tasks.workunit.client.0.vm05.stdout:5/397: dread d1/d4/d34/d35/d3d/f32 [0,4194304] 0 2026-03-09T15:01:29.047 INFO:tasks.workunit.client.0.vm05.stdout:7/362: fsync d1/d9/d23/f5a 0 2026-03-09T15:01:29.048 INFO:tasks.workunit.client.0.vm05.stdout:7/363: write d1/f6e [482177,129464] 0 2026-03-09T15:01:29.056 INFO:tasks.workunit.client.0.vm05.stdout:2/337: dwrite da/d13/d30/f63 [0,4194304] 0 2026-03-09T15:01:29.078 INFO:tasks.workunit.client.0.vm05.stdout:3/368: mkdir d3/d29/d7f 0 2026-03-09T15:01:29.078 INFO:tasks.workunit.client.0.vm05.stdout:9/357: creat d2/d10/d22/d2c/d69/d5a/f75 x:0 0 0 2026-03-09T15:01:29.078 INFO:tasks.workunit.client.0.vm05.stdout:9/358: dwrite d2/f61 [0,4194304] 0 2026-03-09T15:01:29.084 INFO:tasks.workunit.client.0.vm05.stdout:0/263: getdents d9/de/d12/d15/d49 0 2026-03-09T15:01:29.088 INFO:tasks.workunit.client.0.vm05.stdout:5/398: symlink d1/d4/d34/d35/d3d/d38/d69/l88 0 2026-03-09T15:01:29.091 INFO:tasks.workunit.client.0.vm05.stdout:1/310: write f7 [1732534,100303] 0 2026-03-09T15:01:29.093 INFO:tasks.workunit.client.0.vm05.stdout:9/359: symlink d2/d10/d22/d52/d59/l76 0 2026-03-09T15:01:29.096 INFO:tasks.workunit.client.0.vm05.stdout:8/380: rmdir d0/d72 0 2026-03-09T15:01:29.099 INFO:tasks.workunit.client.0.vm05.stdout:6/338: getdents da/d17/d3b 0 2026-03-09T15:01:29.101 INFO:tasks.workunit.client.0.vm05.stdout:4/359: mknod d2/d4/d7/dc/c77 0 2026-03-09T15:01:29.104 
INFO:tasks.workunit.client.0.vm05.stdout:0/264: write d9/f2b [690745,6223] 0 2026-03-09T15:01:29.114 INFO:tasks.workunit.client.0.vm05.stdout:3/369: creat d3/d29/d2d/d77/d4d/f80 x:0 0 0 2026-03-09T15:01:29.114 INFO:tasks.workunit.client.0.vm05.stdout:9/360: mknod d2/d10/d22/d52/c77 0 2026-03-09T15:01:29.114 INFO:tasks.workunit.client.0.vm05.stdout:9/361: fsync d2/d10/f28 0 2026-03-09T15:01:29.114 INFO:tasks.workunit.client.0.vm05.stdout:6/339: write da/f16 [3697554,91494] 0 2026-03-09T15:01:29.127 INFO:tasks.workunit.client.0.vm05.stdout:2/338: dwrite da/d16/f20 [0,4194304] 0 2026-03-09T15:01:29.128 INFO:tasks.workunit.client.0.vm05.stdout:2/339: write da/d16/d46/f54 [1689999,129880] 0 2026-03-09T15:01:29.131 INFO:tasks.workunit.client.0.vm05.stdout:1/311: link d9/d2f/d37/f66 d9/d2a/f6e 0 2026-03-09T15:01:29.134 INFO:tasks.workunit.client.0.vm05.stdout:8/381: mknod d0/c81 0 2026-03-09T15:01:29.136 INFO:tasks.workunit.client.0.vm05.stdout:8/382: read d0/d24/f30 [3834886,82325] 0 2026-03-09T15:01:29.138 INFO:tasks.workunit.client.0.vm05.stdout:9/362: unlink d2/d4e/d56/d37/l2d 0 2026-03-09T15:01:29.140 INFO:tasks.workunit.client.0.vm05.stdout:7/364: link d1/d9/fd d1/d22/d3c/f70 0 2026-03-09T15:01:29.142 INFO:tasks.workunit.client.0.vm05.stdout:6/340: rmdir da/d17/d3b 39 2026-03-09T15:01:29.148 INFO:tasks.workunit.client.0.vm05.stdout:2/340: mknod da/d16/c68 0 2026-03-09T15:01:29.152 INFO:tasks.workunit.client.0.vm05.stdout:1/312: creat d9/d2a/d59/d49/d48/f6f x:0 0 0 2026-03-09T15:01:29.157 INFO:tasks.workunit.client.0.vm05.stdout:8/383: truncate d0/f47 293404 0 2026-03-09T15:01:29.157 INFO:tasks.workunit.client.0.vm05.stdout:8/384: fdatasync d0/f3b 0 2026-03-09T15:01:29.161 INFO:tasks.workunit.client.0.vm05.stdout:9/363: symlink d2/d10/d22/d47/l78 0 2026-03-09T15:01:29.166 INFO:tasks.workunit.client.0.vm05.stdout:0/265: truncate d9/de/f20 117789 0 2026-03-09T15:01:29.188 INFO:tasks.workunit.client.0.vm05.stdout:0/266: dwrite d9/de/f3e [0,4194304] 0 
2026-03-09T15:01:29.188 INFO:tasks.workunit.client.0.vm05.stdout:2/341: rmdir da/dd 39 2026-03-09T15:01:29.188 INFO:tasks.workunit.client.0.vm05.stdout:5/399: rename d1/d4/l2e to d1/d4/l89 0 2026-03-09T15:01:29.188 INFO:tasks.workunit.client.0.vm05.stdout:5/400: fdatasync d1/f9 0 2026-03-09T15:01:29.188 INFO:tasks.workunit.client.0.vm05.stdout:1/313: symlink d9/d2a/d59/d49/l70 0 2026-03-09T15:01:29.188 INFO:tasks.workunit.client.0.vm05.stdout:8/385: mknod d0/d1/d12/d1b/d66/c82 0 2026-03-09T15:01:29.188 INFO:tasks.workunit.client.0.vm05.stdout:9/364: creat d2/d10/d22/d52/d59/f79 x:0 0 0 2026-03-09T15:01:29.188 INFO:tasks.workunit.client.0.vm05.stdout:9/365: chown d2/f13 5374716 1 2026-03-09T15:01:29.188 INFO:tasks.workunit.client.0.vm05.stdout:6/341: mknod da/c60 0 2026-03-09T15:01:29.188 INFO:tasks.workunit.client.0.vm05.stdout:3/370: link d3/df/d59/c76 d3/df/d10/c81 0 2026-03-09T15:01:29.188 INFO:tasks.workunit.client.0.vm05.stdout:1/314: rmdir d9/d2f/d55 39 2026-03-09T15:01:29.188 INFO:tasks.workunit.client.0.vm05.stdout:9/366: creat d2/d10/d22/d52/d59/f7a x:0 0 0 2026-03-09T15:01:29.190 INFO:tasks.workunit.client.0.vm05.stdout:0/267: mknod d9/c54 0 2026-03-09T15:01:29.192 INFO:tasks.workunit.client.0.vm05.stdout:5/401: truncate d1/f2a 2903410 0 2026-03-09T15:01:29.194 INFO:tasks.workunit.client.0.vm05.stdout:9/367: rename d2/d10/d22/d52/d59/f7a to d2/d10/d22/d47/f7b 0 2026-03-09T15:01:29.195 INFO:tasks.workunit.client.0.vm05.stdout:0/268: creat d9/de/d25/d38/f55 x:0 0 0 2026-03-09T15:01:29.196 INFO:tasks.workunit.client.0.vm05.stdout:6/342: fsync da/d17/f1d 0 2026-03-09T15:01:29.197 INFO:tasks.workunit.client.0.vm05.stdout:5/402: readlink d1/d5d/l77 0 2026-03-09T15:01:29.198 INFO:tasks.workunit.client.0.vm05.stdout:8/386: rmdir d0/d2a/d2d/d42/d60/d75 0 2026-03-09T15:01:29.199 INFO:tasks.workunit.client.0.vm05.stdout:0/269: chown d9/la 16839080 1 2026-03-09T15:01:29.200 INFO:tasks.workunit.client.0.vm05.stdout:0/270: write d9/de/d12/f3c [3975087,67252] 0 
2026-03-09T15:01:29.204 INFO:tasks.workunit.client.0.vm05.stdout:2/342: getdents da 0 2026-03-09T15:01:29.209 INFO:tasks.workunit.client.0.vm05.stdout:8/387: unlink d0/fa 0 2026-03-09T15:01:29.209 INFO:tasks.workunit.client.0.vm05.stdout:8/388: dread - d0/d2a/d2d/d42/d60/d73/f74 zero size 2026-03-09T15:01:29.209 INFO:tasks.workunit.client.0.vm05.stdout:9/368: creat d2/d70/f7c x:0 0 0 2026-03-09T15:01:29.210 INFO:tasks.workunit.client.0.vm05.stdout:9/369: fdatasync d2/d10/d22/d47/f62 0 2026-03-09T15:01:29.210 INFO:tasks.workunit.client.0.vm05.stdout:9/370: readlink d2/d10/d22/d2c/l4b 0 2026-03-09T15:01:29.211 INFO:tasks.workunit.client.0.vm05.stdout:9/371: chown d2/d10/d22 0 1 2026-03-09T15:01:29.211 INFO:tasks.workunit.client.0.vm05.stdout:9/372: chown d2/d4e 0 1 2026-03-09T15:01:29.213 INFO:tasks.workunit.client.0.vm05.stdout:6/343: dread da/f57 [0,4194304] 0 2026-03-09T15:01:29.215 INFO:tasks.workunit.client.0.vm05.stdout:0/271: unlink d9/de/d12/d15/f36 0 2026-03-09T15:01:29.229 INFO:tasks.workunit.client.0.vm05.stdout:2/343: creat da/d16/f69 x:0 0 0 2026-03-09T15:01:29.229 INFO:tasks.workunit.client.0.vm05.stdout:2/344: write da/d16/d46/f5e [271797,110834] 0 2026-03-09T15:01:29.229 INFO:tasks.workunit.client.0.vm05.stdout:1/315: link d9/d2f/l52 d9/l71 0 2026-03-09T15:01:29.229 INFO:tasks.workunit.client.0.vm05.stdout:8/389: symlink d0/d1/d12/d1b/d66/l83 0 2026-03-09T15:01:29.229 INFO:tasks.workunit.client.0.vm05.stdout:1/316: dread f7 [0,4194304] 0 2026-03-09T15:01:29.229 INFO:tasks.workunit.client.0.vm05.stdout:9/373: chown d2/d10/d22/d2c/f3a 101159238 1 2026-03-09T15:01:29.230 INFO:tasks.workunit.client.0.vm05.stdout:3/371: sync 2026-03-09T15:01:29.233 INFO:tasks.workunit.client.0.vm05.stdout:6/344: stat da/f12 0 2026-03-09T15:01:29.234 INFO:tasks.workunit.client.0.vm05.stdout:6/345: write da/d43/f54 [383367,20432] 0 2026-03-09T15:01:29.235 INFO:tasks.workunit.client.0.vm05.stdout:3/372: dread d3/df/d10/f2a [0,4194304] 0 2026-03-09T15:01:29.236 
INFO:tasks.workunit.client.0.vm05.stdout:3/373: truncate d3/d29/d2d/d77/d4d/f80 572493 0 2026-03-09T15:01:29.238 INFO:tasks.workunit.client.0.vm05.stdout:0/272: rename d9/fd to d9/de/d25/f56 0 2026-03-09T15:01:29.243 INFO:tasks.workunit.client.0.vm05.stdout:4/360: truncate d2/d4/d7/d21/f61 1975654 0 2026-03-09T15:01:29.264 INFO:tasks.workunit.client.0.vm05.stdout:8/390: fdatasync d0/d1/d12/d1b/d66/f56 0 2026-03-09T15:01:29.264 INFO:tasks.workunit.client.0.vm05.stdout:6/346: write da/d17/d3b/f4a [1012396,5275] 0 2026-03-09T15:01:29.265 INFO:tasks.workunit.client.0.vm05.stdout:6/347: write da/fb [1422506,24805] 0 2026-03-09T15:01:29.265 INFO:tasks.workunit.client.0.vm05.stdout:6/348: fsync da/d17/f42 0 2026-03-09T15:01:29.265 INFO:tasks.workunit.client.0.vm05.stdout:0/273: dwrite d9/de/f3d [0,4194304] 0 2026-03-09T15:01:29.271 INFO:tasks.workunit.client.0.vm05.stdout:4/361: dread d2/d1d/f36 [0,4194304] 0 2026-03-09T15:01:29.275 INFO:tasks.workunit.client.0.vm05.stdout:5/403: getdents d1/d4/d34/d35/d3d/d38/d63 0 2026-03-09T15:01:29.276 INFO:tasks.workunit.client.0.vm05.stdout:5/404: readlink d1/d4/d34/d35/d3d/d38/d69/l88 0 2026-03-09T15:01:29.276 INFO:tasks.workunit.client.0.vm05.stdout:5/405: chown d1/da/f4a 75664 1 2026-03-09T15:01:29.280 INFO:tasks.workunit.client.0.vm05.stdout:7/365: write d1/f16 [826274,51860] 0 2026-03-09T15:01:29.281 INFO:tasks.workunit.client.0.vm05.stdout:7/366: chown d1/d9/d23/d31/d32/f58 7 1 2026-03-09T15:01:29.282 INFO:tasks.workunit.client.0.vm05.stdout:1/317: symlink d9/d2f/d37/d5f/l72 0 2026-03-09T15:01:29.298 INFO:tasks.workunit.client.0.vm05.stdout:9/374: rename d2/d10/d22/d2c/c54 to d2/d10/d22/d52/d59/c7d 0 2026-03-09T15:01:29.330 INFO:tasks.workunit.client.0.vm05.stdout:3/374: rename d3 to d3/df/d59/d79/d82 22 2026-03-09T15:01:29.331 INFO:tasks.workunit.client.0.vm05.stdout:9/375: chown d2/d10/d22/d47/l78 3510332 1 2026-03-09T15:01:29.331 INFO:tasks.workunit.client.0.vm05.stdout:8/391: dread d0/d2a/d2d/d54/f5b [0,4194304] 0 
2026-03-09T15:01:29.331 INFO:tasks.workunit.client.0.vm05.stdout:8/392: fsync d0/d1/d12/d3c/f51 0 2026-03-09T15:01:29.331 INFO:tasks.workunit.client.0.vm05.stdout:0/274: dwrite d9/de/d12/d15/d2e/f3a [0,4194304] 0 2026-03-09T15:01:29.331 INFO:tasks.workunit.client.0.vm05.stdout:0/275: write d9/de/d25/d38/f55 [246632,69739] 0 2026-03-09T15:01:29.331 INFO:tasks.workunit.client.0.vm05.stdout:0/276: fsync d9/de/d25/f2d 0 2026-03-09T15:01:29.331 INFO:tasks.workunit.client.0.vm05.stdout:1/318: creat d9/d2f/d37/d5f/f73 x:0 0 0 2026-03-09T15:01:29.332 INFO:tasks.workunit.client.0.vm05.stdout:8/393: fsync d0/d2a/d2d/d42/f4e 0 2026-03-09T15:01:29.332 INFO:tasks.workunit.client.0.vm05.stdout:4/362: dwrite d2/d4/d7/f2d [0,4194304] 0 2026-03-09T15:01:29.332 INFO:tasks.workunit.client.0.vm05.stdout:4/363: chown d2/d4/d7/dc/f18 685 1 2026-03-09T15:01:29.332 INFO:tasks.workunit.client.0.vm05.stdout:5/406: link d1/d4/d34/f5c d1/d4/d34/d35/d3d/d38/f8a 0 2026-03-09T15:01:29.332 INFO:tasks.workunit.client.0.vm05.stdout:7/367: mknod d1/c71 0 2026-03-09T15:01:29.333 INFO:tasks.workunit.client.0.vm05.stdout:1/319: mknod d9/d2a/d59/d49/d4b/c74 0 2026-03-09T15:01:29.336 INFO:tasks.workunit.client.0.vm05.stdout:3/375: creat d3/d29/d7f/f83 x:0 0 0 2026-03-09T15:01:29.338 INFO:tasks.workunit.client.0.vm05.stdout:8/394: creat d0/d1/d12/d3c/f84 x:0 0 0 2026-03-09T15:01:29.339 INFO:tasks.workunit.client.0.vm05.stdout:8/395: dread - d0/d1/d12/d3c/f84 zero size 2026-03-09T15:01:29.342 INFO:tasks.workunit.client.0.vm05.stdout:8/396: dwrite d0/d1/d12/d3c/f77 [0,4194304] 0 2026-03-09T15:01:29.353 INFO:tasks.workunit.client.0.vm05.stdout:5/407: creat d1/d4/d34/d35/f8b x:0 0 0 2026-03-09T15:01:29.357 INFO:tasks.workunit.client.0.vm05.stdout:5/408: dwrite d1/f5e [0,4194304] 0 2026-03-09T15:01:29.360 INFO:tasks.workunit.client.0.vm05.stdout:5/409: write d1/da/f2f [1939221,126798] 0 2026-03-09T15:01:29.368 INFO:tasks.workunit.client.0.vm05.stdout:6/349: getdents da/d43 0 2026-03-09T15:01:29.370 
INFO:tasks.workunit.client.0.vm05.stdout:6/350: fsync da/f12 0 2026-03-09T15:01:29.381 INFO:tasks.workunit.client.0.vm05.stdout:5/410: dread d1/d4/d34/d35/f52 [0,4194304] 0 2026-03-09T15:01:29.384 INFO:tasks.workunit.client.0.vm05.stdout:8/397: creat d0/d2a/d2d/d54/f85 x:0 0 0 2026-03-09T15:01:29.391 INFO:tasks.workunit.client.0.vm05.stdout:7/368: mkdir d1/d9/d72 0 2026-03-09T15:01:29.404 INFO:tasks.workunit.client.0.vm05.stdout:7/369: dwrite d1/d9/fc [0,4194304] 0 2026-03-09T15:01:29.404 INFO:tasks.workunit.client.0.vm05.stdout:5/411: dread d1/f6 [0,4194304] 0 2026-03-09T15:01:29.404 INFO:tasks.workunit.client.0.vm05.stdout:3/376: mknod d3/d29/c84 0 2026-03-09T15:01:29.404 INFO:tasks.workunit.client.0.vm05.stdout:7/370: creat d1/d12/f73 x:0 0 0 2026-03-09T15:01:29.409 INFO:tasks.workunit.client.0.vm05.stdout:3/377: dread d3/df/f1b [4194304,4194304] 0 2026-03-09T15:01:29.409 INFO:tasks.workunit.client.0.vm05.stdout:5/412: readlink d1/d4/d34/d35/d4e/l86 0 2026-03-09T15:01:29.409 INFO:tasks.workunit.client.0.vm05.stdout:3/378: write d3/d29/f41 [2736239,19223] 0 2026-03-09T15:01:29.410 INFO:tasks.workunit.client.0.vm05.stdout:5/413: stat d1/l2 0 2026-03-09T15:01:29.411 INFO:tasks.workunit.client.0.vm05.stdout:5/414: stat d1/d4/d34/d35/d53 0 2026-03-09T15:01:29.411 INFO:tasks.workunit.client.0.vm05.stdout:3/379: write d3/df/d10/d19/d44/d50/f65 [449836,128340] 0 2026-03-09T15:01:29.411 INFO:tasks.workunit.client.0.vm05.stdout:8/398: mkdir d0/d2a/d2d/d78/d86 0 2026-03-09T15:01:29.413 INFO:tasks.workunit.client.0.vm05.stdout:3/380: chown d3/df/d10/d19/f58 84 1 2026-03-09T15:01:29.418 INFO:tasks.workunit.client.0.vm05.stdout:5/415: dwrite d1/d4/d34/d56/f59 [0,4194304] 0 2026-03-09T15:01:29.426 INFO:tasks.workunit.client.0.vm05.stdout:6/351: creat da/d17/f61 x:0 0 0 2026-03-09T15:01:29.426 INFO:tasks.workunit.client.0.vm05.stdout:3/381: creat d3/d66/f85 x:0 0 0 2026-03-09T15:01:29.426 INFO:tasks.workunit.client.0.vm05.stdout:3/382: stat d3/df/d10/d19/l3e 0 
2026-03-09T15:01:29.427 INFO:tasks.workunit.client.0.vm05.stdout:5/416: dread d1/d4/f5f [0,4194304] 0 2026-03-09T15:01:29.427 INFO:tasks.workunit.client.0.vm05.stdout:3/383: stat d3/df/d10/d19/d44 0 2026-03-09T15:01:29.428 INFO:tasks.workunit.client.0.vm05.stdout:3/384: write d3/d66/f71 [340248,81925] 0 2026-03-09T15:01:29.434 INFO:tasks.workunit.client.0.vm05.stdout:3/385: dread d3/df/f14 [0,4194304] 0 2026-03-09T15:01:29.448 INFO:tasks.workunit.client.0.vm05.stdout:6/352: dread da/d19/f35 [0,4194304] 0 2026-03-09T15:01:29.448 INFO:tasks.workunit.client.0.vm05.stdout:6/353: rename da/fb to da/f62 0 2026-03-09T15:01:29.448 INFO:tasks.workunit.client.0.vm05.stdout:3/386: getdents d3/df/d1e/d2f/d52 0 2026-03-09T15:01:29.448 INFO:tasks.workunit.client.0.vm05.stdout:2/345: write da/dd/f1c [2646766,27410] 0 2026-03-09T15:01:29.450 INFO:tasks.workunit.client.0.vm05.stdout:0/277: sync 2026-03-09T15:01:29.450 INFO:tasks.workunit.client.0.vm05.stdout:9/376: sync 2026-03-09T15:01:29.453 INFO:tasks.workunit.client.0.vm05.stdout:2/346: stat da/dd/f25 0 2026-03-09T15:01:29.453 INFO:tasks.workunit.client.0.vm05.stdout:6/354: mknod da/d17/c63 0 2026-03-09T15:01:29.454 INFO:tasks.workunit.client.0.vm05.stdout:0/278: symlink d9/de/l57 0 2026-03-09T15:01:29.455 INFO:tasks.workunit.client.0.vm05.stdout:2/347: mkdir da/d29/d6a 0 2026-03-09T15:01:29.457 INFO:tasks.workunit.client.0.vm05.stdout:0/279: readlink d9/de/d12/d15/d2e/l3f 0 2026-03-09T15:01:29.459 INFO:tasks.workunit.client.0.vm05.stdout:2/348: dwrite da/d16/f69 [0,4194304] 0 2026-03-09T15:01:29.470 INFO:tasks.workunit.client.0.vm05.stdout:9/377: link d2/d10/d22/d52/l6d d2/d70/l7e 0 2026-03-09T15:01:29.470 INFO:tasks.workunit.client.0.vm05.stdout:2/349: chown da/d13/d2f/d35/f3a 51084 1 2026-03-09T15:01:29.470 INFO:tasks.workunit.client.0.vm05.stdout:9/378: read d2/d10/d22/d2c/f44 [376966,33627] 0 2026-03-09T15:01:29.470 INFO:tasks.workunit.client.0.vm05.stdout:6/355: creat da/d17/f64 x:0 0 0 2026-03-09T15:01:29.470 
INFO:tasks.workunit.client.0.vm05.stdout:0/280: truncate d9/de/d25/f56 799851 0 2026-03-09T15:01:29.493 INFO:tasks.workunit.client.0.vm05.stdout:7/371: sync 2026-03-09T15:01:29.497 INFO:tasks.workunit.client.0.vm05.stdout:7/372: dwrite d1/d9/f59 [0,4194304] 0 2026-03-09T15:01:29.499 INFO:tasks.workunit.client.0.vm05.stdout:7/373: dread - d1/d9/d23/f4c zero size 2026-03-09T15:01:29.514 INFO:tasks.workunit.client.0.vm05.stdout:9/379: symlink d2/d10/d22/d47/d73/l7f 0 2026-03-09T15:01:29.514 INFO:tasks.workunit.client.0.vm05.stdout:9/380: write d2/f61 [4462506,16437] 0 2026-03-09T15:01:29.514 INFO:tasks.workunit.client.0.vm05.stdout:0/281: link d9/de/d12/f4c d9/f58 0 2026-03-09T15:01:29.520 INFO:tasks.workunit.client.0.vm05.stdout:9/381: link d2/d4e/d56/d37/f36 d2/d10/d22/d2c/d69/d5a/f80 0 2026-03-09T15:01:29.527 INFO:tasks.workunit.client.0.vm05.stdout:9/382: fdatasync d2/f12 0 2026-03-09T15:01:29.527 INFO:tasks.workunit.client.0.vm05.stdout:0/282: mkdir d9/d59 0 2026-03-09T15:01:29.527 INFO:tasks.workunit.client.0.vm05.stdout:0/283: symlink d9/de/d25/l5a 0 2026-03-09T15:01:29.528 INFO:tasks.workunit.client.0.vm05.stdout:0/284: chown d9/de/d25/f56 0 1 2026-03-09T15:01:29.531 INFO:tasks.workunit.client.0.vm05.stdout:0/285: dwrite d9/de/d12/f23 [0,4194304] 0 2026-03-09T15:01:29.542 INFO:tasks.workunit.client.0.vm05.stdout:0/286: dwrite d9/de/d25/d38/f2f [0,4194304] 0 2026-03-09T15:01:29.543 INFO:tasks.workunit.client.0.vm05.stdout:0/287: write d9/de/f3d [5124353,123182] 0 2026-03-09T15:01:29.564 INFO:tasks.workunit.client.0.vm05.stdout:1/320: write f5 [1382829,124645] 0 2026-03-09T15:01:29.572 INFO:tasks.workunit.client.0.vm05.stdout:4/364: dwrite d2/d4/d7/dc/f54 [0,4194304] 0 2026-03-09T15:01:29.576 INFO:tasks.workunit.client.0.vm05.stdout:4/365: dread d2/d43/f4f [0,4194304] 0 2026-03-09T15:01:29.586 INFO:tasks.workunit.client.0.vm05.stdout:4/366: stat d2/d4/d7/c41 0 2026-03-09T15:01:29.619 INFO:tasks.workunit.client.0.vm05.stdout:1/321: mkdir d9/d2a/d6a/d75 0 
2026-03-09T15:01:29.620 INFO:tasks.workunit.client.0.vm05.stdout:1/322: dread d9/d2a/d59/d49/f51 [0,4194304] 0 2026-03-09T15:01:29.620 INFO:tasks.workunit.client.0.vm05.stdout:4/367: rename d2/d4/d1e/c24 to d2/d49/c78 0 2026-03-09T15:01:29.620 INFO:tasks.workunit.client.0.vm05.stdout:5/417: write d1/d4/d19/f29 [5221828,94975] 0 2026-03-09T15:01:29.620 INFO:tasks.workunit.client.0.vm05.stdout:3/387: write d3/df/d10/d19/f25 [1021947,18922] 0 2026-03-09T15:01:29.620 INFO:tasks.workunit.client.0.vm05.stdout:5/418: rmdir d1/d4/d34/d35/d3d/d38/d63 39 2026-03-09T15:01:29.620 INFO:tasks.workunit.client.0.vm05.stdout:5/419: dwrite d1/f4c [0,4194304] 0 2026-03-09T15:01:29.620 INFO:tasks.workunit.client.0.vm05.stdout:2/350: dwrite da/dd/f25 [0,4194304] 0 2026-03-09T15:01:29.632 INFO:tasks.workunit.client.0.vm05.stdout:2/351: dwrite da/f2c [0,4194304] 0 2026-03-09T15:01:29.641 INFO:tasks.workunit.client.0.vm05.stdout:1/323: truncate d9/f15 1540491 0 2026-03-09T15:01:29.645 INFO:tasks.workunit.client.0.vm05.stdout:2/352: unlink da/d13/d2f/d35/l38 0 2026-03-09T15:01:29.645 INFO:tasks.workunit.client.0.vm05.stdout:2/353: read da/d16/f20 [1542395,69364] 0 2026-03-09T15:01:29.646 INFO:tasks.workunit.client.0.vm05.stdout:1/324: creat d9/d2a/d6a/d75/f76 x:0 0 0 2026-03-09T15:01:29.648 INFO:tasks.workunit.client.0.vm05.stdout:1/325: mkdir d9/d2a/d59/d49/d77 0 2026-03-09T15:01:29.649 INFO:tasks.workunit.client.0.vm05.stdout:1/326: readlink d9/l46 0 2026-03-09T15:01:29.649 INFO:tasks.workunit.client.0.vm05.stdout:1/327: chown d9/l40 567823267 1 2026-03-09T15:01:29.649 INFO:tasks.workunit.client.0.vm05.stdout:1/328: read d9/d2f/f43 [65179,108967] 0 2026-03-09T15:01:29.651 INFO:tasks.workunit.client.0.vm05.stdout:1/329: mkdir d9/d2a/d59/d49/d78 0 2026-03-09T15:01:29.785 INFO:tasks.workunit.client.0.vm05.stdout:2/354: dread da/d29/f39 [0,4194304] 0 2026-03-09T15:01:29.788 INFO:tasks.workunit.client.0.vm05.stdout:1/330: sync 2026-03-09T15:01:29.795 
INFO:tasks.workunit.client.0.vm05.stdout:2/355: dread da/d16/f20 [0,4194304] 0 2026-03-09T15:01:29.795 INFO:tasks.workunit.client.0.vm05.stdout:1/331: creat d9/d17/f79 x:0 0 0 2026-03-09T15:01:29.795 INFO:tasks.workunit.client.0.vm05.stdout:1/332: dwrite d9/d2a/f56 [0,4194304] 0 2026-03-09T15:01:29.798 INFO:tasks.workunit.client.0.vm05.stdout:1/333: dwrite d9/d2a/d59/f42 [0,4194304] 0 2026-03-09T15:01:29.804 INFO:tasks.workunit.client.0.vm05.stdout:1/334: rmdir d9/d2a/d59/d49/d4b 39 2026-03-09T15:01:29.810 INFO:tasks.workunit.client.0.vm05.stdout:7/374: truncate d1/d9/d23/d31/d32/f63 1067823 0 2026-03-09T15:01:29.811 INFO:tasks.workunit.client.0.vm05.stdout:7/375: fdatasync d1/d9/d23/d31/d51/f3b 0 2026-03-09T15:01:29.812 INFO:tasks.workunit.client.0.vm05.stdout:1/335: symlink d9/d2f/d55/l7a 0 2026-03-09T15:01:29.816 INFO:tasks.workunit.client.0.vm05.stdout:1/336: mknod d9/d2a/d59/d49/d4b/c7b 0 2026-03-09T15:01:29.825 INFO:tasks.workunit.client.0.vm05.stdout:1/337: write d9/d2f/d55/f68 [701722,130236] 0 2026-03-09T15:01:29.825 INFO:tasks.workunit.client.0.vm05.stdout:0/288: truncate d9/de/f3d 4581774 0 2026-03-09T15:01:29.825 INFO:tasks.workunit.client.0.vm05.stdout:0/289: write d9/de/d25/f47 [387995,65500] 0 2026-03-09T15:01:29.827 INFO:tasks.workunit.client.0.vm05.stdout:0/290: dwrite d9/de/d25/f52 [0,4194304] 0 2026-03-09T15:01:29.832 INFO:tasks.workunit.client.0.vm05.stdout:0/291: dwrite d9/f22 [4194304,4194304] 0 2026-03-09T15:01:29.885 INFO:tasks.workunit.client.0.vm05.stdout:3/388: rmdir d3/df/d10/d19/d44 39 2026-03-09T15:01:29.901 INFO:tasks.workunit.client.0.vm05.stdout:4/368: truncate d2/d49/f4d 1756967 0 2026-03-09T15:01:29.903 INFO:tasks.workunit.client.0.vm05.stdout:4/369: fsync d2/d4/d7/dc/f27 0 2026-03-09T15:01:29.912 INFO:tasks.workunit.client.0.vm05.stdout:4/370: rmdir d2/d4/d72 0 2026-03-09T15:01:29.914 INFO:tasks.workunit.client.0.vm05.stdout:1/338: fsync d9/d2a/d6a/d75/f76 0 2026-03-09T15:01:29.915 INFO:tasks.workunit.client.0.vm05.stdout:1/339: 
stat d9/d2a/d59/d49/d4b/c60 0
2026-03-09T15:01:29.928 INFO:tasks.workunit.client.0.vm05.stdout:1/340: dread d9/d17/f22 [0,4194304] 0
2026-03-09T15:01:29.930 INFO:tasks.workunit.client.0.vm05.stdout:9/383: rmdir d2/d4e/d56/d53 39
2026-03-09T15:01:29.932 INFO:tasks.workunit.client.0.vm05.stdout:9/384: creat d2/d10/d22/d47/d73/f81 x:0 0 0
2026-03-09T15:01:29.936 INFO:tasks.workunit.client.0.vm05.stdout:1/341: link d9/d2a/l6c d9/d2a/d59/d49/d78/l7c 0
2026-03-09T15:01:29.938 INFO:tasks.workunit.client.0.vm05.stdout:1/342: symlink d9/d2a/d59/d49/d78/l7d 0
2026-03-09T15:01:29.950 INFO:tasks.workunit.client.0.vm05.stdout:1/343: dread f5 [0,4194304] 0
2026-03-09T15:01:29.953 INFO:tasks.workunit.client.0.vm05.stdout:1/344: dwrite d9/d2f/f58 [0,4194304] 0
2026-03-09T15:01:29.962 INFO:tasks.workunit.client.0.vm05.stdout:9/385: sync
2026-03-09T15:01:29.970 INFO:tasks.workunit.client.0.vm05.stdout:3/389: dread d3/df/d1e/d2c/f6a [0,4194304] 0
2026-03-09T15:01:29.971 INFO:tasks.workunit.client.0.vm05.stdout:3/390: truncate d3/d66/f71 1044054 0
2026-03-09T15:01:29.977 INFO:tasks.workunit.client.0.vm05.stdout:1/345: mkdir d9/d2a/d59/d49/d78/d7e 0
2026-03-09T15:01:29.981 INFO:tasks.workunit.client.0.vm05.stdout:0/292: write d9/f58 [4194299,29740] 0
2026-03-09T15:01:29.992 INFO:tasks.workunit.client.0.vm05.stdout:0/293: dwrite d9/de/d25/d38/f2f [0,4194304] 0
2026-03-09T15:01:29.992 INFO:tasks.workunit.client.0.vm05.stdout:7/376: unlink d1/d9/f10 0
2026-03-09T15:01:29.999 INFO:tasks.workunit.client.0.vm05.stdout:5/420: rename d1/d4/d34/d35/d53 to d1/d4/d34/d35/d3d/d38/d63/d8c 0
2026-03-09T15:01:29.999 INFO:tasks.workunit.client.0.vm05.stdout:3/391: symlink d3/d29/d2d/d77/l86 0
2026-03-09T15:01:29.999 INFO:tasks.workunit.client.0.vm05.stdout:1/346: truncate d9/d2f/f43 688131 0
2026-03-09T15:01:30.000 INFO:tasks.workunit.client.0.vm05.stdout:0/294: mknod d9/de/d25/c5b 0
2026-03-09T15:01:30.001 INFO:tasks.workunit.client.0.vm05.stdout:0/295: dread - d9/de/d25/f48 zero size
2026-03-09T15:01:30.004 INFO:tasks.workunit.client.0.vm05.stdout:5/421: dread d1/d4/d34/d56/f59 [0,4194304] 0
2026-03-09T15:01:30.007 INFO:tasks.workunit.client.0.vm05.stdout:0/296: dwrite d9/de/f3e [0,4194304] 0
2026-03-09T15:01:30.012 INFO:tasks.workunit.client.0.vm05.stdout:8/399: dread d0/d7/f33 [0,4194304] 0
2026-03-09T15:01:30.012 INFO:tasks.workunit.client.0.vm05.stdout:8/400: dread - d0/d2a/d2d/d54/f85 zero size
2026-03-09T15:01:30.016 INFO:tasks.workunit.client.0.vm05.stdout:3/392: fsync d3/df/d10/f28 0
2026-03-09T15:01:30.017 INFO:tasks.workunit.client.0.vm05.stdout:3/393: write d3/d29/d2d/f31 [3716123,77329] 0
2026-03-09T15:01:30.029 INFO:tasks.workunit.client.0.vm05.stdout:2/356: creat da/d16/f6b x:0 0 0
2026-03-09T15:01:30.033 INFO:tasks.workunit.client.0.vm05.stdout:5/422: creat d1/d4/d34/d35/d4e/f8d x:0 0 0
2026-03-09T15:01:30.033 INFO:tasks.workunit.client.0.vm05.stdout:5/423: chown d1/d5d 225980660 1
2026-03-09T15:01:30.035 INFO:tasks.workunit.client.0.vm05.stdout:6/356: creat da/f65 x:0 0 0
2026-03-09T15:01:30.043 INFO:tasks.workunit.client.0.vm05.stdout:9/386: rename d2/d10/f2e to d2/d10/f82 0
2026-03-09T15:01:30.046 INFO:tasks.workunit.client.0.vm05.stdout:9/387: dwrite d2/d4e/f3e [0,4194304] 0
2026-03-09T15:01:30.058 INFO:tasks.workunit.client.0.vm05.stdout:3/394: creat d3/df/d1e/d2f/d52/f87 x:0 0 0
2026-03-09T15:01:30.058 INFO:tasks.workunit.client.0.vm05.stdout:3/395: chown d3/df/d1e/l3d 1076062411 1
2026-03-09T15:01:30.064 INFO:tasks.workunit.client.0.vm05.stdout:2/357: symlink da/l6c 0
2026-03-09T15:01:30.072 INFO:tasks.workunit.client.0.vm05.stdout:6/357: mkdir da/d43/d66 0
2026-03-09T15:01:30.075 INFO:tasks.workunit.client.0.vm05.stdout:5/424: dread d1/d4/d27/f4f [0,4194304] 0
2026-03-09T15:01:30.076 INFO:tasks.workunit.client.0.vm05.stdout:5/425: dread - d1/d4/d34/d35/d4e/f8d zero size
2026-03-09T15:01:30.082 INFO:tasks.workunit.client.0.vm05.stdout:9/388: mknod d2/d10/d22/d47/c83 0
2026-03-09T15:01:30.086 INFO:tasks.workunit.client.0.vm05.stdout:4/371: rename d2/d43/d66 to d2/d4/d7/d79 0
2026-03-09T15:01:30.098 INFO:tasks.workunit.client.0.vm05.stdout:8/401: dread d0/d1/d12/d1b/f27 [0,4194304] 0
2026-03-09T15:01:30.126 INFO:tasks.workunit.client.0.vm05.stdout:7/377: rename d1/d9/d23/l2b to d1/d22/l74 0
2026-03-09T15:01:30.126 INFO:tasks.workunit.client.0.vm05.stdout:7/378: write d1/d22/f67 [616119,18305] 0
2026-03-09T15:01:30.135 INFO:tasks.workunit.client.0.vm05.stdout:6/358: mknod da/d43/d66/c67 0
2026-03-09T15:01:30.139 INFO:tasks.workunit.client.0.vm05.stdout:6/359: dwrite da/d17/f2c [0,4194304] 0
2026-03-09T15:01:30.144 INFO:tasks.workunit.client.0.vm05.stdout:6/360: dwrite da/d17/d3b/f4a [0,4194304] 0
2026-03-09T15:01:30.152 INFO:tasks.workunit.client.0.vm05.stdout:1/347: write d9/d2f/f3a [1029468,12981] 0
2026-03-09T15:01:30.152 INFO:tasks.workunit.client.0.vm05.stdout:1/348: write d9/d2f/f3a [912651,129432] 0
2026-03-09T15:01:30.159 INFO:tasks.workunit.client.0.vm05.stdout:8/402: read - d0/d1/d12/f4f zero size
2026-03-09T15:01:30.166 INFO:tasks.workunit.client.0.vm05.stdout:4/372: mkdir d2/d7a 0
2026-03-09T15:01:30.167 INFO:tasks.workunit.client.0.vm05.stdout:4/373: read d2/f3e [52866,68542] 0
2026-03-09T15:01:30.179 INFO:tasks.workunit.client.0.vm05.stdout:1/349: readlink d9/d2a/l6c 0
2026-03-09T15:01:30.180 INFO:tasks.workunit.client.0.vm05.stdout:1/350: fdatasync d9/d2f/d55/f64 0
2026-03-09T15:01:30.180 INFO:tasks.workunit.client.0.vm05.stdout:1/351: dread - d9/d2a/d59/d49/d48/f6f zero size
2026-03-09T15:01:30.183 INFO:tasks.workunit.client.0.vm05.stdout:1/352: dwrite d9/d2f/d55/f5e [0,4194304] 0
2026-03-09T15:01:30.193 INFO:tasks.workunit.client.0.vm05.stdout:8/403: rename d0/d2a/d2d/d42/l46 to d0/d1/d55/l87 0
2026-03-09T15:01:30.194 INFO:tasks.workunit.client.0.vm05.stdout:0/297: truncate d9/de/d25/d38/f2f 1147182 0
2026-03-09T15:01:30.201 INFO:tasks.workunit.client.0.vm05.stdout:2/358: symlink da/d16/d46/l6d 0
2026-03-09T15:01:30.202 INFO:tasks.workunit.client.0.vm05.stdout:0/298: creat d9/de/d25/d38/d41/f5c x:0 0 0
2026-03-09T15:01:30.203 INFO:tasks.workunit.client.0.vm05.stdout:0/299: fsync d9/de/f19 0
2026-03-09T15:01:30.204 INFO:tasks.workunit.client.0.vm05.stdout:4/374: creat d2/d4/d7/f7b x:0 0 0
2026-03-09T15:01:30.206 INFO:tasks.workunit.client.0.vm05.stdout:1/353: creat d9/f7f x:0 0 0
2026-03-09T15:01:30.208 INFO:tasks.workunit.client.0.vm05.stdout:8/404: creat d0/d7/f88 x:0 0 0
2026-03-09T15:01:30.210 INFO:tasks.workunit.client.0.vm05.stdout:0/300: unlink d9/de/d12/d15/c4b 0
2026-03-09T15:01:30.214 INFO:tasks.workunit.client.0.vm05.stdout:5/426: dwrite d1/d4/d34/d35/d3d/f37 [0,4194304] 0
2026-03-09T15:01:30.214 INFO:tasks.workunit.client.0.vm05.stdout:5/427: dread - d1/d5d/f81 zero size
2026-03-09T15:01:30.216 INFO:tasks.workunit.client.0.vm05.stdout:2/359: unlink da/c28 0
2026-03-09T15:01:30.217 INFO:tasks.workunit.client.0.vm05.stdout:3/396: truncate d3/df/f23 1470216 0
2026-03-09T15:01:30.225 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:29 vm09.local ceph-mon[59673]: pgmap v171: 65 pgs: 65 active+clean; 2.0 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail; 8.9 MiB/s rd, 80 MiB/s wr, 187 op/s
2026-03-09T15:01:30.226 INFO:tasks.workunit.client.0.vm05.stdout:9/389: dwrite d2/f1f [4194304,4194304] 0
2026-03-09T15:01:30.237 INFO:tasks.workunit.client.0.vm05.stdout:1/354: rmdir d9/d2a/d59/d33 39
2026-03-09T15:01:30.239 INFO:tasks.workunit.client.0.vm05.stdout:7/379: dwrite d1/d49/d4a/f6b [0,4194304] 0
2026-03-09T15:01:30.244 INFO:tasks.workunit.client.0.vm05.stdout:0/301: dread d9/de/d25/f56 [0,4194304] 0
2026-03-09T15:01:30.248 INFO:tasks.workunit.client.0.vm05.stdout:5/428: dread d1/d4/d34/d35/d3d/f32 [0,4194304] 0
2026-03-09T15:01:30.251 INFO:tasks.workunit.client.0.vm05.stdout:1/355: rename d9/d2a/d59/d49/f1d to d9/d2f/d37/d5f/f80 0
2026-03-09T15:01:30.254 INFO:tasks.workunit.client.0.vm05.stdout:8/405: link d0/d2a/d2d/d54/f85 d0/d1/d12/d1b/f89 0
2026-03-09T15:01:30.255 INFO:tasks.workunit.client.0.vm05.stdout:4/375: symlink d2/d4/d7/d48/d6b/l7c 0
2026-03-09T15:01:30.256 INFO:tasks.workunit.client.0.vm05.stdout:0/302: creat d9/de/f5d x:0 0 0
2026-03-09T15:01:30.257 INFO:tasks.workunit.client.0.vm05.stdout:5/429: symlink d1/d4/d27/l8e 0
2026-03-09T15:01:30.258 INFO:tasks.workunit.client.0.vm05.stdout:5/430: chown d1/d4/d34/d35/d3d 353656216 1
2026-03-09T15:01:30.264 INFO:tasks.workunit.client.0.vm05.stdout:5/431: creat d1/d4/d34/d56/d68/f8f x:0 0 0
2026-03-09T15:01:30.266 INFO:tasks.workunit.client.0.vm05.stdout:7/380: creat d1/d9/f75 x:0 0 0
2026-03-09T15:01:30.267 INFO:tasks.workunit.client.0.vm05.stdout:1/356: link d9/d2a/d59/d49/d48/f6f d9/d17/f81 0
2026-03-09T15:01:30.273 INFO:tasks.workunit.client.0.vm05.stdout:4/376: unlink d2/d4/d8/c31 0
2026-03-09T15:01:30.273 INFO:tasks.workunit.client.0.vm05.stdout:4/377: write d2/d4/d7/dc/f45 [532119,13714] 0
2026-03-09T15:01:30.273 INFO:tasks.workunit.client.0.vm05.stdout:4/378: dread - d2/d49/f56 zero size
2026-03-09T15:01:30.273 INFO:tasks.workunit.client.0.vm05.stdout:5/432: creat d1/d4/d34/d35/d4e/d6f/f90 x:0 0 0
2026-03-09T15:01:30.273 INFO:tasks.workunit.client.0.vm05.stdout:3/397: link d3/df/d1e/d2c/f6a d3/df/f88 0
2026-03-09T15:01:30.273 INFO:tasks.workunit.client.0.vm05.stdout:0/303: creat d9/de/d12/d15/f5e x:0 0 0
2026-03-09T15:01:30.273 INFO:tasks.workunit.client.0.vm05.stdout:3/398: dwrite d3/df/d1e/d2f/d52/f61 [0,4194304] 0
2026-03-09T15:01:30.275 INFO:tasks.workunit.client.0.vm05.stdout:4/379: creat d2/d1d/f7d x:0 0 0
2026-03-09T15:01:30.278 INFO:tasks.workunit.client.0.vm05.stdout:5/433: unlink d1/d4/d34/d35/f8b 0
2026-03-09T15:01:30.281 INFO:tasks.workunit.client.0.vm05.stdout:3/399: mknod d3/df/d1e/d2f/d52/c89 0
2026-03-09T15:01:30.281 INFO:tasks.workunit.client.0.vm05.stdout:3/400: truncate d3/df/d59/f75 4919854 0
2026-03-09T15:01:30.282 INFO:tasks.workunit.client.0.vm05.stdout:5/434: dwrite d1/d4/d34/d35/d4e/f8d [0,4194304] 0
2026-03-09T15:01:30.285 INFO:tasks.workunit.client.0.vm05.stdout:1/357: creat d9/d2a/d59/d49/f82 x:0 0 0
2026-03-09T15:01:30.296 INFO:tasks.workunit.client.0.vm05.stdout:9/390: sync
2026-03-09T15:01:30.296 INFO:tasks.workunit.client.0.vm05.stdout:8/406: sync
2026-03-09T15:01:30.296 INFO:tasks.workunit.client.0.vm05.stdout:9/391: chown d2/d10/d22/d47 2 1
2026-03-09T15:01:30.297 INFO:tasks.workunit.client.0.vm05.stdout:9/392: dread - d2/d4e/f6a zero size
2026-03-09T15:01:30.298 INFO:tasks.workunit.client.0.vm05.stdout:9/393: read d2/d10/f82 [839611,10253] 0
2026-03-09T15:01:30.299 INFO:tasks.workunit.client.0.vm05.stdout:9/394: readlink d2/d10/d22/d52/d59/l25 0
2026-03-09T15:01:30.301 INFO:tasks.workunit.client.0.vm05.stdout:8/407: creat d0/d24/f8a x:0 0 0
2026-03-09T15:01:30.303 INFO:tasks.workunit.client.0.vm05.stdout:5/435: mkdir d1/d5d/d7f/d91 0
2026-03-09T15:01:30.303 INFO:tasks.workunit.client.0.vm05.stdout:3/401: link d3/df/d59/l7d d3/df/d1e/d2f/d52/l8a 0
2026-03-09T15:01:30.304 INFO:tasks.workunit.client.0.vm05.stdout:1/358: dread f7 [0,4194304] 0
2026-03-09T15:01:30.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:29 vm05.local ceph-mon[50611]: pgmap v171: 65 pgs: 65 active+clean; 2.0 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail; 8.9 MiB/s rd, 80 MiB/s wr, 187 op/s
2026-03-09T15:01:30.305 INFO:tasks.workunit.client.0.vm05.stdout:8/408: mkdir d0/d1/d12/d3c/d8b 0
2026-03-09T15:01:30.306 INFO:tasks.workunit.client.0.vm05.stdout:9/395: mkdir d2/d4e/d56/d84 0
2026-03-09T15:01:30.309 INFO:tasks.workunit.client.0.vm05.stdout:5/436: rename d1/d4/d19/c3c to d1/d4/d34/d35/d3d/d38/d63/c92 0
2026-03-09T15:01:30.311 INFO:tasks.workunit.client.0.vm05.stdout:9/396: chown d2/d10/d22/d52/l6d 14 1
2026-03-09T15:01:30.315 INFO:tasks.workunit.client.0.vm05.stdout:8/409: read d0/d2a/f2e [781713,36376] 0
2026-03-09T15:01:30.315 INFO:tasks.workunit.client.0.vm05.stdout:9/397: creat d2/d10/f85 x:0 0 0
2026-03-09T15:01:30.315 INFO:tasks.workunit.client.0.vm05.stdout:8/410: getdents d0/d2a/d2d/d4b 0
2026-03-09T15:01:30.316 INFO:tasks.workunit.client.0.vm05.stdout:9/398: chown d2/d10/d22/d52/l6d 2 1
2026-03-09T15:01:30.318 INFO:tasks.workunit.client.0.vm05.stdout:9/399: creat d2/d10/d22/d2c/d69/f86 x:0 0 0
2026-03-09T15:01:30.320 INFO:tasks.workunit.client.0.vm05.stdout:8/411: dread d0/d1/d12/d3c/f51 [0,4194304] 0
2026-03-09T15:01:30.321 INFO:tasks.workunit.client.0.vm05.stdout:6/361: write da/d17/f1d [1949734,42756] 0
2026-03-09T15:01:30.333 INFO:tasks.workunit.client.0.vm05.stdout:1/359: dread d9/d2a/d59/d49/d48/f25 [0,4194304] 0
2026-03-09T15:01:30.334 INFO:tasks.workunit.client.0.vm05.stdout:1/360: truncate d9/d2f/d55/f68 1558114 0
2026-03-09T15:01:30.336 INFO:tasks.workunit.client.0.vm05.stdout:6/362: write da/fe [3517101,75460] 0
2026-03-09T15:01:30.340 INFO:tasks.workunit.client.0.vm05.stdout:9/400: rename d2/d10/d22/d52/d59/l21 to d2/d4e/d56/l87 0
2026-03-09T15:01:30.340 INFO:tasks.workunit.client.0.vm05.stdout:9/401: fsync d2/d10/d22/d47/f62 0
2026-03-09T15:01:30.341 INFO:tasks.workunit.client.0.vm05.stdout:7/381: chown d1/d9/d23/d31/d51/f3b 11314837 1
2026-03-09T15:01:30.343 INFO:tasks.workunit.client.0.vm05.stdout:9/402: dread d2/f1f [4194304,4194304] 0
2026-03-09T15:01:30.344 INFO:tasks.workunit.client.0.vm05.stdout:9/403: chown d2/d10/f82 34 1
2026-03-09T15:01:30.344 INFO:tasks.workunit.client.0.vm05.stdout:9/404: dread - d2/d10/d22/d47/f62 zero size
2026-03-09T15:01:30.347 INFO:tasks.workunit.client.0.vm05.stdout:6/363: creat da/d43/f68 x:0 0 0
2026-03-09T15:01:30.352 INFO:tasks.workunit.client.0.vm05.stdout:1/361: getdents d9/d2a/d59/d49/d4b 0
2026-03-09T15:01:30.353 INFO:tasks.workunit.client.0.vm05.stdout:7/382: creat d1/f76 x:0 0 0
2026-03-09T15:01:30.353 INFO:tasks.workunit.client.0.vm05.stdout:9/405: link d2/d10/d22/c32 d2/d10/d22/c88 0
2026-03-09T15:01:30.354 INFO:tasks.workunit.client.0.vm05.stdout:1/362: fsync d9/d2f/f4f 0
2026-03-09T15:01:30.356 INFO:tasks.workunit.client.0.vm05.stdout:7/383: readlink d1/d9/d23/d31/d51/l57 0
2026-03-09T15:01:30.360 INFO:tasks.workunit.client.0.vm05.stdout:7/384: dwrite d1/d9/d23/d31/d32/f58 [0,4194304] 0
2026-03-09T15:01:30.366 INFO:tasks.workunit.client.0.vm05.stdout:2/360: write da/dd/f5d [280767,120724] 0
2026-03-09T15:01:30.376 INFO:tasks.workunit.client.0.vm05.stdout:1/363: chown d9/d2a/d59/d33/l4a 57 1
2026-03-09T15:01:30.383 INFO:tasks.workunit.client.0.vm05.stdout:7/385: mkdir d1/d49/d4a/d77 0
2026-03-09T15:01:30.383 INFO:tasks.workunit.client.0.vm05.stdout:7/386: write d1/f6e [115824,37997] 0
2026-03-09T15:01:30.394 INFO:tasks.workunit.client.0.vm05.stdout:9/406: link d2/d10/d22/d2c/d69/d5a/f80 d2/d10/d22/d2c/d69/d5a/f89 0
2026-03-09T15:01:30.401 INFO:tasks.workunit.client.0.vm05.stdout:7/387: dread d1/d12/f56 [0,4194304] 0
2026-03-09T15:01:30.403 INFO:tasks.workunit.client.0.vm05.stdout:4/380: write d2/f33 [2929385,109444] 0
2026-03-09T15:01:30.404 INFO:tasks.workunit.client.0.vm05.stdout:4/381: write d2/f67 [422968,68998] 0
2026-03-09T15:01:30.405 INFO:tasks.workunit.client.0.vm05.stdout:4/382: write d2/d1d/f36 [1429654,32280] 0
2026-03-09T15:01:30.420 INFO:tasks.workunit.client.0.vm05.stdout:0/304: truncate d9/de/d12/f23 131569 0
2026-03-09T15:01:30.425 INFO:tasks.workunit.client.0.vm05.stdout:5/437: dwrite d1/d4/d34/d35/d4e/f8d [4194304,4194304] 0
2026-03-09T15:01:30.436 INFO:tasks.workunit.client.0.vm05.stdout:7/388: mkdir d1/d9/d23/d31/d32/d78 0
2026-03-09T15:01:30.442 INFO:tasks.workunit.client.0.vm05.stdout:3/402: dwrite d3/df/d10/d34/f48 [0,4194304] 0
2026-03-09T15:01:30.454 INFO:tasks.workunit.client.0.vm05.stdout:1/364: rename d9/d2a/d59/d33 to d9/d2f/d83 0
2026-03-09T15:01:30.458 INFO:tasks.workunit.client.0.vm05.stdout:1/365: dwrite d9/d17/f79 [0,4194304] 0
2026-03-09T15:01:30.465 INFO:tasks.workunit.client.0.vm05.stdout:5/438: mkdir d1/d4/d19/d93 0
2026-03-09T15:01:30.468 INFO:tasks.workunit.client.0.vm05.stdout:8/412: dwrite d0/f4 [0,4194304] 0
2026-03-09T15:01:30.479 INFO:tasks.workunit.client.0.vm05.stdout:9/407: mknod d2/d4e/d56/d84/c8a 0
2026-03-09T15:01:30.485 INFO:tasks.workunit.client.0.vm05.stdout:0/305: rename d9/de/d25/f56 to d9/de/d12/d15/d2e/d32/d53/f5f 0
2026-03-09T15:01:30.490 INFO:tasks.workunit.client.0.vm05.stdout:6/364: write da/d43/f56 [1756549,8609] 0
2026-03-09T15:01:30.490 INFO:tasks.workunit.client.0.vm05.stdout:1/366: creat d9/d2a/d59/d49/d78/f84 x:0 0 0
2026-03-09T15:01:30.490 INFO:tasks.workunit.client.0.vm05.stdout:7/389: sync
2026-03-09T15:01:30.494 INFO:tasks.workunit.client.0.vm05.stdout:8/413: rmdir d0/d1 39
2026-03-09T15:01:30.495 INFO:tasks.workunit.client.0.vm05.stdout:9/408: mkdir d2/d8b 0
2026-03-09T15:01:30.496 INFO:tasks.workunit.client.0.vm05.stdout:3/403: mkdir d3/df/d10/d7c/d8b 0
2026-03-09T15:01:30.498 INFO:tasks.workunit.client.0.vm05.stdout:2/361: getdents da/d29/d3f 0
2026-03-09T15:01:30.501 INFO:tasks.workunit.client.0.vm05.stdout:2/362: write da/d13/d2f/d35/f57 [496268,5887] 0
2026-03-09T15:01:30.501 INFO:tasks.workunit.client.0.vm05.stdout:2/363: read - da/d29/d3f/f5b zero size
2026-03-09T15:01:30.501 INFO:tasks.workunit.client.0.vm05.stdout:2/364: fdatasync da/d13/d30/f5a 0
2026-03-09T15:01:30.505 INFO:tasks.workunit.client.0.vm05.stdout:5/439: rename d1/da/l22 to d1/d4/d34/d35/d4e/d6f/l94 0
2026-03-09T15:01:30.506 INFO:tasks.workunit.client.0.vm05.stdout:5/440: write d1/d4/d34/d35/f44 [782530,33893] 0
2026-03-09T15:01:30.510 INFO:tasks.workunit.client.0.vm05.stdout:5/441: dwrite d1/d5d/f81 [0,4194304] 0
2026-03-09T15:01:30.525 INFO:tasks.workunit.client.0.vm05.stdout:6/365: dwrite da/f10 [0,4194304] 0
2026-03-09T15:01:30.527 INFO:tasks.workunit.client.0.vm05.stdout:1/367: mknod d9/d17/c85 0
2026-03-09T15:01:30.528 INFO:tasks.workunit.client.0.vm05.stdout:1/368: chown d9/d2a/f44 37 1
2026-03-09T15:01:30.535 INFO:tasks.workunit.client.0.vm05.stdout:7/390: chown d1/d9/fd 53 1
2026-03-09T15:01:30.545 INFO:tasks.workunit.client.0.vm05.stdout:2/365: dwrite da/f10 [0,4194304] 0
2026-03-09T15:01:30.548 INFO:tasks.workunit.client.0.vm05.stdout:4/383: dwrite d2/d4/d7/dc/f64 [0,4194304] 0
2026-03-09T15:01:30.559 INFO:tasks.workunit.client.0.vm05.stdout:2/366: dwrite da/dd/f25 [4194304,4194304] 0
2026-03-09T15:01:30.561 INFO:tasks.workunit.client.0.vm05.stdout:1/369: mknod d9/d2a/d59/d49/d4b/c86 0
2026-03-09T15:01:30.569 INFO:tasks.workunit.client.0.vm05.stdout:9/409: truncate d2/f6 5346371 0
2026-03-09T15:01:30.583 INFO:tasks.workunit.client.0.vm05.stdout:9/410: dread - d2/d10/d22/d2c/d69/f67 zero size
2026-03-09T15:01:30.583 INFO:tasks.workunit.client.0.vm05.stdout:1/370: dwrite d9/d2f/f3a [0,4194304] 0
2026-03-09T15:01:30.583 INFO:tasks.workunit.client.0.vm05.stdout:4/384: fdatasync d2/f1b 0
2026-03-09T15:01:30.586 INFO:tasks.workunit.client.0.vm05.stdout:0/306: rename d9/de/d12/d15/d2e/c4f to d9/de/d12/d15/d2e/c60 0
2026-03-09T15:01:30.586 INFO:tasks.workunit.client.0.vm05.stdout:0/307: write d9/de/d25/f52 [4206578,96713] 0
2026-03-09T15:01:30.595 INFO:tasks.workunit.client.0.vm05.stdout:7/391: mkdir d1/d49/d4a/d77/d79 0
2026-03-09T15:01:30.595 INFO:tasks.workunit.client.0.vm05.stdout:2/367: chown da/d13/d30/f41 147689864 1
2026-03-09T15:01:30.611 INFO:tasks.workunit.client.0.vm05.stdout:8/414: write d0/d1/d12/d1b/f89 [813246,38643] 0
2026-03-09T15:01:30.616 INFO:tasks.workunit.client.0.vm05.stdout:3/404: dwrite d3/df/d1e/d2c/f6a [0,4194304] 0
2026-03-09T15:01:30.621 INFO:tasks.workunit.client.0.vm05.stdout:5/442: write d1/d4/f20 [190489,11995] 0
2026-03-09T15:01:30.621 INFO:tasks.workunit.client.0.vm05.stdout:7/392: sync
2026-03-09T15:01:30.625 INFO:tasks.workunit.client.0.vm05.stdout:0/308: mkdir d9/de/d12/d15/d2e/d32/d53/d61 0
2026-03-09T15:01:30.626 INFO:tasks.workunit.client.0.vm05.stdout:0/309: chown d9/de/d12/d15/d2e/d32/l39 71597981 1
2026-03-09T15:01:30.628 INFO:tasks.workunit.client.0.vm05.stdout:6/366: link da/d17/f2d da/d17/f69 0
2026-03-09T15:01:30.630 INFO:tasks.workunit.client.0.vm05.stdout:9/411: mkdir d2/d10/d8c 0
2026-03-09T15:01:30.636 INFO:tasks.workunit.client.0.vm05.stdout:8/415: rmdir d0/d2a 39
2026-03-09T15:01:30.644 INFO:tasks.workunit.client.0.vm05.stdout:3/405: rmdir d3/df/d10/d19 39
2026-03-09T15:01:30.645 INFO:tasks.workunit.client.0.vm05.stdout:4/385: chown d2/d4/d7/d21/f68 12820 1
2026-03-09T15:01:30.651 INFO:tasks.workunit.client.0.vm05.stdout:4/386: dread d2/d4/d7/dc/f27 [4194304,4194304] 0
2026-03-09T15:01:30.657 INFO:tasks.workunit.client.0.vm05.stdout:0/310: rename d9/de/f19 to d9/de/d12/d15/d2e/d32/d53/d61/f62 0
2026-03-09T15:01:30.658 INFO:tasks.workunit.client.0.vm05.stdout:0/311: chown d9/de/d12/d15/d2e/f40 82 1
2026-03-09T15:01:30.660 INFO:tasks.workunit.client.0.vm05.stdout:6/367: chown da/f14 174792378 1
2026-03-09T15:01:30.660 INFO:tasks.workunit.client.0.vm05.stdout:6/368: chown da/f12 149081 1
2026-03-09T15:01:30.660 INFO:tasks.workunit.client.0.vm05.stdout:6/369: write da/f10 [553670,122969] 0
2026-03-09T15:01:30.664 INFO:tasks.workunit.client.0.vm05.stdout:6/370: dwrite da/d17/d3b/f5f [0,4194304] 0
2026-03-09T15:01:30.664 INFO:tasks.workunit.client.0.vm05.stdout:6/371: readlink da/d17/l21 0
2026-03-09T15:01:30.665 INFO:tasks.workunit.client.0.vm05.stdout:6/372: readlink l2 0
2026-03-09T15:01:30.665 INFO:tasks.workunit.client.0.vm05.stdout:6/373: readlink l3 0
2026-03-09T15:01:30.674 INFO:tasks.workunit.client.0.vm05.stdout:8/416: chown d0/d2a/d2d/l58 13 1
2026-03-09T15:01:30.677 INFO:tasks.workunit.client.0.vm05.stdout:1/371: write d9/d2f/f43 [1026910,535] 0
2026-03-09T15:01:30.679 INFO:tasks.workunit.client.0.vm05.stdout:7/393: write d1/d12/f56 [179103,32715] 0
2026-03-09T15:01:30.681 INFO:tasks.workunit.client.0.vm05.stdout:5/443: truncate d1/d4/d34/d35/d3d/d38/f8a 2680367 0
2026-03-09T15:01:30.682 INFO:tasks.workunit.client.0.vm05.stdout:0/312: rename l4 to d9/de/d12/d15/d2e/d32/d53/d61/l63 0
2026-03-09T15:01:30.685 INFO:tasks.workunit.client.0.vm05.stdout:9/412: fsync d2/f5 0
2026-03-09T15:01:30.687 INFO:tasks.workunit.client.0.vm05.stdout:9/413: truncate d2/d10/d22/d52/d59/f79 261133 0
2026-03-09T15:01:30.687 INFO:tasks.workunit.client.0.vm05.stdout:9/414: stat d2/l34 0
2026-03-09T15:01:30.688 INFO:tasks.workunit.client.0.vm05.stdout:9/415: dread d2/d10/d22/d52/d59/f79 [0,4194304] 0
2026-03-09T15:01:30.694 INFO:tasks.workunit.client.0.vm05.stdout:3/406: dwrite d3/df/d10/d19/d44/d50/f65 [0,4194304] 0
2026-03-09T15:01:30.702 INFO:tasks.workunit.client.0.vm05.stdout:7/394: rmdir d1 39
2026-03-09T15:01:30.704 INFO:tasks.workunit.client.0.vm05.stdout:5/444: mknod d1/d4/d34/d35/d4e/c95 0
2026-03-09T15:01:30.709 INFO:tasks.workunit.client.0.vm05.stdout:2/368: getdents da/d29/d45 0
2026-03-09T15:01:30.711 INFO:tasks.workunit.client.0.vm05.stdout:6/374: link da/d17/d3b/f4f da/d19/f6a 0
2026-03-09T15:01:30.712 INFO:tasks.workunit.client.0.vm05.stdout:6/375: readlink da/d17/l21 0
2026-03-09T15:01:30.716 INFO:tasks.workunit.client.0.vm05.stdout:8/417: getdents d0/d1/d12/d3c/d8b 0
2026-03-09T15:01:30.718 INFO:tasks.workunit.client.0.vm05.stdout:9/416: rename d2/d10/f6b to d2/d10/d22/d2c/f8d 0
2026-03-09T15:01:30.721 INFO:tasks.workunit.client.0.vm05.stdout:3/407: unlink d3/df/d1e/d2f/d52/f5b 0
2026-03-09T15:01:30.722 INFO:tasks.workunit.client.0.vm05.stdout:1/372: write d9/d2a/d59/d49/d48/f6f [80513,14835] 0
2026-03-09T15:01:30.727 INFO:tasks.workunit.client.0.vm05.stdout:1/373: dwrite d9/d2f/d55/f64 [0,4194304] 0
2026-03-09T15:01:30.729 INFO:tasks.workunit.client.0.vm05.stdout:4/387: creat d2/f7e x:0 0 0
2026-03-09T15:01:30.732 INFO:tasks.workunit.client.0.vm05.stdout:5/445: mkdir d1/d4/d34/d35/d3d/d96 0
2026-03-09T15:01:30.733 INFO:tasks.workunit.client.0.vm05.stdout:0/313: mkdir d9/d64 0
2026-03-09T15:01:30.734 INFO:tasks.workunit.client.0.vm05.stdout:6/376: rename da/d17/d3b/f47 to da/d17/d3b/f6b 0
2026-03-09T15:01:30.735 INFO:tasks.workunit.client.0.vm05.stdout:9/417: unlink d2/d10/d22/d2c/d69/f2a 0
2026-03-09T15:01:30.739 INFO:tasks.workunit.client.0.vm05.stdout:3/408: rename d3/d66 to d3/df/d10/d34/d8c 0
2026-03-09T15:01:30.742 INFO:tasks.workunit.client.0.vm05.stdout:1/374: mkdir d9/d2a/d87 0
2026-03-09T15:01:30.749 INFO:tasks.workunit.client.0.vm05.stdout:4/388: creat d2/d4/d7/d21/d3d/f7f x:0 0 0
2026-03-09T15:01:30.749 INFO:tasks.workunit.client.0.vm05.stdout:4/389: stat d2/l2a 0
2026-03-09T15:01:30.749 INFO:tasks.workunit.client.0.vm05.stdout:5/446: symlink d1/d4/d34/d35/d3d/l97 0
2026-03-09T15:01:30.752 INFO:tasks.workunit.client.0.vm05.stdout:9/418: rename d2/d10/d22/d47/d5b to d2/d10/d22/d52/d59/d8e 0
2026-03-09T15:01:30.754 INFO:tasks.workunit.client.0.vm05.stdout:1/375: symlink d9/d2a/d6a/d75/l88 0
2026-03-09T15:01:30.754 INFO:tasks.workunit.client.0.vm05.stdout:0/314: sync
2026-03-09T15:01:30.758 INFO:tasks.workunit.client.0.vm05.stdout:4/390: dwrite d2/d4/d7/f7b [0,4194304] 0
2026-03-09T15:01:30.761 INFO:tasks.workunit.client.0.vm05.stdout:5/447: chown d1/l78 10887 1
2026-03-09T15:01:30.762 INFO:tasks.workunit.client.0.vm05.stdout:6/377: mknod da/c6c 0
2026-03-09T15:01:30.766 INFO:tasks.workunit.client.0.vm05.stdout:6/378: dread da/fe [0,4194304] 0
2026-03-09T15:01:30.766 INFO:tasks.workunit.client.0.vm05.stdout:6/379: read da/d17/f44 [241643,65198] 0
2026-03-09T15:01:30.769 INFO:tasks.workunit.client.0.vm05.stdout:6/380: truncate da/d19/f52 1025696 0
2026-03-09T15:01:30.769 INFO:tasks.workunit.client.0.vm05.stdout:3/409: mknod d3/df/d10/d19/c8d 0
2026-03-09T15:01:30.769 INFO:tasks.workunit.client.0.vm05.stdout:0/315: rmdir d9/de/d12/d15/d2e 39
2026-03-09T15:01:30.770 INFO:tasks.workunit.client.0.vm05.stdout:3/410: fdatasync d3/df/d10/d19/d44/f60 0
2026-03-09T15:01:30.771 INFO:tasks.workunit.client.0.vm05.stdout:3/411: write d3/df/f4a [4479668,118468] 0
2026-03-09T15:01:30.772 INFO:tasks.workunit.client.0.vm05.stdout:3/412: chown d3/df/d10/d19/f25 26571944 1
2026-03-09T15:01:30.776 INFO:tasks.workunit.client.0.vm05.stdout:4/391: symlink d2/d4/d7/d21/d3d/l80 0
2026-03-09T15:01:30.783 INFO:tasks.workunit.client.0.vm05.stdout:2/369: getdents da 0
2026-03-09T15:01:30.783 INFO:tasks.workunit.client.0.vm05.stdout:9/419: write d2/d4e/d56/d53/f60 [904418,119715] 0
2026-03-09T15:01:30.783 INFO:tasks.workunit.client.0.vm05.stdout:1/376: mknod d9/d2a/d59/d49/d77/c89 0
2026-03-09T15:01:30.783 INFO:tasks.workunit.client.0.vm05.stdout:9/420: readlink d2/d10/l35 0
2026-03-09T15:01:30.785 INFO:tasks.workunit.client.0.vm05.stdout:6/381: creat da/d43/d66/f6d x:0 0 0
2026-03-09T15:01:30.785 INFO:tasks.workunit.client.0.vm05.stdout:1/377: dread d9/d17/f26 [0,4194304] 0
2026-03-09T15:01:30.788 INFO:tasks.workunit.client.0.vm05.stdout:2/370: sync
2026-03-09T15:01:30.789 INFO:tasks.workunit.client.0.vm05.stdout:3/413: truncate d3/df/d10/f28 2182742 0
2026-03-09T15:01:30.789 INFO:tasks.workunit.client.0.vm05.stdout:4/392: mknod d2/d4/d7/d21/d3d/c81 0
2026-03-09T15:01:30.791 INFO:tasks.workunit.client.0.vm05.stdout:3/414: write d3/df/d10/d34/d8c/f85 [221026,114499] 0
2026-03-09T15:01:30.795 INFO:tasks.workunit.client.0.vm05.stdout:5/448: mknod d1/d5d/d7f/d91/c98 0
2026-03-09T15:01:30.795 INFO:tasks.workunit.client.0.vm05.stdout:5/449: chown d1/d4/f20 28766451 1
2026-03-09T15:01:30.796 INFO:tasks.workunit.client.0.vm05.stdout:5/450: chown d1/f66 6040 1
2026-03-09T15:01:30.798 INFO:tasks.workunit.client.0.vm05.stdout:1/378: symlink d9/d2a/d59/d49/d78/l8a 0
2026-03-09T15:01:30.802 INFO:tasks.workunit.client.0.vm05.stdout:0/316: mknod d9/de/d12/d15/d2e/d32/d53/d61/c65 0
2026-03-09T15:01:30.803 INFO:tasks.workunit.client.0.vm05.stdout:0/317: fsync d9/de/d25/f47 0
2026-03-09T15:01:30.803 INFO:tasks.workunit.client.0.vm05.stdout:8/418: dwrite d0/f47 [0,4194304] 0
2026-03-09T15:01:30.806 INFO:tasks.workunit.client.0.vm05.stdout:0/318: stat d9/de/d12/d15/d2e/d32/d53/d61/c65 0
2026-03-09T15:01:30.815 INFO:tasks.workunit.client.0.vm05.stdout:2/371: creat da/d16/f6e x:0 0 0
2026-03-09T15:01:30.821 INFO:tasks.workunit.client.0.vm05.stdout:3/415: rmdir d3/df/d10/d19/d44 39
2026-03-09T15:01:30.823 INFO:tasks.workunit.client.0.vm05.stdout:7/395: dwrite d1/d9/d23/d31/d51/f6a [0,4194304] 0
2026-03-09T15:01:30.826 INFO:tasks.workunit.client.0.vm05.stdout:3/416: dwrite d3/df/d1e/d2f/d52/f61 [0,4194304] 0
2026-03-09T15:01:30.830 INFO:tasks.workunit.client.0.vm05.stdout:9/421: mknod d2/d10/c8f 0
2026-03-09T15:01:30.838 INFO:tasks.workunit.client.0.vm05.stdout:9/422: dwrite d2/d10/f65 [0,4194304] 0
2026-03-09T15:01:30.858 INFO:tasks.workunit.client.0.vm05.stdout:4/393: write d2/d43/f4f [1101371,35022] 0
2026-03-09T15:01:30.861 INFO:tasks.workunit.client.0.vm05.stdout:4/394: dwrite d2/f33 [0,4194304] 0
2026-03-09T15:01:30.865 INFO:tasks.workunit.client.0.vm05.stdout:0/319: readlink d9/de/d12/d15/d2e/d32/d53/d61/l63 0
2026-03-09T15:01:30.867 INFO:tasks.workunit.client.0.vm05.stdout:0/320: write d9/de/d12/f4c [3865161,60398] 0
2026-03-09T15:01:30.877 INFO:tasks.workunit.client.0.vm05.stdout:7/396: stat d1/d9/d23/l4b 0
2026-03-09T15:01:30.879 INFO:tasks.workunit.client.0.vm05.stdout:5/451: creat d1/d4/d19/d93/f99 x:0 0 0
2026-03-09T15:01:30.880 INFO:tasks.workunit.client.0.vm05.stdout:5/452: fsync d1/d4/d34/d56/d68/f74 0
2026-03-09T15:01:30.882 INFO:tasks.workunit.client.0.vm05.stdout:3/417: creat d3/df/d10/d34/f8e x:0 0 0
2026-03-09T15:01:30.883 INFO:tasks.workunit.client.0.vm05.stdout:3/418: stat d3/d29/d2d/d77/d4d 0
2026-03-09T15:01:30.885 INFO:tasks.workunit.client.0.vm05.stdout:8/419: dread d0/d7/f8 [0,4194304] 0
2026-03-09T15:01:30.889 INFO:tasks.workunit.client.0.vm05.stdout:9/423: rename d2/d10/f85 to d2/d10/d22/d47/d73/f90 0
2026-03-09T15:01:30.892 INFO:tasks.workunit.client.0.vm05.stdout:1/379: symlink d9/d2a/d59/d49/d78/d7e/l8b 0
2026-03-09T15:01:30.892 INFO:tasks.workunit.client.0.vm05.stdout:1/380: fdatasync d9/f7f 0
2026-03-09T15:01:30.893 INFO:tasks.workunit.client.0.vm05.stdout:1/381: chown d9/d2a/d59/d49/d78/d7e 22919 1
2026-03-09T15:01:30.896 INFO:tasks.workunit.client.0.vm05.stdout:4/395: unlink d2/d43/l46 0
2026-03-09T15:01:30.901 INFO:tasks.workunit.client.0.vm05.stdout:4/396: dread d2/f33 [0,4194304] 0
2026-03-09T15:01:30.901 INFO:tasks.workunit.client.0.vm05.stdout:9/424: sync
2026-03-09T15:01:30.904 INFO:tasks.workunit.client.0.vm05.stdout:4/397: sync
2026-03-09T15:01:30.912 INFO:tasks.workunit.client.0.vm05.stdout:6/382: truncate da/d17/d3b/f5f 3366568 0
2026-03-09T15:01:30.913 INFO:tasks.workunit.client.0.vm05.stdout:0/321: write d9/de/d12/d15/d2e/d32/d53/d61/f62 [2783825,29641] 0
2026-03-09T15:01:30.915 INFO:tasks.workunit.client.0.vm05.stdout:2/372: link da/f10 da/dd/f6f 0
2026-03-09T15:01:30.917 INFO:tasks.workunit.client.0.vm05.stdout:5/453: mknod d1/d5d/d7f/c9a 0
2026-03-09T15:01:30.917 INFO:tasks.workunit.client.0.vm05.stdout:3/419: rename d3/df/d1e/f5c to d3/df/d1e/f8f 0
2026-03-09T15:01:30.917 INFO:tasks.workunit.client.0.vm05.stdout:5/454: fdatasync d1/d4/f55 0
2026-03-09T15:01:30.918 INFO:tasks.workunit.client.0.vm05.stdout:8/420: rename d0/d1/d12/f57 to d0/d1/d12/d3c/f8c 0
2026-03-09T15:01:30.919 INFO:tasks.workunit.client.0.vm05.stdout:8/421: write d0/d1/d12/d1b/f67 [1047906,20572] 0
2026-03-09T15:01:30.919 INFO:tasks.workunit.client.0.vm05.stdout:5/455: sync
2026-03-09T15:01:30.920 INFO:tasks.workunit.client.0.vm05.stdout:8/422: fsync d0/d1/d12/d3c/f84 0
2026-03-09T15:01:30.921 INFO:tasks.workunit.client.0.vm05.stdout:8/423: write d0/dc/f7a [528109,28191] 0
2026-03-09T15:01:30.922 INFO:tasks.workunit.client.0.vm05.stdout:1/382: rmdir d9/d2a/d59/d49/d78/d7e 39
2026-03-09T15:01:30.924 INFO:tasks.workunit.client.0.vm05.stdout:5/456: dwrite d1/d4/d34/d35/d3d/d38/f4b [0,4194304] 0
2026-03-09T15:01:30.933 INFO:tasks.workunit.client.0.vm05.stdout:9/425: creat d2/d10/d22/d2c/f91 x:0 0 0
2026-03-09T15:01:30.936 INFO:tasks.workunit.client.0.vm05.stdout:5/457: dwrite d1/d4/d34/f6a [0,4194304] 0
2026-03-09T15:01:30.940 INFO:tasks.workunit.client.0.vm05.stdout:6/383: unlink da/f12 0
2026-03-09T15:01:30.955 INFO:tasks.workunit.client.0.vm05.stdout:3/420: mkdir d3/df/d10/d34/d8c/d90 0
2026-03-09T15:01:30.955 INFO:tasks.workunit.client.0.vm05.stdout:3/421: write d3/d29/f70 [3562988,21108] 0
2026-03-09T15:01:30.962 INFO:tasks.workunit.client.0.vm05.stdout:2/373: rename da/d13/d2f/l3b to da/d29/d6a/l70 0
2026-03-09T15:01:30.965 INFO:tasks.workunit.client.0.vm05.stdout:1/383: creat d9/d2f/d37/d5a/f8c x:0 0 0
2026-03-09T15:01:30.965 INFO:tasks.workunit.client.0.vm05.stdout:1/384: write d9/d2f/d37/d5f/f73 [713852,5909] 0
2026-03-09T15:01:30.968 INFO:tasks.workunit.client.0.vm05.stdout:1/385: dread d9/d2a/d59/f42 [0,4194304] 0
2026-03-09T15:01:30.975 INFO:tasks.workunit.client.0.vm05.stdout:4/398: symlink d2/d4/l82 0
2026-03-09T15:01:30.975 INFO:tasks.workunit.client.0.vm05.stdout:9/426: unlink d2/d10/f82 0
2026-03-09T15:01:30.975 INFO:tasks.workunit.client.0.vm05.stdout:9/427: fsync d2/d10/d22/d52/d59/f6f 0
2026-03-09T15:01:30.975 INFO:tasks.workunit.client.0.vm05.stdout:9/428: dwrite d2/f12 [0,4194304] 0
2026-03-09T15:01:30.976 INFO:tasks.workunit.client.0.vm05.stdout:2/374: sync
2026-03-09T15:01:30.980 INFO:tasks.workunit.client.0.vm05.stdout:9/429: dwrite d2/d10/f48 [0,4194304] 0
2026-03-09T15:01:30.987 INFO:tasks.workunit.client.0.vm05.stdout:8/424: dread d0/d2a/d2d/f3e [0,4194304] 0
2026-03-09T15:01:30.995 INFO:tasks.workunit.client.0.vm05.stdout:9/430: dwrite d2/d10/d22/d2c/d69/f67 [0,4194304] 0
2026-03-09T15:01:31.010 INFO:tasks.workunit.client.0.vm05.stdout:7/397: truncate d1/f45 2077030 0
2026-03-09T15:01:31.010 INFO:tasks.workunit.client.0.vm05.stdout:7/398: chown d1/d9/d23/d31/d51/f3b 4177841 1
2026-03-09T15:01:31.011 INFO:tasks.workunit.client.0.vm05.stdout:7/399: chown d1/f15 1 1
2026-03-09T15:01:31.012 INFO:tasks.workunit.client.0.vm05.stdout:6/384: creat da/d43/d66/f6e x:0 0 0
2026-03-09T15:01:31.012 INFO:tasks.workunit.client.0.vm05.stdout:6/385: read - da/d17/f64 zero size
2026-03-09T15:01:31.013 INFO:tasks.workunit.client.0.vm05.stdout:3/422: mknod d3/d29/d7f/c91 0
2026-03-09T15:01:31.021 INFO:tasks.workunit.client.0.vm05.stdout:8/425: dread d0/d1/d12/d3c/f8c [0,4194304] 0
2026-03-09T15:01:31.022 INFO:tasks.workunit.client.0.vm05.stdout:1/386: creat d9/d2a/f8d x:0 0 0
2026-03-09T15:01:31.023 INFO:tasks.workunit.client.0.vm05.stdout:1/387: write d9/d2a/f4e [1584012,39800] 0
2026-03-09T15:01:31.024 INFO:tasks.workunit.client.0.vm05.stdout:1/388: truncate d9/d2a/d59/d49/f69 232545 0
2026-03-09T15:01:31.025 INFO:tasks.workunit.client.0.vm05.stdout:8/426: dread d0/dc/f7a [0,4194304] 0
2026-03-09T15:01:31.027 INFO:tasks.workunit.client.0.vm05.stdout:2/375: creat da/d29/d6a/f71 x:0 0 0
2026-03-09T15:01:31.030 INFO:tasks.workunit.client.0.vm05.stdout:9/431: mkdir d2/d92 0
2026-03-09T15:01:31.032 INFO:tasks.workunit.client.0.vm05.stdout:6/386: symlink da/d43/d66/l6f 0
2026-03-09T15:01:31.032 INFO:tasks.workunit.client.0.vm05.stdout:3/423: stat d3/df/c1d 0
2026-03-09T15:01:31.035 INFO:tasks.workunit.client.0.vm05.stdout:2/376: rmdir da/d29/d3f 39
2026-03-09T15:01:31.037 INFO:tasks.workunit.client.0.vm05.stdout:2/377: write da/d29/d6a/f71 [391821,128331] 0
2026-03-09T15:01:31.038 INFO:tasks.workunit.client.0.vm05.stdout:2/378: chown da/d13/f59 437699195 1
2026-03-09T15:01:31.039 INFO:tasks.workunit.client.0.vm05.stdout:7/400: mknod d1/d9/d23/d54/c7a 0
2026-03-09T15:01:31.047 INFO:tasks.workunit.client.0.vm05.stdout:2/379: creat da/d16/f72 x:0 0 0
2026-03-09T15:01:31.048 INFO:tasks.workunit.client.0.vm05.stdout:7/401: sync
2026-03-09T15:01:31.057 INFO:tasks.workunit.client.0.vm05.stdout:5/458: dwrite d1/d4/d27/f3a [0,4194304] 0
2026-03-09T15:01:31.062 INFO:tasks.workunit.client.0.vm05.stdout:9/432: dread d2/d10/d22/d2c/d69/d5a/f80 [0,4194304] 0
2026-03-09T15:01:31.068 INFO:tasks.workunit.client.0.vm05.stdout:0/322: truncate d9/de/d12/d15/d2e/d32/d53/d61/f62 2714986 0
2026-03-09T15:01:31.069 INFO:tasks.workunit.client.0.vm05.stdout:0/323: write d9/de/d12/d15/f5e [397090,34409] 0
2026-03-09T15:01:31.070 INFO:tasks.workunit.client.0.vm05.stdout:0/324: read d9/de/d25/f2d [1800041,106788] 0
2026-03-09T15:01:31.075 INFO:tasks.workunit.client.0.vm05.stdout:2/380: creat da/d16/d46/f73 x:0 0 0
2026-03-09T15:01:31.089 INFO:tasks.workunit.client.0.vm05.stdout:5/459: creat d1/d5d/f9b x:0 0 0
2026-03-09T15:01:31.089 INFO:tasks.workunit.client.0.vm05.stdout:9/433: creat d2/d10/d22/d2c/f93 x:0 0 0
2026-03-09T15:01:31.090 INFO:tasks.workunit.client.0.vm05.stdout:9/434: write d2/d10/f71 [947565,130047] 0
2026-03-09T15:01:31.094 INFO:tasks.workunit.client.0.vm05.stdout:9/435: dwrite d2/d4e/f72 [0,4194304] 0
2026-03-09T15:01:31.100 INFO:tasks.workunit.client.0.vm05.stdout:4/399: truncate d2/d43/f4f 235016 0
2026-03-09T15:01:31.105 INFO:tasks.workunit.client.0.vm05.stdout:1/389: dwrite d9/d2a/f6e [0,4194304] 0
2026-03-09T15:01:31.110 INFO:tasks.workunit.client.0.vm05.stdout:1/390: dread d9/d2a/d59/d49/f69 [0,4194304] 0
2026-03-09T15:01:31.113 INFO:tasks.workunit.client.0.vm05.stdout:4/400: dread d2/d4/d1e/f40 [0,4194304] 0
2026-03-09T15:01:31.116 INFO:tasks.workunit.client.0.vm05.stdout:1/391: dwrite d9/d2f/f58 [0,4194304] 0
2026-03-09T15:01:31.119 INFO:tasks.workunit.client.0.vm05.stdout:0/325: symlink d9/de/d25/l66 0
2026-03-09T15:01:31.128 INFO:tasks.workunit.client.0.vm05.stdout:6/387: dwrite da/d17/f3c [0,4194304] 0
2026-03-09T15:01:31.133 INFO:tasks.workunit.client.0.vm05.stdout:3/424: link d3/df/d10/d34/d8c/f85 d3/d29/f92 0
2026-03-09T15:01:31.139 INFO:tasks.workunit.client.0.vm05.stdout:2/381: symlink da/d13/d2f/d35/l74 0
2026-03-09T15:01:31.144 INFO:tasks.workunit.client.0.vm05.stdout:3/425: sync
2026-03-09T15:01:31.145 INFO:tasks.workunit.client.0.vm05.stdout:5/460: mkdir d1/d4/d27/d75/d9c 0
2026-03-09T15:01:31.146 INFO:tasks.workunit.client.0.vm05.stdout:9/436: mkdir d2/d10/d22/d52/d59/d94 0
2026-03-09T15:01:31.153 INFO:tasks.workunit.client.0.vm05.stdout:5/461: sync
2026-03-09T15:01:31.157 INFO:tasks.workunit.client.0.vm05.stdout:4/401: dread d2/f33 [0,4194304] 0
2026-03-09T15:01:31.158 INFO:tasks.workunit.client.0.vm05.stdout:0/326: mkdir d9/de/d67 0
2026-03-09T15:01:31.159 INFO:tasks.workunit.client.0.vm05.stdout:0/327: write d9/de/d25/f47 [291982,18904] 0
2026-03-09T15:01:31.162 INFO:tasks.workunit.client.0.vm05.stdout:4/402: dwrite d2/f14 [0,4194304] 0
2026-03-09T15:01:31.167 INFO:tasks.workunit.client.0.vm05.stdout:8/427: link d0/d1/d12/d1b/d21/c23 d0/d7/c8d 0
2026-03-09T15:01:31.175 INFO:tasks.workunit.client.0.vm05.stdout:3/426: creat d3/df/d1e/d2f/d52/f93 x:0 0 0
2026-03-09T15:01:31.179 INFO:tasks.workunit.client.0.vm05.stdout:3/427: dwrite d3/df/d10/d34/f48 [0,4194304] 0
2026-03-09T15:01:31.191 INFO:tasks.workunit.client.0.vm05.stdout:0/328: creat d9/de/d12/d15/d2e/d32/d53/f68 x:0 0 0
2026-03-09T15:01:31.191 INFO:tasks.workunit.client.0.vm05.stdout:0/329: write d9/de/d12/f4c [982958,126213] 0
2026-03-09T15:01:31.193 INFO:tasks.workunit.client.0.vm05.stdout:0/330: write d9/de/d12/d15/f50 [12689,45273] 0
2026-03-09T15:01:31.193 INFO:tasks.workunit.client.0.vm05.stdout:0/331: readlink d9/de/d12/d15/d2e/l3f 0
2026-03-09T15:01:31.197 INFO:tasks.workunit.client.0.vm05.stdout:8/428: rename d0/d1/d12/d1b/l43 to d0/d1/d12/d1b/d66/d6f/d80/l8e 0
2026-03-09T15:01:31.212 INFO:tasks.workunit.client.0.vm05.stdout:7/402: dwrite d1/d9/d23/d31/d32/f38 [0,4194304] 0
2026-03-09T15:01:31.217 INFO:tasks.workunit.client.0.vm05.stdout:4/403: dread d2/d4/d7/f9 [4194304,4194304] 0
2026-03-09T15:01:31.231 INFO:tasks.workunit.client.0.vm05.stdout:1/392: link d9/d17/f26 d9/d2a/d59/d49/d4b/f8e 0
2026-03-09T15:01:31.232 INFO:tasks.workunit.client.0.vm05.stdout:1/393: write d9/d2a/f50 [552585,37097] 0
2026-03-09T15:01:31.242 INFO:tasks.workunit.client.0.vm05.stdout:8/429: creat d0/d2a/d2d/d42/d60/f8f x:0 0 0
2026-03-09T15:01:31.245 INFO:tasks.workunit.client.0.vm05.stdout:3/428: link d3/df/f14 d3/df/d10/d7c/f94 0
2026-03-09T15:01:31.245 INFO:tasks.workunit.client.0.vm05.stdout:3/429: write d3/f42 [532645,52291] 0
2026-03-09T15:01:31.252 INFO:tasks.workunit.client.0.vm05.stdout:7/403: mkdir d1/d9/d23/d54/d7b 0
2026-03-09T15:01:31.257 INFO:tasks.workunit.client.0.vm05.stdout:4/404: fdatasync d2/f3e 0
2026-03-09T15:01:31.257 INFO:tasks.workunit.client.0.vm05.stdout:1/394: creat d9/d2a/d59/d49/d48/f8f x:0 0 0
2026-03-09T15:01:31.257 INFO:tasks.workunit.client.0.vm05.stdout:1/395: fsync d9/d2f/f43 0
2026-03-09T15:01:31.258 INFO:tasks.workunit.client.0.vm05.stdout:8/430: sync
2026-03-09T15:01:31.262 INFO:tasks.workunit.client.0.vm05.stdout:8/431: dwrite d0/f47 [4194304,4194304] 0
2026-03-09T15:01:31.264 INFO:tasks.workunit.client.0.vm05.stdout:8/432: chown d0/d1/d12/d1b/d21/l2f 227781 1
2026-03-09T15:01:31.279 INFO:tasks.workunit.client.0.vm05.stdout:3/430: write d3/df/d10/f2a [3807440,39070] 0
2026-03-09T15:01:31.280 INFO:tasks.workunit.client.0.vm05.stdout:6/388: write da/d17/f20 [797532,27983] 0
2026-03-09T15:01:31.280 INFO:tasks.workunit.client.0.vm05.stdout:6/389: truncate da/d17/f42 4229245 0
2026-03-09T15:01:31.284 INFO:tasks.workunit.client.0.vm05.stdout:2/382: dwrite da/dd/f6f [0,4194304] 0
2026-03-09T15:01:31.290 INFO:tasks.workunit.client.0.vm05.stdout:9/437: getdents d2/d10/d22/d2c/d69/d5a 0
2026-03-09T15:01:31.293 INFO:tasks.workunit.client.0.vm05.stdout:5/462: write d1/d4/d34/d56/f59 [1998663,14775] 0
2026-03-09T15:01:31.296 INFO:tasks.workunit.client.0.vm05.stdout:4/405: symlink d2/d4/d1e/l83 0
2026-03-09T15:01:31.296 INFO:tasks.workunit.client.0.vm05.stdout:4/406: chown d2/f1b 149155452 1
2026-03-09T15:01:31.297 INFO:tasks.workunit.client.0.vm05.stdout:4/407: readlink d2/d4/l6c 0
2026-03-09T15:01:31.299 INFO:tasks.workunit.client.0.vm05.stdout:1/396: mknod d9/d2a/d59/d49/d48/c90 0
2026-03-09T15:01:31.306 INFO:tasks.workunit.client.0.vm05.stdout:8/433: creat d0/d2a/d2d/d54/f90 x:0 0 0
2026-03-09T15:01:31.308 INFO:tasks.workunit.client.0.vm05.stdout:3/431: rmdir d3/df/d10/d34 39
2026-03-09T15:01:31.309 INFO:tasks.workunit.client.0.vm05.stdout:3/432: dread - d3/df/d1e/d2f/d52/f87 zero size
2026-03-09T15:01:31.311 INFO:tasks.workunit.client.0.vm05.stdout:6/390: mknod da/d43/c70 0 2026-03-09T15:01:31.311 INFO:tasks.workunit.client.0.vm05.stdout:6/391: stat da/f57 0 2026-03-09T15:01:31.314 INFO:tasks.workunit.client.0.vm05.stdout:9/438: mkdir d2/d10/d22/d47/d95 0 2026-03-09T15:01:31.319 INFO:tasks.workunit.client.0.vm05.stdout:3/433: rename d3/d29/f70 to d3/df/d1e/d2f/d52/f95 0 2026-03-09T15:01:31.320 INFO:tasks.workunit.client.0.vm05.stdout:3/434: chown d3/df/d10 1 1 2026-03-09T15:01:31.320 INFO:tasks.workunit.client.0.vm05.stdout:6/392: fsync da/d17/f69 0 2026-03-09T15:01:31.324 INFO:tasks.workunit.client.0.vm05.stdout:5/463: write d1/d4/d34/d35/f52 [1700291,109300] 0 2026-03-09T15:01:31.326 INFO:tasks.workunit.client.0.vm05.stdout:0/332: dwrite d9/de/d12/d15/d2e/d32/d53/d61/f62 [0,4194304] 0 2026-03-09T15:01:31.329 INFO:tasks.workunit.client.0.vm05.stdout:6/393: sync 2026-03-09T15:01:31.330 INFO:tasks.workunit.client.0.vm05.stdout:8/434: symlink d0/d7/l91 0 2026-03-09T15:01:31.331 INFO:tasks.workunit.client.0.vm05.stdout:6/394: sync 2026-03-09T15:01:31.333 INFO:tasks.workunit.client.0.vm05.stdout:7/404: getdents d1/d9/d23/d31/d32 0 2026-03-09T15:01:31.334 INFO:tasks.workunit.client.0.vm05.stdout:7/405: fsync d1/d12/f20 0 2026-03-09T15:01:31.338 INFO:tasks.workunit.client.0.vm05.stdout:9/439: mknod d2/d4e/c96 0 2026-03-09T15:01:31.338 INFO:tasks.workunit.client.0.vm05.stdout:5/464: symlink d1/d4/d34/d35/d3d/l9d 0 2026-03-09T15:01:31.338 INFO:tasks.workunit.client.0.vm05.stdout:9/440: chown d2/d10/d22/d2c/d69/d5a 15 1 2026-03-09T15:01:31.339 INFO:tasks.workunit.client.0.vm05.stdout:4/408: rmdir d2/d4/d7/dc/d2b/d5d 0 2026-03-09T15:01:31.340 INFO:tasks.workunit.client.0.vm05.stdout:1/397: creat d9/d2f/f91 x:0 0 0 2026-03-09T15:01:31.341 INFO:tasks.workunit.client.0.vm05.stdout:8/435: stat d0/d2a/d2d/d42/c6b 0 2026-03-09T15:01:31.342 INFO:tasks.workunit.client.0.vm05.stdout:8/436: write d0/d1/d12/d3c/f77 [2364145,114901] 0 2026-03-09T15:01:31.343 
INFO:tasks.workunit.client.0.vm05.stdout:8/437: stat d0/d1/d12/d1b/d66/d6f 0 2026-03-09T15:01:31.347 INFO:tasks.workunit.client.0.vm05.stdout:3/435: mknod d3/df/d10/d34/d8c/c96 0 2026-03-09T15:01:31.348 INFO:tasks.workunit.client.0.vm05.stdout:2/383: getdents da/d29 0 2026-03-09T15:01:31.349 INFO:tasks.workunit.client.0.vm05.stdout:2/384: readlink da/lb 0 2026-03-09T15:01:31.354 INFO:tasks.workunit.client.0.vm05.stdout:3/436: dwrite d3/df/d1e/d2f/d52/f87 [0,4194304] 0 2026-03-09T15:01:31.355 INFO:tasks.workunit.client.0.vm05.stdout:3/437: readlink d3/df/l55 0 2026-03-09T15:01:31.355 INFO:tasks.workunit.client.0.vm05.stdout:9/441: dread d2/d10/d22/d2c/d3c/f55 [0,4194304] 0 2026-03-09T15:01:31.356 INFO:tasks.workunit.client.0.vm05.stdout:3/438: chown d3/df/d1e/d2c/d74/d78 209708 1 2026-03-09T15:01:31.356 INFO:tasks.workunit.client.0.vm05.stdout:3/439: stat d3/d29/d7f/c91 0 2026-03-09T15:01:31.359 INFO:tasks.workunit.client.0.vm05.stdout:7/406: rename d1/d49/d4a/f5e to d1/d9/d72/f7c 0 2026-03-09T15:01:31.364 INFO:tasks.workunit.client.0.vm05.stdout:7/407: dwrite d1/f76 [0,4194304] 0 2026-03-09T15:01:31.364 INFO:tasks.workunit.client.0.vm05.stdout:5/465: unlink d1/d4/d34/d35/d3d/d38/d63/c76 0 2026-03-09T15:01:31.380 INFO:tasks.workunit.client.0.vm05.stdout:8/438: truncate d0/d1/d12/d1b/d21/f65 258419 0 2026-03-09T15:01:31.386 INFO:tasks.workunit.client.0.vm05.stdout:2/385: fsync da/d16/f20 0 2026-03-09T15:01:31.393 INFO:tasks.workunit.client.0.vm05.stdout:9/442: creat d2/d10/d22/d52/d59/f97 x:0 0 0 2026-03-09T15:01:31.400 INFO:tasks.workunit.client.0.vm05.stdout:8/439: dread d0/d1/f49 [0,4194304] 0 2026-03-09T15:01:31.401 INFO:tasks.workunit.client.0.vm05.stdout:0/333: dwrite d9/de/d12/f23 [0,4194304] 0 2026-03-09T15:01:31.402 INFO:tasks.workunit.client.0.vm05.stdout:0/334: truncate d9/f58 4626464 0 2026-03-09T15:01:31.403 INFO:tasks.workunit.client.0.vm05.stdout:0/335: truncate d9/de/d12/d15/f50 133693 0 2026-03-09T15:01:31.410 
INFO:tasks.workunit.client.0.vm05.stdout:0/336: dwrite d9/de/f1e [0,4194304] 0 2026-03-09T15:01:31.425 INFO:tasks.workunit.client.0.vm05.stdout:5/466: creat d1/d4/d34/d35/d4e/d6f/d7e/f9e x:0 0 0 2026-03-09T15:01:31.429 INFO:tasks.workunit.client.0.vm05.stdout:6/395: link da/d17/f20 da/d17/f71 0 2026-03-09T15:01:31.432 INFO:tasks.workunit.client.0.vm05.stdout:9/443: mknod d2/c98 0 2026-03-09T15:01:31.433 INFO:tasks.workunit.client.0.vm05.stdout:9/444: read - d2/d10/d22/d47/f7b zero size 2026-03-09T15:01:31.433 INFO:tasks.workunit.client.0.vm05.stdout:9/445: dread - d2/d4e/f51 zero size 2026-03-09T15:01:31.434 INFO:tasks.workunit.client.0.vm05.stdout:8/440: creat d0/d1/d12/d1b/d21/f92 x:0 0 0 2026-03-09T15:01:31.437 INFO:tasks.workunit.client.0.vm05.stdout:9/446: dwrite d2/d10/d22/d2c/f91 [0,4194304] 0 2026-03-09T15:01:31.440 INFO:tasks.workunit.client.0.vm05.stdout:4/409: rename d2/d4/c2e to d2/d43/c84 0 2026-03-09T15:01:31.442 INFO:tasks.workunit.client.0.vm05.stdout:0/337: creat d9/de/f69 x:0 0 0 2026-03-09T15:01:31.456 INFO:tasks.workunit.client.0.vm05.stdout:5/467: creat d1/d5d/d7f/d91/f9f x:0 0 0 2026-03-09T15:01:31.456 INFO:tasks.workunit.client.0.vm05.stdout:5/468: readlink d1/d4/d19/l45 0 2026-03-09T15:01:31.463 INFO:tasks.workunit.client.0.vm05.stdout:3/440: creat d3/d29/f97 x:0 0 0 2026-03-09T15:01:31.466 INFO:tasks.workunit.client.0.vm05.stdout:9/447: mkdir d2/d4e/d56/d37/d99 0 2026-03-09T15:01:31.476 INFO:tasks.workunit.client.0.vm05.stdout:7/408: creat d1/d9/f7d x:0 0 0 2026-03-09T15:01:31.477 INFO:tasks.workunit.client.0.vm05.stdout:7/409: write d1/d9/d23/d54/f60 [20717,95733] 0 2026-03-09T15:01:31.488 INFO:tasks.workunit.client.0.vm05.stdout:5/469: dwrite d1/d4/f5f [0,4194304] 0 2026-03-09T15:01:31.490 INFO:tasks.workunit.client.0.vm05.stdout:6/396: link da/d43/f56 da/d43/f72 0 2026-03-09T15:01:31.490 INFO:tasks.workunit.client.0.vm05.stdout:5/470: readlink d1/d4/d34/d35/d3d/d38/d63/l84 0 2026-03-09T15:01:31.495 
INFO:tasks.workunit.client.0.vm05.stdout:2/386: creat da/d13/f75 x:0 0 0 2026-03-09T15:01:31.495 INFO:tasks.workunit.client.0.vm05.stdout:2/387: stat f5 0 2026-03-09T15:01:31.496 INFO:tasks.workunit.client.0.vm05.stdout:6/397: dwrite da/d17/f69 [4194304,4194304] 0 2026-03-09T15:01:31.497 INFO:tasks.workunit.client.0.vm05.stdout:3/441: creat d3/df/d59/f98 x:0 0 0 2026-03-09T15:01:31.498 INFO:tasks.workunit.client.0.vm05.stdout:8/441: mkdir d0/d1/d12/d1b/d6e/d93 0 2026-03-09T15:01:31.500 INFO:tasks.workunit.client.0.vm05.stdout:2/388: dread da/d13/d30/f41 [0,4194304] 0 2026-03-09T15:01:31.501 INFO:tasks.workunit.client.0.vm05.stdout:9/448: symlink d2/d10/d22/d2c/d3c/l9a 0 2026-03-09T15:01:31.514 INFO:tasks.workunit.client.0.vm05.stdout:4/410: unlink d2/d43/c84 0 2026-03-09T15:01:31.519 INFO:tasks.workunit.client.0.vm05.stdout:0/338: unlink d9/de/f20 0 2026-03-09T15:01:31.519 INFO:tasks.workunit.client.0.vm05.stdout:8/442: mknod d0/d2a/d2d/d78/c94 0 2026-03-09T15:01:31.519 INFO:tasks.workunit.client.0.vm05.stdout:2/389: creat da/d29/f76 x:0 0 0 2026-03-09T15:01:31.519 INFO:tasks.workunit.client.0.vm05.stdout:9/449: dread d2/d10/d22/d52/d59/f18 [4194304,4194304] 0 2026-03-09T15:01:31.519 INFO:tasks.workunit.client.0.vm05.stdout:1/398: rename d9/d2a/d6a to d9/d2a/d59/d49/d92 0 2026-03-09T15:01:31.520 INFO:tasks.workunit.client.0.vm05.stdout:1/399: stat d9/d2f 0 2026-03-09T15:01:31.521 INFO:tasks.workunit.client.0.vm05.stdout:4/411: dread d2/f3e [0,4194304] 0 2026-03-09T15:01:31.525 INFO:tasks.workunit.client.0.vm05.stdout:5/471: dread d1/f9 [0,4194304] 0 2026-03-09T15:01:31.536 INFO:tasks.workunit.client.0.vm05.stdout:7/410: mkdir d1/d9/d23/d31/d32/d78/d7e 0 2026-03-09T15:01:31.537 INFO:tasks.workunit.client.0.vm05.stdout:0/339: mkdir d9/de/d6a 0 2026-03-09T15:01:31.538 INFO:tasks.workunit.client.0.vm05.stdout:0/340: fsync d9/de/d25/f52 0 2026-03-09T15:01:31.538 INFO:tasks.workunit.client.0.vm05.stdout:0/341: fsync d9/de/d25/f47 0 2026-03-09T15:01:31.539 
INFO:tasks.workunit.client.0.vm05.stdout:0/342: chown d9/de/d12/d15/c24 1003 1 2026-03-09T15:01:31.541 INFO:tasks.workunit.client.0.vm05.stdout:2/390: symlink da/dd/l77 0 2026-03-09T15:01:31.541 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:31 vm09.local ceph-mon[59673]: Upgrade: Updating mgr.vm09.cfuwdz 2026-03-09T15:01:31.541 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:31 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:31.541 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:31 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.cfuwdz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:01:31.541 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:31 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:01:31.541 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:31 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:01:31.541 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:31 vm09.local ceph-mon[59673]: Deploying daemon mgr.vm09.cfuwdz on vm09 2026-03-09T15:01:31.541 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:31 vm09.local ceph-mon[59673]: pgmap v172: 65 pgs: 65 active+clean; 2.1 GiB data, 7.3 GiB used, 113 GiB / 120 GiB avail; 22 MiB/s rd, 116 MiB/s wr, 258 op/s 2026-03-09T15:01:31.541 INFO:tasks.workunit.client.0.vm05.stdout:2/391: chown da/d13/d2f/d35 843234772 1 2026-03-09T15:01:31.549 INFO:tasks.workunit.client.0.vm05.stdout:5/472: mknod d1/d4/d34/d35/d3d/d38/d63/ca0 0 2026-03-09T15:01:31.550 INFO:tasks.workunit.client.0.vm05.stdout:4/412: read d2/d4/f4e [1020189,44211] 0 2026-03-09T15:01:31.551 
INFO:tasks.workunit.client.0.vm05.stdout:4/413: chown d2/d4/d1e/l44 5 1 2026-03-09T15:01:31.551 INFO:tasks.workunit.client.0.vm05.stdout:4/414: write d2/f14 [982855,3287] 0 2026-03-09T15:01:31.560 INFO:tasks.workunit.client.0.vm05.stdout:2/392: symlink da/d13/d2f/d35/l78 0 2026-03-09T15:01:31.561 INFO:tasks.workunit.client.0.vm05.stdout:9/450: mkdir d2/d4e/d56/d37/d99/d9b 0 2026-03-09T15:01:31.562 INFO:tasks.workunit.client.0.vm05.stdout:9/451: readlink d2/d10/d22/d2c/d3c/l9a 0 2026-03-09T15:01:31.563 INFO:tasks.workunit.client.0.vm05.stdout:5/473: dwrite d1/ff [4194304,4194304] 0 2026-03-09T15:01:31.578 INFO:tasks.workunit.client.0.vm05.stdout:2/393: dwrite da/d16/f72 [0,4194304] 0 2026-03-09T15:01:31.592 INFO:tasks.workunit.client.0.vm05.stdout:1/400: dwrite d9/d17/f26 [0,4194304] 0 2026-03-09T15:01:31.592 INFO:tasks.workunit.client.0.vm05.stdout:1/401: fsync d9/d2f/f43 0 2026-03-09T15:01:31.598 INFO:tasks.workunit.client.0.vm05.stdout:3/442: link d3/d29/d2d/c3b d3/df/c99 0 2026-03-09T15:01:31.598 INFO:tasks.workunit.client.0.vm05.stdout:3/443: fdatasync d3/d29/d2d/f33 0 2026-03-09T15:01:31.599 INFO:tasks.workunit.client.0.vm05.stdout:6/398: rename da/d19/l25 to da/d17/l73 0 2026-03-09T15:01:31.600 INFO:tasks.workunit.client.0.vm05.stdout:6/399: chown da/c6c 1909569 1 2026-03-09T15:01:31.613 INFO:tasks.workunit.client.0.vm05.stdout:4/415: dread d2/d4/d7/d21/d3d/f65 [0,4194304] 0 2026-03-09T15:01:31.621 INFO:tasks.workunit.client.0.vm05.stdout:3/444: creat d3/df/d1e/d2f/f9a x:0 0 0 2026-03-09T15:01:31.623 INFO:tasks.workunit.client.0.vm05.stdout:8/443: rename d0/d2a/d2d to d0/d1/d12/d1b/d95 0 2026-03-09T15:01:31.632 INFO:tasks.workunit.client.0.vm05.stdout:3/445: mkdir d3/df/d1e/d2c/d74/d9b 0 2026-03-09T15:01:31.633 INFO:tasks.workunit.client.0.vm05.stdout:7/411: rename d1/f21 to d1/d9/d23/d54/d7b/f7f 0 2026-03-09T15:01:31.634 INFO:tasks.workunit.client.0.vm05.stdout:7/412: write d1/f76 [1980604,22787] 0 2026-03-09T15:01:31.634 
INFO:tasks.workunit.client.0.vm05.stdout:0/343: truncate d9/de/d12/f23 797546 0 2026-03-09T15:01:31.637 INFO:tasks.workunit.client.0.vm05.stdout:6/400: write da/d43/f56 [1189156,97891] 0 2026-03-09T15:01:31.637 INFO:tasks.workunit.client.0.vm05.stdout:5/474: truncate d1/d5d/f81 3338931 0 2026-03-09T15:01:31.642 INFO:tasks.workunit.client.0.vm05.stdout:6/401: dread da/f57 [0,4194304] 0 2026-03-09T15:01:31.642 INFO:tasks.workunit.client.0.vm05.stdout:6/402: dread - da/d17/d3b/f5a zero size 2026-03-09T15:01:31.653 INFO:tasks.workunit.client.0.vm05.stdout:9/452: rename d2/d10/d22/d52/d59 to d2/d4e/d56/d37/d9c 0 2026-03-09T15:01:31.656 INFO:tasks.workunit.client.0.vm05.stdout:7/413: chown d1/d9/c2c 1 1 2026-03-09T15:01:31.657 INFO:tasks.workunit.client.0.vm05.stdout:7/414: write d1/d22/d3c/f44 [133185,4614] 0 2026-03-09T15:01:31.658 INFO:tasks.workunit.client.0.vm05.stdout:7/415: readlink d1/d12/l2d 0 2026-03-09T15:01:31.663 INFO:tasks.workunit.client.0.vm05.stdout:6/403: mknod da/d43/c74 0 2026-03-09T15:01:31.663 INFO:tasks.workunit.client.0.vm05.stdout:8/444: dwrite d0/d1/d12/d1b/d95/d42/f4e [0,4194304] 0 2026-03-09T15:01:31.663 INFO:tasks.workunit.client.0.vm05.stdout:6/404: chown da/d19/l32 0 1 2026-03-09T15:01:31.671 INFO:tasks.workunit.client.0.vm05.stdout:5/475: sync 2026-03-09T15:01:31.677 INFO:tasks.workunit.client.0.vm05.stdout:2/394: rename da/d16/d46/f54 to da/f79 0 2026-03-09T15:01:31.677 INFO:tasks.workunit.client.0.vm05.stdout:1/402: rename d9/d2f/d37 to d9/d2f/d37/d5f/d93 22 2026-03-09T15:01:31.678 INFO:tasks.workunit.client.0.vm05.stdout:2/395: chown da/c33 85363 1 2026-03-09T15:01:31.679 INFO:tasks.workunit.client.0.vm05.stdout:1/403: truncate d9/d2a/d59/d49/d48/f63 1012339 0 2026-03-09T15:01:31.688 INFO:tasks.workunit.client.0.vm05.stdout:2/396: dread da/f21 [0,4194304] 0 2026-03-09T15:01:31.689 INFO:tasks.workunit.client.0.vm05.stdout:0/344: mkdir d9/de/d12/d15/d2e/d6b 0 2026-03-09T15:01:31.690 INFO:tasks.workunit.client.0.vm05.stdout:0/345: write 
d9/de/f3e [4165706,120719] 0 2026-03-09T15:01:31.697 INFO:tasks.workunit.client.0.vm05.stdout:6/405: write da/d19/f5b [129704,46979] 0 2026-03-09T15:01:31.699 INFO:tasks.workunit.client.0.vm05.stdout:8/445: mkdir d0/d24/d96 0 2026-03-09T15:01:31.700 INFO:tasks.workunit.client.0.vm05.stdout:8/446: write d0/d1/d12/d1b/d95/f4d [220899,37139] 0 2026-03-09T15:01:31.702 INFO:tasks.workunit.client.0.vm05.stdout:8/447: read d0/d1/d12/d1b/d95/d54/f5b [1177493,1294] 0 2026-03-09T15:01:31.703 INFO:tasks.workunit.client.0.vm05.stdout:5/476: symlink d1/d4/d34/d35/d3d/d38/d69/la1 0 2026-03-09T15:01:31.704 INFO:tasks.workunit.client.0.vm05.stdout:5/477: chown d1/d4/d34/d56 1 1 2026-03-09T15:01:31.711 INFO:tasks.workunit.client.0.vm05.stdout:5/478: dread d1/d4/d34/d35/f52 [0,4194304] 0 2026-03-09T15:01:31.712 INFO:tasks.workunit.client.0.vm05.stdout:4/416: rename d2/d4/d7/d21/d3d/c81 to d2/d4/d1e/d71/c85 0 2026-03-09T15:01:31.719 INFO:tasks.workunit.client.0.vm05.stdout:9/453: creat d2/d10/d22/d47/d95/f9d x:0 0 0 2026-03-09T15:01:31.720 INFO:tasks.workunit.client.0.vm05.stdout:1/404: dwrite f5 [0,4194304] 0 2026-03-09T15:01:31.732 INFO:tasks.workunit.client.0.vm05.stdout:0/346: write d9/de/d25/d38/f2f [143572,77347] 0 2026-03-09T15:01:31.733 INFO:tasks.workunit.client.0.vm05.stdout:0/347: chown d9/f42 286964671 1 2026-03-09T15:01:31.737 INFO:tasks.workunit.client.0.vm05.stdout:8/448: mkdir d0/d1/d97 0 2026-03-09T15:01:31.743 INFO:tasks.workunit.client.0.vm05.stdout:3/446: rename d3/df/c99 to d3/d29/d2d/c9c 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:9/454: mkdir d2/d9e 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:9/455: dread d2/f12 [0,4194304] 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:8/449: fsync d0/d24/f30 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:8/450: chown d0/d1/d12/d1b/d66/c82 441267213 1 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:7/416: rename 
d1/d9/d23/d31/f37 to d1/d49/d4a/d77/f80 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:3/447: write d3/df/d10/d19/d44/f56 [5071534,4699] 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:3/448: chown d3/df/c5d 0 1 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:3/449: write d3/d29/d2d/f31 [3677899,34499] 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:3/450: truncate d3/df/d59/f98 673931 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:3/451: dread - d3/df/d10/d19/d44/f60 zero size 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:4/417: mknod d2/d4/d7/d48/c86 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:4/418: readlink d2/d4/d7/d21/d3d/l80 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:4/419: write d2/d43/f51 [1712251,104031] 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:1/405: mkdir d9/d2a/d59/d49/d78/d94 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:1/406: fsync d9/d2a/d59/d49/d92/d75/f76 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:4/420: read d2/d4/d7/dc/f64 [1514346,67240] 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:4/421: write d2/d1d/f5c [1306111,59843] 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:0/348: getdents d9/d59 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:4/422: stat d2/d4/d7/d48 0 2026-03-09T15:01:31.769 INFO:tasks.workunit.client.0.vm05.stdout:8/451: mknod d0/d1/d12/d1b/d6e/c98 0 2026-03-09T15:01:31.771 INFO:tasks.workunit.client.0.vm05.stdout:4/423: chown d2/d4/d7/d21/c37 1730 1 2026-03-09T15:01:31.773 INFO:tasks.workunit.client.0.vm05.stdout:4/424: stat d2/d4/d7/dc/c5e 0 2026-03-09T15:01:31.773 INFO:tasks.workunit.client.0.vm05.stdout:3/452: dread d3/df/d10/d34/d8c/f6d [0,4194304] 0 2026-03-09T15:01:31.782 INFO:tasks.workunit.client.0.vm05.stdout:2/397: rename 
da/d29/d45/l51 to da/d29/d3f/l7a 0 2026-03-09T15:01:31.798 INFO:tasks.workunit.client.0.vm05.stdout:7/417: mkdir d1/d9/d23/d31/d32/d78/d7e/d81 0 2026-03-09T15:01:31.798 INFO:tasks.workunit.client.0.vm05.stdout:7/418: fdatasync d1/d9/f59 0 2026-03-09T15:01:31.798 INFO:tasks.workunit.client.0.vm05.stdout:7/419: dwrite d1/d22/f65 [0,4194304] 0 2026-03-09T15:01:31.798 INFO:tasks.workunit.client.0.vm05.stdout:7/420: stat d1/c42 0 2026-03-09T15:01:31.798 INFO:tasks.workunit.client.0.vm05.stdout:3/453: creat d3/df/d10/d34/f9d x:0 0 0 2026-03-09T15:01:31.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:31 vm05.local ceph-mon[50611]: Upgrade: Updating mgr.vm09.cfuwdz 2026-03-09T15:01:31.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:31 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:31.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:31 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.cfuwdz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:01:31.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:31 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:01:31.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:31 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:01:31.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:31 vm05.local ceph-mon[50611]: Deploying daemon mgr.vm09.cfuwdz on vm09 2026-03-09T15:01:31.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:31 vm05.local ceph-mon[50611]: pgmap v172: 65 pgs: 65 active+clean; 2.1 GiB data, 7.3 GiB used, 113 GiB / 120 GiB avail; 22 MiB/s rd, 116 MiB/s wr, 258 op/s 
2026-03-09T15:01:31.807 INFO:tasks.workunit.client.0.vm05.stdout:2/398: rename da/d13/d30/f63 to da/d29/d45/f7b 0 2026-03-09T15:01:31.812 INFO:tasks.workunit.client.0.vm05.stdout:5/479: sync 2026-03-09T15:01:31.815 INFO:tasks.workunit.client.0.vm05.stdout:5/480: write d1/d4/d34/d35/d3d/d38/f4b [3404467,100150] 0 2026-03-09T15:01:31.816 INFO:tasks.workunit.client.0.vm05.stdout:3/454: creat d3/d29/d2d/d77/f9e x:0 0 0 2026-03-09T15:01:31.820 INFO:tasks.workunit.client.0.vm05.stdout:7/421: rename d1/d22/d3c/f44 to d1/d49/d4a/d77/d79/f82 0 2026-03-09T15:01:31.821 INFO:tasks.workunit.client.0.vm05.stdout:7/422: write d1/d9/d23/d31/f55 [8272034,86161] 0 2026-03-09T15:01:31.822 INFO:tasks.workunit.client.0.vm05.stdout:7/423: write d1/d9/d23/d31/d32/f3a [5610710,68260] 0 2026-03-09T15:01:31.825 INFO:tasks.workunit.client.0.vm05.stdout:2/399: mkdir da/d13/d30/d7c 0 2026-03-09T15:01:31.826 INFO:tasks.workunit.client.0.vm05.stdout:8/452: dread d0/d7/f20 [0,4194304] 0 2026-03-09T15:01:31.832 INFO:tasks.workunit.client.0.vm05.stdout:0/349: rmdir d9/de/d67 0 2026-03-09T15:01:31.838 INFO:tasks.workunit.client.0.vm05.stdout:5/481: symlink d1/d4/d27/d75/la2 0 2026-03-09T15:01:31.838 INFO:tasks.workunit.client.0.vm05.stdout:5/482: fsync d1/da/f4a 0 2026-03-09T15:01:31.839 INFO:tasks.workunit.client.0.vm05.stdout:2/400: dread da/f2c [0,4194304] 0 2026-03-09T15:01:31.839 INFO:tasks.workunit.client.0.vm05.stdout:2/401: readlink da/d13/d2f/d35/l78 0 2026-03-09T15:01:31.840 INFO:tasks.workunit.client.0.vm05.stdout:2/402: readlink da/d13/d30/l58 0 2026-03-09T15:01:31.846 INFO:tasks.workunit.client.0.vm05.stdout:8/453: creat d0/d1/d12/d3c/f99 x:0 0 0 2026-03-09T15:01:31.849 INFO:tasks.workunit.client.0.vm05.stdout:0/350: creat d9/de/f6c x:0 0 0 2026-03-09T15:01:31.850 INFO:tasks.workunit.client.0.vm05.stdout:0/351: read d9/de/d12/d15/f5e [191936,3108] 0 2026-03-09T15:01:31.853 INFO:tasks.workunit.client.0.vm05.stdout:5/483: rmdir d1/d5d 39 2026-03-09T15:01:31.881 
INFO:tasks.workunit.client.0.vm05.stdout:3/455: symlink d3/df/d1e/d2c/d74/d9b/l9f 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:8/454: creat d0/d1/d12/d3c/f9a x:0 0 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:8/455: stat d0/d1/d12/d1b/f67 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:3/456: rename d3/df/d1e/d2c/f4f to d3/df/d10/d7c/d8b/fa0 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:3/457: stat d3/df/l55 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:0/352: symlink d9/de/d12/d15/d2e/d6b/l6d 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:0/353: dread - d9/de/f5d zero size 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:2/403: creat da/d29/f7d x:0 0 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:2/404: stat da/d16/c48 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:3/458: rename d3/df/d1e/d2c/f6a to d3/d29/d7f/fa1 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:8/456: fsync d0/d1/d12/d1b/d21/f65 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:0/354: mkdir d9/de/d12/d15/d2e/d32/d53/d6e 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:3/459: dwrite d3/df/d10/d34/f5f [0,4194304] 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:2/405: unlink da/d16/d46/f5e 0 2026-03-09T15:01:31.881 INFO:tasks.workunit.client.0.vm05.stdout:8/457: mknod d0/d1/d12/d1b/d66/c9b 0 2026-03-09T15:01:31.883 INFO:tasks.workunit.client.0.vm05.stdout:2/406: dwrite da/d13/d30/f5a [0,4194304] 0 2026-03-09T15:01:31.885 INFO:tasks.workunit.client.0.vm05.stdout:0/355: mknod d9/de/d25/d38/d41/c6f 0 2026-03-09T15:01:31.886 INFO:tasks.workunit.client.0.vm05.stdout:0/356: dread - d9/de/f5d zero size 2026-03-09T15:01:31.890 INFO:tasks.workunit.client.0.vm05.stdout:0/357: dwrite d9/de/d12/d15/d2e/f40 [0,4194304] 0 2026-03-09T15:01:31.891 
INFO:tasks.workunit.client.0.vm05.stdout:0/358: truncate d9/de/f5d 697727 0 2026-03-09T15:01:31.897 INFO:tasks.workunit.client.0.vm05.stdout:0/359: dwrite d9/de/d12/d15/d2e/d32/d53/f68 [0,4194304] 0 2026-03-09T15:01:31.901 INFO:tasks.workunit.client.0.vm05.stdout:0/360: chown d9/de/l57 3249 1 2026-03-09T15:01:31.909 INFO:tasks.workunit.client.0.vm05.stdout:3/460: mkdir d3/df/d10/d19/d44/da2 0 2026-03-09T15:01:31.914 INFO:tasks.workunit.client.0.vm05.stdout:8/458: rename d0/d7/f88 to d0/d1/d12/d1b/d95/d42/d60/f9c 0 2026-03-09T15:01:31.916 INFO:tasks.workunit.client.0.vm05.stdout:8/459: dread d0/d1/d55/f6a [0,4194304] 0 2026-03-09T15:01:31.917 INFO:tasks.workunit.client.0.vm05.stdout:8/460: truncate d0/d1/d12/d1b/d21/f92 148646 0 2026-03-09T15:01:31.922 INFO:tasks.workunit.client.0.vm05.stdout:3/461: fsync d3/df/d10/d7c/f94 0 2026-03-09T15:01:31.925 INFO:tasks.workunit.client.0.vm05.stdout:8/461: mkdir d0/d24/d9d 0 2026-03-09T15:01:31.928 INFO:tasks.workunit.client.0.vm05.stdout:8/462: dwrite d0/f10 [0,4194304] 0 2026-03-09T15:01:31.929 INFO:tasks.workunit.client.0.vm05.stdout:8/463: stat d0/d1/d97 0 2026-03-09T15:01:31.932 INFO:tasks.workunit.client.0.vm05.stdout:0/361: fsync d9/de/d12/d15/d2e/f4d 0 2026-03-09T15:01:31.936 INFO:tasks.workunit.client.0.vm05.stdout:0/362: write d9/de/d12/d15/d2e/f3a [978423,28010] 0 2026-03-09T15:01:31.942 INFO:tasks.workunit.client.0.vm05.stdout:2/407: symlink da/d29/d64/l7e 0 2026-03-09T15:01:31.948 INFO:tasks.workunit.client.0.vm05.stdout:3/462: link d3/df/f4a d3/d29/d2d/d7b/fa3 0 2026-03-09T15:01:31.955 INFO:tasks.workunit.client.0.vm05.stdout:8/464: link d0/d1/d12/d1b/d66/l83 d0/d1/d12/l9e 0 2026-03-09T15:01:32.015 INFO:tasks.workunit.client.0.vm05.stdout:1/407: sync 2026-03-09T15:01:32.019 INFO:tasks.workunit.client.0.vm05.stdout:1/408: getdents d9/d2a/d59/d49/d92 0 2026-03-09T15:01:32.020 INFO:tasks.workunit.client.0.vm05.stdout:1/409: write d9/d2f/d37/d5f/f73 [351962,59486] 0 2026-03-09T15:01:32.023 
INFO:tasks.workunit.client.0.vm05.stdout:1/410: creat d9/d2a/f95 x:0 0 0 2026-03-09T15:01:32.026 INFO:tasks.workunit.client.0.vm05.stdout:1/411: rename d9/d2a/d59/d49/d78/f84 to d9/d2a/d59/d49/d48/f96 0 2026-03-09T15:01:32.028 INFO:tasks.workunit.client.0.vm05.stdout:1/412: mkdir d9/d97 0 2026-03-09T15:01:32.029 INFO:tasks.workunit.client.0.vm05.stdout:7/424: sync 2026-03-09T15:01:32.144 INFO:tasks.workunit.client.0.vm05.stdout:6/406: write da/d17/d3b/f4a [4371146,61961] 0 2026-03-09T15:01:32.149 INFO:tasks.workunit.client.0.vm05.stdout:9/456: dwrite d2/d10/d22/d2c/d69/d5a/f80 [0,4194304] 0 2026-03-09T15:01:32.152 INFO:tasks.workunit.client.0.vm05.stdout:9/457: write d2/d4e/d56/d37/d9c/f6e [558811,83741] 0 2026-03-09T15:01:32.152 INFO:tasks.workunit.client.0.vm05.stdout:9/458: dread - d2/d4e/d56/d37/d9c/f97 zero size 2026-03-09T15:01:32.153 INFO:tasks.workunit.client.0.vm05.stdout:9/459: write d2/f61 [3471232,43926] 0 2026-03-09T15:01:32.155 INFO:tasks.workunit.client.0.vm05.stdout:6/407: read da/d17/f2d [1631855,86926] 0 2026-03-09T15:01:32.161 INFO:tasks.workunit.client.0.vm05.stdout:6/408: symlink da/d43/d66/l75 0 2026-03-09T15:01:32.162 INFO:tasks.workunit.client.0.vm05.stdout:6/409: write da/d17/f58 [831047,110886] 0 2026-03-09T15:01:32.162 INFO:tasks.workunit.client.0.vm05.stdout:6/410: chown da 5113 1 2026-03-09T15:01:32.162 INFO:tasks.workunit.client.0.vm05.stdout:9/460: rmdir d2/d4e/d56/d37/d99/d9b 0 2026-03-09T15:01:32.165 INFO:tasks.workunit.client.0.vm05.stdout:9/461: mkdir d2/d10/d22/d9f 0 2026-03-09T15:01:32.165 INFO:tasks.workunit.client.0.vm05.stdout:9/462: dread - d2/d4e/d56/f57 zero size 2026-03-09T15:01:32.168 INFO:tasks.workunit.client.0.vm05.stdout:9/463: getdents d2/d10/d22/d2c 0 2026-03-09T15:01:32.197 INFO:tasks.workunit.client.0.vm05.stdout:9/464: stat d2/f5 0 2026-03-09T15:01:32.197 INFO:tasks.workunit.client.0.vm05.stdout:9/465: mkdir d2/d10/d22/da0 0 2026-03-09T15:01:32.219 INFO:tasks.workunit.client.0.vm05.stdout:7/425: read d1/f62 
[2556134,32558] 0 2026-03-09T15:01:32.255 INFO:tasks.workunit.client.0.vm05.stdout:7/426: dread d1/d9/d23/d31/d32/f38 [0,4194304] 0 2026-03-09T15:01:32.255 INFO:tasks.workunit.client.0.vm05.stdout:4/425: truncate d2/d4/d7/dc/f27 949470 0 2026-03-09T15:01:32.255 INFO:tasks.workunit.client.0.vm05.stdout:2/408: getdents da/d13/d30 0 2026-03-09T15:01:32.255 INFO:tasks.workunit.client.0.vm05.stdout:3/463: unlink d3/df/d10/d7c/d8b/fa0 0 2026-03-09T15:01:32.255 INFO:tasks.workunit.client.0.vm05.stdout:5/484: write d1/f2a [2488895,44528] 0 2026-03-09T15:01:32.255 INFO:tasks.workunit.client.0.vm05.stdout:3/464: creat d3/df/d10/d7c/d8b/fa4 x:0 0 0 2026-03-09T15:01:32.255 INFO:tasks.workunit.client.0.vm05.stdout:4/426: rename d2/d4/d7/l35 to d2/d49/d69/l87 0 2026-03-09T15:01:32.255 INFO:tasks.workunit.client.0.vm05.stdout:4/427: stat d2/d4/l25 0 2026-03-09T15:01:32.255 INFO:tasks.workunit.client.0.vm05.stdout:5/485: creat d1/d4/d34/d35/d4e/d6f/fa3 x:0 0 0 2026-03-09T15:01:32.255 INFO:tasks.workunit.client.0.vm05.stdout:2/409: mkdir da/d29/d6a/d7f 0 2026-03-09T15:01:32.255 INFO:tasks.workunit.client.0.vm05.stdout:2/410: truncate da/d16/d46/f73 154290 0 2026-03-09T15:01:32.256 INFO:tasks.workunit.client.0.vm05.stdout:2/411: chown da/d13/d2f/d35/f57 48 1 2026-03-09T15:01:32.256 INFO:tasks.workunit.client.0.vm05.stdout:5/486: dread - d1/d4/d34/d35/d3d/f61 zero size 2026-03-09T15:01:32.256 INFO:tasks.workunit.client.0.vm05.stdout:5/487: unlink d1/f26 0 2026-03-09T15:01:32.256 INFO:tasks.workunit.client.0.vm05.stdout:4/428: dwrite d2/d4/d7/f53 [0,4194304] 0 2026-03-09T15:01:32.256 INFO:tasks.workunit.client.0.vm05.stdout:2/412: creat da/d13/d30/d7c/f80 x:0 0 0 2026-03-09T15:01:32.258 INFO:tasks.workunit.client.0.vm05.stdout:3/465: getdents d3/d29/d2d/d77/d4d 0 2026-03-09T15:01:32.259 INFO:tasks.workunit.client.0.vm05.stdout:5/488: mknod d1/d4/d27/ca4 0 2026-03-09T15:01:32.262 INFO:tasks.workunit.client.0.vm05.stdout:3/466: write d3/df/f4a [5573640,97800] 0 2026-03-09T15:01:32.267 
INFO:tasks.workunit.client.0.vm05.stdout:3/467: mkdir d3/df/d10/d34/da5 0 2026-03-09T15:01:32.268 INFO:tasks.workunit.client.0.vm05.stdout:3/468: fsync d3/df/d10/d19/d44/f7e 0 2026-03-09T15:01:32.270 INFO:tasks.workunit.client.0.vm05.stdout:7/427: dread d1/d12/f18 [0,4194304] 0 2026-03-09T15:01:32.275 INFO:tasks.workunit.client.0.vm05.stdout:7/428: dread d1/f15 [0,4194304] 0 2026-03-09T15:01:32.471 INFO:tasks.workunit.client.0.vm05.stdout:2/413: sync 2026-03-09T15:01:32.471 INFO:tasks.workunit.client.0.vm05.stdout:7/429: sync 2026-03-09T15:01:32.472 INFO:tasks.workunit.client.0.vm05.stdout:2/414: truncate da/dd/f5d 484427 0 2026-03-09T15:01:32.476 INFO:tasks.workunit.client.0.vm05.stdout:0/363: rmdir d9/de/d25/d38/d41 39 2026-03-09T15:01:32.476 INFO:tasks.workunit.client.0.vm05.stdout:2/415: dwrite da/dd/f1c [0,4194304] 0 2026-03-09T15:01:32.477 INFO:tasks.workunit.client.0.vm05.stdout:2/416: readlink da/d16/l4d 0 2026-03-09T15:01:32.477 INFO:tasks.workunit.client.0.vm05.stdout:0/364: write d9/de/d12/d15/f50 [1024218,115414] 0 2026-03-09T15:01:32.478 INFO:tasks.workunit.client.0.vm05.stdout:0/365: fdatasync d9/de/d25/f48 0 2026-03-09T15:01:32.490 INFO:tasks.workunit.client.0.vm05.stdout:7/430: truncate d1/d9/d23/d31/d32/f63 1796714 0 2026-03-09T15:01:32.494 INFO:tasks.workunit.client.0.vm05.stdout:8/465: write d0/d1/d12/d1b/d95/f3e [1831827,62394] 0 2026-03-09T15:01:32.496 INFO:tasks.workunit.client.0.vm05.stdout:1/413: write d9/d2f/f3a [4484126,8615] 0 2026-03-09T15:01:32.499 INFO:tasks.workunit.client.0.vm05.stdout:1/414: dwrite d9/d2a/f56 [0,4194304] 0 2026-03-09T15:01:32.502 INFO:tasks.workunit.client.0.vm05.stdout:1/415: write d9/d17/f81 [46147,36761] 0 2026-03-09T15:01:32.502 INFO:tasks.workunit.client.0.vm05.stdout:6/411: rmdir da/d43 39 2026-03-09T15:01:32.503 INFO:tasks.workunit.client.0.vm05.stdout:6/412: readlink da/d19/l39 0 2026-03-09T15:01:32.513 INFO:tasks.workunit.client.0.vm05.stdout:2/417: creat da/d29/d6a/f81 x:0 0 0 2026-03-09T15:01:32.515 
INFO:tasks.workunit.client.0.vm05.stdout:0/366: dread d9/de/d12/d15/d2e/f4d [0,4194304] 0 2026-03-09T15:01:32.516 INFO:tasks.workunit.client.0.vm05.stdout:0/367: fsync d9/de/f5d 0 2026-03-09T15:01:32.518 INFO:tasks.workunit.client.0.vm05.stdout:9/466: dwrite d2/fc [0,4194304] 0 2026-03-09T15:01:32.530 INFO:tasks.workunit.client.0.vm05.stdout:7/431: truncate d1/d12/f18 4536824 0 2026-03-09T15:01:32.530 INFO:tasks.workunit.client.0.vm05.stdout:8/466: fsync d0/d1/d12/f4f 0 2026-03-09T15:01:32.530 INFO:tasks.workunit.client.0.vm05.stdout:1/416: rename d9/d2a to d9/d2f/d83/d98 0 2026-03-09T15:01:32.530 INFO:tasks.workunit.client.0.vm05.stdout:1/417: truncate d9/d2f/d83/d98/f67 364514 0 2026-03-09T15:01:32.551 INFO:tasks.workunit.client.0.vm05.stdout:3/469: write d3/df/f11 [3106155,123104] 0 2026-03-09T15:01:32.554 INFO:tasks.workunit.client.0.vm05.stdout:5/489: dwrite d1/d4/d34/d35/d3d/d38/d63/d8c/f5a [0,4194304] 0 2026-03-09T15:01:32.554 INFO:tasks.workunit.client.0.vm05.stdout:4/429: dwrite d2/d43/f4f [0,4194304] 0 2026-03-09T15:01:32.556 INFO:tasks.workunit.client.0.vm05.stdout:4/430: truncate d2/d43/f4f 4413106 0 2026-03-09T15:01:32.567 INFO:tasks.workunit.client.0.vm05.stdout:6/413: mknod da/d19/c76 0 2026-03-09T15:01:32.610 INFO:tasks.workunit.client.0.vm05.stdout:4/431: mkdir d2/d1d/d88 0 2026-03-09T15:01:32.632 INFO:tasks.workunit.client.0.vm05.stdout:1/418: rmdir d9/d2f 39 2026-03-09T15:01:32.632 INFO:tasks.workunit.client.0.vm05.stdout:0/368: mkdir d9/d59/d70 0 2026-03-09T15:01:32.633 INFO:tasks.workunit.client.0.vm05.stdout:2/418: write da/f2c [3818465,101910] 0 2026-03-09T15:01:32.638 INFO:tasks.workunit.client.0.vm05.stdout:2/419: dwrite da/d13/f4b [0,4194304] 0 2026-03-09T15:01:32.639 INFO:tasks.workunit.client.0.vm05.stdout:1/419: read d9/d17/f81 [78212,47988] 0 2026-03-09T15:01:32.648 INFO:tasks.workunit.client.0.vm05.stdout:2/420: dread da/dd/f6f [0,4194304] 0 2026-03-09T15:01:32.664 INFO:tasks.workunit.client.0.vm05.stdout:9/467: write 
d2/d4e/d56/d37/d9c/f18 [24945,43818] 0 2026-03-09T15:01:32.699 INFO:tasks.workunit.client.0.vm05.stdout:7/432: truncate d1/f45 721094 0 2026-03-09T15:01:32.700 INFO:tasks.workunit.client.0.vm05.stdout:8/467: mkdir d0/d1/d12/d1b/d6e/d93/d9f 0 2026-03-09T15:01:32.700 INFO:tasks.workunit.client.0.vm05.stdout:3/470: creat d3/df/d10/d34/d8c/d90/fa6 x:0 0 0 2026-03-09T15:01:32.701 INFO:tasks.workunit.client.0.vm05.stdout:5/490: link d1/f5e d1/d4/d34/d35/d4e/d6f/fa5 0 2026-03-09T15:01:32.703 INFO:tasks.workunit.client.0.vm05.stdout:6/414: rmdir da/d43/d66 39 2026-03-09T15:01:32.704 INFO:tasks.workunit.client.0.vm05.stdout:5/491: write d1/d4/d34/d35/d3d/d38/f4b [5194359,99292] 0 2026-03-09T15:01:32.706 INFO:tasks.workunit.client.0.vm05.stdout:2/421: creat da/d16/d46/f82 x:0 0 0 2026-03-09T15:01:32.708 INFO:tasks.workunit.client.0.vm05.stdout:1/420: dread - d9/d2f/d83/d98/f95 zero size 2026-03-09T15:01:32.709 INFO:tasks.workunit.client.0.vm05.stdout:5/492: write d1/d4/d34/d35/f44 [8894501,29607] 0 2026-03-09T15:01:32.717 INFO:tasks.workunit.client.0.vm05.stdout:7/433: mknod d1/d9/d23/d31/d32/c83 0 2026-03-09T15:01:32.718 INFO:tasks.workunit.client.0.vm05.stdout:8/468: mknod d0/d1/d12/d1b/d95/d54/ca0 0 2026-03-09T15:01:32.719 INFO:tasks.workunit.client.0.vm05.stdout:9/468: dwrite d2/f12 [4194304,4194304] 0 2026-03-09T15:01:32.720 INFO:tasks.workunit.client.0.vm05.stdout:8/469: chown d0/d1/d12/d1b/d6e 23672984 1 2026-03-09T15:01:32.720 INFO:tasks.workunit.client.0.vm05.stdout:4/432: symlink d2/d7a/l89 0 2026-03-09T15:01:32.721 INFO:tasks.workunit.client.0.vm05.stdout:6/415: dwrite da/d17/f1d [0,4194304] 0 2026-03-09T15:01:32.738 INFO:tasks.workunit.client.0.vm05.stdout:7/434: dwrite d1/d9/d23/d31/f55 [0,4194304] 0 2026-03-09T15:01:32.765 INFO:tasks.workunit.client.0.vm05.stdout:0/369: creat d9/de/d25/d38/d41/f71 x:0 0 0 2026-03-09T15:01:32.768 INFO:tasks.workunit.client.0.vm05.stdout:0/370: dwrite d9/de/f3e [0,4194304] 0 2026-03-09T15:01:32.771 
INFO:tasks.workunit.client.0.vm05.stdout:5/493: rmdir d1/d4/d27/d5b 39 2026-03-09T15:01:32.776 INFO:tasks.workunit.client.0.vm05.stdout:5/494: write d1/ff [8295231,10039] 0 2026-03-09T15:01:32.782 INFO:tasks.workunit.client.0.vm05.stdout:8/470: mkdir d0/d1/d12/d1b/d95/d42/da1 0 2026-03-09T15:01:32.783 INFO:tasks.workunit.client.0.vm05.stdout:8/471: truncate d0/d1/d12/d1b/d95/d54/f64 4828359 0 2026-03-09T15:01:32.783 INFO:tasks.workunit.client.0.vm05.stdout:8/472: write d0/d1/d12/d1b/d95/f4d [80999,9318] 0 2026-03-09T15:01:32.784 INFO:tasks.workunit.client.0.vm05.stdout:8/473: write d0/d1/d12/d1b/d95/d42/d60/f8f [819168,48449] 0 2026-03-09T15:01:32.790 INFO:tasks.workunit.client.0.vm05.stdout:4/433: mkdir d2/d4/d50/d8a 0 2026-03-09T15:01:32.794 INFO:tasks.workunit.client.0.vm05.stdout:3/471: link d3/df/d1e/f2b d3/d29/d2d/d77/d4d/fa7 0 2026-03-09T15:01:32.803 INFO:tasks.workunit.client.0.vm05.stdout:0/371: symlink d9/de/l72 0 2026-03-09T15:01:32.807 INFO:tasks.workunit.client.0.vm05.stdout:5/495: mkdir d1/d4/d34/d56/da6 0 2026-03-09T15:01:32.809 INFO:tasks.workunit.client.0.vm05.stdout:1/421: mknod d9/d2f/d83/d98/d87/c99 0 2026-03-09T15:01:32.810 INFO:tasks.workunit.client.0.vm05.stdout:1/422: readlink d9/d17/l1c 0 2026-03-09T15:01:32.810 INFO:tasks.workunit.client.0.vm05.stdout:1/423: chown f5 39576368 1 2026-03-09T15:01:32.834 INFO:tasks.workunit.client.0.vm05.stdout:6/416: mknod da/d43/d66/c77 0 2026-03-09T15:01:32.836 INFO:tasks.workunit.client.0.vm05.stdout:6/417: chown da/d43/f68 132668948 1 2026-03-09T15:01:32.836 INFO:tasks.workunit.client.0.vm05.stdout:2/422: truncate da/d13/d2f/d35/f57 80320 0 2026-03-09T15:01:32.836 INFO:tasks.workunit.client.0.vm05.stdout:0/372: rmdir d9/de/d25/d38/d41 39 2026-03-09T15:01:32.840 INFO:tasks.workunit.client.0.vm05.stdout:1/424: rmdir d9/d2f/d83/d98/d59/d49/d92 39 2026-03-09T15:01:32.849 INFO:tasks.workunit.client.0.vm05.stdout:1/425: dread d9/d2f/d55/f5e [0,4194304] 0 2026-03-09T15:01:32.849 
INFO:tasks.workunit.client.0.vm05.stdout:8/474: creat d0/d1/d97/fa2 x:0 0 0 2026-03-09T15:01:32.855 INFO:tasks.workunit.client.0.vm05.stdout:9/469: link d2/d10/d22/d52/l6d d2/d10/d22/d2c/d69/la1 0 2026-03-09T15:01:32.856 INFO:tasks.workunit.client.0.vm05.stdout:4/434: link d2/d4/d7/dc/f54 d2/d1d/d88/f8b 0 2026-03-09T15:01:32.857 INFO:tasks.workunit.client.0.vm05.stdout:9/470: readlink d2/d10/d22/d47/l78 0 2026-03-09T15:01:32.861 INFO:tasks.workunit.client.0.vm05.stdout:7/435: link d1/d22/c26 d1/d9/d72/c84 0 2026-03-09T15:01:32.861 INFO:tasks.workunit.client.0.vm05.stdout:7/436: chown d1/d49 5144 1 2026-03-09T15:01:32.873 INFO:tasks.workunit.client.0.vm05.stdout:9/471: mknod d2/d10/d22/d47/d73/ca2 0 2026-03-09T15:01:32.873 INFO:tasks.workunit.client.0.vm05.stdout:9/472: write d2/d70/f7c [281610,31308] 0 2026-03-09T15:01:32.875 INFO:tasks.workunit.client.0.vm05.stdout:3/472: link d3/d29/f92 d3/df/d59/d79/fa8 0 2026-03-09T15:01:32.877 INFO:tasks.workunit.client.0.vm05.stdout:9/473: write d2/d4e/d56/d37/d9c/d8e/f68 [4955242,120214] 0 2026-03-09T15:01:32.878 INFO:tasks.workunit.client.0.vm05.stdout:9/474: truncate d2/d4e/d56/f57 113817 0 2026-03-09T15:01:32.878 INFO:tasks.workunit.client.0.vm05.stdout:9/475: chown d2/f1f 10736 1 2026-03-09T15:01:32.882 INFO:tasks.workunit.client.0.vm05.stdout:2/423: write da/d16/f20 [1092044,38067] 0 2026-03-09T15:01:32.882 INFO:tasks.workunit.client.0.vm05.stdout:4/435: write d2/f33 [1220407,3223] 0 2026-03-09T15:01:32.885 INFO:tasks.workunit.client.0.vm05.stdout:6/418: mknod da/c78 0 2026-03-09T15:01:32.886 INFO:tasks.workunit.client.0.vm05.stdout:6/419: chown da/f5d 38623 1 2026-03-09T15:01:32.890 INFO:tasks.workunit.client.0.vm05.stdout:2/424: fdatasync da/d16/f20 0 2026-03-09T15:01:32.890 INFO:tasks.workunit.client.0.vm05.stdout:1/426: mknod d9/d97/c9a 0 2026-03-09T15:01:32.893 INFO:tasks.workunit.client.0.vm05.stdout:8/475: symlink d0/d1/d12/d1b/d6e/d93/d9f/la3 0 2026-03-09T15:01:32.896 
INFO:tasks.workunit.client.0.vm05.stdout:0/373: truncate d9/de/d12/d15/d2e/f3a 2507500 0 2026-03-09T15:01:32.898 INFO:tasks.workunit.client.0.vm05.stdout:7/437: getdents d1/d49/d68 0 2026-03-09T15:01:32.899 INFO:tasks.workunit.client.0.vm05.stdout:7/438: read d1/f76 [3404406,39616] 0 2026-03-09T15:01:32.901 INFO:tasks.workunit.client.0.vm05.stdout:3/473: rename d3/df/d10/d7c/d8b to d3/df/d1e/da9 0 2026-03-09T15:01:32.906 INFO:tasks.workunit.client.0.vm05.stdout:5/496: link d1/da/l1c d1/d5d/la7 0 2026-03-09T15:01:32.909 INFO:tasks.workunit.client.0.vm05.stdout:3/474: dread d3/df/d10/d7c/f94 [0,4194304] 0 2026-03-09T15:01:32.909 INFO:tasks.workunit.client.0.vm05.stdout:0/374: dread d9/f2b [0,4194304] 0 2026-03-09T15:01:32.910 INFO:tasks.workunit.client.0.vm05.stdout:4/436: rename d2/d4/d7/d21/d3d/l80 to d2/d4/d7/d21/d3d/l8c 0 2026-03-09T15:01:32.914 INFO:tasks.workunit.client.0.vm05.stdout:1/427: chown d9/d17/l53 387694 1 2026-03-09T15:01:32.916 INFO:tasks.workunit.client.0.vm05.stdout:8/476: write d0/d1/d12/d3c/f8c [2756842,88199] 0 2026-03-09T15:01:32.925 INFO:tasks.workunit.client.0.vm05.stdout:2/425: sync 2026-03-09T15:01:32.927 INFO:tasks.workunit.client.0.vm05.stdout:2/426: dread - da/d16/d46/f82 zero size 2026-03-09T15:01:32.927 INFO:tasks.workunit.client.0.vm05.stdout:1/428: dwrite d9/d2f/d83/d98/f67 [0,4194304] 0 2026-03-09T15:01:32.929 INFO:tasks.workunit.client.0.vm05.stdout:3/475: mknod d3/df/d1e/d2c/d74/d9b/caa 0 2026-03-09T15:01:32.929 INFO:tasks.workunit.client.0.vm05.stdout:2/427: chown da/d13/f4b 261 1 2026-03-09T15:01:32.935 INFO:tasks.workunit.client.0.vm05.stdout:6/420: dread da/f1a [0,4194304] 0 2026-03-09T15:01:32.940 INFO:tasks.workunit.client.0.vm05.stdout:0/375: chown d9/de/l2a 26223 1 2026-03-09T15:01:32.941 INFO:tasks.workunit.client.0.vm05.stdout:0/376: dread - d9/de/f69 zero size 2026-03-09T15:01:32.947 INFO:tasks.workunit.client.0.vm05.stdout:9/476: dwrite d2/d10/d22/d2c/d69/f43 [0,4194304] 0 2026-03-09T15:01:32.961 
INFO:tasks.workunit.client.0.vm05.stdout:7/439: write d1/d9/fd [694886,77807] 0 2026-03-09T15:01:32.963 INFO:tasks.workunit.client.0.vm05.stdout:5/497: dwrite d1/d4/d34/d35/f36 [0,4194304] 0 2026-03-09T15:01:32.970 INFO:tasks.workunit.client.0.vm05.stdout:5/498: dwrite d1/d4/d27/f3a [0,4194304] 0 2026-03-09T15:01:32.970 INFO:tasks.workunit.client.0.vm05.stdout:5/499: chown d1/d5d/d7f 1 1 2026-03-09T15:01:32.970 INFO:tasks.workunit.client.0.vm05.stdout:5/500: readlink d1/d4/d34/d35/d3d/l9d 0 2026-03-09T15:01:32.971 INFO:tasks.workunit.client.0.vm05.stdout:5/501: write d1/d4/d34/d35/f44 [392245,65761] 0 2026-03-09T15:01:32.980 INFO:tasks.workunit.client.0.vm05.stdout:1/429: rmdir d9/d2f/d83/d98/d59/d49 39 2026-03-09T15:01:32.990 INFO:tasks.workunit.client.0.vm05.stdout:2/428: unlink da/dd/f1c 0 2026-03-09T15:01:32.990 INFO:tasks.workunit.client.0.vm05.stdout:6/421: mknod da/d17/d3b/c79 0 2026-03-09T15:01:33.001 INFO:tasks.workunit.client.0.vm05.stdout:8/477: symlink d0/d1/d12/d1b/d95/d78/d86/la4 0 2026-03-09T15:01:33.003 INFO:tasks.workunit.client.0.vm05.stdout:5/502: read d1/d4/d34/d35/d3d/d38/f8a [648411,94325] 0 2026-03-09T15:01:33.006 INFO:tasks.workunit.client.0.vm05.stdout:3/476: creat d3/df/d1e/d2c/d74/d78/fab x:0 0 0 2026-03-09T15:01:33.007 INFO:tasks.workunit.client.0.vm05.stdout:7/440: dread d1/f45 [0,4194304] 0 2026-03-09T15:01:33.007 INFO:tasks.workunit.client.0.vm05.stdout:7/441: chown d1/d22/d3c/c5f 576344 1 2026-03-09T15:01:33.011 INFO:tasks.workunit.client.0.vm05.stdout:2/429: creat da/dd/f83 x:0 0 0 2026-03-09T15:01:33.021 INFO:tasks.workunit.client.0.vm05.stdout:9/477: creat d2/d92/fa3 x:0 0 0 2026-03-09T15:01:33.021 INFO:tasks.workunit.client.0.vm05.stdout:9/478: stat d2/d10/d22/d2c 0 2026-03-09T15:01:33.028 INFO:tasks.workunit.client.0.vm05.stdout:8/478: dwrite d0/d1/d12/d1b/d95/d42/d60/f9c [0,4194304] 0 2026-03-09T15:01:33.033 INFO:tasks.workunit.client.0.vm05.stdout:8/479: dwrite d0/d1/d12/d3c/f9a [0,4194304] 0 2026-03-09T15:01:33.042 
INFO:tasks.workunit.client.0.vm05.stdout:5/503: symlink d1/d4/d34/d35/la8 0 2026-03-09T15:01:33.057 INFO:tasks.workunit.client.0.vm05.stdout:7/442: fdatasync d1/d9/d72/f7c 0 2026-03-09T15:01:33.066 INFO:tasks.workunit.client.0.vm05.stdout:2/430: rmdir da/d13/d30 39 2026-03-09T15:01:33.069 INFO:tasks.workunit.client.0.vm05.stdout:4/437: truncate d2/d4/d7/f53 24088 0 2026-03-09T15:01:33.070 INFO:tasks.workunit.client.0.vm05.stdout:0/377: creat d9/de/d12/d15/d2e/f73 x:0 0 0 2026-03-09T15:01:33.074 INFO:tasks.workunit.client.0.vm05.stdout:0/378: chown d9/de/d25/f48 830 1 2026-03-09T15:01:33.076 INFO:tasks.workunit.client.0.vm05.stdout:9/479: mknod d2/ca4 0 2026-03-09T15:01:33.078 INFO:tasks.workunit.client.0.vm05.stdout:6/422: dwrite da/d19/f6a [0,4194304] 0 2026-03-09T15:01:33.086 INFO:tasks.workunit.client.0.vm05.stdout:9/480: dwrite d2/d10/d22/d2c/f93 [0,4194304] 0 2026-03-09T15:01:33.095 INFO:tasks.workunit.client.0.vm05.stdout:8/480: fdatasync d0/d2a/f2e 0 2026-03-09T15:01:33.114 INFO:tasks.workunit.client.0.vm05.stdout:5/504: creat d1/d4/d34/d35/d4e/fa9 x:0 0 0 2026-03-09T15:01:33.119 INFO:tasks.workunit.client.0.vm05.stdout:1/430: mknod d9/d2f/d83/d98/d59/d49/d78/d7e/c9b 0 2026-03-09T15:01:33.121 INFO:tasks.workunit.client.0.vm05.stdout:5/505: dwrite d1/d4/d34/d35/f47 [0,4194304] 0 2026-03-09T15:01:33.130 INFO:tasks.workunit.client.0.vm05.stdout:8/481: dread d0/d1/f7f [0,4194304] 0 2026-03-09T15:01:33.136 INFO:tasks.workunit.client.0.vm05.stdout:1/431: read d9/d2f/d37/d5f/f73 [716403,77482] 0 2026-03-09T15:01:33.148 INFO:tasks.workunit.client.0.vm05.stdout:0/379: mkdir d9/de/d12/d15/d2e/d32/d74 0 2026-03-09T15:01:33.156 INFO:tasks.workunit.client.0.vm05.stdout:9/481: symlink d2/d10/d22/d2c/d69/la5 0 2026-03-09T15:01:33.156 INFO:tasks.workunit.client.0.vm05.stdout:9/482: chown d2/d10/d22/d2c/f8d 6157 1 2026-03-09T15:01:33.157 INFO:tasks.workunit.client.0.vm05.stdout:9/483: readlink d2/d4e/d56/d37/d9c/l76 0 2026-03-09T15:01:33.157 
INFO:tasks.workunit.client.0.vm05.stdout:9/484: chown d2/f1f 159475 1 2026-03-09T15:01:33.173 INFO:tasks.workunit.client.0.vm05.stdout:3/477: rename d3/df/d1e/d2f/d52/c6c to d3/df/d10/cac 0 2026-03-09T15:01:33.177 INFO:tasks.workunit.client.0.vm05.stdout:4/438: dread d2/f14 [0,4194304] 0 2026-03-09T15:01:33.205 INFO:tasks.workunit.client.0.vm05.stdout:7/443: symlink d1/d9/d23/d31/d32/d78/d7e/d81/l85 0 2026-03-09T15:01:33.222 INFO:tasks.workunit.client.0.vm05.stdout:3/478: write d3/d29/d2d/d77/f35 [3631905,53992] 0 2026-03-09T15:01:33.225 INFO:tasks.workunit.client.0.vm05.stdout:3/479: write d3/df/d10/d19/d44/f56 [1547226,42638] 0 2026-03-09T15:01:33.230 INFO:tasks.workunit.client.0.vm05.stdout:3/480: chown d3/d29/d2d/d77/d4d 400598 1 2026-03-09T15:01:33.233 INFO:tasks.workunit.client.0.vm05.stdout:2/431: write da/d16/f1f [388102,104177] 0 2026-03-09T15:01:33.244 INFO:tasks.workunit.client.0.vm05.stdout:7/444: symlink d1/d9/d23/d31/d51/l86 0 2026-03-09T15:01:33.249 INFO:tasks.workunit.client.0.vm05.stdout:0/380: creat d9/de/d6a/f75 x:0 0 0 2026-03-09T15:01:33.249 INFO:tasks.workunit.client.0.vm05.stdout:6/423: creat da/f7a x:0 0 0 2026-03-09T15:01:33.250 INFO:tasks.workunit.client.0.vm05.stdout:8/482: dwrite d0/d1/d55/f6a [0,4194304] 0 2026-03-09T15:01:33.250 INFO:tasks.workunit.client.0.vm05.stdout:3/481: dwrite d3/df/d1e/d2f/d52/f87 [4194304,4194304] 0 2026-03-09T15:01:33.260 INFO:tasks.workunit.client.0.vm05.stdout:3/482: dread - d3/df/d1e/d2f/f9a zero size 2026-03-09T15:01:33.264 INFO:tasks.workunit.client.0.vm05.stdout:4/439: creat d2/d4/d8/d4a/d6e/f8d x:0 0 0 2026-03-09T15:01:33.276 INFO:tasks.workunit.client.0.vm05.stdout:5/506: rename d1/d4/d34/l51 to d1/d4/d34/d35/d3d/d38/laa 0 2026-03-09T15:01:33.277 INFO:tasks.workunit.client.0.vm05.stdout:7/445: rmdir d1/d49/d4a 39 2026-03-09T15:01:33.286 INFO:tasks.workunit.client.0.vm05.stdout:2/432: creat da/d13/d30/f84 x:0 0 0 2026-03-09T15:01:33.291 INFO:tasks.workunit.client.0.vm05.stdout:4/440: truncate d2/d4/f15 
4322315 0 2026-03-09T15:01:33.291 INFO:tasks.workunit.client.0.vm05.stdout:9/485: rename d2/d10/d22/d47/d73/l7f to d2/d10/d22/d47/la6 0 2026-03-09T15:01:33.293 INFO:tasks.workunit.client.0.vm05.stdout:8/483: sync 2026-03-09T15:01:33.293 INFO:tasks.workunit.client.0.vm05.stdout:6/424: sync 2026-03-09T15:01:33.298 INFO:tasks.workunit.client.0.vm05.stdout:4/441: dread d2/d4/d7/dc/f45 [0,4194304] 0 2026-03-09T15:01:33.303 INFO:tasks.workunit.client.0.vm05.stdout:0/381: creat d9/de/d12/d15/d2e/f76 x:0 0 0 2026-03-09T15:01:33.303 INFO:tasks.workunit.client.0.vm05.stdout:2/433: truncate da/d29/d3f/f5b 31794 0 2026-03-09T15:01:33.307 INFO:tasks.workunit.client.0.vm05.stdout:6/425: dwrite da/d43/f56 [0,4194304] 0 2026-03-09T15:01:33.310 INFO:tasks.workunit.client.0.vm05.stdout:6/426: write da/f65 [12020,123138] 0 2026-03-09T15:01:33.316 INFO:tasks.workunit.client.0.vm05.stdout:9/486: mknod d2/ca7 0 2026-03-09T15:01:33.319 INFO:tasks.workunit.client.0.vm05.stdout:6/427: dwrite da/d17/f1d [0,4194304] 0 2026-03-09T15:01:33.334 INFO:tasks.workunit.client.0.vm05.stdout:8/484: mknod d0/d24/ca5 0 2026-03-09T15:01:33.339 INFO:tasks.workunit.client.0.vm05.stdout:5/507: link d1/d5d/f9b d1/d4/d19/fab 0 2026-03-09T15:01:33.350 INFO:tasks.workunit.client.0.vm05.stdout:5/508: dread d1/f2a [0,4194304] 0 2026-03-09T15:01:33.363 INFO:tasks.workunit.client.0.vm05.stdout:1/432: rename d9/l13 to d9/d2f/d83/d98/d59/d49/d92/l9c 0 2026-03-09T15:01:33.370 INFO:tasks.workunit.client.0.vm05.stdout:5/509: symlink d1/d4/d34/lac 0 2026-03-09T15:01:33.377 INFO:tasks.workunit.client.0.vm05.stdout:0/382: symlink d9/d59/d70/l77 0 2026-03-09T15:01:33.380 INFO:tasks.workunit.client.0.vm05.stdout:0/383: dwrite d9/de/d12/f4c [4194304,4194304] 0 2026-03-09T15:01:33.382 INFO:tasks.workunit.client.0.vm05.stdout:0/384: dwrite d9/f58 [4194304,4194304] 0 2026-03-09T15:01:33.385 INFO:tasks.workunit.client.0.vm05.stdout:7/446: truncate d1/d22/f65 3364014 0 2026-03-09T15:01:33.389 
INFO:tasks.workunit.client.0.vm05.stdout:5/510: dread d1/d4/d34/d35/d3d/f37 [0,4194304] 0 2026-03-09T15:01:33.402 INFO:tasks.workunit.client.0.vm05.stdout:3/483: rename d3/df/d1e/d2c/d74/d9b/caa to d3/d29/d2d/d77/d4d/cad 0 2026-03-09T15:01:33.410 INFO:tasks.workunit.client.0.vm05.stdout:9/487: creat d2/d10/d8c/fa8 x:0 0 0 2026-03-09T15:01:33.413 INFO:tasks.workunit.client.0.vm05.stdout:4/442: creat d2/d4/d7/dc/f8e x:0 0 0 2026-03-09T15:01:33.413 INFO:tasks.workunit.client.0.vm05.stdout:4/443: stat d2/d49/d69 0 2026-03-09T15:01:33.415 INFO:tasks.workunit.client.0.vm05.stdout:6/428: mkdir da/d43/d7b 0 2026-03-09T15:01:33.415 INFO:tasks.workunit.client.0.vm05.stdout:6/429: dread - da/d43/d66/f6d zero size 2026-03-09T15:01:33.420 INFO:tasks.workunit.client.0.vm05.stdout:3/484: dread d3/df/d59/f75 [0,4194304] 0 2026-03-09T15:01:33.424 INFO:tasks.workunit.client.0.vm05.stdout:7/447: dread d1/d49/d4a/d77/d79/f82 [0,4194304] 0 2026-03-09T15:01:33.426 INFO:tasks.workunit.client.0.vm05.stdout:8/485: rename d0/d1/c1f to d0/d1/d12/d1b/d95/d42/ca6 0 2026-03-09T15:01:33.426 INFO:tasks.workunit.client.0.vm05.stdout:8/486: readlink d0/d1/d12/d1b/d95/d4b/l6d 0 2026-03-09T15:01:33.428 INFO:tasks.workunit.client.0.vm05.stdout:9/488: mkdir d2/da9 0 2026-03-09T15:01:33.428 INFO:tasks.workunit.client.0.vm05.stdout:9/489: write d2/d10/d22/d2c/f91 [4087055,53152] 0 2026-03-09T15:01:33.433 INFO:tasks.workunit.client.0.vm05.stdout:2/434: truncate da/d16/f72 652126 0 2026-03-09T15:01:33.436 INFO:tasks.workunit.client.0.vm05.stdout:1/433: write d9/d2f/d83/d98/d59/d49/f69 [545901,111028] 0 2026-03-09T15:01:33.436 INFO:tasks.workunit.client.0.vm05.stdout:1/434: chown d9/d2f/d83/d98/d59 189 1 2026-03-09T15:01:33.438 INFO:tasks.workunit.client.0.vm05.stdout:1/435: dread d9/d2f/d83/d98/d59/d49/d48/f63 [0,4194304] 0 2026-03-09T15:01:33.441 INFO:tasks.workunit.client.0.vm05.stdout:6/430: unlink da/d43/d66/c67 0 2026-03-09T15:01:33.444 INFO:tasks.workunit.client.0.vm05.stdout:0/385: mkdir 
d9/de/d25/d38/d78 0 2026-03-09T15:01:33.450 INFO:tasks.workunit.client.0.vm05.stdout:7/448: mknod d1/d9/d23/d31/d51/c87 0 2026-03-09T15:01:33.451 INFO:tasks.workunit.client.0.vm05.stdout:7/449: write d1/d49/d4a/f6b [3981803,71734] 0 2026-03-09T15:01:33.452 INFO:tasks.workunit.client.0.vm05.stdout:7/450: dread d1/f45 [0,4194304] 0 2026-03-09T15:01:33.457 INFO:tasks.workunit.client.0.vm05.stdout:4/444: dread d2/d4/d8/f13 [0,4194304] 0 2026-03-09T15:01:33.460 INFO:tasks.workunit.client.0.vm05.stdout:9/490: rename d2/d4e/d56/d37/d9c/f97 to d2/d10/d8c/faa 0 2026-03-09T15:01:33.461 INFO:tasks.workunit.client.0.vm05.stdout:9/491: write d2/d4e/d56/d37/f36 [5192475,120910] 0 2026-03-09T15:01:33.467 INFO:tasks.workunit.client.0.vm05.stdout:0/386: fsync d9/de/d12/d15/d2e/f4d 0 2026-03-09T15:01:33.468 INFO:tasks.workunit.client.0.vm05.stdout:0/387: read d9/de/d12/d15/f50 [909433,72665] 0 2026-03-09T15:01:33.469 INFO:tasks.workunit.client.0.vm05.stdout:3/485: truncate d3/df/d10/d19/f26 993023 0 2026-03-09T15:01:33.471 INFO:tasks.workunit.client.0.vm05.stdout:5/511: truncate d1/f9 6996222 0 2026-03-09T15:01:33.474 INFO:tasks.workunit.client.0.vm05.stdout:5/512: dwrite d1/d4/d34/d35/d4e/f8d [0,4194304] 0 2026-03-09T15:01:33.477 INFO:tasks.workunit.client.0.vm05.stdout:7/451: creat d1/d9/d23/d31/d32/d78/f88 x:0 0 0 2026-03-09T15:01:33.478 INFO:tasks.workunit.client.0.vm05.stdout:7/452: chown d1/d49/c53 212 1 2026-03-09T15:01:33.479 INFO:tasks.workunit.client.0.vm05.stdout:9/492: symlink d2/d4e/d56/d37/d99/lab 0 2026-03-09T15:01:33.482 INFO:tasks.workunit.client.0.vm05.stdout:7/453: dwrite d1/d9/f52 [0,4194304] 0 2026-03-09T15:01:33.489 INFO:tasks.workunit.client.0.vm05.stdout:7/454: dwrite d1/d9/d23/d31/d51/f3b [0,4194304] 0 2026-03-09T15:01:33.494 INFO:tasks.workunit.client.0.vm05.stdout:1/436: symlink d9/d2f/d83/d98/d59/d49/d78/l9d 0 2026-03-09T15:01:33.496 INFO:tasks.workunit.client.0.vm05.stdout:6/431: mkdir da/d17/d7c 0 2026-03-09T15:01:33.497 
INFO:tasks.workunit.client.0.vm05.stdout:0/388: creat d9/d59/f79 x:0 0 0 2026-03-09T15:01:33.498 INFO:tasks.workunit.client.0.vm05.stdout:5/513: mknod d1/d5d/d7f/d91/cad 0 2026-03-09T15:01:33.498 INFO:tasks.workunit.client.0.vm05.stdout:8/487: rmdir d0/d24/d9d 0 2026-03-09T15:01:33.500 INFO:tasks.workunit.client.0.vm05.stdout:3/486: chown d3/df/f23 111053763 1 2026-03-09T15:01:33.502 INFO:tasks.workunit.client.0.vm05.stdout:1/437: dwrite d9/d2f/f4f [0,4194304] 0 2026-03-09T15:01:33.508 INFO:tasks.workunit.client.0.vm05.stdout:5/514: dwrite d1/d4/d34/d35/f36 [0,4194304] 0 2026-03-09T15:01:33.520 INFO:tasks.workunit.client.0.vm05.stdout:9/493: creat d2/d10/d22/d2c/d3c/fac x:0 0 0 2026-03-09T15:01:33.527 INFO:tasks.workunit.client.0.vm05.stdout:6/432: fdatasync da/f18 0 2026-03-09T15:01:33.529 INFO:tasks.workunit.client.0.vm05.stdout:1/438: link d9/d2f/d83/d98/f6e d9/d2f/d83/f9e 0 2026-03-09T15:01:33.530 INFO:tasks.workunit.client.0.vm05.stdout:5/515: symlink d1/d4/d27/d75/d9c/lae 0 2026-03-09T15:01:33.533 INFO:tasks.workunit.client.0.vm05.stdout:7/455: dread d1/d9/f59 [0,4194304] 0 2026-03-09T15:01:33.534 INFO:tasks.workunit.client.0.vm05.stdout:6/433: mknod da/d43/d66/c7d 0 2026-03-09T15:01:33.534 INFO:tasks.workunit.client.0.vm05.stdout:9/494: creat d2/d10/d22/da0/fad x:0 0 0 2026-03-09T15:01:33.540 INFO:tasks.workunit.client.0.vm05.stdout:3/487: creat d3/df/d10/fae x:0 0 0 2026-03-09T15:01:33.540 INFO:tasks.workunit.client.0.vm05.stdout:0/389: creat d9/de/d12/f7a x:0 0 0 2026-03-09T15:01:33.540 INFO:tasks.workunit.client.0.vm05.stdout:5/516: dread d1/f5e [0,4194304] 0 2026-03-09T15:01:33.541 INFO:tasks.workunit.client.0.vm05.stdout:1/439: fdatasync d9/d2f/d83/d98/d59/d49/f69 0 2026-03-09T15:01:33.544 INFO:tasks.workunit.client.0.vm05.stdout:8/488: dread d0/d1/d12/d1b/d95/f5f [0,4194304] 0 2026-03-09T15:01:33.557 INFO:tasks.workunit.client.0.vm05.stdout:3/488: stat d3/la 0 2026-03-09T15:01:33.557 INFO:tasks.workunit.client.0.vm05.stdout:6/434: chown da/d17/d3b/f5f 
2972914 1 2026-03-09T15:01:33.562 INFO:tasks.workunit.client.0.vm05.stdout:2/435: write da/f3c [1409517,8439] 0 2026-03-09T15:01:33.563 INFO:tasks.workunit.client.0.vm05.stdout:0/390: dread d9/de/f1e [0,4194304] 0 2026-03-09T15:01:33.566 INFO:tasks.workunit.client.0.vm05.stdout:8/489: mkdir d0/d1/d12/d1b/d95/d42/d60/da7 0 2026-03-09T15:01:33.567 INFO:tasks.workunit.client.0.vm05.stdout:8/490: readlink d0/d1/d12/d1b/d95/l58 0 2026-03-09T15:01:33.568 INFO:tasks.workunit.client.0.vm05.stdout:6/435: fdatasync da/fe 0 2026-03-09T15:01:33.568 INFO:tasks.workunit.client.0.vm05.stdout:6/436: truncate da/f7a 477789 0 2026-03-09T15:01:33.569 INFO:tasks.workunit.client.0.vm05.stdout:6/437: fdatasync da/d19/f45 0 2026-03-09T15:01:33.569 INFO:tasks.workunit.client.0.vm05.stdout:3/489: mkdir d3/df/d1e/daf 0 2026-03-09T15:01:33.569 INFO:tasks.workunit.client.0.vm05.stdout:6/438: dread - da/d17/d3b/f5a zero size 2026-03-09T15:01:33.572 INFO:tasks.workunit.client.0.vm05.stdout:7/456: fdatasync d1/d22/f65 0 2026-03-09T15:01:33.575 INFO:tasks.workunit.client.0.vm05.stdout:6/439: dwrite da/d43/d66/f6e [0,4194304] 0 2026-03-09T15:01:33.584 INFO:tasks.workunit.client.0.vm05.stdout:0/391: read - d9/de/d25/d38/d41/f71 zero size 2026-03-09T15:01:33.584 INFO:tasks.workunit.client.0.vm05.stdout:7/457: dwrite d1/d9/d23/d31/d32/d78/f88 [0,4194304] 0 2026-03-09T15:01:33.584 INFO:tasks.workunit.client.0.vm05.stdout:3/490: creat d3/df/d10/d19/d44/fb0 x:0 0 0 2026-03-09T15:01:33.597 INFO:tasks.workunit.client.0.vm05.stdout:9/495: getdents d2/d10/d22/d47/d95 0 2026-03-09T15:01:33.600 INFO:tasks.workunit.client.0.vm05.stdout:8/491: mkdir d0/d7/da8 0 2026-03-09T15:01:33.600 INFO:tasks.workunit.client.0.vm05.stdout:8/492: stat d0/d1/f49 0 2026-03-09T15:01:33.610 INFO:tasks.workunit.client.0.vm05.stdout:6/440: truncate da/d17/d3b/f6b 962164 0 2026-03-09T15:01:33.610 INFO:tasks.workunit.client.0.vm05.stdout:0/392: symlink d9/de/d12/l7b 0 2026-03-09T15:01:33.614 
INFO:tasks.workunit.client.0.vm05.stdout:9/496: mkdir d2/d8b/dae 0 2026-03-09T15:01:33.617 INFO:tasks.workunit.client.0.vm05.stdout:8/493: symlink d0/d7/da8/la9 0 2026-03-09T15:01:33.618 INFO:tasks.workunit.client.0.vm05.stdout:9/497: dwrite d2/d10/f65 [0,4194304] 0 2026-03-09T15:01:33.622 INFO:tasks.workunit.client.0.vm05.stdout:9/498: readlink d2/d4e/d56/d37/d9c/l76 0 2026-03-09T15:01:33.625 INFO:tasks.workunit.client.0.vm05.stdout:9/499: stat d2/d4e/d56/d37/d9c/d94 0 2026-03-09T15:01:33.627 INFO:tasks.workunit.client.0.vm05.stdout:0/393: creat d9/de/d6a/f7c x:0 0 0 2026-03-09T15:01:33.632 INFO:tasks.workunit.client.0.vm05.stdout:9/500: dwrite d2/f12 [4194304,4194304] 0 2026-03-09T15:01:33.639 INFO:tasks.workunit.client.0.vm05.stdout:7/458: link d1/d22/d3c/c5f d1/d49/d4a/d77/d79/c89 0 2026-03-09T15:01:33.647 INFO:tasks.workunit.client.0.vm05.stdout:8/494: creat d0/d1/d12/d1b/d95/d4b/faa x:0 0 0 2026-03-09T15:01:33.651 INFO:tasks.workunit.client.0.vm05.stdout:0/394: creat d9/de/d12/d15/d2e/d32/f7d x:0 0 0 2026-03-09T15:01:33.653 INFO:tasks.workunit.client.0.vm05.stdout:0/395: read d9/de/d12/d15/d2e/d32/d53/d61/f62 [1208078,25774] 0 2026-03-09T15:01:33.655 INFO:tasks.workunit.client.0.vm05.stdout:7/459: mknod d1/d9/d23/d31/c8a 0 2026-03-09T15:01:33.660 INFO:tasks.workunit.client.0.vm05.stdout:7/460: rmdir d1/d49/d4a/d77/d79 39 2026-03-09T15:01:33.665 INFO:tasks.workunit.client.0.vm05.stdout:0/396: symlink d9/de/d12/d15/l7e 0 2026-03-09T15:01:33.666 INFO:tasks.workunit.client.0.vm05.stdout:9/501: link d2/c98 d2/d4e/d56/caf 0 2026-03-09T15:01:33.668 INFO:tasks.workunit.client.0.vm05.stdout:0/397: creat d9/de/f7f x:0 0 0 2026-03-09T15:01:33.669 INFO:tasks.workunit.client.0.vm05.stdout:9/502: creat d2/d10/d22/d52/fb0 x:0 0 0 2026-03-09T15:01:33.670 INFO:tasks.workunit.client.0.vm05.stdout:9/503: truncate d2/d10/d22/da0/fad 156931 0 2026-03-09T15:01:33.677 INFO:tasks.workunit.client.0.vm05.stdout:9/504: mkdir d2/d70/db1 0 2026-03-09T15:01:33.684 
INFO:tasks.workunit.client.0.vm05.stdout:4/445: write d2/d49/f4d [728414,16771] 0
2026-03-09T15:01:33.707 INFO:tasks.workunit.client.0.vm05.stdout:4/446: mkdir d2/d4/d8/d4a/d8f 0
2026-03-09T15:01:33.707 INFO:tasks.workunit.client.0.vm05.stdout:4/447: rmdir d2/d4/d50 39
2026-03-09T15:01:33.707 INFO:tasks.workunit.client.0.vm05.stdout:4/448: chown d2/d4/d7/d21/c4b 435 1
2026-03-09T15:01:33.761 INFO:tasks.workunit.client.0.vm05.stdout:1/440: write d9/d2f/d83/d98/d59/f42 [3894449,74341] 0
2026-03-09T15:01:33.763 INFO:tasks.workunit.client.0.vm05.stdout:5/517: dwrite d1/f2a [0,4194304] 0
2026-03-09T15:01:33.772 INFO:tasks.workunit.client.0.vm05.stdout:1/441: unlink d9/l14 0
2026-03-09T15:01:33.772 INFO:tasks.workunit.client.0.vm05.stdout:1/442: fsync d9/d2f/d83/d98/f95 0
2026-03-09T15:01:33.773 INFO:tasks.workunit.client.0.vm05.stdout:5/518: creat d1/d4/d34/d6c/faf x:0 0 0
2026-03-09T15:01:33.774 INFO:tasks.workunit.client.0.vm05.stdout:5/519: write d1/da/fe [5772158,106374] 0
2026-03-09T15:01:33.778 INFO:tasks.workunit.client.0.vm05.stdout:5/520: rename d1/d4/d34/f5c to d1/d5d/d7f/d91/fb0 0
2026-03-09T15:01:33.784 INFO:tasks.workunit.client.0.vm05.stdout:5/521: dread d1/f6 [0,4194304] 0
2026-03-09T15:01:33.787 INFO:tasks.workunit.client.0.vm05.stdout:5/522: unlink d1/d4/d34/d35/d4e/d6f/d7e/f9e 0
2026-03-09T15:01:33.789 INFO:tasks.workunit.client.0.vm05.stdout:5/523: creat d1/d5d/d7f/d91/fb1 x:0 0 0
2026-03-09T15:01:33.790 INFO:tasks.workunit.client.0.vm05.stdout:8/495: sync
2026-03-09T15:01:33.795 INFO:tasks.workunit.client.0.vm05.stdout:2/436: dwrite da/d29/f2d [0,4194304] 0
2026-03-09T15:01:33.795 INFO:tasks.workunit.client.0.vm05.stdout:3/491: write d3/df/f1b [7092161,99426] 0
2026-03-09T15:01:33.796 INFO:tasks.workunit.client.0.vm05.stdout:6/441: write da/d17/f20 [1589028,80712] 0
2026-03-09T15:01:33.798 INFO:tasks.workunit.client.0.vm05.stdout:6/442: fdatasync da/d43/f68 0
2026-03-09T15:01:33.798 INFO:tasks.workunit.client.0.vm05.stdout:7/461: sync
2026-03-09T15:01:33.801 INFO:tasks.workunit.client.0.vm05.stdout:9/505: sync
2026-03-09T15:01:33.801 INFO:tasks.workunit.client.0.vm05.stdout:1/443: sync
2026-03-09T15:01:33.801 INFO:tasks.workunit.client.0.vm05.stdout:2/437: sync
2026-03-09T15:01:33.825 INFO:tasks.workunit.client.0.vm05.stdout:6/443: creat da/d19/f7e x:0 0 0
2026-03-09T15:01:33.828 INFO:tasks.workunit.client.0.vm05.stdout:1/444: rmdir d9/d2f/d83/d98 39
2026-03-09T15:01:33.830 INFO:tasks.workunit.client.0.vm05.stdout:9/506: mkdir d2/d4e/d56/d37/d9c/db2 0
2026-03-09T15:01:33.831 INFO:tasks.workunit.client.0.vm05.stdout:9/507: read d2/d10/d22/d2c/f44 [1376016,65763] 0
2026-03-09T15:01:33.833 INFO:tasks.workunit.client.0.vm05.stdout:5/524: symlink d1/d4/d34/d35/d3d/d38/lb2 0
2026-03-09T15:01:33.836 INFO:tasks.workunit.client.0.vm05.stdout:3/492: unlink d3/df/d10/c67 0
2026-03-09T15:01:33.837 INFO:tasks.workunit.client.0.vm05.stdout:3/493: write d3/d29/d2d/d77/f35 [2004340,75033] 0
2026-03-09T15:01:33.838 INFO:tasks.workunit.client.0.vm05.stdout:3/494: write d3/d29/d2d/f33 [1101131,7277] 0
2026-03-09T15:01:33.840 INFO:tasks.workunit.client.0.vm05.stdout:0/398: write d9/f2b [902491,18134] 0
2026-03-09T15:01:33.844 INFO:tasks.workunit.client.0.vm05.stdout:1/445: read d9/d2f/d83/d98/f67 [2086843,54524] 0
2026-03-09T15:01:33.846 INFO:tasks.workunit.client.0.vm05.stdout:2/438: unlink da/d29/d45/l4c 0
2026-03-09T15:01:33.849 INFO:tasks.workunit.client.0.vm05.stdout:4/449: dwrite d2/d4/d8/f13 [0,4194304] 0
2026-03-09T15:01:33.851 INFO:tasks.workunit.client.0.vm05.stdout:5/525: symlink d1/d4/d34/d35/d3d/d38/d69/lb3 0
2026-03-09T15:01:33.852 INFO:tasks.workunit.client.0.vm05.stdout:7/462: dread d1/d9/d23/d31/d51/f29 [0,4194304] 0
2026-03-09T15:01:33.860 INFO:tasks.workunit.client.0.vm05.stdout:5/526: dread d1/d4/d34/d35/d3d/f32 [0,4194304] 0
2026-03-09T15:01:33.860 INFO:tasks.workunit.client.0.vm05.stdout:3/495: mknod d3/df/d10/d34/cb1 0
2026-03-09T15:01:33.863 INFO:tasks.workunit.client.0.vm05.stdout:0/399: creat d9/de/d12/d15/d2e/d32/d53/f80 x:0 0 0
2026-03-09T15:01:33.863 INFO:tasks.workunit.client.0.vm05.stdout:0/400: fsync d9/f58 0
2026-03-09T15:01:33.866 INFO:tasks.workunit.client.0.vm05.stdout:1/446: symlink d9/d2f/d83/d98/l9f 0
2026-03-09T15:01:33.868 INFO:tasks.workunit.client.0.vm05.stdout:1/447: truncate d9/d2f/d83/d98/d59/d49/d48/f63 1757310 0
2026-03-09T15:01:33.870 INFO:tasks.workunit.client.0.vm05.stdout:9/508: mknod d2/d70/db1/cb3 0
2026-03-09T15:01:33.875 INFO:tasks.workunit.client.0.vm05.stdout:1/448: dwrite d9/d17/f79 [0,4194304] 0
2026-03-09T15:01:33.890 INFO:tasks.workunit.client.0.vm05.stdout:9/509: rmdir d2/d10/d22/d47/d95 39
2026-03-09T15:01:33.890 INFO:tasks.workunit.client.0.vm05.stdout:9/510: read - d2/d10/d22/d2c/d69/f4f zero size
2026-03-09T15:01:33.895 INFO:tasks.workunit.client.0.vm05.stdout:3/496: mkdir d3/df/d10/d19/db2 0
2026-03-09T15:01:33.899 INFO:tasks.workunit.client.0.vm05.stdout:1/449: rmdir d9/d2f/d83/d98/d59/d49/d78/d7e 39
2026-03-09T15:01:33.899 INFO:tasks.workunit.client.0.vm05.stdout:4/450: creat d2/d4/d7/f90 x:0 0 0
2026-03-09T15:01:33.899 INFO:tasks.workunit.client.0.vm05.stdout:1/450: stat d9/d2f/d83/d98/d59/d49/d92/d75 0
2026-03-09T15:01:33.900 INFO:tasks.workunit.client.0.vm05.stdout:4/451: chown d2/d4/d7/dc/d2b 55011196 1
2026-03-09T15:01:33.900 INFO:tasks.workunit.client.0.vm05.stdout:7/463: creat d1/d9/f8b x:0 0 0
2026-03-09T15:01:33.901 INFO:tasks.workunit.client.0.vm05.stdout:9/511: symlink d2/d4e/d56/d53/d64/lb4 0
2026-03-09T15:01:33.902 INFO:tasks.workunit.client.0.vm05.stdout:1/451: truncate d9/d2f/d83/d98/f8d 719667 0
2026-03-09T15:01:33.902 INFO:tasks.workunit.client.0.vm05.stdout:1/452: readlink d9/d2f/d83/d98/l6c 0
2026-03-09T15:01:33.905 INFO:tasks.workunit.client.0.vm05.stdout:7/464: dread d1/d9/f52 [0,4194304] 0
2026-03-09T15:01:33.906 INFO:tasks.workunit.client.0.vm05.stdout:7/465: write d1/d9/f8b [804661,48423] 0
2026-03-09T15:01:33.911 INFO:tasks.workunit.client.0.vm05.stdout:1/453: creat d9/d2f/d37/d5a/fa0 x:0 0 0
2026-03-09T15:01:33.912 INFO:tasks.workunit.client.0.vm05.stdout:7/466: read - d1/d9/d23/d31/d32/f5b zero size
2026-03-09T15:01:33.912 INFO:tasks.workunit.client.0.vm05.stdout:9/512: symlink d2/d9e/lb5 0
2026-03-09T15:01:33.912 INFO:tasks.workunit.client.0.vm05.stdout:7/467: readlink d1/d9/l46 0
2026-03-09T15:01:33.914 INFO:tasks.workunit.client.0.vm05.stdout:9/513: creat d2/d10/d22/fb6 x:0 0 0
2026-03-09T15:01:33.916 INFO:tasks.workunit.client.0.vm05.stdout:1/454: rename d9/d17/l27 to d9/d2f/d83/d98/d59/d49/d48/la1 0
2026-03-09T15:01:33.917 INFO:tasks.workunit.client.0.vm05.stdout:1/455: write d9/f7f [937480,48589] 0
2026-03-09T15:01:33.917 INFO:tasks.workunit.client.0.vm05.stdout:9/514: symlink d2/d4e/d56/d37/d9c/d8e/lb7 0
2026-03-09T15:01:33.919 INFO:tasks.workunit.client.0.vm05.stdout:1/456: rmdir d9/d2f/d55 39
2026-03-09T15:01:33.920 INFO:tasks.workunit.client.0.vm05.stdout:9/515: creat d2/d70/db1/fb8 x:0 0 0
2026-03-09T15:01:33.920 INFO:tasks.workunit.client.0.vm05.stdout:7/468: dread d1/d9/d23/d31/d32/d78/f88 [0,4194304] 0
2026-03-09T15:01:33.922 INFO:tasks.workunit.client.0.vm05.stdout:9/516: dread - d2/d10/d22/d47/f63 zero size
2026-03-09T15:01:33.928 INFO:tasks.workunit.client.0.vm05.stdout:7/469: fdatasync d1/d9/d23/d31/d32/f63 0
2026-03-09T15:01:33.929 INFO:tasks.workunit.client.0.vm05.stdout:9/517: dwrite d2/d10/d22/d2c/d69/f67 [0,4194304] 0
2026-03-09T15:01:33.948 INFO:tasks.workunit.client.0.vm05.stdout:8/496: write d0/d1/d12/d1b/d95/d54/f5b [3466764,56805] 0
2026-03-09T15:01:33.948 INFO:tasks.workunit.client.0.vm05.stdout:8/497: stat d0/d1/d12/d1b/l28 0
2026-03-09T15:01:33.951 INFO:tasks.workunit.client.0.vm05.stdout:8/498: mknod d0/d1/d12/d1b/d95/d42/d60/cab 0
2026-03-09T15:01:33.951 INFO:tasks.workunit.client.0.vm05.stdout:8/499: dread - d0/d1/d12/d3c/f99 zero size
2026-03-09T15:01:33.952 INFO:tasks.workunit.client.0.vm05.stdout:8/500: fdatasync d0/d1/d12/d3c/f99 0
2026-03-09T15:01:33.969 INFO:tasks.workunit.client.0.vm05.stdout:6/444: write da/f1a [304923,94922] 0
2026-03-09T15:01:33.978 INFO:tasks.workunit.client.0.vm05.stdout:2/439: dwrite da/f10 [0,4194304] 0
2026-03-09T15:01:33.981 INFO:tasks.workunit.client.0.vm05.stdout:2/440: read da/d16/f1e [824862,89840] 0
2026-03-09T15:01:33.984 INFO:tasks.workunit.client.0.vm05.stdout:2/441: getdents da/d29/d3f 0
2026-03-09T15:01:33.985 INFO:tasks.workunit.client.0.vm05.stdout:2/442: truncate da/d29/f7d 852256 0
2026-03-09T15:01:33.996 INFO:tasks.workunit.client.0.vm05.stdout:5/527: dwrite d1/f30 [0,4194304] 0
2026-03-09T15:01:34.015 INFO:tasks.workunit.client.0.vm05.stdout:2/443: sync
2026-03-09T15:01:34.015 INFO:tasks.workunit.client.0.vm05.stdout:5/528: sync
2026-03-09T15:01:34.016 INFO:tasks.workunit.client.0.vm05.stdout:5/529: readlink d1/l78 0
2026-03-09T15:01:34.016 INFO:tasks.workunit.client.0.vm05.stdout:5/530: stat d1/d4/d34/d56/da6 0
2026-03-09T15:01:34.017 INFO:tasks.workunit.client.0.vm05.stdout:5/531: fsync d1/d4/d34/d35/f44 0
2026-03-09T15:01:34.019 INFO:tasks.workunit.client.0.vm05.stdout:2/444: creat da/d16/d46/f85 x:0 0 0
2026-03-09T15:01:34.022 INFO:tasks.workunit.client.0.vm05.stdout:2/445: dwrite da/dd/f5d [0,4194304] 0
2026-03-09T15:01:34.023 INFO:tasks.workunit.client.0.vm05.stdout:2/446: fdatasync da/d16/f6b 0
2026-03-09T15:01:34.024 INFO:tasks.workunit.client.0.vm05.stdout:2/447: dread da/d16/d46/f73 [0,4194304] 0
2026-03-09T15:01:34.027 INFO:tasks.workunit.client.0.vm05.stdout:2/448: mknod da/d13/d2f/d35/c86 0
2026-03-09T15:01:34.031 INFO:tasks.workunit.client.0.vm05.stdout:5/532: rename d1/l17 to d1/d4/d34/lb4 0
2026-03-09T15:01:34.041 INFO:tasks.workunit.client.0.vm05.stdout:5/533: dread - d1/d4/d27/d5b/f7d zero size
2026-03-09T15:01:34.046 INFO:tasks.workunit.client.0.vm05.stdout:2/449: getdents da/d13/d30/d7c 0
2026-03-09T15:01:34.051 INFO:tasks.workunit.client.0.vm05.stdout:2/450: unlink da/d13/f75 0
2026-03-09T15:01:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:33 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T15:01:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:33 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T15:01:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:33 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:01:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:33 vm05.local ceph-mon[50611]: pgmap v173: 65 pgs: 65 active+clean; 2.1 GiB data, 7.3 GiB used, 113 GiB / 120 GiB avail; 21 MiB/s rd, 91 MiB/s wr, 191 op/s
2026-03-09T15:01:34.059 INFO:tasks.workunit.client.0.vm05.stdout:5/534: rename d1/d4/d34/d35/d3d/d38/d63/d8c to d1/db5 0
2026-03-09T15:01:34.059 INFO:tasks.workunit.client.0.vm05.stdout:2/451: mknod da/d29/d64/c87 0
2026-03-09T15:01:34.059 INFO:tasks.workunit.client.0.vm05.stdout:2/452: chown da/d16/l4d 8596 1
2026-03-09T15:01:34.067 INFO:tasks.workunit.client.0.vm05.stdout:5/535: sync
2026-03-09T15:01:34.072 INFO:tasks.workunit.client.0.vm05.stdout:5/536: truncate d1/d4/d34/d35/f52 2185820 0
2026-03-09T15:01:34.075 INFO:tasks.workunit.client.0.vm05.stdout:5/537: mknod d1/d4/d34/d35/d3d/d38/d63/cb6 0
2026-03-09T15:01:34.075 INFO:tasks.workunit.client.0.vm05.stdout:5/538: stat d1/d4/d34/d35/d4e/f8d 0
2026-03-09T15:01:34.078 INFO:tasks.workunit.client.0.vm05.stdout:5/539: creat d1/da/fb7 x:0 0 0
2026-03-09T15:01:34.079 INFO:tasks.workunit.client.0.vm05.stdout:5/540: chown d1/d4/d27/l2c 766 1
2026-03-09T15:01:34.081 INFO:tasks.workunit.client.0.vm05.stdout:2/453: getdents da/d13/d30 0
2026-03-09T15:01:34.081 INFO:tasks.workunit.client.0.vm05.stdout:2/454: dread - da/d29/d6a/f81 zero size
2026-03-09T15:01:34.083 INFO:tasks.workunit.client.0.vm05.stdout:5/541: rmdir d1/d4/d27 39
2026-03-09T15:01:34.088 INFO:tasks.workunit.client.0.vm05.stdout:5/542: symlink d1/d4/d34/d35/d3d/d38/d63/lb8 0
2026-03-09T15:01:34.089 INFO:tasks.workunit.client.0.vm05.stdout:2/455: truncate da/d29/f39 1101198 0
2026-03-09T15:01:34.091 INFO:tasks.workunit.client.0.vm05.stdout:5/543: unlink d1/d5d/f9b 0
2026-03-09T15:01:34.093 INFO:tasks.workunit.client.0.vm05.stdout:2/456: symlink da/d29/d64/l88 0
2026-03-09T15:01:34.095 INFO:tasks.workunit.client.0.vm05.stdout:5/544: chown d1/d4/d34/d35/d4e/d6f/l94 195312835 1
2026-03-09T15:01:34.098 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:33 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T15:01:34.098 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:33 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd'
2026-03-09T15:01:34.098 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:33 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:01:34.098 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:33 vm09.local ceph-mon[59673]: pgmap v173: 65 pgs: 65 active+clean; 2.1 GiB data, 7.3 GiB used, 113 GiB / 120 GiB avail; 21 MiB/s rd, 91 MiB/s wr, 191 op/s
2026-03-09T15:01:34.098 INFO:tasks.workunit.client.0.vm05.stdout:2/457: dread da/d13/d2f/d35/f57 [0,4194304] 0
2026-03-09T15:01:34.098 INFO:tasks.workunit.client.0.vm05.stdout:2/458: write da/d16/f6e [446007,87367] 0
2026-03-09T15:01:34.099 INFO:tasks.workunit.client.0.vm05.stdout:2/459: stat da/d13/d2f/d35/c86 0
2026-03-09T15:01:34.106 INFO:tasks.workunit.client.0.vm05.stdout:2/460: creat da/dd/d4a/f89 x:0 0 0
2026-03-09T15:01:34.108 INFO:tasks.workunit.client.0.vm05.stdout:2/461: read da/d16/f1e [1104751,90358] 0
2026-03-09T15:01:34.110 INFO:tasks.workunit.client.0.vm05.stdout:2/462: getdents da/d29/d3f 0
2026-03-09T15:01:34.117 INFO:tasks.workunit.client.0.vm05.stdout:2/463: mkdir da/d13/d2f/d35/d8a 0
2026-03-09T15:01:34.125 INFO:tasks.workunit.client.0.vm05.stdout:2/464: rename da/dd/c24 to da/d13/d2f/c8b 0
2026-03-09T15:01:34.129 INFO:tasks.workunit.client.0.vm05.stdout:2/465: dwrite da/d29/d6a/f71 [0,4194304] 0
2026-03-09T15:01:34.130 INFO:tasks.workunit.client.0.vm05.stdout:2/466: readlink da/d13/d30/l65 0
2026-03-09T15:01:34.139 INFO:tasks.workunit.client.0.vm05.stdout:2/467: creat da/d13/f8c x:0 0 0
2026-03-09T15:01:34.145 INFO:tasks.workunit.client.0.vm05.stdout:0/401: dwrite d9/de/d12/d15/d2e/f3a [0,4194304] 0
2026-03-09T15:01:34.151 INFO:tasks.workunit.client.0.vm05.stdout:2/468: mknod da/d29/d3f/c8d 0
2026-03-09T15:01:34.157 INFO:tasks.workunit.client.0.vm05.stdout:3/497: truncate d3/d29/d2d/f31 3613618 0
2026-03-09T15:01:34.158 INFO:tasks.workunit.client.0.vm05.stdout:3/498: write d3/df/d1e/d2f/f9a [298240,125169] 0
2026-03-09T15:01:34.159 INFO:tasks.workunit.client.0.vm05.stdout:3/499: readlink d3/df/d10/d34/l73 0
2026-03-09T15:01:34.160 INFO:tasks.workunit.client.0.vm05.stdout:4/452: dwrite d2/d4/d7/dc/f62 [0,4194304] 0
2026-03-09T15:01:34.161 INFO:tasks.workunit.client.0.vm05.stdout:4/453: fsync d2/d4/d7/f7b 0
2026-03-09T15:01:34.170 INFO:tasks.workunit.client.0.vm05.stdout:2/469: symlink da/d29/d45/l8e 0
2026-03-09T15:01:34.170 INFO:tasks.workunit.client.0.vm05.stdout:2/470: chown da/d13/d2f/f42 3741738 1
2026-03-09T15:01:34.171 INFO:tasks.workunit.client.0.vm05.stdout:0/402: dwrite d9/de/f1e [0,4194304] 0
2026-03-09T15:01:34.173 INFO:tasks.workunit.client.0.vm05.stdout:1/457: dwrite d9/d2f/d37/d5a/f5b [0,4194304] 0
2026-03-09T15:01:34.181 INFO:tasks.workunit.client.0.vm05.stdout:9/518: dwrite d2/f13 [0,4194304] 0
2026-03-09T15:01:34.185 INFO:tasks.workunit.client.0.vm05.stdout:9/519: dread d2/f12 [4194304,4194304] 0
2026-03-09T15:01:34.203 INFO:tasks.workunit.client.0.vm05.stdout:3/500: unlink d3/df/f4b 0
2026-03-09T15:01:34.212 INFO:tasks.workunit.client.0.vm05.stdout:4/454: rmdir d2/d4/d7/dc/d2b 39
2026-03-09T15:01:34.216 INFO:tasks.workunit.client.0.vm05.stdout:2/471: rename da/d16/f20 to da/d13/d30/f8f 0
2026-03-09T15:01:34.219 INFO:tasks.workunit.client.0.vm05.stdout:1/458: mkdir d9/d2f/d37/d5f/da2 0
2026-03-09T15:01:34.219 INFO:tasks.workunit.client.0.vm05.stdout:9/520: truncate d2/f17 3962430 0
2026-03-09T15:01:34.223 INFO:tasks.workunit.client.0.vm05.stdout:2/472: symlink da/d13/d30/d7c/l90 0
2026-03-09T15:01:34.228 INFO:tasks.workunit.client.0.vm05.stdout:9/521: unlink d2/d10/f28 0
2026-03-09T15:01:34.228 INFO:tasks.workunit.client.0.vm05.stdout:9/522: read - d2/d10/d8c/fa8 zero size
2026-03-09T15:01:34.228 INFO:tasks.workunit.client.0.vm05.stdout:1/459: stat d9/d2f/d55/f64 0
2026-03-09T15:01:34.230 INFO:tasks.workunit.client.0.vm05.stdout:4/455: fdatasync d2/d4/d7/f53 0
2026-03-09T15:01:34.242 INFO:tasks.workunit.client.0.vm05.stdout:0/403: getdents d9/de/d6a 0
2026-03-09T15:01:34.250 INFO:tasks.workunit.client.0.vm05.stdout:3/501: getdents d3/df/d10/d34/d8c 0
2026-03-09T15:01:34.250 INFO:tasks.workunit.client.0.vm05.stdout:0/404: stat d9/de/l72 0
2026-03-09T15:01:34.250 INFO:tasks.workunit.client.0.vm05.stdout:0/405: fdatasync d9/de/d12/d15/d2e/f73 0
2026-03-09T15:01:34.250 INFO:tasks.workunit.client.0.vm05.stdout:3/502: write d3/d29/d7f/f83 [567809,9784] 0
2026-03-09T15:01:34.250 INFO:tasks.workunit.client.0.vm05.stdout:3/503: write d3/df/d10/d19/d44/f7e [295753,36185] 0
2026-03-09T15:01:34.251 INFO:tasks.workunit.client.0.vm05.stdout:7/470: dwrite d1/d9/d23/f5a [0,4194304] 0
2026-03-09T15:01:34.252 INFO:tasks.workunit.client.0.vm05.stdout:7/471: fsync d1/d9/d23/d31/d32/f3a 0
2026-03-09T15:01:34.253 INFO:tasks.workunit.client.0.vm05.stdout:1/460: sync
2026-03-09T15:01:34.253 INFO:tasks.workunit.client.0.vm05.stdout:4/456: sync
2026-03-09T15:01:34.260 INFO:tasks.workunit.client.0.vm05.stdout:8/501: dwrite d0/d1/d55/f7b [0,4194304] 0
2026-03-09T15:01:34.264 INFO:tasks.workunit.client.0.vm05.stdout:6/445: write da/f14 [3099581,83924] 0
2026-03-09T15:01:34.268 INFO:tasks.workunit.client.0.vm05.stdout:6/446: dwrite da/d19/f52 [0,4194304] 0
2026-03-09T15:01:34.275 INFO:tasks.workunit.client.0.vm05.stdout:3/504: stat d3/df/d10/f28 0
2026-03-09T15:01:34.275 INFO:tasks.workunit.client.0.vm05.stdout:3/505: dread - d3/df/d10/fae zero size
2026-03-09T15:01:34.281 INFO:tasks.workunit.client.0.vm05.stdout:9/523: link d2/d4e/d56/d37/d9c/f79 d2/d70/db1/fb9 0
2026-03-09T15:01:34.281 INFO:tasks.workunit.client.0.vm05.stdout:9/524: chown d2/d10/d22/d2c/d69/f86 810236 1
2026-03-09T15:01:34.286 INFO:tasks.workunit.client.0.vm05.stdout:1/461: dread d9/d2f/d83/d98/f50 [0,4194304] 0
2026-03-09T15:01:34.287 INFO:tasks.workunit.client.0.vm05.stdout:1/462: fsync d9/d2f/d55/f64 0
2026-03-09T15:01:34.296 INFO:tasks.workunit.client.0.vm05.stdout:7/472: rename d1/d12/f73 to d1/d12/f8c 0
2026-03-09T15:01:34.297 INFO:tasks.workunit.client.0.vm05.stdout:7/473: write d1/d9/d23/d31/d32/f58 [259523,60657] 0
2026-03-09T15:01:34.299 INFO:tasks.workunit.client.0.vm05.stdout:0/406: mknod d9/de/d12/d15/d2e/c81 0
2026-03-09T15:01:34.300 INFO:tasks.workunit.client.0.vm05.stdout:0/407: chown d9/de/d12/d15/d2e/d32/d53/f5f 998 1
2026-03-09T15:01:34.303 INFO:tasks.workunit.client.0.vm05.stdout:8/502: fsync d0/d1/d12/d1b/d95/f5f 0
2026-03-09T15:01:34.317 INFO:tasks.workunit.client.0.vm05.stdout:4/457: mknod d2/c91 0
2026-03-09T15:01:34.318 INFO:tasks.workunit.client.0.vm05.stdout:7/474: creat d1/d22/f8d x:0 0 0
2026-03-09T15:01:34.319 INFO:tasks.workunit.client.0.vm05.stdout:7/475: chown d1/d9/l48 425566461 1
2026-03-09T15:01:34.322 INFO:tasks.workunit.client.0.vm05.stdout:4/458: dwrite d2/d4/d7/d21/f34 [0,4194304] 0
2026-03-09T15:01:34.323 INFO:tasks.workunit.client.0.vm05.stdout:4/459: chown d2/d4/d7/d21/d3d/f65 7 1
2026-03-09T15:01:34.325 INFO:tasks.workunit.client.0.vm05.stdout:9/525: mknod d2/da9/cba 0
2026-03-09T15:01:34.337 INFO:tasks.workunit.client.0.vm05.stdout:4/460: dread d2/d4/d7/f9 [4194304,4194304] 0
2026-03-09T15:01:34.346 INFO:tasks.workunit.client.0.vm05.stdout:1/463: creat d9/d2f/d83/fa3 x:0 0 0
2026-03-09T15:01:34.347 INFO:tasks.workunit.client.0.vm05.stdout:1/464: chown d9/d17/l53 41077295 1
2026-03-09T15:01:34.349 INFO:tasks.workunit.client.0.vm05.stdout:4/461: dread d2/d1d/f36 [0,4194304] 0
2026-03-09T15:01:34.352 INFO:tasks.workunit.client.0.vm05.stdout:4/462: dread d2/d1d/f36 [0,4194304] 0
2026-03-09T15:01:34.353 INFO:tasks.workunit.client.0.vm05.stdout:4/463: write d2/d43/f4f [4254675,93637] 0
2026-03-09T15:01:34.354 INFO:tasks.workunit.client.0.vm05.stdout:7/476: creat d1/d49/d4a/d77/d79/f8e x:0 0 0
2026-03-09T15:01:34.357 INFO:tasks.workunit.client.0.vm05.stdout:0/408: link d9/de/f5d d9/f82 0
2026-03-09T15:01:34.359 INFO:tasks.workunit.client.0.vm05.stdout:9/526: creat d2/d8b/dae/fbb x:0 0 0
2026-03-09T15:01:34.362 INFO:tasks.workunit.client.0.vm05.stdout:0/409: dwrite d9/de/f3e [4194304,4194304] 0
2026-03-09T15:01:34.367 INFO:tasks.workunit.client.0.vm05.stdout:7/477: mkdir d1/d9/d23/d31/d8f 0
2026-03-09T15:01:34.367 INFO:tasks.workunit.client.0.vm05.stdout:9/527: creat d2/d10/d22/d2c/d3c/fbc x:0 0 0
2026-03-09T15:01:34.369 INFO:tasks.workunit.client.0.vm05.stdout:4/464: mkdir d2/d1d/d88/d92 0
2026-03-09T15:01:34.372 INFO:tasks.workunit.client.0.vm05.stdout:1/465: dread d9/d2f/f58 [0,4194304] 0
2026-03-09T15:01:34.373 INFO:tasks.workunit.client.0.vm05.stdout:0/410: dwrite d9/de/d25/d38/d41/f71 [0,4194304] 0
2026-03-09T15:01:34.379 INFO:tasks.workunit.client.0.vm05.stdout:0/411: truncate d9/de/f5d 1502149 0
2026-03-09T15:01:34.381 INFO:tasks.workunit.client.0.vm05.stdout:9/528: dwrite d2/d10/d22/d2c/d3c/f55 [0,4194304] 0
2026-03-09T15:01:34.391 INFO:tasks.workunit.client.0.vm05.stdout:7/478: rename d1/d22/f8d to d1/d9/d23/d31/d32/f90 0
2026-03-09T15:01:34.395 INFO:tasks.workunit.client.0.vm05.stdout:4/465: creat d2/d4/d8/d4a/d6e/f93 x:0 0 0
2026-03-09T15:01:34.397 INFO:tasks.workunit.client.0.vm05.stdout:5/545: write d1/d4/d34/d35/d3d/d38/f6e [4740221,2080] 0
2026-03-09T15:01:34.416 INFO:tasks.workunit.client.0.vm05.stdout:5/546: creat d1/da/fb9 x:0 0 0
2026-03-09T15:01:34.423 INFO:tasks.workunit.client.0.vm05.stdout:7/479: symlink d1/d49/l91 0
2026-03-09T15:01:34.427 INFO:tasks.workunit.client.0.vm05.stdout:4/466: getdents d2/d4/d8/d4a/d8f 0
2026-03-09T15:01:34.433 INFO:tasks.workunit.client.0.vm05.stdout:4/467: mkdir d2/d4/d8/d4a/d94 0
2026-03-09T15:01:34.434 INFO:tasks.workunit.client.0.vm05.stdout:1/466: getdents d9/d2f/d83/d98/d59/d49/d78 0
2026-03-09T15:01:34.436 INFO:tasks.workunit.client.0.vm05.stdout:4/468: rmdir d2/d7a 39
2026-03-09T15:01:34.439 INFO:tasks.workunit.client.0.vm05.stdout:4/469: readlink d2/d4/d7/d21/d3d/l8c 0
2026-03-09T15:01:34.442 INFO:tasks.workunit.client.0.vm05.stdout:2/473: write da/d16/f72 [432937,82364] 0
2026-03-09T15:01:34.442 INFO:tasks.workunit.client.0.vm05.stdout:4/470: getdents d2/d4/d50 0
2026-03-09T15:01:34.444 INFO:tasks.workunit.client.0.vm05.stdout:2/474: dread da/f4e [0,4194304] 0
2026-03-09T15:01:34.444 INFO:tasks.workunit.client.0.vm05.stdout:2/475: chown da/d16/d46/f73 11266712 1
2026-03-09T15:01:34.445 INFO:tasks.workunit.client.0.vm05.stdout:7/480: sync
2026-03-09T15:01:34.445 INFO:tasks.workunit.client.0.vm05.stdout:1/467: sync
2026-03-09T15:01:34.447 INFO:tasks.workunit.client.0.vm05.stdout:4/471: rename d2/d4/d7/d21/c37 to d2/d4/d7/d21/d3d/c95 0
2026-03-09T15:01:34.447 INFO:tasks.workunit.client.0.vm05.stdout:4/472: rename d2/d4/d8/d4a/d6e to d2/d4/d8/d4a/d6e/d96 22
2026-03-09T15:01:34.449 INFO:tasks.workunit.client.0.vm05.stdout:2/476: mkdir da/d13/d30/d91 0
2026-03-09T15:01:34.451 INFO:tasks.workunit.client.0.vm05.stdout:1/468: creat d9/d2f/d83/d98/fa4 x:0 0 0
2026-03-09T15:01:34.467 INFO:tasks.workunit.client.0.vm05.stdout:2/477: creat da/d16/d46/f92 x:0 0 0
2026-03-09T15:01:34.468 INFO:tasks.workunit.client.0.vm05.stdout:3/506: dwrite d3/df/d10/d34/d8c/f6d [0,4194304] 0
2026-03-09T15:01:34.469 INFO:tasks.workunit.client.0.vm05.stdout:3/507: truncate d3/df/d10/d34/f5f 4354813 0
2026-03-09T15:01:34.471 INFO:tasks.workunit.client.0.vm05.stdout:4/473: mkdir d2/d4/d7/dc/d2b/d97 0
2026-03-09T15:01:34.472 INFO:tasks.workunit.client.0.vm05.stdout:6/447: truncate da/d17/f69 1572231 0
2026-03-09T15:01:34.477 INFO:tasks.workunit.client.0.vm05.stdout:0/412: getdents d9 0
2026-03-09T15:01:34.479 INFO:tasks.workunit.client.0.vm05.stdout:8/503: dwrite d0/d7/f8 [4194304,4194304] 0
2026-03-09T15:01:34.486 INFO:tasks.workunit.client.0.vm05.stdout:8/504: dwrite d0/d1/d12/d1b/d95/d42/d60/d73/f74 [0,4194304] 0
2026-03-09T15:01:34.511 INFO:tasks.workunit.client.0.vm05.stdout:5/547: getdents d1/da 0
2026-03-09T15:01:34.511 INFO:tasks.workunit.client.0.vm05.stdout:5/548: chown d1/d4/d34/d56 2914 1
2026-03-09T15:01:34.521 INFO:tasks.workunit.client.0.vm05.stdout:5/549: dread d1/d4/d34/d35/f44 [4194304,4194304] 0
2026-03-09T15:01:34.536 INFO:tasks.workunit.client.0.vm05.stdout:9/529: dwrite d2/f17 [0,4194304] 0
2026-03-09T15:01:34.543 INFO:tasks.workunit.client.0.vm05.stdout:8/505: mkdir d0/d1/d12/d1b/d95/d42/d60/d73/dac 0
2026-03-09T15:01:34.544 INFO:tasks.workunit.client.0.vm05.stdout:8/506: write d0/d1/d55/f7b [2762185,83912] 0
2026-03-09T15:01:34.544 INFO:tasks.workunit.client.0.vm05.stdout:8/507: fdatasync d0/d1/d12/d1b/d95/d54/f64 0
2026-03-09T15:01:34.546 INFO:tasks.workunit.client.0.vm05.stdout:3/508: dread d3/df/d10/d19/f26 [0,4194304] 0
2026-03-09T15:01:34.547 INFO:tasks.workunit.client.0.vm05.stdout:3/509: chown d3/d29/d2d/d77/d4d/f80 1 1
2026-03-09T15:01:34.560 INFO:tasks.workunit.client.0.vm05.stdout:1/469: link d9/d2f/d83/d98/d59/d49/d4b/c60 d9/d2f/d83/d98/d59/d49/d78/d94/ca5 0
2026-03-09T15:01:34.561 INFO:tasks.workunit.client.0.vm05.stdout:1/470: dread - d9/d2f/d83/d98/d59/d49/f82 zero size
2026-03-09T15:01:34.563 INFO:tasks.workunit.client.0.vm05.stdout:6/448: mknod da/d43/d7b/c7f 0
2026-03-09T15:01:34.564 INFO:tasks.workunit.client.0.vm05.stdout:6/449: fsync da/d17/f42 0
2026-03-09T15:01:34.576 INFO:tasks.workunit.client.0.vm05.stdout:9/530: rename d2/d10/d22/d2c/d69/f43 to d2/d10/d22/d2c/fbd 0
2026-03-09T15:01:34.579 INFO:tasks.workunit.client.0.vm05.stdout:6/450: sync
2026-03-09T15:01:34.580 INFO:tasks.workunit.client.0.vm05.stdout:8/508: unlink d0/d1/d12/d1b/d66/d6f/c70 0
2026-03-09T15:01:34.583 INFO:tasks.workunit.client.0.vm05.stdout:2/478: creat da/d13/d2f/f93 x:0 0 0
2026-03-09T15:01:34.585 INFO:tasks.workunit.client.0.vm05.stdout:5/550: read - d1/d4/d27/f57 zero size
2026-03-09T15:01:34.594 INFO:tasks.workunit.client.0.vm05.stdout:3/510: dread d3/f7 [0,4194304] 0
2026-03-09T15:01:34.594 INFO:tasks.workunit.client.0.vm05.stdout:3/511: write d3/df/d10/d34/d8c/d90/fa6 [668567,46809] 0
2026-03-09T15:01:34.602 INFO:tasks.workunit.client.0.vm05.stdout:7/481: dwrite d1/d9/d23/d31/d51/f39 [0,4194304] 0
2026-03-09T15:01:34.623 INFO:tasks.workunit.client.0.vm05.stdout:9/531: dwrite d2/d10/d22/d47/f7b [0,4194304] 0
2026-03-09T15:01:34.626 INFO:tasks.workunit.client.0.vm05.stdout:8/509: fsync d0/dc/f7e 0
2026-03-09T15:01:34.626 INFO:tasks.workunit.client.0.vm05.stdout:8/510: stat d0/d7 0
2026-03-09T15:01:34.635 INFO:tasks.workunit.client.0.vm05.stdout:7/482: creat d1/d9/d23/d31/d32/d78/f92 x:0 0 0
2026-03-09T15:01:34.651 INFO:tasks.workunit.client.0.vm05.stdout:7/483: rename d1/d49/d4a/d77/d79 to d1/d9/d23/d31/d8f/d93 0
2026-03-09T15:01:34.669 INFO:tasks.workunit.client.0.vm05.stdout:6/451: getdents da/d19 0
2026-03-09T15:01:34.669 INFO:tasks.workunit.client.0.vm05.stdout:6/452: stat da/d43/f46 0
2026-03-09T15:01:34.670 INFO:tasks.workunit.client.0.vm05.stdout:7/484: truncate d1/d12/f18 172364 0
2026-03-09T15:01:34.672 INFO:tasks.workunit.client.0.vm05.stdout:9/532: getdents d2/d10 0
2026-03-09T15:01:34.673 INFO:tasks.workunit.client.0.vm05.stdout:7/485: mkdir d1/d49/d4a/d94 0
2026-03-09T15:01:34.674 INFO:tasks.workunit.client.0.vm05.stdout:7/486: write d1/d9/d23/d54/f60 [354894,87798] 0
2026-03-09T15:01:34.683 INFO:tasks.workunit.client.0.vm05.stdout:6/453: creat da/f80 x:0 0 0
2026-03-09T15:01:34.690 INFO:tasks.workunit.client.0.vm05.stdout:6/454: write da/d17/f61 [874348,75431] 0
2026-03-09T15:01:34.729 INFO:tasks.workunit.client.0.vm05.stdout:9/533: dread d2/d4e/d56/d37/d9c/d8e/f68 [0,4194304] 0
2026-03-09T15:01:34.731 INFO:tasks.workunit.client.0.vm05.stdout:9/534: rename d2/d9e/lb5 to d2/da9/lbe 0
2026-03-09T15:01:34.735 INFO:tasks.workunit.client.0.vm05.stdout:0/413: dwrite d9/f42 [0,4194304] 0
2026-03-09T15:01:34.784 INFO:tasks.workunit.client.0.vm05.stdout:2/479: write da/f4e [1786760,67130] 0
2026-03-09T15:01:34.788 INFO:tasks.workunit.client.0.vm05.stdout:5/551: dwrite d1/d4/d34/d35/d3d/f37 [0,4194304] 0
2026-03-09T15:01:34.793 INFO:tasks.workunit.client.0.vm05.stdout:8/511: dwrite d0/d1/d12/d1b/d95/f41 [0,4194304] 0
2026-03-09T15:01:34.797 INFO:tasks.workunit.client.0.vm05.stdout:8/512: mkdir d0/d1/d12/d1b/d6e/d93/d9f/dad 0
2026-03-09T15:01:34.810 INFO:tasks.workunit.client.0.vm05.stdout:2/480: sync
2026-03-09T15:01:34.810 INFO:tasks.workunit.client.0.vm05.stdout:2/481: write da/f2c [1794142,12001] 0
2026-03-09T15:01:34.812 INFO:tasks.workunit.client.0.vm05.stdout:2/482: mkdir da/d29/d45/d94 0
2026-03-09T15:01:34.814 INFO:tasks.workunit.client.0.vm05.stdout:2/483: symlink da/d16/d46/l95 0
2026-03-09T15:01:34.824 INFO:tasks.workunit.client.0.vm05.stdout:3/512: rmdir d3/df/d59 39
2026-03-09T15:01:34.825 INFO:tasks.workunit.client.0.vm05.stdout:3/513: readlink d3/df/d1e/d2f/l63 0
2026-03-09T15:01:34.826 INFO:tasks.workunit.client.0.vm05.stdout:9/535: rename d2/d10/c8f to d2/d4e/d56/d53/d64/cbf 0
2026-03-09T15:01:34.827 INFO:tasks.workunit.client.0.vm05.stdout:9/536: write d2/d4e/d56/d53/f66 [5712121,16585] 0
2026-03-09T15:01:34.832 INFO:tasks.workunit.client.0.vm05.stdout:0/414: rename d9/de/d12/d15/d2e/d32/d53/f80 to d9/d59/f83 0
2026-03-09T15:01:34.837 INFO:tasks.workunit.client.0.vm05.stdout:9/537: symlink d2/d4e/d56/d53/lc0 0
2026-03-09T15:01:34.840 INFO:tasks.workunit.client.0.vm05.stdout:7/487: truncate d1/d9/fd 1983362 0
2026-03-09T15:01:34.843 INFO:tasks.workunit.client.0.vm05.stdout:7/488: dwrite d1/f6e [0,4194304] 0
2026-03-09T15:01:34.845 INFO:tasks.workunit.client.0.vm05.stdout:7/489: dread - d1/d9/f7d zero size
2026-03-09T15:01:34.847 INFO:tasks.workunit.client.0.vm05.stdout:3/514: dread d3/f1f [0,4194304] 0
2026-03-09T15:01:34.852 INFO:tasks.workunit.client.0.vm05.stdout:6/455: dwrite da/d17/f2a [0,4194304] 0
2026-03-09T15:01:34.854 INFO:tasks.workunit.client.0.vm05.stdout:6/456: read da/d19/f6a [3959363,62088] 0
2026-03-09T15:01:34.866 INFO:tasks.workunit.client.0.vm05.stdout:1/471: unlink d9/f23 0
2026-03-09T15:01:34.868 INFO:tasks.workunit.client.0.vm05.stdout:4/474: creat d2/f98 x:0 0 0
2026-03-09T15:01:34.869 INFO:tasks.workunit.client.0.vm05.stdout:4/475: write d2/d1d/f5c [1677679,110285] 0
2026-03-09T15:01:34.870 INFO:tasks.workunit.client.0.vm05.stdout:4/476: write d2/d4/d7/dc/f62 [4966465,126464] 0
2026-03-09T15:01:34.870 INFO:tasks.workunit.client.0.vm05.stdout:4/477: chown d2/c91 38054 1
2026-03-09T15:01:34.871 INFO:tasks.workunit.client.0.vm05.stdout:0/415: rename d9/de/d12/d15/d2e/f73 to d9/de/d12/f84 0
2026-03-09T15:01:34.878 INFO:tasks.workunit.client.0.vm05.stdout:6/457: unlink da/d17/d3b/c4e 0
2026-03-09T15:01:34.881 INFO:tasks.workunit.client.0.vm05.stdout:1/472: symlink d9/d2f/d83/d98/d59/d49/d77/la6 0
2026-03-09T15:01:34.885 INFO:tasks.workunit.client.0.vm05.stdout:6/458: fsync da/f62 0
2026-03-09T15:01:34.886 INFO:tasks.workunit.client.0.vm05.stdout:1/473: truncate f7 826804 0
2026-03-09T15:01:34.891 INFO:tasks.workunit.client.0.vm05.stdout:6/459: mkdir da/d17/d3b/d81 0
2026-03-09T15:01:34.900 INFO:tasks.workunit.client.0.vm05.stdout:1/474: write d9/d2f/d83/d98/d59/d49/d92/d75/f76 [3892,101089] 0
2026-03-09T15:01:34.901 INFO:tasks.workunit.client.0.vm05.stdout:1/475: write d9/d2f/d83/d98/d59/d49/f69 [1616563,26991] 0
2026-03-09T15:01:34.907 INFO:tasks.workunit.client.0.vm05.stdout:6/460: getdents da/d19 0
2026-03-09T15:01:34.907 INFO:tasks.workunit.client.0.vm05.stdout:6/461: write da/d17/d3b/f4a [3046343,69268] 0
2026-03-09T15:01:34.911 INFO:tasks.workunit.client.0.vm05.stdout:1/476: symlink d9/d2f/la7 0
2026-03-09T15:01:34.911 INFO:tasks.workunit.client.0.vm05.stdout:1/477: dread - d9/d2f/d83/d98/fa4 zero size
2026-03-09T15:01:34.912 INFO:tasks.workunit.client.0.vm05.stdout:1/478: stat d9/l46 0
2026-03-09T15:01:34.921 INFO:tasks.workunit.client.0.vm05.stdout:0/416: dread d9/de/d12/f3c [0,4194304] 0
2026-03-09T15:01:34.927 INFO:tasks.workunit.client.0.vm05.stdout:6/462: truncate da/d17/f69 147654 0
2026-03-09T15:01:34.929 INFO:tasks.workunit.client.0.vm05.stdout:2/484: write da/d29/d3f/f5f [455116,82306] 0
2026-03-09T15:01:34.935 INFO:tasks.workunit.client.0.vm05.stdout:8/513: rmdir d0/d1 39
2026-03-09T15:01:34.945 INFO:tasks.workunit.client.0.vm05.stdout:7/490: write d1/f62 [3526870,80188] 0
2026-03-09T15:01:34.952 INFO:tasks.workunit.client.0.vm05.stdout:8/514: getdents d0/d24/d96 0
2026-03-09T15:01:34.953 INFO:tasks.workunit.client.0.vm05.stdout:3/515: write d3/df/d1e/d2f/d52/f57 [798311,71017] 0
2026-03-09T15:01:34.953 INFO:tasks.workunit.client.0.vm05.stdout:3/516: write d3/df/d1e/d2f/d52/f93 [531771,103857] 0
2026-03-09T15:01:34.955 INFO:tasks.workunit.client.0.vm05.stdout:6/463: creat da/f82 x:0 0 0
2026-03-09T15:01:34.956 INFO:tasks.workunit.client.0.vm05.stdout:9/538: mkdir d2/d10/d22/dc1 0
2026-03-09T15:01:34.957 INFO:tasks.workunit.client.0.vm05.stdout:2/485: rmdir da/d29/d45/d94 0
2026-03-09T15:01:34.957 INFO:tasks.workunit.client.0.vm05.stdout:2/486: stat da/d16/l4d 0
2026-03-09T15:01:34.958 INFO:tasks.workunit.client.0.vm05.stdout:7/491: mkdir d1/d9/d23/d31/d8f/d93/d95 0
2026-03-09T15:01:34.962 INFO:tasks.workunit.client.0.vm05.stdout:3/517: mkdir d3/df/d1e/da9/db3 0
2026-03-09T15:01:34.965 INFO:tasks.workunit.client.0.vm05.stdout:9/539: rename d2/d70 to d2/d10/d22/dc2 0
2026-03-09T15:01:34.966 INFO:tasks.workunit.client.0.vm05.stdout:9/540: chown d2/d4e/d56/d84/c8a 753820 1
2026-03-09T15:01:34.967 INFO:tasks.workunit.client.0.vm05.stdout:2/487: unlink da/d13/f59 0
2026-03-09T15:01:34.971 INFO:tasks.workunit.client.0.vm05.stdout:9/541: dwrite d2/d10/d22/da0/fad [0,4194304] 0
2026-03-09T15:01:34.973 INFO:tasks.workunit.client.0.vm05.stdout:9/542: dread - d2/d10/d22/dc2/db1/fb8 zero size
2026-03-09T15:01:34.974 INFO:tasks.workunit.client.0.vm05.stdout:9/543: chown d2/d10/d22/d2c/d69/f67 933559 1
2026-03-09T15:01:34.981 INFO:tasks.workunit.client.0.vm05.stdout:5/552: dread d1/da/fe [0,4194304] 0
2026-03-09T15:01:34.982 INFO:tasks.workunit.client.0.vm05.stdout:5/553: chown d1/d4/d34/d35/f47 164305 1
2026-03-09T15:01:34.985 INFO:tasks.workunit.client.0.vm05.stdout:8/515: creat d0/d1/d97/fae x:0 0 0
2026-03-09T15:01:34.989 INFO:tasks.workunit.client.0.vm05.stdout:4/478: write d2/d4/d7/dc/f64 [3917290,122913] 0
2026-03-09T15:01:34.991 INFO:tasks.workunit.client.0.vm05.stdout:4/479: truncate d2/d1d/f7d 668493 0
2026-03-09T15:01:34.996 INFO:tasks.workunit.client.0.vm05.stdout:5/554: mkdir d1/d4/d27/d75/d9c/dba 0
2026-03-09T15:01:34.996 INFO:tasks.workunit.client.0.vm05.stdout:8/516: mkdir d0/d1/d55/daf 0
2026-03-09T15:01:34.999 INFO:tasks.workunit.client.0.vm05.stdout:9/544: dread d2/f61 [0,4194304] 0
2026-03-09T15:01:35.005 INFO:tasks.workunit.client.0.vm05.stdout:8/517: dread d0/d1/d12/d1b/d95/d42/d60/f8f [0,4194304] 0
2026-03-09T15:01:35.009 INFO:tasks.workunit.client.0.vm05.stdout:3/518: truncate d3/df/d59/d79/fa8 391974 0
2026-03-09T15:01:35.015 INFO:tasks.workunit.client.0.vm05.stdout:6/464: getdents da/d43 0
2026-03-09T15:01:35.019 INFO:tasks.workunit.client.0.vm05.stdout:2/488: creat da/d13/f96 x:0 0 0
2026-03-09T15:01:35.023 INFO:tasks.workunit.client.0.vm05.stdout:1/479: write d9/d2f/d83/f9e [641999,38612] 0
2026-03-09T15:01:35.023 INFO:tasks.workunit.client.0.vm05.stdout:1/480: write d9/d2f/d83/d98/f95 [610455,112124] 0
2026-03-09T15:01:35.024 INFO:tasks.workunit.client.0.vm05.stdout:1/481: dread - d9/d2f/d83/d98/fa4 zero size
2026-03-09T15:01:35.025 INFO:tasks.workunit.client.0.vm05.stdout:1/482: dread d9/d2f/d83/d98/f8d [0,4194304] 0
2026-03-09T15:01:35.038 INFO:tasks.workunit.client.0.vm05.stdout:0/417: write d9/de/d12/d15/d2e/d32/d53/d61/f62 [1940924,29264] 0
2026-03-09T15:01:35.038 INFO:tasks.workunit.client.0.vm05.stdout:8/518: creat d0/d1/d12/d1b/d95/d4b/fb0 x:0 0 0
2026-03-09T15:01:35.038 INFO:tasks.workunit.client.0.vm05.stdout:9/545: mkdir d2/d10/d22/dc1/dc3 0
2026-03-09T15:01:35.040 INFO:tasks.workunit.client.0.vm05.stdout:9/546: dread d2/f17 [0,4194304] 0
2026-03-09T15:01:35.047 INFO:tasks.workunit.client.0.vm05.stdout:1/483: truncate d9/d17/f81 12106 0
2026-03-09T15:01:35.051 INFO:tasks.workunit.client.0.vm05.stdout:5/555: getdents d1/db5 0
2026-03-09T15:01:35.054 INFO:tasks.workunit.client.0.vm05.stdout:5/556: dread d1/d4/d34/f65 [4194304,4194304] 0
2026-03-09T15:01:35.055 INFO:tasks.workunit.client.0.vm05.stdout:0/418: unlink d9/c30 0
2026-03-09T15:01:35.057 INFO:tasks.workunit.client.0.vm05.stdout:8/519: unlink d0/d1/d55/f7b 0
2026-03-09T15:01:35.059 INFO:tasks.workunit.client.0.vm05.stdout:1/484: symlink d9/d2f/d37/la8 0
2026-03-09T15:01:35.067 INFO:tasks.workunit.client.0.vm05.stdout:3/519: link d3/d29/d2d/d7b/fa3 d3/df/d10/fb4 0
2026-03-09T15:01:35.070 INFO:tasks.workunit.client.0.vm05.stdout:3/520: dwrite d3/df/d1e/da9/fa4 [0,4194304] 0
2026-03-09T15:01:35.078 INFO:tasks.workunit.client.0.vm05.stdout:3/521: dwrite d3/df/d1e/d2f/d52/f87 [0,4194304] 0
2026-03-09T15:01:35.080 INFO:tasks.workunit.client.0.vm05.stdout:7/492: write d1/d12/f11 [640943,69273] 0
2026-03-09T15:01:35.083 INFO:tasks.workunit.client.0.vm05.stdout:2/489: link da/d13/l19 da/d16/l97 0
2026-03-09T15:01:35.083 INFO:tasks.workunit.client.0.vm05.stdout:2/490: readlink da/d13/l32 0
2026-03-09T15:01:35.094 INFO:tasks.workunit.client.0.vm05.stdout:0/419: dwrite d9/de/d12/d15/d2e/d32/f7d [0,4194304] 0
2026-03-09T15:01:35.095 INFO:tasks.workunit.client.0.vm05.stdout:6/465: creat da/d17/f83 x:0 0 0
2026-03-09T15:01:35.100 INFO:tasks.workunit.client.0.vm05.stdout:9/547: mkdir d2/d10/d22/d47/dc4 0
2026-03-09T15:01:35.101 INFO:tasks.workunit.client.0.vm05.stdout:1/485: mkdir d9/d2f/d37/d5a/da9 0
2026-03-09T15:01:35.102 INFO:tasks.workunit.client.0.vm05.stdout:1/486: truncate d9/d2f/d83/d98/fa4 734047 0
2026-03-09T15:01:35.103 INFO:tasks.workunit.client.0.vm05.stdout:1/487: stat d9/d2f/d83/d98/d59/d49/d4b/c74 0
2026-03-09T15:01:35.107 INFO:tasks.workunit.client.0.vm05.stdout:1/488: dwrite d9/d2f/f4f [0,4194304] 0
2026-03-09T15:01:35.113 INFO:tasks.workunit.client.0.vm05.stdout:7/493: mknod d1/d9/d23/d31/d32/d78/d7e/c96 0
2026-03-09T15:01:35.113 INFO:tasks.workunit.client.0.vm05.stdout:2/491: symlink da/dd/d4a/l98 0
2026-03-09T15:01:35.120 INFO:tasks.workunit.client.0.vm05.stdout:6/466: symlink da/d43/d7b/l84 0
2026-03-09T15:01:35.121 INFO:tasks.workunit.client.0.vm05.stdout:1/489: dread d9/d17/f79 [0,4194304] 0
2026-03-09T15:01:35.129 INFO:tasks.workunit.client.0.vm05.stdout:6/467: creat da/d17/d3b/f85 x:0 0 0
2026-03-09T15:01:35.131 INFO:tasks.workunit.client.0.vm05.stdout:0/420: mkdir d9/de/d12/d15/d2e/d32/d53/d6e/d85 0
2026-03-09T15:01:35.131 INFO:tasks.workunit.client.0.vm05.stdout:0/421: write d9/de/d6a/f75 [1002677,67831] 0
2026-03-09T15:01:35.132 INFO:tasks.workunit.client.0.vm05.stdout:1/490: fdatasync d9/f12 0
2026-03-09T15:01:35.132 INFO:tasks.workunit.client.0.vm05.stdout:9/548: sync
2026-03-09T15:01:35.133 INFO:tasks.workunit.client.0.vm05.stdout:9/549: chown d2/ca4 16262 1
2026-03-09T15:01:35.137 INFO:tasks.workunit.client.0.vm05.stdout:0/422: symlink d9/de/d12/d15/d2e/d32/d53/d61/l86 0
2026-03-09T15:01:35.138 INFO:tasks.workunit.client.0.vm05.stdout:0/423: write d9/de/d25/f47 [259389,84069] 0
2026-03-09T15:01:35.141 INFO:tasks.workunit.client.0.vm05.stdout:0/424: dwrite d9/de/d12/d15/d2e/f3a [0,4194304] 0
2026-03-09T15:01:35.143 INFO:tasks.workunit.client.0.vm05.stdout:1/491: truncate d9/d2f/d55/f5e 1875584 0
2026-03-09T15:01:35.159 INFO:tasks.workunit.client.0.vm05.stdout:6/468: rename da/f1f to da/d43/f86 0
2026-03-09T15:01:35.163 INFO:tasks.workunit.client.0.vm05.stdout:2/492: getdents da/d13/d2f/d35 0
2026-03-09T15:01:35.165 INFO:tasks.workunit.client.0.vm05.stdout:1/492: mkdir d9/d97/daa 0
2026-03-09T15:01:35.170 INFO:tasks.workunit.client.0.vm05.stdout:9/550: creat d2/d10/fc5 x:0 0 0
2026-03-09T15:01:35.172 INFO:tasks.workunit.client.0.vm05.stdout:2/493: creat da/d16/d46/f99 x:0 0 0
2026-03-09T15:01:35.172 INFO:tasks.workunit.client.0.vm05.stdout:2/494: readlink da/d13/l32 0
2026-03-09T15:01:35.173 INFO:tasks.workunit.client.0.vm05.stdout:2/495: write da/d13/f8c [617732,126985] 0
2026-03-09T15:01:35.181 INFO:tasks.workunit.client.0.vm05.stdout:1/493: fdatasync d9/d17/f22 0
2026-03-09T15:01:35.187 INFO:tasks.workunit.client.0.vm05.stdout:6/469: dread da/fe [4194304,4194304] 0
2026-03-09T15:01:35.193 INFO:tasks.workunit.client.0.vm05.stdout:6/470: read da/fe [1753059,73105] 0
2026-03-09T15:01:35.195 INFO:tasks.workunit.client.0.vm05.stdout:2/496: creat da/dd/d4a/f9a x:0 0 0
2026-03-09T15:01:35.199 INFO:tasks.workunit.client.0.vm05.stdout:9/551: dread d2/d10/d22/d2c/fbd [0,4194304] 0
2026-03-09T15:01:35.201 INFO:tasks.workunit.client.0.vm05.stdout:1/494: rename d9/d2f/d55/l7a to d9/d2f/d83/d98/d59/d49/d78/d7e/lab 0
2026-03-09T15:01:35.202 INFO:tasks.workunit.client.0.vm05.stdout:1/495: truncate d9/d2f/f3a 4804641 0
2026-03-09T15:01:35.204 INFO:tasks.workunit.client.0.vm05.stdout:0/425: creat d9/de/d25/d38/f87 x:0 0 0
2026-03-09T15:01:35.213 INFO:tasks.workunit.client.0.vm05.stdout:0/426: fdatasync d9/de/d25/d38/f55 0
2026-03-09T15:01:35.218 INFO:tasks.workunit.client.0.vm05.stdout:6/471: unlink da/c11 0
2026-03-09T15:01:35.218 INFO:tasks.workunit.client.0.vm05.stdout:6/472: fdatasync da/f7a 0
2026-03-09T15:01:35.219 INFO:tasks.workunit.client.0.vm05.stdout:1/496: dread d9/d2f/d83/d98/d59/d49/f51 [0,4194304] 0
2026-03-09T15:01:35.221 INFO:tasks.workunit.client.0.vm05.stdout:9/552: mkdir
d2/d10/d22/dc1/dc3/dc6 0 2026-03-09T15:01:35.225 INFO:tasks.workunit.client.0.vm05.stdout:6/473: symlink da/d17/d7c/l87 0 2026-03-09T15:01:35.227 INFO:tasks.workunit.client.0.vm05.stdout:0/427: rename d9/de/d25/d38/d41/f5c to d9/de/d12/d15/d2e/f88 0 2026-03-09T15:01:35.229 INFO:tasks.workunit.client.0.vm05.stdout:9/553: creat d2/d10/d22/d47/fc7 x:0 0 0 2026-03-09T15:01:35.231 INFO:tasks.workunit.client.0.vm05.stdout:9/554: creat d2/d10/d22/d2c/fc8 x:0 0 0 2026-03-09T15:01:35.231 INFO:tasks.workunit.client.0.vm05.stdout:1/497: sync 2026-03-09T15:01:35.232 INFO:tasks.workunit.client.0.vm05.stdout:0/428: mknod d9/de/d12/d15/d2e/c89 0 2026-03-09T15:01:35.234 INFO:tasks.workunit.client.0.vm05.stdout:0/429: fsync d9/f2b 0 2026-03-09T15:01:35.239 INFO:tasks.workunit.client.0.vm05.stdout:0/430: dwrite d9/f58 [4194304,4194304] 0 2026-03-09T15:01:35.240 INFO:tasks.workunit.client.0.vm05.stdout:0/431: fsync d9/de/d12/d15/d2e/f3a 0 2026-03-09T15:01:35.256 INFO:tasks.workunit.client.0.vm05.stdout:0/432: mkdir d9/de/d12/d8a 0 2026-03-09T15:01:35.257 INFO:tasks.workunit.client.0.vm05.stdout:0/433: write d9/de/d12/d15/d2e/f76 [616063,24248] 0 2026-03-09T15:01:35.261 INFO:tasks.workunit.client.0.vm05.stdout:0/434: symlink d9/de/d12/d15/l8b 0 2026-03-09T15:01:35.264 INFO:tasks.workunit.client.0.vm05.stdout:0/435: link d9/f42 d9/d64/f8c 0 2026-03-09T15:01:35.303 INFO:tasks.workunit.client.0.vm05.stdout:0/436: dread d9/f22 [0,4194304] 0 2026-03-09T15:01:35.310 INFO:tasks.workunit.client.0.vm05.stdout:0/437: creat d9/de/f8d x:0 0 0 2026-03-09T15:01:35.313 INFO:tasks.workunit.client.0.vm05.stdout:0/438: dread d9/de/d25/d38/d41/f71 [0,4194304] 0 2026-03-09T15:01:35.316 INFO:tasks.workunit.client.0.vm05.stdout:0/439: getdents d9/d59/d70 0 2026-03-09T15:01:35.342 INFO:tasks.workunit.client.0.vm05.stdout:2/497: fdatasync da/d13/f8c 0 2026-03-09T15:01:35.347 INFO:tasks.workunit.client.0.vm05.stdout:4/480: write d2/f3e [1061739,98227] 0 2026-03-09T15:01:35.365 
INFO:tasks.workunit.client.0.vm05.stdout:3/522: dread d3/f17 [0,4194304] 0 2026-03-09T15:01:35.379 INFO:tasks.workunit.client.0.vm05.stdout:5/557: write d1/d4/d34/d35/d4e/d6f/fa5 [2762472,46357] 0 2026-03-09T15:01:35.384 INFO:tasks.workunit.client.0.vm05.stdout:5/558: symlink d1/d4/d34/d56/d68/lbb 0 2026-03-09T15:01:35.385 INFO:tasks.workunit.client.0.vm05.stdout:5/559: creat d1/d5d/fbc x:0 0 0 2026-03-09T15:01:35.389 INFO:tasks.workunit.client.0.vm05.stdout:8/520: truncate d0/d1/d12/d3c/f8c 1149448 0 2026-03-09T15:01:35.391 INFO:tasks.workunit.client.0.vm05.stdout:8/521: link d0/dc/f4a d0/d1/d12/d1b/d66/d6f/d80/fb1 0 2026-03-09T15:01:35.392 INFO:tasks.workunit.client.0.vm05.stdout:8/522: symlink d0/d1/d12/d1b/d95/d4b/lb2 0 2026-03-09T15:01:35.393 INFO:tasks.workunit.client.0.vm05.stdout:8/523: unlink d0/d1/d12/d1b/d95/f5f 0 2026-03-09T15:01:35.394 INFO:tasks.workunit.client.0.vm05.stdout:8/524: chown d0/d2a/f2e 250464078 1 2026-03-09T15:01:35.395 INFO:tasks.workunit.client.0.vm05.stdout:8/525: mkdir d0/d1/d12/d1b/d95/d42/d60/da7/db3 0 2026-03-09T15:01:35.397 INFO:tasks.workunit.client.0.vm05.stdout:8/526: rename d0/d1/d12/d1b/d95/d54/f90 to d0/d1/d12/d1b/fb4 0 2026-03-09T15:01:35.403 INFO:tasks.workunit.client.0.vm05.stdout:8/527: dread d0/d24/f2c [0,4194304] 0 2026-03-09T15:01:35.407 INFO:tasks.workunit.client.0.vm05.stdout:8/528: dread d0/d1/d12/d1b/d95/f3e [0,4194304] 0 2026-03-09T15:01:35.408 INFO:tasks.workunit.client.0.vm05.stdout:8/529: dread - d0/d1/d97/fae zero size 2026-03-09T15:01:35.409 INFO:tasks.workunit.client.0.vm05.stdout:8/530: unlink d0/d7/f33 0 2026-03-09T15:01:35.410 INFO:tasks.workunit.client.0.vm05.stdout:8/531: mkdir d0/d1/d12/d1b/d95/d78/db5 0 2026-03-09T15:01:35.413 INFO:tasks.workunit.client.0.vm05.stdout:8/532: dwrite d0/d1/d12/d1b/f67 [0,4194304] 0 2026-03-09T15:01:35.415 INFO:tasks.workunit.client.0.vm05.stdout:8/533: chown d0/d7/l91 28 1 2026-03-09T15:01:35.416 INFO:tasks.workunit.client.0.vm05.stdout:7/494: write 
d1/d9/d23/d54/d7b/f7f [2655690,26184] 0 2026-03-09T15:01:35.419 INFO:tasks.workunit.client.0.vm05.stdout:7/495: mkdir d1/d9/d72/d97 0 2026-03-09T15:01:35.420 INFO:tasks.workunit.client.0.vm05.stdout:8/534: dwrite d0/d1/d12/d1b/d95/d4b/fb0 [0,4194304] 0 2026-03-09T15:01:35.424 INFO:tasks.workunit.client.0.vm05.stdout:8/535: creat d0/d1/d12/fb6 x:0 0 0 2026-03-09T15:01:35.424 INFO:tasks.workunit.client.0.vm05.stdout:7/496: getdents d1/d22/d3c 0 2026-03-09T15:01:35.426 INFO:tasks.workunit.client.0.vm05.stdout:8/536: mkdir d0/d1/d12/d1b/d66/db7 0 2026-03-09T15:01:35.427 INFO:tasks.workunit.client.0.vm05.stdout:7/497: mknod d1/d12/c98 0 2026-03-09T15:01:35.428 INFO:tasks.workunit.client.0.vm05.stdout:7/498: rename d1/d9/d23 to d1/d9/d23/d54/d7b/d99 22 2026-03-09T15:01:35.428 INFO:tasks.workunit.client.0.vm05.stdout:8/537: rmdir d0/d1/d12/d1b/d95/d42/d60 39 2026-03-09T15:01:35.430 INFO:tasks.workunit.client.0.vm05.stdout:7/499: mknod d1/d22/d3c/c9a 0 2026-03-09T15:01:35.431 INFO:tasks.workunit.client.0.vm05.stdout:8/538: creat d0/d1/d12/d1b/d21/fb8 x:0 0 0 2026-03-09T15:01:35.433 INFO:tasks.workunit.client.0.vm05.stdout:8/539: mkdir d0/d1/d12/d1b/d95/d42/da1/db9 0 2026-03-09T15:01:35.436 INFO:tasks.workunit.client.0.vm05.stdout:7/500: dread d1/d9/fd [0,4194304] 0 2026-03-09T15:01:35.437 INFO:tasks.workunit.client.0.vm05.stdout:8/540: rmdir d0/d1/d55/daf 0 2026-03-09T15:01:35.440 INFO:tasks.workunit.client.0.vm05.stdout:7/501: dwrite d1/f62 [0,4194304] 0 2026-03-09T15:01:35.453 INFO:tasks.workunit.client.0.vm05.stdout:7/502: rename d1/d9/f7d to d1/d9/d23/d31/d51/f9b 0 2026-03-09T15:01:35.455 INFO:tasks.workunit.client.0.vm05.stdout:7/503: dread d1/d9/fd [0,4194304] 0 2026-03-09T15:01:35.459 INFO:tasks.workunit.client.0.vm05.stdout:7/504: creat d1/d9/d72/d97/f9c x:0 0 0 2026-03-09T15:01:35.472 INFO:tasks.workunit.client.0.vm05.stdout:7/505: chown d1/d49/d4a/d77/f80 2 1 2026-03-09T15:01:35.472 INFO:tasks.workunit.client.0.vm05.stdout:7/506: dwrite d1/d49/d4a/f6b 
[4194304,4194304] 0 2026-03-09T15:01:35.474 INFO:tasks.workunit.client.0.vm05.stdout:7/507: mknod d1/d9/d23/d31/d32/d78/d7e/d81/c9d 0 2026-03-09T15:01:35.478 INFO:tasks.workunit.client.0.vm05.stdout:8/541: sync 2026-03-09T15:01:35.478 INFO:tasks.workunit.client.0.vm05.stdout:8/542: readlink d0/d7/da8/la9 0 2026-03-09T15:01:35.481 INFO:tasks.workunit.client.0.vm05.stdout:7/508: dwrite d1/d9/d72/f7c [0,4194304] 0 2026-03-09T15:01:35.485 INFO:tasks.workunit.client.0.vm05.stdout:7/509: truncate d1/d9/d72/d97/f9c 14064 0 2026-03-09T15:01:35.491 INFO:tasks.workunit.client.0.vm05.stdout:8/543: fdatasync d0/d24/f30 0 2026-03-09T15:01:35.498 INFO:tasks.workunit.client.0.vm05.stdout:8/544: dwrite d0/d1/d12/d1b/d95/d4b/fb0 [0,4194304] 0 2026-03-09T15:01:35.499 INFO:tasks.workunit.client.0.vm05.stdout:8/545: write d0/f47 [8414687,45263] 0 2026-03-09T15:01:35.503 INFO:tasks.workunit.client.0.vm05.stdout:7/510: rename d1/f6e to d1/d9/d23/d31/d32/d78/f9e 0 2026-03-09T15:01:35.523 INFO:tasks.workunit.client.0.vm05.stdout:7/511: getdents d1/d49 0 2026-03-09T15:01:35.538 INFO:tasks.workunit.client.0.vm05.stdout:7/512: dread d1/d12/f20 [0,4194304] 0 2026-03-09T15:01:35.542 INFO:tasks.workunit.client.0.vm05.stdout:7/513: symlink d1/d49/l9f 0 2026-03-09T15:01:35.543 INFO:tasks.workunit.client.0.vm05.stdout:7/514: write d1/d9/d23/d31/d51/f3b [1313599,21385] 0 2026-03-09T15:01:35.580 INFO:tasks.workunit.client.0.vm05.stdout:8/546: fdatasync d0/d1/d12/d1b/d95/d4b/fb0 0 2026-03-09T15:01:35.584 INFO:tasks.workunit.client.0.vm05.stdout:8/547: dwrite d0/d1/d12/fb6 [0,4194304] 0 2026-03-09T15:01:35.587 INFO:tasks.workunit.client.0.vm05.stdout:8/548: write d0/d1/d12/d1b/f89 [1259749,125277] 0 2026-03-09T15:01:35.602 INFO:tasks.workunit.client.0.vm05.stdout:6/474: readlink da/d43/d66/l6f 0 2026-03-09T15:01:35.607 INFO:tasks.workunit.client.0.vm05.stdout:8/549: creat d0/d1/d12/d1b/d95/d42/d60/da7/db3/fba x:0 0 0 2026-03-09T15:01:35.608 INFO:tasks.workunit.client.0.vm05.stdout:8/550: creat 
d0/d1/d12/d1b/d95/d78/db5/fbb x:0 0 0 2026-03-09T15:01:35.609 INFO:tasks.workunit.client.0.vm05.stdout:6/475: read da/d43/f59 [1675474,32466] 0 2026-03-09T15:01:35.609 INFO:tasks.workunit.client.0.vm05.stdout:8/551: dread - d0/d1/d12/d3c/f4c zero size 2026-03-09T15:01:35.613 INFO:tasks.workunit.client.0.vm05.stdout:6/476: dwrite da/f3d [0,4194304] 0 2026-03-09T15:01:35.621 INFO:tasks.workunit.client.0.vm05.stdout:8/552: rename d0/d1/d12/d1b/d95/d54/c79 to d0/d1/d12/d1b/d95/d42/d60/cbc 0 2026-03-09T15:01:35.624 INFO:tasks.workunit.client.0.vm05.stdout:1/498: getdents d9/d2f/d83/d98/d59/d49/d78/d7e 0 2026-03-09T15:01:35.626 INFO:tasks.workunit.client.0.vm05.stdout:2/498: getdents da/dd/d4a 0 2026-03-09T15:01:35.631 INFO:tasks.workunit.client.0.vm05.stdout:2/499: creat da/d29/d3f/f9b x:0 0 0 2026-03-09T15:01:35.640 INFO:tasks.workunit.client.0.vm05.stdout:6/477: symlink da/d17/d3b/d81/l88 0 2026-03-09T15:01:35.650 INFO:tasks.workunit.client.0.vm05.stdout:6/478: mkdir da/d43/d7b/d89 0 2026-03-09T15:01:35.650 INFO:tasks.workunit.client.0.vm05.stdout:9/555: dwrite d2/f5 [0,4194304] 0 2026-03-09T15:01:35.663 INFO:tasks.workunit.client.0.vm05.stdout:1/499: getdents d9/d2f/d83/d98/d59/d49/d4b 0 2026-03-09T15:01:35.669 INFO:tasks.workunit.client.0.vm05.stdout:1/500: rename d9/d2f/d83/d98/f8d to d9/d2f/d37/fac 0 2026-03-09T15:01:35.674 INFO:tasks.workunit.client.0.vm05.stdout:9/556: mknod d2/d10/d22/d47/d95/cc9 0 2026-03-09T15:01:35.680 INFO:tasks.workunit.client.0.vm05.stdout:1/501: symlink d9/d2f/d83/d98/d59/d49/d92/lad 0 2026-03-09T15:01:35.680 INFO:tasks.workunit.client.0.vm05.stdout:1/502: dread - d9/d2f/d83/fa3 zero size 2026-03-09T15:01:35.686 INFO:tasks.workunit.client.0.vm05.stdout:9/557: symlink d2/d10/d22/d2c/d69/lca 0 2026-03-09T15:01:35.689 INFO:tasks.workunit.client.0.vm05.stdout:1/503: truncate d9/d17/f22 1765106 0 2026-03-09T15:01:35.690 INFO:tasks.workunit.client.0.vm05.stdout:1/504: chown d9/d2f/d83/d98/f4e 1 1 2026-03-09T15:01:35.693 
INFO:tasks.workunit.client.0.vm05.stdout:0/440: truncate d9/de/d12/d15/d2e/f40 1406820 0 2026-03-09T15:01:35.696 INFO:tasks.workunit.client.0.vm05.stdout:3/523: write d3/df/d59/d79/fa8 [1378738,58316] 0 2026-03-09T15:01:35.699 INFO:tasks.workunit.client.0.vm05.stdout:4/481: dwrite d2/d4/d7/f9 [0,4194304] 0 2026-03-09T15:01:35.700 INFO:tasks.workunit.client.0.vm05.stdout:0/441: dwrite d9/de/d12/d15/d2e/f76 [0,4194304] 0 2026-03-09T15:01:35.702 INFO:tasks.workunit.client.0.vm05.stdout:0/442: readlink l7 0 2026-03-09T15:01:35.711 INFO:tasks.workunit.client.0.vm05.stdout:5/560: dwrite d1/d4/d27/d5b/f7d [0,4194304] 0 2026-03-09T15:01:35.713 INFO:tasks.workunit.client.0.vm05.stdout:5/561: chown d1/d4/d34/d35/d4e/c95 28 1 2026-03-09T15:01:35.720 INFO:tasks.workunit.client.0.vm05.stdout:9/558: dread d2/d4e/d56/d53/f66 [4194304,4194304] 0 2026-03-09T15:01:35.720 INFO:tasks.workunit.client.0.vm05.stdout:9/559: stat d2/d10/d22/d2c/f91 0 2026-03-09T15:01:35.731 INFO:tasks.workunit.client.0.vm05.stdout:1/505: rmdir d9/d2f/d83/d98/d59/d49/d78/d7e 39 2026-03-09T15:01:35.734 INFO:tasks.workunit.client.0.vm05.stdout:8/553: rename d0/d1/d12/d1b/fb4 to d0/d1/d12/d1b/fbd 0 2026-03-09T15:01:35.743 INFO:tasks.workunit.client.0.vm05.stdout:4/482: mknod d2/d4/d1e/d71/c99 0 2026-03-09T15:01:35.744 INFO:tasks.workunit.client.0.vm05.stdout:4/483: write d2/d4/d8/d4a/d6e/f8d [895145,59042] 0 2026-03-09T15:01:35.754 INFO:tasks.workunit.client.0.vm05.stdout:5/562: dread d1/d4/d34/d56/f59 [0,4194304] 0 2026-03-09T15:01:35.764 INFO:tasks.workunit.client.0.vm05.stdout:3/524: mkdir d3/df/d10/d19/db5 0 2026-03-09T15:01:35.765 INFO:tasks.workunit.client.0.vm05.stdout:0/443: creat d9/de/d25/d38/d78/f8e x:0 0 0 2026-03-09T15:01:35.769 INFO:tasks.workunit.client.0.vm05.stdout:5/563: mknod d1/d4/d34/d35/d3d/cbd 0 2026-03-09T15:01:35.773 INFO:tasks.workunit.client.0.vm05.stdout:5/564: dwrite d1/d4/f5f [0,4194304] 0 2026-03-09T15:01:35.774 INFO:tasks.workunit.client.0.vm05.stdout:5/565: chown 
d1/d5d/d7f/d91/fb1 239 1 2026-03-09T15:01:35.780 INFO:tasks.workunit.client.0.vm05.stdout:1/506: mknod d9/d2f/d83/d98/d59/d49/d78/d94/cae 0 2026-03-09T15:01:35.783 INFO:tasks.workunit.client.0.vm05.stdout:8/554: mkdir d0/d1/d12/d1b/d66/db7/dbe 0 2026-03-09T15:01:35.785 INFO:tasks.workunit.client.0.vm05.stdout:3/525: fsync d3/df/d10/d34/d8c/f71 0 2026-03-09T15:01:35.786 INFO:tasks.workunit.client.0.vm05.stdout:0/444: unlink d9/de/d12/d15/d2e/f4d 0 2026-03-09T15:01:35.794 INFO:tasks.workunit.client.0.vm05.stdout:5/566: rename d1/f4c to d1/d4/d34/d35/d3d/d96/fbe 0 2026-03-09T15:01:35.803 INFO:tasks.workunit.client.0.vm05.stdout:4/484: getdents d2/d4/d7/d79 0 2026-03-09T15:01:35.804 INFO:tasks.workunit.client.0.vm05.stdout:4/485: write d2/d4/d7/d21/f34 [4127084,41446] 0 2026-03-09T15:01:35.813 INFO:tasks.workunit.client.0.vm05.stdout:4/486: symlink d2/d4/d1e/l9a 0 2026-03-09T15:01:35.820 INFO:tasks.workunit.client.0.vm05.stdout:4/487: dwrite d2/d43/f4f [0,4194304] 0 2026-03-09T15:01:35.826 INFO:tasks.workunit.client.0.vm05.stdout:1/507: sync 2026-03-09T15:01:35.835 INFO:tasks.workunit.client.0.vm05.stdout:0/445: getdents d9/de 0 2026-03-09T15:01:35.842 INFO:tasks.workunit.client.0.vm05.stdout:4/488: creat d2/d49/d69/f9b x:0 0 0 2026-03-09T15:01:35.843 INFO:tasks.workunit.client.0.vm05.stdout:7/515: write d1/d9/d23/d31/d51/f6a [521869,27998] 0 2026-03-09T15:01:35.844 INFO:tasks.workunit.client.0.vm05.stdout:5/567: getdents d1/d4/d34/d35/d3d/d38 0 2026-03-09T15:01:35.845 INFO:tasks.workunit.client.0.vm05.stdout:5/568: write d1/d4/d34/d35/d3d/d38/f6e [1733926,128000] 0 2026-03-09T15:01:35.847 INFO:tasks.workunit.client.0.vm05.stdout:5/569: write d1/d4/d34/d35/d3d/d38/f6e [3114933,121887] 0 2026-03-09T15:01:35.860 INFO:tasks.workunit.client.0.vm05.stdout:7/516: dread d1/d9/d23/d31/d51/f3b [0,4194304] 0 2026-03-09T15:01:35.861 INFO:tasks.workunit.client.0.vm05.stdout:1/508: symlink d9/d17/laf 0 2026-03-09T15:01:35.883 INFO:tasks.workunit.client.0.vm05.stdout:5/570: mknod 
d1/d4/d27/d5b/cbf 0 2026-03-09T15:01:35.898 INFO:tasks.workunit.client.0.vm05.stdout:0/446: mknod d9/de/d25/d38/c8f 0 2026-03-09T15:01:35.898 INFO:tasks.workunit.client.0.vm05.stdout:2/500: truncate da/f79 2279133 0 2026-03-09T15:01:35.899 INFO:tasks.workunit.client.0.vm05.stdout:2/501: dread - da/d13/d2f/f42 zero size 2026-03-09T15:01:35.911 INFO:tasks.workunit.client.0.vm05.stdout:6/479: write da/d43/f54 [1256803,87802] 0 2026-03-09T15:01:35.922 INFO:tasks.workunit.client.0.vm05.stdout:6/480: dread da/d17/f58 [0,4194304] 0 2026-03-09T15:01:35.923 INFO:tasks.workunit.client.0.vm05.stdout:5/571: rename d1/d4/d27/d75/d9c/dba to d1/d4/d34/dc0 0 2026-03-09T15:01:35.944 INFO:tasks.workunit.client.0.vm05.stdout:0/447: write d9/de/d25/d38/d41/f71 [76432,82602] 0 2026-03-09T15:01:35.949 INFO:tasks.workunit.client.0.vm05.stdout:4/489: link d2/d4/d8/ld d2/d1d/d88/l9c 0 2026-03-09T15:01:35.954 INFO:tasks.workunit.client.0.vm05.stdout:6/481: mknod da/d17/d7c/c8a 0 2026-03-09T15:01:35.956 INFO:tasks.workunit.client.0.vm05.stdout:6/482: read da/d19/f35 [74387,80369] 0 2026-03-09T15:01:35.956 INFO:tasks.workunit.client.0.vm05.stdout:0/448: dread d9/de/d12/d15/d2e/d32/d53/f68 [0,4194304] 0 2026-03-09T15:01:35.958 INFO:tasks.workunit.client.0.vm05.stdout:5/572: dread - d1/d4/d34/d56/d68/f8f zero size 2026-03-09T15:01:35.960 INFO:tasks.workunit.client.0.vm05.stdout:0/449: dread d9/de/d12/f3c [0,4194304] 0 2026-03-09T15:01:35.960 INFO:tasks.workunit.client.0.vm05.stdout:0/450: read - d9/d59/f79 zero size 2026-03-09T15:01:35.961 INFO:tasks.workunit.client.0.vm05.stdout:0/451: dread - d9/de/f7f zero size 2026-03-09T15:01:35.962 INFO:tasks.workunit.client.0.vm05.stdout:7/517: link d1/d9/d23/d31/c8a d1/d9/d23/d31/d8f/d93/ca0 0 2026-03-09T15:01:35.962 INFO:tasks.workunit.client.0.vm05.stdout:0/452: dread d9/f22 [8388608,4194304] 0 2026-03-09T15:01:35.963 INFO:tasks.workunit.client.0.vm05.stdout:0/453: dread - d9/de/f7f zero size 2026-03-09T15:01:35.968 
INFO:tasks.workunit.client.0.vm05.stdout:2/502: mkdir da/d13/d9c 0 2026-03-09T15:01:35.972 INFO:tasks.workunit.client.0.vm05.stdout:9/560: dwrite d2/f17 [0,4194304] 0 2026-03-09T15:01:35.974 INFO:tasks.workunit.client.0.vm05.stdout:5/573: creat d1/d4/d34/fc1 x:0 0 0 2026-03-09T15:01:35.975 INFO:tasks.workunit.client.0.vm05.stdout:5/574: truncate d1/d5d/fbc 1040176 0 2026-03-09T15:01:35.976 INFO:tasks.workunit.client.0.vm05.stdout:5/575: readlink d1/d4/d34/d35/d3d/d38/d63/l7c 0 2026-03-09T15:01:35.982 INFO:tasks.workunit.client.0.vm05.stdout:0/454: symlink d9/de/d12/d15/d2e/d32/d53/d6e/l90 0 2026-03-09T15:01:35.983 INFO:tasks.workunit.client.0.vm05.stdout:0/455: chown d9/de/d25/d38/d41/f71 8373852 1 2026-03-09T15:01:35.985 INFO:tasks.workunit.client.0.vm05.stdout:2/503: creat da/f9d x:0 0 0 2026-03-09T15:01:35.989 INFO:tasks.workunit.client.0.vm05.stdout:9/561: mkdir d2/d4e/d56/d37/d9c/d8e/dcb 0 2026-03-09T15:01:35.991 INFO:tasks.workunit.client.0.vm05.stdout:7/518: truncate d1/d9/d23/f4c 560382 0 2026-03-09T15:01:35.991 INFO:tasks.workunit.client.0.vm05.stdout:7/519: stat d1/d9/d23/d31/d32/f90 0 2026-03-09T15:01:35.993 INFO:tasks.workunit.client.0.vm05.stdout:0/456: creat d9/de/d12/d15/d2e/d32/d53/f91 x:0 0 0 2026-03-09T15:01:35.994 INFO:tasks.workunit.client.0.vm05.stdout:0/457: dread d9/de/f5d [0,4194304] 0 2026-03-09T15:01:35.996 INFO:tasks.workunit.client.0.vm05.stdout:7/520: dread d1/d9/d72/f7c [0,4194304] 0 2026-03-09T15:01:35.999 INFO:tasks.workunit.client.0.vm05.stdout:0/458: dwrite d9/de/d12/f4c [4194304,4194304] 0 2026-03-09T15:01:35.999 INFO:tasks.workunit.client.0.vm05.stdout:0/459: chown d9/de/d12/d15/d2e/l3f 3374699 1 2026-03-09T15:01:35.999 INFO:tasks.workunit.client.0.vm05.stdout:0/460: dread - d9/de/d12/f7a zero size 2026-03-09T15:01:36.011 INFO:tasks.workunit.client.0.vm05.stdout:4/490: link d2/c91 d2/d4/d50/d8a/c9d 0 2026-03-09T15:01:36.013 INFO:tasks.workunit.client.0.vm05.stdout:6/483: rename da/d19/c4c to da/c8b 0 2026-03-09T15:01:36.014 
INFO:tasks.workunit.client.0.vm05.stdout:5/576: getdents d1/da 0 2026-03-09T15:01:36.014 INFO:tasks.workunit.client.0.vm05.stdout:3/526: write d3/d29/d2d/d77/d4d/f80 [1048137,91540] 0 2026-03-09T15:01:36.023 INFO:tasks.workunit.client.0.vm05.stdout:8/555: truncate d0/d1/d12/d1b/d95/f4d 191538 0 2026-03-09T15:01:36.027 INFO:tasks.workunit.client.0.vm05.stdout:7/521: mknod d1/d9/d23/d31/d32/ca1 0 2026-03-09T15:01:36.027 INFO:tasks.workunit.client.0.vm05.stdout:7/522: dread - d1/d9/d23/d31/d32/d78/f92 zero size 2026-03-09T15:01:36.035 INFO:tasks.workunit.client.0.vm05.stdout:6/484: mknod da/d17/d3b/d81/c8c 0 2026-03-09T15:01:36.039 INFO:tasks.workunit.client.0.vm05.stdout:4/491: creat d2/d4/d8/d4a/d6e/f9e x:0 0 0 2026-03-09T15:01:36.039 INFO:tasks.workunit.client.0.vm05.stdout:6/485: dread da/f18 [0,4194304] 0 2026-03-09T15:01:36.040 INFO:tasks.workunit.client.0.vm05.stdout:6/486: chown da/d43/d7b/d89 2 1 2026-03-09T15:01:36.041 INFO:tasks.workunit.client.0.vm05.stdout:9/562: creat d2/d4e/d56/d37/d9c/d8e/dcb/fcc x:0 0 0 2026-03-09T15:01:36.045 INFO:tasks.workunit.client.0.vm05.stdout:9/563: dwrite d2/d4e/d56/d37/d9c/f6e [0,4194304] 0 2026-03-09T15:01:36.058 INFO:tasks.workunit.client.0.vm05.stdout:8/556: unlink d0/d1/d12/d1b/d6e/d93/d9f/la3 0 2026-03-09T15:01:36.067 INFO:tasks.workunit.client.0.vm05.stdout:4/492: truncate d2/d49/f6a 49802 0 2026-03-09T15:01:36.080 INFO:tasks.workunit.client.0.vm05.stdout:2/504: rename f5 to da/dd/f9e 0 2026-03-09T15:01:36.080 INFO:tasks.workunit.client.0.vm05.stdout:2/505: chown da/d29/d6a 2329343 1 2026-03-09T15:01:36.080 INFO:tasks.workunit.client.0.vm05.stdout:2/506: chown da/d29/d3f/f5b 327431 1 2026-03-09T15:01:36.087 INFO:tasks.workunit.client.0.vm05.stdout:2/507: dread da/f4e [0,4194304] 0 2026-03-09T15:01:36.088 INFO:tasks.workunit.client.0.vm05.stdout:2/508: readlink da/dd/l77 0 2026-03-09T15:01:36.093 INFO:tasks.workunit.client.0.vm05.stdout:3/527: symlink d3/df/d1e/da9/db3/lb6 0 2026-03-09T15:01:36.098 
INFO:tasks.workunit.client.0.vm05.stdout:8/557: creat d0/d1/d12/d1b/d6e/d93/d9f/fbf x:0 0 0 2026-03-09T15:01:36.100 INFO:tasks.workunit.client.0.vm05.stdout:4/493: creat d2/d4/d7/d48/d6b/f9f x:0 0 0 2026-03-09T15:01:36.105 INFO:tasks.workunit.client.0.vm05.stdout:7/523: rename d1/d22/f65 to d1/d22/d3c/fa2 0 2026-03-09T15:01:36.122 INFO:tasks.workunit.client.0.vm05.stdout:3/528: mknod d3/df/d10/d34/d8c/d90/cb7 0 2026-03-09T15:01:36.131 INFO:tasks.workunit.client.0.vm05.stdout:0/461: dwrite d9/de/d12/d15/f5e [0,4194304] 0 2026-03-09T15:01:36.132 INFO:tasks.workunit.client.0.vm05.stdout:0/462: truncate d9/de/d6a/f75 1308999 0 2026-03-09T15:01:36.137 INFO:tasks.workunit.client.0.vm05.stdout:0/463: dwrite d9/de/f1e [0,4194304] 0 2026-03-09T15:01:36.137 INFO:tasks.workunit.client.0.vm05.stdout:8/558: creat d0/d1/d12/d1b/d95/d42/d60/fc0 x:0 0 0 2026-03-09T15:01:36.138 INFO:tasks.workunit.client.0.vm05.stdout:0/464: chown d9/de/d25/d38/d78 25341 1 2026-03-09T15:01:36.149 INFO:tasks.workunit.client.0.vm05.stdout:1/509: dwrite d9/d2f/d55/f5e [0,4194304] 0 2026-03-09T15:01:36.163 INFO:tasks.workunit.client.0.vm05.stdout:4/494: unlink d2/d4/d1e/d71/c99 0 2026-03-09T15:01:36.163 INFO:tasks.workunit.client.0.vm05.stdout:6/487: creat da/d17/f8d x:0 0 0 2026-03-09T15:01:36.163 INFO:tasks.workunit.client.0.vm05.stdout:4/495: chown d2/d4/d7 827333255 1 2026-03-09T15:01:36.173 INFO:tasks.workunit.client.0.vm05.stdout:6/488: dread da/d17/f3c [0,4194304] 0 2026-03-09T15:01:36.174 INFO:tasks.workunit.client.0.vm05.stdout:6/489: dread - da/d17/d3b/f85 zero size 2026-03-09T15:01:36.180 INFO:tasks.workunit.client.0.vm05.stdout:9/564: creat d2/d4e/d56/fcd x:0 0 0 2026-03-09T15:01:36.191 INFO:tasks.workunit.client.0.vm05.stdout:2/509: dwrite da/f21 [4194304,4194304] 0 2026-03-09T15:01:36.192 INFO:tasks.workunit.client.0.vm05.stdout:5/577: getdents d1/d4/d34/d56/d68 0 2026-03-09T15:01:36.192 INFO:tasks.workunit.client.0.vm05.stdout:2/510: chown da/d16/d46/c47 49399 1 2026-03-09T15:01:36.196 
INFO:tasks.workunit.client.0.vm05.stdout:8/559: symlink d0/d1/d12/d1b/d95/lc1 0 2026-03-09T15:01:36.197 INFO:tasks.workunit.client.0.vm05.stdout:0/465: mknod d9/de/d12/d15/d2e/d32/d53/c92 0 2026-03-09T15:01:36.198 INFO:tasks.workunit.client.0.vm05.stdout:0/466: readlink d9/de/d12/d15/d2e/d32/d53/d6e/l90 0 2026-03-09T15:01:36.198 INFO:tasks.workunit.client.0.vm05.stdout:0/467: readlink d9/de/l72 0 2026-03-09T15:01:36.209 INFO:tasks.workunit.client.0.vm05.stdout:1/510: stat d9/c10 0 2026-03-09T15:01:36.214 INFO:tasks.workunit.client.0.vm05.stdout:4/496: creat d2/d43/fa0 x:0 0 0 2026-03-09T15:01:36.215 INFO:tasks.workunit.client.0.vm05.stdout:4/497: readlink d2/d4/l82 0 2026-03-09T15:01:36.216 INFO:tasks.workunit.client.0.vm05.stdout:7/524: read d1/d9/d23/f4c [537750,2994] 0 2026-03-09T15:01:36.220 INFO:tasks.workunit.client.0.vm05.stdout:6/490: write da/d43/f72 [3050830,35449] 0 2026-03-09T15:01:36.223 INFO:tasks.workunit.client.0.vm05.stdout:4/498: dread d2/d4/d1e/f40 [0,4194304] 0 2026-03-09T15:01:36.224 INFO:tasks.workunit.client.0.vm05.stdout:4/499: dread d2/d4/d1e/f40 [0,4194304] 0 2026-03-09T15:01:36.233 INFO:tasks.workunit.client.0.vm05.stdout:2/511: fsync da/d13/d30/f41 0 2026-03-09T15:01:36.236 INFO:tasks.workunit.client.0.vm05.stdout:4/500: dread d2/f33 [0,4194304] 0 2026-03-09T15:01:36.243 INFO:tasks.workunit.client.0.vm05.stdout:0/468: mkdir d9/d59/d93 0 2026-03-09T15:01:36.245 INFO:tasks.workunit.client.0.vm05.stdout:1/511: fdatasync d9/f12 0 2026-03-09T15:01:36.245 INFO:tasks.workunit.client.0.vm05.stdout:8/560: dread d0/dc/f7a [0,4194304] 0 2026-03-09T15:01:36.251 INFO:tasks.workunit.client.0.vm05.stdout:3/529: truncate d3/df/d10/d34/d8c/d90/fa6 158049 0 2026-03-09T15:01:36.263 INFO:tasks.workunit.client.0.vm05.stdout:8/561: creat d0/d1/d12/d1b/d6e/fc2 x:0 0 0 2026-03-09T15:01:36.266 INFO:tasks.workunit.client.0.vm05.stdout:3/530: mknod d3/df/d1e/da9/db3/cb8 0 2026-03-09T15:01:36.266 INFO:tasks.workunit.client.0.vm05.stdout:3/531: readlink 
d3/df/d10/d34/l64 0 2026-03-09T15:01:36.270 INFO:tasks.workunit.client.0.vm05.stdout:1/512: dwrite d9/d17/f79 [0,4194304] 0 2026-03-09T15:01:36.276 INFO:tasks.workunit.client.0.vm05.stdout:6/491: symlink da/d43/d7b/d89/l8e 0 2026-03-09T15:01:36.277 INFO:tasks.workunit.client.0.vm05.stdout:9/565: creat d2/d4e/d56/fce x:0 0 0 2026-03-09T15:01:36.280 INFO:tasks.workunit.client.0.vm05.stdout:7/525: write d1/d12/f18 [790332,45847] 0 2026-03-09T15:01:36.286 INFO:tasks.workunit.client.0.vm05.stdout:4/501: mknod d2/d4/d8/d4a/d94/ca1 0 2026-03-09T15:01:36.289 INFO:tasks.workunit.client.0.vm05.stdout:8/562: truncate d0/d1/d12/d1b/d21/f65 931482 0 2026-03-09T15:01:36.290 INFO:tasks.workunit.client.0.vm05.stdout:3/532: creat d3/df/d1e/d2f/fb9 x:0 0 0 2026-03-09T15:01:36.291 INFO:tasks.workunit.client.0.vm05.stdout:3/533: write d3/df/d1e/d2f/d52/f93 [120904,105857] 0 2026-03-09T15:01:36.294 INFO:tasks.workunit.client.0.vm05.stdout:3/534: dwrite d3/df/f1b [0,4194304] 0 2026-03-09T15:01:36.298 INFO:tasks.workunit.client.0.vm05.stdout:1/513: creat d9/d2f/d55/fb0 x:0 0 0 2026-03-09T15:01:36.299 INFO:tasks.workunit.client.0.vm05.stdout:6/492: creat da/d17/d7c/f8f x:0 0 0 2026-03-09T15:01:36.300 INFO:tasks.workunit.client.0.vm05.stdout:0/469: write d9/f82 [605814,47017] 0 2026-03-09T15:01:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:35 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:35 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:35 vm05.local ceph-mon[50611]: pgmap v174: 65 pgs: 65 active+clean; 2.2 GiB data, 7.6 GiB used, 112 GiB / 120 GiB avail; 34 MiB/s rd, 124 MiB/s wr, 253 op/s 2026-03-09T15:01:36.307 INFO:tasks.workunit.client.0.vm05.stdout:7/526: creat d1/d9/d23/d31/d8f/d93/fa3 x:0 0 0 2026-03-09T15:01:36.307 
INFO:tasks.workunit.client.0.vm05.stdout:9/566: truncate d2/d10/f48 4483977 0 2026-03-09T15:01:36.308 INFO:tasks.workunit.client.0.vm05.stdout:7/527: write d1/f45 [625805,49232] 0 2026-03-09T15:01:36.309 INFO:tasks.workunit.client.0.vm05.stdout:7/528: write d1/d9/d23/d31/d8f/d93/fa3 [633415,88826] 0 2026-03-09T15:01:36.309 INFO:tasks.workunit.client.0.vm05.stdout:2/512: link da/d13/c40 da/d13/d30/d91/c9f 0 2026-03-09T15:01:36.310 INFO:tasks.workunit.client.0.vm05.stdout:2/513: stat da/dd/d4a 0 2026-03-09T15:01:36.312 INFO:tasks.workunit.client.0.vm05.stdout:4/502: mkdir d2/d4/d1e/da2 0 2026-03-09T15:01:36.315 INFO:tasks.workunit.client.0.vm05.stdout:8/563: fsync d0/d24/f8a 0 2026-03-09T15:01:36.316 INFO:tasks.workunit.client.0.vm05.stdout:2/514: dwrite da/d29/f7d [0,4194304] 0 2026-03-09T15:01:36.324 INFO:tasks.workunit.client.0.vm05.stdout:6/493: unlink da/f14 0 2026-03-09T15:01:36.325 INFO:tasks.workunit.client.0.vm05.stdout:1/514: dwrite d9/f12 [4194304,4194304] 0 2026-03-09T15:01:36.330 INFO:tasks.workunit.client.0.vm05.stdout:1/515: write d9/d2f/d83/d98/d59/d49/d48/f8f [470272,25208] 0 2026-03-09T15:01:36.336 INFO:tasks.workunit.client.0.vm05.stdout:9/567: mknod d2/d8b/dae/ccf 0 2026-03-09T15:01:36.337 INFO:tasks.workunit.client.0.vm05.stdout:5/578: link d1/d4/d19/c2b d1/d5d/d7f/d91/cc2 0 2026-03-09T15:01:36.342 INFO:tasks.workunit.client.0.vm05.stdout:4/503: dread d2/d1d/f36 [0,4194304] 0 2026-03-09T15:01:36.345 INFO:tasks.workunit.client.0.vm05.stdout:7/529: mkdir d1/d22/da4 0 2026-03-09T15:01:36.348 INFO:tasks.workunit.client.0.vm05.stdout:8/564: truncate d0/f3b 1543622 0 2026-03-09T15:01:36.350 INFO:tasks.workunit.client.0.vm05.stdout:3/535: symlink d3/df/d10/d19/d44/da2/lba 0 2026-03-09T15:01:36.350 INFO:tasks.workunit.client.0.vm05.stdout:3/536: chown d3/df 11 1 2026-03-09T15:01:36.350 INFO:tasks.workunit.client.0.vm05.stdout:3/537: read d3/f17 [2109614,104846] 0 2026-03-09T15:01:36.355 INFO:tasks.workunit.client.0.vm05.stdout:0/470: fdatasync 
d9/de/d12/f23 0 2026-03-09T15:01:36.356 INFO:tasks.workunit.client.0.vm05.stdout:1/516: dread d9/d17/f81 [0,4194304] 0 2026-03-09T15:01:36.358 INFO:tasks.workunit.client.0.vm05.stdout:9/568: mknod d2/d10/d22/dc2/db1/cd0 0 2026-03-09T15:01:36.358 INFO:tasks.workunit.client.0.vm05.stdout:9/569: rename d2/d4e to d2/d4e/d56/d84/dd1 22 2026-03-09T15:01:36.361 INFO:tasks.workunit.client.0.vm05.stdout:4/504: read d2/d4/d7/dc/f45 [196846,39098] 0 2026-03-09T15:01:36.362 INFO:tasks.workunit.client.0.vm05.stdout:9/570: dwrite d2/d10/d22/d47/fc7 [0,4194304] 0 2026-03-09T15:01:36.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:35 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:36.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:35 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:36.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:35 vm09.local ceph-mon[59673]: pgmap v174: 65 pgs: 65 active+clean; 2.2 GiB data, 7.6 GiB used, 112 GiB / 120 GiB avail; 34 MiB/s rd, 124 MiB/s wr, 253 op/s 2026-03-09T15:01:36.367 INFO:tasks.workunit.client.0.vm05.stdout:9/571: dread d2/d10/d22/d47/fc7 [0,4194304] 0 2026-03-09T15:01:36.368 INFO:tasks.workunit.client.0.vm05.stdout:9/572: write d2/d10/d22/da0/fad [2560693,26695] 0 2026-03-09T15:01:36.369 INFO:tasks.workunit.client.0.vm05.stdout:9/573: fdatasync d2/d4e/d56/d37/f36 0 2026-03-09T15:01:36.372 INFO:tasks.workunit.client.0.vm05.stdout:2/515: getdents da/dd/d4a 0 2026-03-09T15:01:36.380 INFO:tasks.workunit.client.0.vm05.stdout:1/517: creat d9/d17/fb1 x:0 0 0 2026-03-09T15:01:36.387 INFO:tasks.workunit.client.0.vm05.stdout:7/530: chown d1/d49/d68 11873942 1 2026-03-09T15:01:36.392 INFO:tasks.workunit.client.0.vm05.stdout:9/574: unlink d2/ca4 0 2026-03-09T15:01:36.395 INFO:tasks.workunit.client.0.vm05.stdout:2/516: rename da/d13/d30 to da/d29/d6a/da0 0 2026-03-09T15:01:36.399 
INFO:tasks.workunit.client.0.vm05.stdout:2/517: dwrite da/d29/f2d [4194304,4194304] 0 2026-03-09T15:01:36.399 INFO:tasks.workunit.client.0.vm05.stdout:2/518: write da/d13/f8c [992463,106838] 0 2026-03-09T15:01:36.400 INFO:tasks.workunit.client.0.vm05.stdout:2/519: fdatasync da/d16/d46/f85 0 2026-03-09T15:01:36.405 INFO:tasks.workunit.client.0.vm05.stdout:0/471: symlink d9/l94 0 2026-03-09T15:01:36.405 INFO:tasks.workunit.client.0.vm05.stdout:6/494: creat da/d17/f90 x:0 0 0 2026-03-09T15:01:36.431 INFO:tasks.workunit.client.0.vm05.stdout:6/495: unlink da/d17/f83 0 2026-03-09T15:01:36.436 INFO:tasks.workunit.client.0.vm05.stdout:9/575: creat d2/d4e/d56/d37/d9c/db2/fd2 x:0 0 0 2026-03-09T15:01:36.436 INFO:tasks.workunit.client.0.vm05.stdout:9/576: write d2/d4e/f72 [3163426,27759] 0 2026-03-09T15:01:36.437 INFO:tasks.workunit.client.0.vm05.stdout:9/577: dread - d2/d10/d22/d47/d95/f9d zero size 2026-03-09T15:01:36.448 INFO:tasks.workunit.client.0.vm05.stdout:8/565: dwrite d0/d1/f49 [0,4194304] 0 2026-03-09T15:01:36.449 INFO:tasks.workunit.client.0.vm05.stdout:8/566: write d0/d1/d12/d1b/d95/d54/f85 [954070,84002] 0 2026-03-09T15:01:36.450 INFO:tasks.workunit.client.0.vm05.stdout:5/579: write d1/d4/d34/d35/d3d/d38/f8a [588780,72860] 0 2026-03-09T15:01:36.454 INFO:tasks.workunit.client.0.vm05.stdout:5/580: dwrite d1/d4/d27/d5b/f7d [0,4194304] 0 2026-03-09T15:01:36.463 INFO:tasks.workunit.client.0.vm05.stdout:4/505: dwrite d2/d49/f6a [0,4194304] 0 2026-03-09T15:01:36.463 INFO:tasks.workunit.client.0.vm05.stdout:1/518: dwrite d9/d2f/d83/d98/f50 [0,4194304] 0 2026-03-09T15:01:36.496 INFO:tasks.workunit.client.0.vm05.stdout:2/520: link da/d13/f96 da/d29/d6a/da0/fa1 0 2026-03-09T15:01:36.496 INFO:tasks.workunit.client.0.vm05.stdout:3/538: link d3/d29/d2d/c9c d3/df/d1e/da9/db3/cbb 0 2026-03-09T15:01:36.497 INFO:tasks.workunit.client.0.vm05.stdout:3/539: chown d3/df/d1e/f2b 34936629 1 2026-03-09T15:01:36.497 INFO:tasks.workunit.client.0.vm05.stdout:3/540: stat d3/df/d59/d79/fa8 0 
2026-03-09T15:01:36.498 INFO:tasks.workunit.client.0.vm05.stdout:3/541: fsync d3/df/d1e/d2f/d52/f57 0 2026-03-09T15:01:36.498 INFO:tasks.workunit.client.0.vm05.stdout:3/542: chown d3/df/d10/d34/f9d 79434 1 2026-03-09T15:01:36.499 INFO:tasks.workunit.client.0.vm05.stdout:0/472: mknod d9/d64/c95 0 2026-03-09T15:01:36.502 INFO:tasks.workunit.client.0.vm05.stdout:0/473: dwrite d9/de/d12/d15/d2e/f3a [0,4194304] 0 2026-03-09T15:01:36.509 INFO:tasks.workunit.client.0.vm05.stdout:0/474: fsync d9/de/f1e 0 2026-03-09T15:01:36.509 INFO:tasks.workunit.client.0.vm05.stdout:6/496: mknod da/d17/d7c/c91 0 2026-03-09T15:01:36.528 INFO:tasks.workunit.client.0.vm05.stdout:5/581: dread d1/d4/d34/d56/f6d [0,4194304] 0 2026-03-09T15:01:36.541 INFO:tasks.workunit.client.0.vm05.stdout:1/519: mkdir d9/d2f/d83/d98/d59/d49/d48/db2 0 2026-03-09T15:01:36.543 INFO:tasks.workunit.client.0.vm05.stdout:1/520: write d9/d17/fb1 [57508,55190] 0 2026-03-09T15:01:36.543 INFO:tasks.workunit.client.0.vm05.stdout:1/521: write d9/d2f/d83/d98/d59/d49/f69 [616991,51017] 0 2026-03-09T15:01:36.543 INFO:tasks.workunit.client.0.vm05.stdout:1/522: truncate d9/d2f/f91 480936 0 2026-03-09T15:01:36.543 INFO:tasks.workunit.client.0.vm05.stdout:1/523: chown d9/d17 460306720 1 2026-03-09T15:01:36.543 INFO:tasks.workunit.client.0.vm05.stdout:1/524: readlink d9/d2f/d83/l6b 0 2026-03-09T15:01:36.548 INFO:tasks.workunit.client.0.vm05.stdout:4/506: dread d2/d4/f15 [0,4194304] 0 2026-03-09T15:01:36.549 INFO:tasks.workunit.client.0.vm05.stdout:4/507: write d2/d1d/f5c [319501,70167] 0 2026-03-09T15:01:36.549 INFO:tasks.workunit.client.0.vm05.stdout:4/508: chown d2/d4/d7/d48/d6b/l7c 137349679 1 2026-03-09T15:01:36.553 INFO:tasks.workunit.client.0.vm05.stdout:4/509: dread d2/d43/f51 [0,4194304] 0 2026-03-09T15:01:36.558 INFO:tasks.workunit.client.0.vm05.stdout:8/567: fdatasync d0/d1/f49 0 2026-03-09T15:01:36.560 INFO:tasks.workunit.client.0.vm05.stdout:2/521: creat da/fa2 x:0 0 0 2026-03-09T15:01:36.571 
INFO:tasks.workunit.client.0.vm05.stdout:6/497: fdatasync da/d17/f2c 0 2026-03-09T15:01:36.576 INFO:tasks.workunit.client.0.vm05.stdout:5/582: creat d1/db5/fc3 x:0 0 0 2026-03-09T15:01:36.588 INFO:tasks.workunit.client.0.vm05.stdout:7/531: rename d1/d9/d23/d31/d32/d78/d7e/d81/l85 to d1/d49/d4a/d77/la5 0 2026-03-09T15:01:36.594 INFO:tasks.workunit.client.0.vm05.stdout:8/568: read - d0/dc/f4a zero size 2026-03-09T15:01:36.597 INFO:tasks.workunit.client.0.vm05.stdout:8/569: dwrite d0/f4 [0,4194304] 0 2026-03-09T15:01:36.609 INFO:tasks.workunit.client.0.vm05.stdout:8/570: fdatasync d0/d1/d12/d1b/d95/d42/d60/fc0 0 2026-03-09T15:01:36.609 INFO:tasks.workunit.client.0.vm05.stdout:2/522: creat da/d16/d46/fa3 x:0 0 0 2026-03-09T15:01:36.609 INFO:tasks.workunit.client.0.vm05.stdout:2/523: dwrite da/d16/d46/f85 [0,4194304] 0 2026-03-09T15:01:36.609 INFO:tasks.workunit.client.0.vm05.stdout:0/475: link d9/de/d25/f47 d9/d64/f96 0 2026-03-09T15:01:36.610 INFO:tasks.workunit.client.0.vm05.stdout:0/476: stat d9/de/f3d 0 2026-03-09T15:01:36.617 INFO:tasks.workunit.client.0.vm05.stdout:9/578: creat d2/d10/d22/d47/fd3 x:0 0 0 2026-03-09T15:01:36.624 INFO:tasks.workunit.client.0.vm05.stdout:6/498: creat da/d43/d7b/f92 x:0 0 0 2026-03-09T15:01:36.648 INFO:tasks.workunit.client.0.vm05.stdout:4/510: sync 2026-03-09T15:01:36.648 INFO:tasks.workunit.client.0.vm05.stdout:8/571: sync 2026-03-09T15:01:36.650 INFO:tasks.workunit.client.0.vm05.stdout:3/543: rename d3/df/d10/d19/l5e to d3/df/d10/d19/db2/lbc 0 2026-03-09T15:01:36.654 INFO:tasks.workunit.client.0.vm05.stdout:2/524: creat da/d29/d6a/da0/d7c/fa4 x:0 0 0 2026-03-09T15:01:36.654 INFO:tasks.workunit.client.0.vm05.stdout:2/525: read - da/dd/d4a/f9a zero size 2026-03-09T15:01:36.660 INFO:tasks.workunit.client.0.vm05.stdout:4/511: dwrite d2/d49/f6a [4194304,4194304] 0 2026-03-09T15:01:36.668 INFO:tasks.workunit.client.0.vm05.stdout:0/477: rename d9/de/d25/f52 to d9/de/d25/f97 0 2026-03-09T15:01:36.669 
INFO:tasks.workunit.client.0.vm05.stdout:0/478: chown d9/de/d25/c44 11218286 1 2026-03-09T15:01:36.675 INFO:tasks.workunit.client.0.vm05.stdout:9/579: mknod d2/d4e/d56/d37/cd4 0 2026-03-09T15:01:36.686 INFO:tasks.workunit.client.0.vm05.stdout:5/583: link d1/d4/d34/d56/d68/f74 d1/d4/d27/d75/d9c/fc4 0 2026-03-09T15:01:36.688 INFO:tasks.workunit.client.0.vm05.stdout:7/532: write d1/f16 [360462,31224] 0 2026-03-09T15:01:36.698 INFO:tasks.workunit.client.0.vm05.stdout:2/526: dread da/d29/d6a/da0/f41 [0,4194304] 0 2026-03-09T15:01:36.699 INFO:tasks.workunit.client.0.vm05.stdout:4/512: rmdir d2/d4/d7 39 2026-03-09T15:01:36.701 INFO:tasks.workunit.client.0.vm05.stdout:0/479: fsync d9/de/d12/d15/d2e/d32/d53/f68 0 2026-03-09T15:01:36.709 INFO:tasks.workunit.client.0.vm05.stdout:9/580: mknod d2/d8b/cd5 0 2026-03-09T15:01:36.710 INFO:tasks.workunit.client.0.vm05.stdout:6/499: symlink da/d17/l93 0 2026-03-09T15:01:36.710 INFO:tasks.workunit.client.0.vm05.stdout:6/500: chown da/d19/c37 12 1 2026-03-09T15:01:36.711 INFO:tasks.workunit.client.0.vm05.stdout:5/584: creat d1/d4/d34/d35/fc5 x:0 0 0 2026-03-09T15:01:36.712 INFO:tasks.workunit.client.0.vm05.stdout:7/533: symlink d1/d9/d23/d31/d32/d78/la6 0 2026-03-09T15:01:36.713 INFO:tasks.workunit.client.0.vm05.stdout:1/525: rename d9/d2f/d37/fac to d9/d2f/d83/d98/d59/d49/d78/d7e/fb3 0 2026-03-09T15:01:36.714 INFO:tasks.workunit.client.0.vm05.stdout:1/526: chown d9/d2f/d83/d98/d59/d49/d48/f8f 88522249 1 2026-03-09T15:01:36.715 INFO:tasks.workunit.client.0.vm05.stdout:2/527: creat da/d16/d46/fa5 x:0 0 0 2026-03-09T15:01:36.715 INFO:tasks.workunit.client.0.vm05.stdout:2/528: dread - da/d29/d6a/da0/d7c/f80 zero size 2026-03-09T15:01:36.716 INFO:tasks.workunit.client.0.vm05.stdout:2/529: chown da/fa2 126 1 2026-03-09T15:01:36.717 INFO:tasks.workunit.client.0.vm05.stdout:4/513: chown d2/d1d/d88/l9c 2026194 1 2026-03-09T15:01:36.719 INFO:tasks.workunit.client.0.vm05.stdout:0/480: creat d9/de/d25/d38/d41/f98 x:0 0 0 2026-03-09T15:01:36.721 
INFO:tasks.workunit.client.0.vm05.stdout:9/581: creat d2/d4e/d56/d37/d9c/d8e/dcb/fd6 x:0 0 0 2026-03-09T15:01:36.725 INFO:tasks.workunit.client.0.vm05.stdout:1/527: dread d9/d2f/d37/d5a/f5b [0,4194304] 0 2026-03-09T15:01:36.727 INFO:tasks.workunit.client.0.vm05.stdout:3/544: rename d3/df/d1e/da9 to d3/df/d10/d34/d8c/dbd 0 2026-03-09T15:01:36.729 INFO:tasks.workunit.client.0.vm05.stdout:2/530: rmdir da/d13/d2f/d35 39 2026-03-09T15:01:36.734 INFO:tasks.workunit.client.0.vm05.stdout:9/582: creat d2/d10/d22/d52/fd7 x:0 0 0 2026-03-09T15:01:36.735 INFO:tasks.workunit.client.0.vm05.stdout:5/585: symlink d1/d4/d27/lc6 0 2026-03-09T15:01:36.736 INFO:tasks.workunit.client.0.vm05.stdout:1/528: symlink d9/d2f/d37/lb4 0 2026-03-09T15:01:36.738 INFO:tasks.workunit.client.0.vm05.stdout:8/572: getdents d0/d1/d12/d3c 0 2026-03-09T15:01:36.739 INFO:tasks.workunit.client.0.vm05.stdout:8/573: read d0/f10 [3105115,53784] 0 2026-03-09T15:01:36.740 INFO:tasks.workunit.client.0.vm05.stdout:8/574: write d0/d1/d12/d3c/f9a [1366892,86588] 0 2026-03-09T15:01:36.744 INFO:tasks.workunit.client.0.vm05.stdout:7/534: rename d1/d12/f8c to d1/d22/d3c/fa7 0 2026-03-09T15:01:36.745 INFO:tasks.workunit.client.0.vm05.stdout:7/535: write d1/d9/d23/d31/f55 [5804502,131032] 0 2026-03-09T15:01:36.749 INFO:tasks.workunit.client.0.vm05.stdout:2/531: mkdir da/d29/d64/da6 0 2026-03-09T15:01:36.749 INFO:tasks.workunit.client.0.vm05.stdout:9/583: sync 2026-03-09T15:01:36.751 INFO:tasks.workunit.client.0.vm05.stdout:0/481: getdents d9/de/d12/d8a 0 2026-03-09T15:01:36.752 INFO:tasks.workunit.client.0.vm05.stdout:6/501: creat da/f94 x:0 0 0 2026-03-09T15:01:36.765 INFO:tasks.workunit.client.0.vm05.stdout:3/545: mkdir d3/df/dbe 0 2026-03-09T15:01:36.766 INFO:tasks.workunit.client.0.vm05.stdout:3/546: chown d3/df/c5d 6 1 2026-03-09T15:01:36.766 INFO:tasks.workunit.client.0.vm05.stdout:3/547: write d3/d29/d7f/fa1 [4322898,7465] 0 2026-03-09T15:01:36.768 INFO:tasks.workunit.client.0.vm05.stdout:3/548: truncate 
d3/d29/d2d/d77/d4d/fa7 7634 0 2026-03-09T15:01:36.768 INFO:tasks.workunit.client.0.vm05.stdout:3/549: fdatasync d3/df/d10/d34/d8c/f85 0 2026-03-09T15:01:36.772 INFO:tasks.workunit.client.0.vm05.stdout:4/514: rename d2/d4/d7/d48/c86 to d2/d4/d7/dc/d2b/d97/ca3 0 2026-03-09T15:01:36.781 INFO:tasks.workunit.client.0.vm05.stdout:9/584: truncate d2/f12 7682564 0 2026-03-09T15:01:36.782 INFO:tasks.workunit.client.0.vm05.stdout:0/482: mknod d9/d64/c99 0 2026-03-09T15:01:36.788 INFO:tasks.workunit.client.0.vm05.stdout:5/586: symlink d1/d4/d27/lc7 0 2026-03-09T15:01:36.788 INFO:tasks.workunit.client.0.vm05.stdout:5/587: dread - d1/da/fb9 zero size 2026-03-09T15:01:36.794 INFO:tasks.workunit.client.0.vm05.stdout:6/502: dread da/d17/d3b/f4f [0,4194304] 0 2026-03-09T15:01:36.798 INFO:tasks.workunit.client.0.vm05.stdout:7/536: dwrite d1/d22/d3c/f70 [0,4194304] 0 2026-03-09T15:01:36.808 INFO:tasks.workunit.client.0.vm05.stdout:2/532: dwrite da/dd/ff [8388608,4194304] 0 2026-03-09T15:01:36.808 INFO:tasks.workunit.client.0.vm05.stdout:3/550: creat d3/df/d1e/d2f/fbf x:0 0 0 2026-03-09T15:01:36.810 INFO:tasks.workunit.client.0.vm05.stdout:2/533: dread - da/d13/f96 zero size 2026-03-09T15:01:36.817 INFO:tasks.workunit.client.0.vm05.stdout:9/585: creat d2/d10/d22/d2c/d69/fd8 x:0 0 0 2026-03-09T15:01:36.818 INFO:tasks.workunit.client.0.vm05.stdout:5/588: mkdir d1/d4/d34/d35/d4e/dc8 0 2026-03-09T15:01:36.820 INFO:tasks.workunit.client.0.vm05.stdout:6/503: dread da/d17/f58 [0,4194304] 0 2026-03-09T15:01:36.827 INFO:tasks.workunit.client.0.vm05.stdout:1/529: creat d9/d2f/d83/d98/d59/fb5 x:0 0 0 2026-03-09T15:01:36.835 INFO:tasks.workunit.client.0.vm05.stdout:2/534: creat da/d29/d6a/da0/fa7 x:0 0 0 2026-03-09T15:01:36.840 INFO:tasks.workunit.client.0.vm05.stdout:5/589: truncate d1/d4/d34/d56/f6d 1403511 0 2026-03-09T15:01:36.840 INFO:tasks.workunit.client.0.vm05.stdout:5/590: fdatasync d1/d4/d34/d6c/faf 0 2026-03-09T15:01:36.841 INFO:tasks.workunit.client.0.vm05.stdout:5/591: fdatasync 
d1/d4/d34/d35/d4e/d6f/fa5 0 2026-03-09T15:01:36.848 INFO:tasks.workunit.client.0.vm05.stdout:8/575: getdents d0/d2a 0 2026-03-09T15:01:36.848 INFO:tasks.workunit.client.0.vm05.stdout:0/483: creat d9/de/d12/d15/d2e/f9a x:0 0 0 2026-03-09T15:01:36.848 INFO:tasks.workunit.client.0.vm05.stdout:8/576: stat d0/d7/l91 0 2026-03-09T15:01:36.853 INFO:tasks.workunit.client.0.vm05.stdout:0/484: dread d9/de/f1e [0,4194304] 0 2026-03-09T15:01:36.855 INFO:tasks.workunit.client.0.vm05.stdout:5/592: symlink d1/d4/d34/d35/d4e/d6f/lc9 0 2026-03-09T15:01:36.856 INFO:tasks.workunit.client.0.vm05.stdout:5/593: dread - d1/d4/d34/d6c/faf zero size 2026-03-09T15:01:36.860 INFO:tasks.workunit.client.0.vm05.stdout:5/594: dwrite d1/d4/d34/d35/d4e/f8d [4194304,4194304] 0 2026-03-09T15:01:36.874 INFO:tasks.workunit.client.0.vm05.stdout:2/535: creat da/d29/d6a/d7f/fa8 x:0 0 0 2026-03-09T15:01:36.874 INFO:tasks.workunit.client.0.vm05.stdout:2/536: write da/dd/f83 [462547,19546] 0 2026-03-09T15:01:36.875 INFO:tasks.workunit.client.0.vm05.stdout:2/537: fsync da/d29/d6a/da0/f84 0 2026-03-09T15:01:36.882 INFO:tasks.workunit.client.0.vm05.stdout:8/577: mknod d0/d1/d12/d1b/d95/d42/d60/cc3 0 2026-03-09T15:01:36.885 INFO:tasks.workunit.client.0.vm05.stdout:4/515: dwrite d2/d4/f4e [0,4194304] 0 2026-03-09T15:01:36.885 INFO:tasks.workunit.client.0.vm05.stdout:4/516: chown d2/d49/c76 21436 1 2026-03-09T15:01:36.886 INFO:tasks.workunit.client.0.vm05.stdout:4/517: write d2/d4/d7/d21/f34 [2852396,121986] 0 2026-03-09T15:01:36.886 INFO:tasks.workunit.client.0.vm05.stdout:4/518: chown d2/d4/d8/d4a/d6e/f8d 13786146 1 2026-03-09T15:01:36.896 INFO:tasks.workunit.client.0.vm05.stdout:2/538: rmdir da/d16/d46 39 2026-03-09T15:01:36.901 INFO:tasks.workunit.client.0.vm05.stdout:2/539: dwrite da/d29/d6a/f81 [0,4194304] 0 2026-03-09T15:01:36.906 INFO:tasks.workunit.client.0.vm05.stdout:8/578: unlink d0/d1/d97/fa2 0 2026-03-09T15:01:36.912 INFO:tasks.workunit.client.0.vm05.stdout:7/537: dwrite d1/d9/d23/d31/d32/f5b 
[0,4194304] 0 2026-03-09T15:01:36.925 INFO:tasks.workunit.client.0.vm05.stdout:4/519: write d2/f14 [775434,8679] 0 2026-03-09T15:01:36.926 INFO:tasks.workunit.client.0.vm05.stdout:4/520: dread - d2/d4/d7/f90 zero size 2026-03-09T15:01:36.929 INFO:tasks.workunit.client.0.vm05.stdout:0/485: unlink d9/lb 0 2026-03-09T15:01:36.929 INFO:tasks.workunit.client.0.vm05.stdout:4/521: dwrite d2/d49/f4d [0,4194304] 0 2026-03-09T15:01:36.931 INFO:tasks.workunit.client.0.vm05.stdout:5/595: mknod d1/d4/cca 0 2026-03-09T15:01:36.936 INFO:tasks.workunit.client.0.vm05.stdout:3/551: dwrite d3/d29/d2d/d77/d4d/fa7 [0,4194304] 0 2026-03-09T15:01:36.950 INFO:tasks.workunit.client.0.vm05.stdout:6/504: dwrite da/f5d [0,4194304] 0 2026-03-09T15:01:36.957 INFO:tasks.workunit.client.0.vm05.stdout:7/538: creat d1/d12/fa8 x:0 0 0 2026-03-09T15:01:36.963 INFO:tasks.workunit.client.0.vm05.stdout:5/596: rmdir d1/da 39 2026-03-09T15:01:36.965 INFO:tasks.workunit.client.0.vm05.stdout:2/540: symlink da/d16/la9 0 2026-03-09T15:01:36.968 INFO:tasks.workunit.client.0.vm05.stdout:8/579: mknod d0/d1/d12/d3c/d8b/cc4 0 2026-03-09T15:01:36.970 INFO:tasks.workunit.client.0.vm05.stdout:0/486: creat d9/de/d12/d15/d2e/d32/d53/d6e/d85/f9b x:0 0 0 2026-03-09T15:01:36.974 INFO:tasks.workunit.client.0.vm05.stdout:5/597: symlink d1/db5/lcb 0 2026-03-09T15:01:36.975 INFO:tasks.workunit.client.0.vm05.stdout:3/552: mknod d3/df/d10/d19/cc0 0 2026-03-09T15:01:36.975 INFO:tasks.workunit.client.0.vm05.stdout:4/522: fsync d2/d4/d7/f2d 0 2026-03-09T15:01:36.975 INFO:tasks.workunit.client.0.vm05.stdout:6/505: mkdir da/d17/d95 0 2026-03-09T15:01:36.976 INFO:tasks.workunit.client.0.vm05.stdout:3/553: write d3/df/d10/d34/f5f [4510429,845] 0 2026-03-09T15:01:36.981 INFO:tasks.workunit.client.0.vm05.stdout:1/530: truncate d9/f12 4750981 0 2026-03-09T15:01:36.982 INFO:tasks.workunit.client.0.vm05.stdout:1/531: write d9/d2f/d83/d98/f50 [4671044,60809] 0 2026-03-09T15:01:36.983 INFO:tasks.workunit.client.0.vm05.stdout:1/532: fsync 
d9/d2f/d83/d98/d59/fb5 0 2026-03-09T15:01:36.987 INFO:tasks.workunit.client.0.vm05.stdout:1/533: dwrite f5 [4194304,4194304] 0 2026-03-09T15:01:36.987 INFO:tasks.workunit.client.0.vm05.stdout:1/534: fdatasync d9/d2f/d55/fb0 0 2026-03-09T15:01:36.989 INFO:tasks.workunit.client.0.vm05.stdout:8/580: dread d0/f3b [0,4194304] 0 2026-03-09T15:01:36.993 INFO:tasks.workunit.client.0.vm05.stdout:0/487: symlink d9/de/d25/l9c 0 2026-03-09T15:01:36.995 INFO:tasks.workunit.client.0.vm05.stdout:9/586: truncate d2/d10/f65 2224931 0 2026-03-09T15:01:37.007 INFO:tasks.workunit.client.0.vm05.stdout:3/554: symlink d3/df/d1e/d2c/d74/d9b/lc1 0 2026-03-09T15:01:37.009 INFO:tasks.workunit.client.0.vm05.stdout:3/555: chown d3/df/d10 31 1 2026-03-09T15:01:37.010 INFO:tasks.workunit.client.0.vm05.stdout:3/556: dread d3/df/f1b [0,4194304] 0 2026-03-09T15:01:37.014 INFO:tasks.workunit.client.0.vm05.stdout:1/535: write d9/d2f/f58 [4589601,28462] 0 2026-03-09T15:01:37.017 INFO:tasks.workunit.client.0.vm05.stdout:8/581: mknod d0/d1/d12/d1b/d6e/d93/cc5 0 2026-03-09T15:01:37.017 INFO:tasks.workunit.client.0.vm05.stdout:9/587: mkdir d2/d4e/d56/d53/d64/dd9 0 2026-03-09T15:01:37.017 INFO:tasks.workunit.client.0.vm05.stdout:7/539: link d1/d49/d4a/d77/la5 d1/d9/d23/d31/d8f/la9 0 2026-03-09T15:01:37.018 INFO:tasks.workunit.client.0.vm05.stdout:9/588: chown d2/d92/fa3 7 1 2026-03-09T15:01:37.018 INFO:tasks.workunit.client.0.vm05.stdout:8/582: chown d0/d1/d12/d1b/d6e/d93/d9f 1701881194 1 2026-03-09T15:01:37.019 INFO:tasks.workunit.client.0.vm05.stdout:9/589: chown d2/d10/d22/d47/d95/cc9 5162 1 2026-03-09T15:01:37.023 INFO:tasks.workunit.client.0.vm05.stdout:9/590: dread d2/d4e/d56/d37/d9c/f6e [0,4194304] 0 2026-03-09T15:01:37.024 INFO:tasks.workunit.client.0.vm05.stdout:9/591: dread - d2/d92/fa3 zero size 2026-03-09T15:01:37.024 INFO:tasks.workunit.client.0.vm05.stdout:9/592: stat d2/d10/d22/d2c/d69/c38 0 2026-03-09T15:01:37.027 INFO:tasks.workunit.client.0.vm05.stdout:3/557: fdatasync d3/df/d10/d34/f4c 0 
2026-03-09T15:01:37.031 INFO:tasks.workunit.client.0.vm05.stdout:3/558: write d3/df/d59/d79/fa8 [646972,41367] 0 2026-03-09T15:01:37.031 INFO:tasks.workunit.client.0.vm05.stdout:2/541: getdents da/d29/d45 0 2026-03-09T15:01:37.031 INFO:tasks.workunit.client.0.vm05.stdout:2/542: chown da/dd/l77 147 1 2026-03-09T15:01:37.033 INFO:tasks.workunit.client.0.vm05.stdout:8/583: chown d0/d24/f2c 0 1 2026-03-09T15:01:37.039 INFO:tasks.workunit.client.0.vm05.stdout:0/488: dread d9/f2b [0,4194304] 0 2026-03-09T15:01:37.039 INFO:tasks.workunit.client.0.vm05.stdout:0/489: fsync d9/de/d12/f7a 0 2026-03-09T15:01:37.041 INFO:tasks.workunit.client.0.vm05.stdout:3/559: mkdir d3/df/d10/d34/d8c/dbd/dc2 0 2026-03-09T15:01:37.042 INFO:tasks.workunit.client.0.vm05.stdout:4/523: link d2/d4/d7/d21/d3d/c95 d2/d4/d7/d21/d3d/ca4 0 2026-03-09T15:01:37.043 INFO:tasks.workunit.client.0.vm05.stdout:3/560: chown d3/df/d10/d34/f48 5795 1 2026-03-09T15:01:37.046 INFO:tasks.workunit.client.0.vm05.stdout:1/536: dread d9/d2f/d83/d98/d59/d49/d48/f25 [0,4194304] 0 2026-03-09T15:01:37.058 INFO:tasks.workunit.client.0.vm05.stdout:7/540: getdents d1/d22/da4 0 2026-03-09T15:01:37.060 INFO:tasks.workunit.client.0.vm05.stdout:8/584: fsync d0/d24/f8a 0 2026-03-09T15:01:37.063 INFO:tasks.workunit.client.0.vm05.stdout:0/490: truncate d9/de/d25/f2d 2436188 0 2026-03-09T15:01:37.068 INFO:tasks.workunit.client.0.vm05.stdout:6/506: dread da/d19/f52 [0,4194304] 0 2026-03-09T15:01:37.069 INFO:tasks.workunit.client.0.vm05.stdout:6/507: write da/d19/f45 [1384882,70290] 0 2026-03-09T15:01:37.071 INFO:tasks.workunit.client.0.vm05.stdout:9/593: mknod d2/d10/d8c/cda 0 2026-03-09T15:01:37.072 INFO:tasks.workunit.client.0.vm05.stdout:9/594: chown d2/d10/d22/dc2/l7e 52060310 1 2026-03-09T15:01:37.088 INFO:tasks.workunit.client.0.vm05.stdout:2/543: rename da/d13/c40 to da/d13/d2f/d35/caa 0 2026-03-09T15:01:37.092 INFO:tasks.workunit.client.0.vm05.stdout:5/598: dwrite d1/d4/d34/d35/d3d/d96/fbe [0,4194304] 0 2026-03-09T15:01:37.095 
INFO:tasks.workunit.client.0.vm05.stdout:4/524: mkdir d2/d1d/da5 0 2026-03-09T15:01:37.104 INFO:tasks.workunit.client.0.vm05.stdout:3/561: mkdir d3/d29/d7f/dc3 0 2026-03-09T15:01:37.114 INFO:tasks.workunit.client.0.vm05.stdout:9/595: dread d2/fc [0,4194304] 0 2026-03-09T15:01:37.115 INFO:tasks.workunit.client.0.vm05.stdout:9/596: fsync d2/d10/d22/d2c/d69/f67 0 2026-03-09T15:01:37.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:37 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:37.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:37 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:37.140 INFO:tasks.workunit.client.0.vm05.stdout:0/491: rename d9/de/d12/d15/d2e/l3f to d9/de/d12/d15/d2e/d32/d53/d61/l9d 0 2026-03-09T15:01:37.141 INFO:tasks.workunit.client.0.vm05.stdout:0/492: write d9/de/d12/d15/d2e/d32/d53/d61/f62 [4576531,128716] 0 2026-03-09T15:01:37.161 INFO:tasks.workunit.client.0.vm05.stdout:3/562: creat d3/df/d10/d19/d44/d50/fc4 x:0 0 0 2026-03-09T15:01:37.161 INFO:tasks.workunit.client.0.vm05.stdout:9/597: mkdir d2/ddb 0 2026-03-09T15:01:37.169 INFO:tasks.workunit.client.0.vm05.stdout:7/541: mknod d1/d49/d4a/d94/caa 0 2026-03-09T15:01:37.174 INFO:tasks.workunit.client.0.vm05.stdout:2/544: rename da/d13 to da/d29/d6a/da0/d91/dab 0 2026-03-09T15:01:37.179 INFO:tasks.workunit.client.0.vm05.stdout:8/585: dwrite d0/d1/d12/d3c/f51 [0,4194304] 0 2026-03-09T15:01:37.180 INFO:tasks.workunit.client.0.vm05.stdout:8/586: readlink d0/d1/d12/d1b/l52 0 2026-03-09T15:01:37.181 INFO:tasks.workunit.client.0.vm05.stdout:8/587: stat d0/d1/d12/d1b/d6e 0 2026-03-09T15:01:37.190 INFO:tasks.workunit.client.0.vm05.stdout:1/537: creat d9/fb6 x:0 0 0 2026-03-09T15:01:37.191 INFO:tasks.workunit.client.0.vm05.stdout:1/538: chown d9/d2f/d83/d98/d59/d49/f82 348575 1 2026-03-09T15:01:37.193 INFO:tasks.workunit.client.0.vm05.stdout:1/539: dread d9/d17/fb1 
[0,4194304] 0 2026-03-09T15:01:37.193 INFO:tasks.workunit.client.0.vm05.stdout:1/540: fdatasync d9/fb6 0 2026-03-09T15:01:37.196 INFO:tasks.workunit.client.0.vm05.stdout:9/598: rmdir d2/d10/d22 39 2026-03-09T15:01:37.200 INFO:tasks.workunit.client.0.vm05.stdout:7/542: mkdir d1/d9/d23/d31/d32/d78/d7e/d81/dab 0 2026-03-09T15:01:37.201 INFO:tasks.workunit.client.0.vm05.stdout:7/543: dwrite d1/f62 [0,4194304] 0 2026-03-09T15:01:37.201 INFO:tasks.workunit.client.0.vm05.stdout:3/563: sync 2026-03-09T15:01:37.205 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:37 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:37.208 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:37 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:37.212 INFO:tasks.workunit.client.0.vm05.stdout:5/599: rename d1/d5d/d7f to d1/d4/d19/d93/dcc 0 2026-03-09T15:01:37.212 INFO:tasks.workunit.client.0.vm05.stdout:2/545: symlink da/d29/d3f/lac 0 2026-03-09T15:01:37.229 INFO:tasks.workunit.client.0.vm05.stdout:4/525: link d2/d4/d1e/l38 d2/d4/d8/d4a/d94/la6 0 2026-03-09T15:01:37.241 INFO:tasks.workunit.client.0.vm05.stdout:6/508: link da/d43/f56 da/d43/f96 0 2026-03-09T15:01:37.242 INFO:tasks.workunit.client.0.vm05.stdout:6/509: chown da/d17/d3b/f4a 68262121 1 2026-03-09T15:01:37.243 INFO:tasks.workunit.client.0.vm05.stdout:6/510: write da/d17/f90 [325646,121908] 0 2026-03-09T15:01:37.247 INFO:tasks.workunit.client.0.vm05.stdout:0/493: creat d9/de/d12/d15/f9e x:0 0 0 2026-03-09T15:01:37.249 INFO:tasks.workunit.client.0.vm05.stdout:5/600: creat d1/d4/d19/d93/dcc/d91/fcd x:0 0 0 2026-03-09T15:01:37.250 INFO:tasks.workunit.client.0.vm05.stdout:4/526: creat d2/d4/d50/fa7 x:0 0 0 2026-03-09T15:01:37.252 INFO:tasks.workunit.client.0.vm05.stdout:1/541: getdents d9/d2f/d83/d98/d59/d49/d48/db2 0 2026-03-09T15:01:37.256 INFO:tasks.workunit.client.0.vm05.stdout:4/527: read d2/d4/d7/f2d 
[2029850,97917] 0 2026-03-09T15:01:37.257 INFO:tasks.workunit.client.0.vm05.stdout:4/528: write d2/d4/d8/f13 [1519271,106507] 0 2026-03-09T15:01:37.258 INFO:tasks.workunit.client.0.vm05.stdout:3/564: mkdir d3/d29/d2d/d7b/dc5 0 2026-03-09T15:01:37.267 INFO:tasks.workunit.client.0.vm05.stdout:0/494: mkdir d9/de/d12/d15/d2e/d32/d9f 0 2026-03-09T15:01:37.278 INFO:tasks.workunit.client.0.vm05.stdout:5/601: symlink d1/d4/d34/d35/d3d/d38/lce 0 2026-03-09T15:01:37.282 INFO:tasks.workunit.client.0.vm05.stdout:2/546: dread da/f79 [0,4194304] 0 2026-03-09T15:01:37.284 INFO:tasks.workunit.client.0.vm05.stdout:1/542: stat d9/d2f/d83/d98/l3e 0 2026-03-09T15:01:37.286 INFO:tasks.workunit.client.0.vm05.stdout:6/511: creat da/d43/d7b/f97 x:0 0 0 2026-03-09T15:01:37.295 INFO:tasks.workunit.client.0.vm05.stdout:7/544: rename d1/d49/d4a/d77/f80 to d1/d9/d23/fac 0 2026-03-09T15:01:37.307 INFO:tasks.workunit.client.0.vm05.stdout:8/588: dwrite d0/f3b [0,4194304] 0 2026-03-09T15:01:37.313 INFO:tasks.workunit.client.0.vm05.stdout:1/543: mknod d9/d2f/d83/d98/d59/d49/d92/cb7 0 2026-03-09T15:01:37.313 INFO:tasks.workunit.client.0.vm05.stdout:4/529: mkdir d2/d4/d7/dc/da8 0 2026-03-09T15:01:37.313 INFO:tasks.workunit.client.0.vm05.stdout:3/565: link d3/d29/d2d/d77/f35 d3/df/d1e/daf/fc6 0 2026-03-09T15:01:37.315 INFO:tasks.workunit.client.0.vm05.stdout:3/566: write d3/df/d59/d79/fa8 [845149,21676] 0 2026-03-09T15:01:37.321 INFO:tasks.workunit.client.0.vm05.stdout:4/530: dwrite d2/f98 [0,4194304] 0 2026-03-09T15:01:37.324 INFO:tasks.workunit.client.0.vm05.stdout:4/531: stat d2/d4/d7/d48/d6b/l7c 0 2026-03-09T15:01:37.326 INFO:tasks.workunit.client.0.vm05.stdout:0/495: mkdir d9/de/d12/d15/d2e/d32/d9f/da0 0 2026-03-09T15:01:37.326 INFO:tasks.workunit.client.0.vm05.stdout:0/496: dread - d9/d59/f79 zero size 2026-03-09T15:01:37.330 INFO:tasks.workunit.client.0.vm05.stdout:5/602: symlink d1/d4/d34/d35/d4e/dc8/lcf 0 2026-03-09T15:01:37.337 INFO:tasks.workunit.client.0.vm05.stdout:5/603: read 
d1/d4/d27/f4f [2798907,50117] 0 2026-03-09T15:01:37.366 INFO:tasks.workunit.client.0.vm05.stdout:4/532: unlink d2/d49/f6a 0 2026-03-09T15:01:37.375 INFO:tasks.workunit.client.0.vm05.stdout:4/533: dwrite d2/d4/d8/d4a/d6e/f8d [0,4194304] 0 2026-03-09T15:01:37.389 INFO:tasks.workunit.client.0.vm05.stdout:1/544: mknod d9/d2f/d83/d98/d59/d49/cb8 0 2026-03-09T15:01:37.405 INFO:tasks.workunit.client.0.vm05.stdout:9/599: rename d2/d10/d8c/faa to d2/d10/d22/d47/fdc 0 2026-03-09T15:01:37.408 INFO:tasks.workunit.client.0.vm05.stdout:4/534: fdatasync d2/d4/d7/dc/f18 0 2026-03-09T15:01:37.411 INFO:tasks.workunit.client.0.vm05.stdout:1/545: write d9/d2f/d83/d98/d59/d49/d48/f63 [2040167,112111] 0 2026-03-09T15:01:37.412 INFO:tasks.workunit.client.0.vm05.stdout:2/547: rename da/d29/d6a/da0/d91/dab/d2f/d35/l74 to da/dd/lad 0 2026-03-09T15:01:37.413 INFO:tasks.workunit.client.0.vm05.stdout:2/548: write da/f9d [49089,26499] 0 2026-03-09T15:01:37.415 INFO:tasks.workunit.client.0.vm05.stdout:9/600: mkdir d2/d4e/d56/d37/d9c/ddd 0 2026-03-09T15:01:37.418 INFO:tasks.workunit.client.0.vm05.stdout:9/601: dwrite d2/d10/d22/d47/fd3 [0,4194304] 0 2026-03-09T15:01:37.422 INFO:tasks.workunit.client.0.vm05.stdout:4/535: creat d2/d4/d8/d4a/fa9 x:0 0 0 2026-03-09T15:01:37.434 INFO:tasks.workunit.client.0.vm05.stdout:3/567: rename d3/df/d10/d19/d44/f7e to d3/df/d59/d79/fc7 0 2026-03-09T15:01:37.442 INFO:tasks.workunit.client.0.vm05.stdout:9/602: write d2/d10/d22/d47/fdc [490091,119221] 0 2026-03-09T15:01:37.442 INFO:tasks.workunit.client.0.vm05.stdout:9/603: dread - d2/d10/d22/fb6 zero size 2026-03-09T15:01:37.448 INFO:tasks.workunit.client.0.vm05.stdout:4/536: creat d2/d1d/d88/faa x:0 0 0 2026-03-09T15:01:37.449 INFO:tasks.workunit.client.0.vm05.stdout:4/537: write d2/d43/fa0 [750180,34023] 0 2026-03-09T15:01:37.455 INFO:tasks.workunit.client.0.vm05.stdout:6/512: dwrite da/d43/f46 [0,4194304] 0 2026-03-09T15:01:37.462 INFO:tasks.workunit.client.0.vm05.stdout:7/545: truncate d1/d9/d23/fac 252091 0 
2026-03-09T15:01:37.462 INFO:tasks.workunit.client.0.vm05.stdout:1/546: mkdir d9/db9 0 2026-03-09T15:01:37.463 INFO:tasks.workunit.client.0.vm05.stdout:1/547: read d9/d17/f79 [3727486,88847] 0 2026-03-09T15:01:37.468 INFO:tasks.workunit.client.0.vm05.stdout:5/604: dwrite d1/d4/d19/d93/f99 [0,4194304] 0 2026-03-09T15:01:37.468 INFO:tasks.workunit.client.0.vm05.stdout:3/568: read d3/d29/d2d/d77/f35 [2934852,91395] 0 2026-03-09T15:01:37.475 INFO:tasks.workunit.client.0.vm05.stdout:8/589: truncate d0/d1/d55/f6a 481789 0 2026-03-09T15:01:37.475 INFO:tasks.workunit.client.0.vm05.stdout:0/497: dwrite d9/f42 [0,4194304] 0 2026-03-09T15:01:37.476 INFO:tasks.workunit.client.0.vm05.stdout:0/498: write d9/de/d12/f4c [1695125,76763] 0 2026-03-09T15:01:37.481 INFO:tasks.workunit.client.0.vm05.stdout:6/513: mknod da/d19/c98 0 2026-03-09T15:01:37.499 INFO:tasks.workunit.client.0.vm05.stdout:2/549: creat da/d29/d6a/da0/d91/dab/d2f/fae x:0 0 0 2026-03-09T15:01:37.500 INFO:tasks.workunit.client.0.vm05.stdout:1/548: rmdir d9/d2f/d83/d98/d59/d49/d77 39 2026-03-09T15:01:37.501 INFO:tasks.workunit.client.0.vm05.stdout:1/549: chown d9/d17 481 1 2026-03-09T15:01:37.504 INFO:tasks.workunit.client.0.vm05.stdout:6/514: sync 2026-03-09T15:01:37.506 INFO:tasks.workunit.client.0.vm05.stdout:5/605: mkdir d1/d4/d34/d35/dd0 0 2026-03-09T15:01:37.511 INFO:tasks.workunit.client.0.vm05.stdout:4/538: creat d2/d4/d7/dc/da8/fab x:0 0 0 2026-03-09T15:01:37.512 INFO:tasks.workunit.client.0.vm05.stdout:1/550: dwrite d9/d2f/d83/d98/f50 [0,4194304] 0 2026-03-09T15:01:37.527 INFO:tasks.workunit.client.0.vm05.stdout:5/606: dread d1/db5/f5a [0,4194304] 0 2026-03-09T15:01:37.546 INFO:tasks.workunit.client.0.vm05.stdout:9/604: truncate d2/d10/d22/d2c/f91 2542677 0 2026-03-09T15:01:37.549 INFO:tasks.workunit.client.0.vm05.stdout:3/569: mkdir d3/df/d10/d34/da5/dc8 0 2026-03-09T15:01:37.565 INFO:tasks.workunit.client.0.vm05.stdout:1/551: rmdir d9/d2f/d83/d98/d59/d49/d78/d7e 39 2026-03-09T15:01:37.569 
INFO:tasks.workunit.client.0.vm05.stdout:0/499: mknod d9/de/d12/d15/d2e/ca1 0 2026-03-09T15:01:37.585 INFO:tasks.workunit.client.0.vm05.stdout:3/570: dread d3/d29/d2d/d77/f6e [0,4194304] 0 2026-03-09T15:01:37.600 INFO:tasks.workunit.client.0.vm05.stdout:6/515: mknod da/d17/d95/c99 0 2026-03-09T15:01:37.601 INFO:tasks.workunit.client.0.vm05.stdout:9/605: creat d2/d10/d22/dc2/fde x:0 0 0 2026-03-09T15:01:37.604 INFO:tasks.workunit.client.0.vm05.stdout:4/539: creat d2/d4/d1e/da2/fac x:0 0 0 2026-03-09T15:01:37.609 INFO:tasks.workunit.client.0.vm05.stdout:4/540: dread d2/d4/d1e/f40 [0,4194304] 0 2026-03-09T15:01:37.609 INFO:tasks.workunit.client.0.vm05.stdout:4/541: write d2/d1d/f7d [420456,96491] 0 2026-03-09T15:01:37.612 INFO:tasks.workunit.client.0.vm05.stdout:1/552: write d9/d17/f79 [1858426,22683] 0 2026-03-09T15:01:37.623 INFO:tasks.workunit.client.0.vm05.stdout:8/590: dread d0/d1/f7f [0,4194304] 0 2026-03-09T15:01:37.624 INFO:tasks.workunit.client.0.vm05.stdout:8/591: write d0/d1/d12/d1b/d95/d4b/faa [238412,46023] 0 2026-03-09T15:01:37.636 INFO:tasks.workunit.client.0.vm05.stdout:7/546: dread d1/d9/d23/d31/d32/f58 [0,4194304] 0 2026-03-09T15:01:37.640 INFO:tasks.workunit.client.0.vm05.stdout:5/607: creat d1/d4/d34/d56/da6/fd1 x:0 0 0 2026-03-09T15:01:37.640 INFO:tasks.workunit.client.0.vm05.stdout:5/608: fdatasync d1/f30 0 2026-03-09T15:01:37.653 INFO:tasks.workunit.client.0.vm05.stdout:3/571: creat d3/df/d1e/d2c/d74/d9b/fc9 x:0 0 0 2026-03-09T15:01:37.653 INFO:tasks.workunit.client.0.vm05.stdout:3/572: chown d3/df/c1d 206320 1 2026-03-09T15:01:37.653 INFO:tasks.workunit.client.0.vm05.stdout:3/573: fsync d3/df/d10/d19/d44/f56 0 2026-03-09T15:01:37.680 INFO:tasks.workunit.client.0.vm05.stdout:2/550: link da/d29/d6a/da0/d91/dab/d2f/d35/l78 da/d29/d6a/da0/d7c/laf 0 2026-03-09T15:01:37.681 INFO:tasks.workunit.client.0.vm05.stdout:2/551: dread - da/d29/d6a/da0/fa7 zero size 2026-03-09T15:01:37.683 INFO:tasks.workunit.client.0.vm05.stdout:6/516: symlink 
da/d17/d3b/d81/l9a 0 2026-03-09T15:01:37.684 INFO:tasks.workunit.client.0.vm05.stdout:4/542: symlink d2/d49/d69/lad 0 2026-03-09T15:01:37.685 INFO:tasks.workunit.client.0.vm05.stdout:4/543: write d2/d4/d8/d4a/d6e/f93 [779277,35767] 0 2026-03-09T15:01:37.693 INFO:tasks.workunit.client.0.vm05.stdout:5/609: creat d1/d4/d19/d93/fd2 x:0 0 0 2026-03-09T15:01:37.694 INFO:tasks.workunit.client.0.vm05.stdout:0/500: truncate d9/de/d25/f2d 1105327 0 2026-03-09T15:01:37.701 INFO:tasks.workunit.client.0.vm05.stdout:2/552: mkdir da/d29/d6a/da0/d91/dab/d2f/d35/db0 0 2026-03-09T15:01:37.704 INFO:tasks.workunit.client.0.vm05.stdout:2/553: dwrite da/fa2 [0,4194304] 0 2026-03-09T15:01:37.710 INFO:tasks.workunit.client.0.vm05.stdout:7/547: write d1/d9/d23/d54/f6f [191423,2360] 0 2026-03-09T15:01:37.724 INFO:tasks.workunit.client.0.vm05.stdout:4/544: fdatasync d2/d49/f56 0 2026-03-09T15:01:37.725 INFO:tasks.workunit.client.0.vm05.stdout:1/553: symlink d9/d2f/d37/d5a/da9/lba 0 2026-03-09T15:01:37.728 INFO:tasks.workunit.client.0.vm05.stdout:1/554: dwrite d9/d2f/d55/f5e [0,4194304] 0 2026-03-09T15:01:37.731 INFO:tasks.workunit.client.0.vm05.stdout:1/555: fdatasync d9/d2f/d83/d98/d59/d49/d92/d75/f76 0 2026-03-09T15:01:37.732 INFO:tasks.workunit.client.0.vm05.stdout:8/592: mkdir d0/dc6 0 2026-03-09T15:01:37.765 INFO:tasks.workunit.client.0.vm05.stdout:2/554: write da/d16/d46/fa5 [706877,120010] 0 2026-03-09T15:01:37.780 INFO:tasks.workunit.client.0.vm05.stdout:7/548: readlink d1/d49/l91 0 2026-03-09T15:01:37.800 INFO:tasks.workunit.client.0.vm05.stdout:0/501: dwrite d9/de/d25/f47 [0,4194304] 0 2026-03-09T15:01:37.804 INFO:tasks.workunit.client.0.vm05.stdout:9/606: dwrite d2/f12 [4194304,4194304] 0 2026-03-09T15:01:37.837 INFO:tasks.workunit.client.0.vm05.stdout:3/574: truncate d3/df/d10/d34/d8c/dbd/fa4 1455956 0 2026-03-09T15:01:37.838 INFO:tasks.workunit.client.0.vm05.stdout:6/517: symlink da/d17/l9b 0 2026-03-09T15:01:37.838 INFO:tasks.workunit.client.0.vm05.stdout:3/575: fdatasync 
d3/d29/d2d/d77/d4d/f80 0 2026-03-09T15:01:37.838 INFO:tasks.workunit.client.0.vm05.stdout:3/576: dread - d3/df/d1e/d2c/d74/d9b/fc9 zero size 2026-03-09T15:01:37.840 INFO:tasks.workunit.client.0.vm05.stdout:4/545: creat d2/d4/d8/d4a/fae x:0 0 0 2026-03-09T15:01:37.847 INFO:tasks.workunit.client.0.vm05.stdout:5/610: mknod d1/d4/d19/cd3 0 2026-03-09T15:01:37.848 INFO:tasks.workunit.client.0.vm05.stdout:2/555: mkdir da/d29/d6a/db1 0 2026-03-09T15:01:37.849 INFO:tasks.workunit.client.0.vm05.stdout:2/556: fdatasync da/d29/d6a/da0/d7c/fa4 0 2026-03-09T15:01:37.852 INFO:tasks.workunit.client.0.vm05.stdout:1/556: dread d9/d17/f26 [0,4194304] 0 2026-03-09T15:01:37.853 INFO:tasks.workunit.client.0.vm05.stdout:8/593: dread d0/d1/d55/f6a [0,4194304] 0 2026-03-09T15:01:37.854 INFO:tasks.workunit.client.0.vm05.stdout:8/594: dread - d0/d1/d12/d1b/d95/d42/d60/fc0 zero size 2026-03-09T15:01:37.854 INFO:tasks.workunit.client.0.vm05.stdout:8/595: dread - d0/d1/d12/d3c/f99 zero size 2026-03-09T15:01:37.861 INFO:tasks.workunit.client.0.vm05.stdout:1/557: dread d9/d2f/d83/d98/d59/d49/f69 [0,4194304] 0 2026-03-09T15:01:37.866 INFO:tasks.workunit.client.0.vm05.stdout:9/607: symlink d2/d8b/dae/ldf 0 2026-03-09T15:01:37.870 INFO:tasks.workunit.client.0.vm05.stdout:6/518: write da/d19/f52 [4125443,17280] 0 2026-03-09T15:01:37.870 INFO:tasks.workunit.client.0.vm05.stdout:6/519: stat da/d17/l93 0 2026-03-09T15:01:37.872 INFO:tasks.workunit.client.0.vm05.stdout:3/577: rename d3/l4e to d3/lca 0 2026-03-09T15:01:37.873 INFO:tasks.workunit.client.0.vm05.stdout:3/578: dread d3/df/d10/d19/f58 [0,4194304] 0 2026-03-09T15:01:37.875 INFO:tasks.workunit.client.0.vm05.stdout:4/546: read - d2/d49/f56 zero size 2026-03-09T15:01:37.908 INFO:tasks.workunit.client.0.vm05.stdout:1/558: mknod d9/d97/cbb 0 2026-03-09T15:01:37.908 INFO:tasks.workunit.client.0.vm05.stdout:1/559: stat d9/d2f/d83/d98/f56 0 2026-03-09T15:01:37.910 INFO:tasks.workunit.client.0.vm05.stdout:9/608: creat d2/d4e/d56/d37/d9c/d8e/dcb/fe0 x:0 
0 0 2026-03-09T15:01:37.913 INFO:tasks.workunit.client.0.vm05.stdout:3/579: rmdir d3/df/d1e/d2f 39 2026-03-09T15:01:37.918 INFO:tasks.workunit.client.0.vm05.stdout:4/547: mknod d2/d4/d50/caf 0 2026-03-09T15:01:37.921 INFO:tasks.workunit.client.0.vm05.stdout:7/549: write d1/f15 [1309365,63612] 0 2026-03-09T15:01:37.922 INFO:tasks.workunit.client.0.vm05.stdout:5/611: mkdir d1/d4/d27/dd4 0 2026-03-09T15:01:37.924 INFO:tasks.workunit.client.0.vm05.stdout:2/557: creat da/d29/d6a/da0/d91/dab/d9c/fb2 x:0 0 0 2026-03-09T15:01:37.940 INFO:tasks.workunit.client.0.vm05.stdout:7/550: creat d1/d9/d23/d31/fad x:0 0 0 2026-03-09T15:01:37.941 INFO:tasks.workunit.client.0.vm05.stdout:7/551: fdatasync d1/d9/d23/d54/d7b/f7f 0 2026-03-09T15:01:37.943 INFO:tasks.workunit.client.0.vm05.stdout:8/596: getdents d0/d1/d12/d3c 0 2026-03-09T15:01:37.948 INFO:tasks.workunit.client.0.vm05.stdout:8/597: dread d0/d2a/f2e [0,4194304] 0 2026-03-09T15:01:37.954 INFO:tasks.workunit.client.0.vm05.stdout:9/609: mknod d2/d4e/ce1 0 2026-03-09T15:01:37.954 INFO:tasks.workunit.client.0.vm05.stdout:9/610: dread - d2/d10/d22/d47/d95/f9d zero size 2026-03-09T15:01:37.955 INFO:tasks.workunit.client.0.vm05.stdout:9/611: readlink d2/d10/d22/d2c/l4c 0 2026-03-09T15:01:37.958 INFO:tasks.workunit.client.0.vm05.stdout:9/612: dwrite d2/d10/d22/d52/fb0 [0,4194304] 0 2026-03-09T15:01:37.961 INFO:tasks.workunit.client.0.vm05.stdout:6/520: creat da/d17/f9c x:0 0 0 2026-03-09T15:01:37.978 INFO:tasks.workunit.client.0.vm05.stdout:2/558: mkdir da/d29/d6a/da0/d91/dab/d2f/db3 0 2026-03-09T15:01:37.982 INFO:tasks.workunit.client.0.vm05.stdout:0/502: truncate d9/de/f5d 554535 0 2026-03-09T15:01:37.995 INFO:tasks.workunit.client.0.vm05.stdout:7/552: dread d1/d9/d23/d31/d32/f38 [0,4194304] 0 2026-03-09T15:01:37.995 INFO:tasks.workunit.client.0.vm05.stdout:1/560: creat d9/d2f/d83/d98/d59/fbc x:0 0 0 2026-03-09T15:01:37.996 INFO:tasks.workunit.client.0.vm05.stdout:6/521: unlink da/d43/d7b/f92 0 2026-03-09T15:01:37.997 
INFO:tasks.workunit.client.0.vm05.stdout:4/548: link d2/d4/l25 d2/d4/d8/d4a/lb0 0 2026-03-09T15:01:38.001 INFO:tasks.workunit.client.0.vm05.stdout:9/613: sync 2026-03-09T15:01:38.001 INFO:tasks.workunit.client.0.vm05.stdout:9/614: chown d2/d4e/d56/d37/d9c/f18 906 1 2026-03-09T15:01:38.005 INFO:tasks.workunit.client.0.vm05.stdout:0/503: read d9/de/d12/d15/f50 [26844,13394] 0 2026-03-09T15:01:38.005 INFO:tasks.workunit.client.0.vm05.stdout:0/504: write d9/d64/f96 [3838466,93999] 0 2026-03-09T15:01:38.011 INFO:tasks.workunit.client.0.vm05.stdout:5/612: write d1/d4/d27/f57 [471315,128708] 0 2026-03-09T15:01:38.013 INFO:tasks.workunit.client.0.vm05.stdout:3/580: dwrite d3/df/d1e/daf/fc6 [0,4194304] 0 2026-03-09T15:01:38.019 INFO:tasks.workunit.client.0.vm05.stdout:7/553: fsync d1/d9/d23/d31/d51/f9b 0 2026-03-09T15:01:38.023 INFO:tasks.workunit.client.0.vm05.stdout:8/598: mknod d0/d24/d96/cc7 0 2026-03-09T15:01:38.079 INFO:tasks.workunit.client.0.vm05.stdout:4/549: dwrite d2/d49/f56 [0,4194304] 0 2026-03-09T15:01:38.092 INFO:tasks.workunit.client.0.vm05.stdout:2/559: truncate da/d29/f2d 5907485 0 2026-03-09T15:01:38.093 INFO:tasks.workunit.client.0.vm05.stdout:2/560: write da/d29/d3f/f5f [416053,100644] 0 2026-03-09T15:01:38.093 INFO:tasks.workunit.client.0.vm05.stdout:0/505: creat d9/de/d25/d38/d78/fa2 x:0 0 0 2026-03-09T15:01:38.115 INFO:tasks.workunit.client.0.vm05.stdout:7/554: creat d1/d9/d23/d31/d8f/d93/fae x:0 0 0 2026-03-09T15:01:38.131 INFO:tasks.workunit.client.0.vm05.stdout:1/561: mkdir d9/d2f/d83/d98/d59/d49/d78/dbd 0 2026-03-09T15:01:38.134 INFO:tasks.workunit.client.0.vm05.stdout:6/522: rename da/c50 to da/d17/d3b/c9d 0 2026-03-09T15:01:38.134 INFO:tasks.workunit.client.0.vm05.stdout:6/523: write da/d17/f1d [5202183,20381] 0 2026-03-09T15:01:38.135 INFO:tasks.workunit.client.0.vm05.stdout:6/524: write da/d17/f1d [6254722,6501] 0 2026-03-09T15:01:38.164 INFO:tasks.workunit.client.0.vm05.stdout:0/506: rmdir d9/de/d12/d15/d2e/d32/d53/d6e 39 
2026-03-09T15:01:38.167 INFO:tasks.workunit.client.0.vm05.stdout:0/507: dwrite d9/d64/f8c [0,4194304] 0 2026-03-09T15:01:38.185 INFO:tasks.workunit.client.0.vm05.stdout:3/581: mkdir d3/df/dcb 0 2026-03-09T15:01:38.195 INFO:tasks.workunit.client.0.vm05.stdout:7/555: symlink d1/d22/laf 0 2026-03-09T15:01:38.224 INFO:tasks.workunit.client.0.vm05.stdout:9/615: truncate d2/d10/d22/d2c/d69/f67 458642 0 2026-03-09T15:01:38.227 INFO:tasks.workunit.client.0.vm05.stdout:9/616: dwrite d2/d10/d22/d52/fb0 [0,4194304] 0 2026-03-09T15:01:38.228 INFO:tasks.workunit.client.0.vm05.stdout:2/561: creat da/d29/d64/da6/fb4 x:0 0 0 2026-03-09T15:01:38.243 INFO:tasks.workunit.client.0.vm05.stdout:0/508: readlink d9/la 0 2026-03-09T15:01:38.248 INFO:tasks.workunit.client.0.vm05.stdout:3/582: creat d3/df/d59/fcc x:0 0 0 2026-03-09T15:01:38.253 INFO:tasks.workunit.client.0.vm05.stdout:2/562: dread da/d16/f1f [0,4194304] 0 2026-03-09T15:01:38.265 INFO:tasks.workunit.client.0.vm05.stdout:6/525: mkdir da/d9e 0 2026-03-09T15:01:38.271 INFO:tasks.workunit.client.0.vm05.stdout:6/526: dwrite da/d43/f46 [0,4194304] 0 2026-03-09T15:01:38.276 INFO:tasks.workunit.client.0.vm05.stdout:6/527: write da/d17/f8d [791035,60488] 0 2026-03-09T15:01:38.276 INFO:tasks.workunit.client.0.vm05.stdout:6/528: write da/d17/f8d [253057,40288] 0 2026-03-09T15:01:38.284 INFO:tasks.workunit.client.0.vm05.stdout:6/529: readlink da/d43/d7b/d89/l8e 0 2026-03-09T15:01:38.291 INFO:tasks.workunit.client.0.vm05.stdout:9/617: dread d2/d4e/d56/d53/f60 [0,4194304] 0 2026-03-09T15:01:38.292 INFO:tasks.workunit.client.0.vm05.stdout:9/618: chown d2/d10/d22/d47/d73/f81 556420434 1 2026-03-09T15:01:38.297 INFO:tasks.workunit.client.0.vm05.stdout:3/583: truncate d3/df/d1e/d2f/fb9 111297 0 2026-03-09T15:01:38.322 INFO:tasks.workunit.client.0.vm05.stdout:5/613: write d1/d4/d34/d35/d3d/f61 [895527,12906] 0 2026-03-09T15:01:38.325 INFO:tasks.workunit.client.0.vm05.stdout:5/614: dwrite d1/d4/d34/d35/f44 [4194304,4194304] 0 
2026-03-09T15:01:38.331 INFO:tasks.workunit.client.0.vm05.stdout:6/530: creat da/d43/d7b/f9f x:0 0 0 2026-03-09T15:01:38.342 INFO:tasks.workunit.client.0.vm05.stdout:6/531: read da/d17/f3c [248130,30531] 0 2026-03-09T15:01:38.342 INFO:tasks.workunit.client.0.vm05.stdout:6/532: dread - da/d43/d7b/f9f zero size 2026-03-09T15:01:38.342 INFO:tasks.workunit.client.0.vm05.stdout:6/533: dread - da/d19/f7e zero size 2026-03-09T15:01:38.360 INFO:tasks.workunit.client.0.vm05.stdout:8/599: write d0/d2a/f2e [79025,44876] 0 2026-03-09T15:01:38.364 INFO:tasks.workunit.client.0.vm05.stdout:8/600: dwrite d0/d1/d12/d3c/f51 [0,4194304] 0 2026-03-09T15:01:38.365 INFO:tasks.workunit.client.0.vm05.stdout:8/601: fdatasync d0/d7/f8 0 2026-03-09T15:01:38.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:38 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:01:38.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:38 vm09.local ceph-mon[59673]: pgmap v175: 65 pgs: 65 active+clean; 2.3 GiB data, 7.7 GiB used, 112 GiB / 120 GiB avail; 35 MiB/s rd, 111 MiB/s wr, 229 op/s 2026-03-09T15:01:38.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:38 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:38.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:38 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:38.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:38 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:01:38.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:38 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-09T15:01:38.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:38 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:38.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:38 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:01:38.389 INFO:tasks.workunit.client.0.vm05.stdout:0/509: mkdir d9/de/d12/da3 0 2026-03-09T15:01:38.394 INFO:tasks.workunit.client.0.vm05.stdout:3/584: symlink d3/df/d59/lcd 0 2026-03-09T15:01:38.403 INFO:tasks.workunit.client.0.vm05.stdout:4/550: dwrite d2/d4/d7/dc/f45 [0,4194304] 0 2026-03-09T15:01:38.407 INFO:tasks.workunit.client.0.vm05.stdout:7/556: creat d1/fb0 x:0 0 0 2026-03-09T15:01:38.419 INFO:tasks.workunit.client.0.vm05.stdout:5/615: truncate d1/d4/d34/d35/d4e/d6f/f90 267923 0 2026-03-09T15:01:38.422 INFO:tasks.workunit.client.0.vm05.stdout:4/551: dread d2/d4/d7/d21/f34 [0,4194304] 0 2026-03-09T15:01:38.423 INFO:tasks.workunit.client.0.vm05.stdout:4/552: read d2/d1d/f5c [1490093,72746] 0 2026-03-09T15:01:38.442 INFO:tasks.workunit.client.0.vm05.stdout:1/562: dwrite d9/d17/f81 [0,4194304] 0 2026-03-09T15:01:38.444 INFO:tasks.workunit.client.0.vm05.stdout:1/563: chown d9/d2f/d37/d5f 1322814433 1 2026-03-09T15:01:38.476 INFO:tasks.workunit.client.0.vm05.stdout:2/563: write da/d29/f2d [4379869,114395] 0 2026-03-09T15:01:38.497 INFO:tasks.workunit.client.0.vm05.stdout:7/557: mknod d1/d9/d23/d31/d8f/d93/cb1 0 2026-03-09T15:01:38.500 INFO:tasks.workunit.client.0.vm05.stdout:5/616: write d1/da/fb7 [174974,35688] 0 2026-03-09T15:01:38.508 INFO:tasks.workunit.client.0.vm05.stdout:6/534: mknod da/d17/ca0 0 2026-03-09T15:01:38.513 INFO:tasks.workunit.client.0.vm05.stdout:5/617: dread d1/da/fe [0,4194304] 0 2026-03-09T15:01:38.518 INFO:tasks.workunit.client.0.vm05.stdout:9/619: dwrite d2/f8 [0,4194304] 0 2026-03-09T15:01:38.547 
INFO:tasks.workunit.client.0.vm05.stdout:0/510: creat d9/de/d12/da3/fa4 x:0 0 0 2026-03-09T15:01:38.548 INFO:tasks.workunit.client.0.vm05.stdout:8/602: write d0/d24/f30 [1646097,14651] 0 2026-03-09T15:01:38.548 INFO:tasks.workunit.client.0.vm05.stdout:8/603: fdatasync d0/f3b 0 2026-03-09T15:01:38.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:38 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:01:38.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:38 vm05.local ceph-mon[50611]: pgmap v175: 65 pgs: 65 active+clean; 2.3 GiB data, 7.7 GiB used, 112 GiB / 120 GiB avail; 35 MiB/s rd, 111 MiB/s wr, 229 op/s 2026-03-09T15:01:38.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:38 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:38.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:38 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:38.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:38 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:01:38.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:38 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:01:38.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:38 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' 2026-03-09T15:01:38.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:38 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:01:38.555 
INFO:tasks.workunit.client.0.vm05.stdout:3/585: dwrite d3/df/d59/f98 [0,4194304] 0 2026-03-09T15:01:38.569 INFO:tasks.workunit.client.0.vm05.stdout:2/564: rmdir da/d29/d64/da6 39 2026-03-09T15:01:38.570 INFO:tasks.workunit.client.0.vm05.stdout:2/565: chown da/dd/l77 9154 1 2026-03-09T15:01:38.570 INFO:tasks.workunit.client.0.vm05.stdout:2/566: chown da/d29/d64/l7e 3143315 1 2026-03-09T15:01:38.571 INFO:tasks.workunit.client.0.vm05.stdout:2/567: readlink da/d29/d6a/da0/l58 0 2026-03-09T15:01:38.571 INFO:tasks.workunit.client.0.vm05.stdout:2/568: fdatasync da/f10 0 2026-03-09T15:01:38.572 INFO:tasks.workunit.client.0.vm05.stdout:2/569: write da/d29/d6a/da0/d91/dab/f4b [1706462,23205] 0 2026-03-09T15:01:38.578 INFO:tasks.workunit.client.0.vm05.stdout:7/558: rename d1/d9/d23/d31/d32/ca1 to d1/d9/d72/cb2 0 2026-03-09T15:01:38.588 INFO:tasks.workunit.client.0.vm05.stdout:3/586: dread d3/f1f [0,4194304] 0 2026-03-09T15:01:38.588 INFO:tasks.workunit.client.0.vm05.stdout:6/535: write da/d19/f35 [822284,52952] 0 2026-03-09T15:01:38.588 INFO:tasks.workunit.client.0.vm05.stdout:5/618: mknod d1/db5/cd5 0 2026-03-09T15:01:38.589 INFO:tasks.workunit.client.0.vm05.stdout:8/604: sync 2026-03-09T15:01:38.599 INFO:tasks.workunit.client.0.vm05.stdout:4/553: write d2/f33 [6994184,76626] 0 2026-03-09T15:01:38.633 INFO:tasks.workunit.client.0.vm05.stdout:1/564: mknod d9/d2f/d83/d98/d59/d49/d78/cbe 0 2026-03-09T15:01:38.634 INFO:tasks.workunit.client.0.vm05.stdout:1/565: write d9/d2f/d83/d98/d59/d49/d48/f6f [2001802,88068] 0 2026-03-09T15:01:38.641 INFO:tasks.workunit.client.0.vm05.stdout:9/620: dread d2/d10/d22/d2c/d3c/f55 [0,4194304] 0 2026-03-09T15:01:38.641 INFO:tasks.workunit.client.0.vm05.stdout:9/621: dread - d2/d10/d22/d52/fd7 zero size 2026-03-09T15:01:38.676 INFO:tasks.workunit.client.0.vm05.stdout:2/570: mknod da/dd/cb5 0 2026-03-09T15:01:38.677 INFO:tasks.workunit.client.0.vm05.stdout:7/559: creat d1/d12/fb3 x:0 0 0 2026-03-09T15:01:38.679 
INFO:tasks.workunit.client.0.vm05.stdout:3/587: truncate d3/df/f14 865361 0 2026-03-09T15:01:38.681 INFO:tasks.workunit.client.0.vm05.stdout:7/560: dwrite d1/d9/d23/d31/d8f/d93/fa3 [0,4194304] 0 2026-03-09T15:01:38.719 INFO:tasks.workunit.client.0.vm05.stdout:5/619: symlink d1/d4/d34/d35/d4e/d6f/d7e/ld6 0 2026-03-09T15:01:38.724 INFO:tasks.workunit.client.0.vm05.stdout:8/605: mknod d0/d1/d12/d1b/d6e/d93/cc8 0 2026-03-09T15:01:38.755 INFO:tasks.workunit.client.0.vm05.stdout:4/554: write d2/f7e [1013959,97899] 0 2026-03-09T15:01:38.759 INFO:tasks.workunit.client.0.vm05.stdout:1/566: creat d9/d97/fbf x:0 0 0 2026-03-09T15:01:38.763 INFO:tasks.workunit.client.0.vm05.stdout:9/622: dwrite d2/d10/d22/d2c/fbd [0,4194304] 0 2026-03-09T15:01:38.788 INFO:tasks.workunit.client.0.vm05.stdout:7/561: write d1/d9/fc [6734887,83618] 0 2026-03-09T15:01:38.788 INFO:tasks.workunit.client.0.vm05.stdout:6/536: write da/d43/f86 [2635666,49295] 0 2026-03-09T15:01:38.790 INFO:tasks.workunit.client.0.vm05.stdout:7/562: write d1/f16 [2319636,106449] 0 2026-03-09T15:01:38.790 INFO:tasks.workunit.client.0.vm05.stdout:7/563: fdatasync d1/d12/f20 0 2026-03-09T15:01:38.791 INFO:tasks.workunit.client.0.vm05.stdout:7/564: chown d1/d9/d23/d31/d32/d78 21 1 2026-03-09T15:01:38.807 INFO:tasks.workunit.client.0.vm05.stdout:7/565: sync 2026-03-09T15:01:38.814 INFO:tasks.workunit.client.0.vm05.stdout:4/555: dwrite d2/f67 [0,4194304] 0 2026-03-09T15:01:38.822 INFO:tasks.workunit.client.0.vm05.stdout:1/567: write d9/d2f/d37/d5a/f8c [919053,64569] 0 2026-03-09T15:01:38.823 INFO:tasks.workunit.client.0.vm05.stdout:1/568: write d9/d97/fbf [462112,100640] 0 2026-03-09T15:01:38.827 INFO:tasks.workunit.client.0.vm05.stdout:9/623: mknod d2/d4e/d56/d37/d99/ce2 0 2026-03-09T15:01:38.834 INFO:tasks.workunit.client.0.vm05.stdout:0/511: creat d9/de/d12/d15/fa5 x:0 0 0 2026-03-09T15:01:38.839 INFO:tasks.workunit.client.0.vm05.stdout:5/620: creat d1/d4/d34/d35/dd0/fd7 x:0 0 0 2026-03-09T15:01:38.841 
INFO:tasks.workunit.client.0.vm05.stdout:1/569: dread d9/d2f/d55/f64 [0,4194304] 0 2026-03-09T15:01:38.842 INFO:tasks.workunit.client.0.vm05.stdout:1/570: truncate d9/d2f/d83/d98/d59/fb5 605578 0 2026-03-09T15:01:38.842 INFO:tasks.workunit.client.0.vm05.stdout:1/571: fdatasync d9/d17/f79 0 2026-03-09T15:01:38.851 INFO:tasks.workunit.client.0.vm05.stdout:6/537: symlink da/d17/d3b/la1 0 2026-03-09T15:01:38.854 INFO:tasks.workunit.client.0.vm05.stdout:8/606: unlink d0/d24/c36 0 2026-03-09T15:01:38.854 INFO:tasks.workunit.client.0.vm05.stdout:8/607: dread - d0/d1/d12/d1b/fbd zero size 2026-03-09T15:01:38.870 INFO:tasks.workunit.client.0.vm05.stdout:9/624: mkdir d2/d8b/de3 0 2026-03-09T15:01:38.878 INFO:tasks.workunit.client.0.vm05.stdout:3/588: rename d3/df/d10/d34/da5 to d3/df/d10/d19/dce 0 2026-03-09T15:01:38.884 INFO:tasks.workunit.client.0.vm05.stdout:0/512: write d9/de/d12/f23 [754779,90868] 0 2026-03-09T15:01:38.899 INFO:tasks.workunit.client.0.vm05.stdout:6/538: dwrite da/d17/f33 [0,4194304] 0 2026-03-09T15:01:38.911 INFO:tasks.workunit.client.0.vm05.stdout:8/608: rmdir d0/d1/d12/d1b/d66 39 2026-03-09T15:01:38.915 INFO:tasks.workunit.client.0.vm05.stdout:8/609: dwrite d0/d1/d12/d1b/f67 [4194304,4194304] 0 2026-03-09T15:01:38.918 INFO:tasks.workunit.client.0.vm05.stdout:4/556: creat d2/d7a/fb1 x:0 0 0 2026-03-09T15:01:38.919 INFO:tasks.workunit.client.0.vm05.stdout:4/557: chown d2/d4/d8/d4a/d6e 9 1 2026-03-09T15:01:38.919 INFO:tasks.workunit.client.0.vm05.stdout:4/558: chown d2/d1d/f7d 3307359 1 2026-03-09T15:01:38.932 INFO:tasks.workunit.client.0.vm05.stdout:0/513: unlink d9/de/d25/d38/d78/fa2 0 2026-03-09T15:01:38.941 INFO:tasks.workunit.client.0.vm05.stdout:7/566: creat d1/d9/d23/fb4 x:0 0 0 2026-03-09T15:01:38.942 INFO:tasks.workunit.client.0.vm05.stdout:3/589: dread d3/df/d1e/d2f/d52/f61 [0,4194304] 0 2026-03-09T15:01:38.942 INFO:tasks.workunit.client.0.vm05.stdout:7/567: stat d1/d9/d23/d31/d8f/d93/fa3 0 2026-03-09T15:01:38.947 
INFO:tasks.workunit.client.0.vm05.stdout:4/559: truncate d2/f1b 3954267 0 2026-03-09T15:01:38.953 INFO:tasks.workunit.client.0.vm05.stdout:5/621: dwrite d1/f14 [0,4194304] 0 2026-03-09T15:01:38.960 INFO:tasks.workunit.client.0.vm05.stdout:5/622: dwrite d1/f2a [0,4194304] 0 2026-03-09T15:01:38.963 INFO:tasks.workunit.client.0.vm05.stdout:5/623: truncate d1/d4/d34/d35/fc5 570249 0 2026-03-09T15:01:38.970 INFO:tasks.workunit.client.0.vm05.stdout:8/610: write d0/d1/d12/d3c/f77 [4163434,97280] 0 2026-03-09T15:01:38.971 INFO:tasks.workunit.client.0.vm05.stdout:9/625: getdents d2/d10/d22/d9f 0 2026-03-09T15:01:38.972 INFO:tasks.workunit.client.0.vm05.stdout:9/626: truncate d2/d8b/dae/fbb 333646 0 2026-03-09T15:01:38.974 INFO:tasks.workunit.client.0.vm05.stdout:2/571: link da/d29/d6a/da0/d91/dab/d2f/l67 da/d29/d6a/da0/d91/dab/d2f/lb6 0 2026-03-09T15:01:38.977 INFO:tasks.workunit.client.0.vm05.stdout:1/572: link d9/d2f/d83/d98/l9f d9/d2f/d37/d5f/lc0 0 2026-03-09T15:01:38.981 INFO:tasks.workunit.client.0.vm05.stdout:3/590: creat d3/d29/d7f/fcf x:0 0 0 2026-03-09T15:01:38.984 INFO:tasks.workunit.client.0.vm05.stdout:4/560: symlink d2/d4/d50/lb2 0 2026-03-09T15:01:38.996 INFO:tasks.workunit.client.0.vm05.stdout:6/539: dwrite da/d17/d3b/f6b [0,4194304] 0 2026-03-09T15:01:38.999 INFO:tasks.workunit.client.0.vm05.stdout:7/568: mknod d1/d9/d23/cb5 0 2026-03-09T15:01:38.999 INFO:tasks.workunit.client.0.vm05.stdout:3/591: rmdir d3/df/d10/d19 39 2026-03-09T15:01:39.000 INFO:tasks.workunit.client.0.vm05.stdout:7/569: write d1/d9/d23/d31/d8f/d93/fa3 [3824417,69843] 0 2026-03-09T15:01:39.001 INFO:tasks.workunit.client.0.vm05.stdout:7/570: chown d1/c71 90473960 1 2026-03-09T15:01:39.007 INFO:tasks.workunit.client.0.vm05.stdout:1/573: dread d9/f15 [0,4194304] 0 2026-03-09T15:01:39.008 INFO:tasks.workunit.client.0.vm05.stdout:1/574: rename d9/d2f/d83/d98 to d9/d2f/d83/d98/d59/d49/d48/db2/dc1 22 2026-03-09T15:01:39.011 INFO:tasks.workunit.client.0.vm05.stdout:6/540: mkdir da/d17/d95/da2 0 
2026-03-09T15:01:39.012 INFO:tasks.workunit.client.0.vm05.stdout:2/572: mkdir da/d29/d6a/db1/db7 0 2026-03-09T15:01:39.015 INFO:tasks.workunit.client.0.vm05.stdout:0/514: link d9/f42 d9/de/d25/d38/fa6 0 2026-03-09T15:01:39.019 INFO:tasks.workunit.client.0.vm05.stdout:7/571: rmdir d1/d9/d23/d31/d32 39 2026-03-09T15:01:39.022 INFO:tasks.workunit.client.0.vm05.stdout:5/624: write d1/db5/f5a [2467494,74625] 0 2026-03-09T15:01:39.026 INFO:tasks.workunit.client.0.vm05.stdout:0/515: dread d9/de/f1e [0,4194304] 0 2026-03-09T15:01:39.026 INFO:tasks.workunit.client.0.vm05.stdout:0/516: fdatasync d9/de/d12/d15/d2e/d32/d53/f91 0 2026-03-09T15:01:39.027 INFO:tasks.workunit.client.0.vm05.stdout:0/517: write d9/de/d12/f23 [631367,71331] 0 2026-03-09T15:01:39.029 INFO:tasks.workunit.client.0.vm05.stdout:4/561: symlink d2/d1d/da5/lb3 0 2026-03-09T15:01:39.033 INFO:tasks.workunit.client.0.vm05.stdout:9/627: write d2/d10/d22/d47/f62 [863380,27797] 0 2026-03-09T15:01:39.043 INFO:tasks.workunit.client.0.vm05.stdout:5/625: creat d1/d4/d34/dc0/fd8 x:0 0 0 2026-03-09T15:01:39.051 INFO:tasks.workunit.client.0.vm05.stdout:9/628: dread - d2/d10/d22/d2c/d69/d5a/f75 zero size 2026-03-09T15:01:39.052 INFO:tasks.workunit.client.0.vm05.stdout:8/611: getdents d0/d1/d12/d1b/d95/d42/d60/d73 0 2026-03-09T15:01:39.053 INFO:tasks.workunit.client.0.vm05.stdout:8/612: write d0/d1/d12/d3c/f9a [323486,19572] 0 2026-03-09T15:01:39.056 INFO:tasks.workunit.client.0.vm05.stdout:1/575: getdents d9/d2f/d83/d98/d59/d49/d48/db2 0 2026-03-09T15:01:39.059 INFO:tasks.workunit.client.0.vm05.stdout:1/576: dread d9/d2f/d83/d98/d59/d49/d4b/f8e [0,4194304] 0 2026-03-09T15:01:39.060 INFO:tasks.workunit.client.0.vm05.stdout:1/577: fsync d9/d2f/f4f 0 2026-03-09T15:01:39.066 INFO:tasks.workunit.client.0.vm05.stdout:6/541: creat da/d17/d95/da2/fa3 x:0 0 0 2026-03-09T15:01:39.067 INFO:tasks.workunit.client.0.vm05.stdout:6/542: write da/d17/f2a [2254853,29275] 0 2026-03-09T15:01:39.071 
INFO:tasks.workunit.client.0.vm05.stdout:2/573: mknod da/d29/d6a/db1/db7/cb8 0 2026-03-09T15:01:39.074 INFO:tasks.workunit.client.0.vm05.stdout:0/518: write d9/de/d25/f97 [3602960,59059] 0 2026-03-09T15:01:39.088 INFO:tasks.workunit.client.0.vm05.stdout:9/629: fsync d2/f61 0 2026-03-09T15:01:39.089 INFO:tasks.workunit.client.0.vm05.stdout:6/543: sync 2026-03-09T15:01:39.089 INFO:tasks.workunit.client.0.vm05.stdout:9/630: dread - d2/d10/d22/d2c/d69/f4f zero size 2026-03-09T15:01:39.090 INFO:tasks.workunit.client.0.vm05.stdout:6/544: truncate da/d43/d7b/f97 438407 0 2026-03-09T15:01:39.114 INFO:tasks.workunit.client.0.vm05.stdout:6/545: dread da/d43/f96 [0,4194304] 0 2026-03-09T15:01:39.124 INFO:tasks.workunit.client.0.vm05.stdout:8/613: creat d0/d1/d12/d1b/d95/d78/d86/fc9 x:0 0 0 2026-03-09T15:01:39.128 INFO:tasks.workunit.client.0.vm05.stdout:8/614: dwrite d0/d1/d12/d1b/d95/d54/f64 [0,4194304] 0 2026-03-09T15:01:39.131 INFO:tasks.workunit.client.0.vm05.stdout:8/615: dwrite d0/d1/d12/fb6 [0,4194304] 0 2026-03-09T15:01:39.144 INFO:tasks.workunit.client.0.vm05.stdout:1/578: chown d9/d2f/d83/d98/d59/d49/d77/c89 919 1 2026-03-09T15:01:39.151 INFO:tasks.workunit.client.0.vm05.stdout:2/574: stat da/d16/c5c 0 2026-03-09T15:01:39.151 INFO:tasks.workunit.client.0.vm05.stdout:2/575: fdatasync da/f2c 0 2026-03-09T15:01:39.155 INFO:tasks.workunit.client.0.vm05.stdout:0/519: symlink d9/de/d6a/la7 0 2026-03-09T15:01:39.162 INFO:tasks.workunit.client.0.vm05.stdout:5/626: dwrite d1/d4/f55 [0,4194304] 0 2026-03-09T15:01:39.169 INFO:tasks.workunit.client.0.vm05.stdout:7/572: getdents d1/d9/d23/d31/d32/d78/d7e/d81/dab 0 2026-03-09T15:01:39.177 INFO:tasks.workunit.client.0.vm05.stdout:4/562: rename d2/d4/d1e/f40 to d2/d4/fb4 0 2026-03-09T15:01:39.186 INFO:tasks.workunit.client.0.vm05.stdout:9/631: unlink d2/d10/d22/d47/fd3 0 2026-03-09T15:01:39.191 INFO:tasks.workunit.client.0.vm05.stdout:6/546: write da/d17/f3c [2632841,123904] 0 2026-03-09T15:01:39.203 
INFO:tasks.workunit.client.0.vm05.stdout:8/616: mkdir d0/d1/d12/d1b/d95/d78/dca 0 2026-03-09T15:01:39.225 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:39 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr fail", "who": "vm05.lhsexd"}]: dispatch 2026-03-09T15:01:39.225 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:39 vm09.local ceph-mon[59673]: osdmap e38: 6 total, 6 up, 6 in 2026-03-09T15:01:39.225 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:39 vm09.local ceph-mon[59673]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "mgr fail", "who": "vm05.lhsexd"}]': finished 2026-03-09T15:01:39.225 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:39 vm09.local ceph-mon[59673]: mgrmap e20: vm09.cfuwdz(active, starting, since 0.0402879s) 2026-03-09T15:01:39.230 INFO:tasks.workunit.client.0.vm05.stdout:0/520: dread d9/f82 [0,4194304] 0 2026-03-09T15:01:39.231 INFO:tasks.workunit.client.0.vm05.stdout:0/521: write d9/de/d12/d15/f5e [1724651,78120] 0 2026-03-09T15:01:39.232 INFO:tasks.workunit.client.0.vm05.stdout:0/522: chown d9/de/d12/d15/d2e/f3a 65318569 1 2026-03-09T15:01:39.241 INFO:tasks.workunit.client.0.vm05.stdout:5/627: rmdir d1/d4/d34/d56/d68 39 2026-03-09T15:01:39.242 INFO:tasks.workunit.client.0.vm05.stdout:5/628: chown d1/d4/d27/lc7 65267 1 2026-03-09T15:01:39.242 INFO:tasks.workunit.client.0.vm05.stdout:5/629: read d1/f14 [2677653,130498] 0 2026-03-09T15:01:39.243 INFO:tasks.workunit.client.0.vm05.stdout:5/630: truncate d1/db5/fc3 308862 0 2026-03-09T15:01:39.245 INFO:tasks.workunit.client.0.vm05.stdout:3/592: getdents d3/df/d10/d19 0 2026-03-09T15:01:39.248 INFO:tasks.workunit.client.0.vm05.stdout:7/573: symlink d1/d9/d23/d31/d32/d78/d7e/lb6 0 2026-03-09T15:01:39.251 INFO:tasks.workunit.client.0.vm05.stdout:4/563: dread - d2/d4/d7/dc/f8e zero size 2026-03-09T15:01:39.253 INFO:tasks.workunit.client.0.vm05.stdout:1/579: write 
d9/d2f/d83/d98/d59/d49/f51 [4765654,20505] 0 2026-03-09T15:01:39.254 INFO:tasks.workunit.client.0.vm05.stdout:1/580: write d9/d2f/d37/d5a/fa0 [183427,23887] 0 2026-03-09T15:01:39.255 INFO:tasks.workunit.client.0.vm05.stdout:1/581: chown d9/d2f/d83/d98/d59/d49/d48/la1 118 1 2026-03-09T15:01:39.255 INFO:tasks.workunit.client.0.vm05.stdout:1/582: fsync d9/d2f/d83/f9e 0 2026-03-09T15:01:39.257 INFO:tasks.workunit.client.0.vm05.stdout:1/583: write d9/d97/fbf [404904,112479] 0 2026-03-09T15:01:39.267 INFO:tasks.workunit.client.0.vm05.stdout:9/632: creat d2/d4e/d56/d37/d9c/d8e/fe4 x:0 0 0 2026-03-09T15:01:39.267 INFO:tasks.workunit.client.0.vm05.stdout:2/576: write da/d16/f69 [3794761,97464] 0 2026-03-09T15:01:39.271 INFO:tasks.workunit.client.0.vm05.stdout:6/547: creat da/d43/d7b/d89/fa4 x:0 0 0 2026-03-09T15:01:39.272 INFO:tasks.workunit.client.0.vm05.stdout:6/548: truncate da/d17/d95/da2/fa3 616772 0 2026-03-09T15:01:39.273 INFO:tasks.workunit.client.0.vm05.stdout:8/617: creat d0/d1/d12/d1b/d95/d42/da1/fcb x:0 0 0 2026-03-09T15:01:39.291 INFO:tasks.workunit.client.0.vm05.stdout:0/523: symlink d9/d59/la8 0 2026-03-09T15:01:39.293 INFO:tasks.workunit.client.0.vm05.stdout:0/524: readlink d9/de/d12/d15/l8b 0 2026-03-09T15:01:39.294 INFO:tasks.workunit.client.0.vm05.stdout:7/574: rename d1/d9/fd to d1/d12/fb7 0 2026-03-09T15:01:39.294 INFO:tasks.workunit.client.0.vm05.stdout:7/575: fdatasync d1/f15 0 2026-03-09T15:01:39.294 INFO:tasks.workunit.client.0.vm05.stdout:0/525: write d9/de/d12/d15/d2e/f9a [75713,62961] 0 2026-03-09T15:01:39.294 INFO:tasks.workunit.client.0.vm05.stdout:0/526: dread - d9/de/d12/f84 zero size 2026-03-09T15:01:39.294 INFO:tasks.workunit.client.0.vm05.stdout:4/564: symlink d2/d4/d8/d4a/d6e/lb5 0 2026-03-09T15:01:39.297 INFO:tasks.workunit.client.0.vm05.stdout:2/577: creat da/d16/d46/fb9 x:0 0 0 2026-03-09T15:01:39.299 INFO:tasks.workunit.client.0.vm05.stdout:6/549: stat da/c3a 0 2026-03-09T15:01:39.300 INFO:tasks.workunit.client.0.vm05.stdout:5/631: 
symlink d1/d4/ld9 0 2026-03-09T15:01:39.302 INFO:tasks.workunit.client.0.vm05.stdout:5/632: write d1/f5e [2245833,56631] 0 2026-03-09T15:01:39.302 INFO:tasks.workunit.client.0.vm05.stdout:0/527: dwrite d9/de/d12/f7a [0,4194304] 0 2026-03-09T15:01:39.321 INFO:tasks.workunit.client.0.vm05.stdout:7/576: creat d1/d9/d23/d31/d8f/d93/fb8 x:0 0 0 2026-03-09T15:01:39.325 INFO:tasks.workunit.client.0.vm05.stdout:7/577: dwrite d1/d9/d23/d31/fad [0,4194304] 0 2026-03-09T15:01:39.330 INFO:tasks.workunit.client.0.vm05.stdout:4/565: rmdir d2/d4/d7/dc/da8 39 2026-03-09T15:01:39.342 INFO:tasks.workunit.client.0.vm05.stdout:1/584: fdatasync f7 0 2026-03-09T15:01:39.343 INFO:tasks.workunit.client.0.vm05.stdout:1/585: chown d9/d2f/d83/d98/d59/d49/d48/f25 205076 1 2026-03-09T15:01:39.343 INFO:tasks.workunit.client.0.vm05.stdout:1/586: chown d9/d17/c28 138 1 2026-03-09T15:01:39.347 INFO:tasks.workunit.client.0.vm05.stdout:1/587: dwrite d9/d2f/d55/f5e [0,4194304] 0 2026-03-09T15:01:39.376 INFO:tasks.workunit.client.0.vm05.stdout:2/578: dwrite da/d29/d64/da6/fb4 [0,4194304] 0 2026-03-09T15:01:39.377 INFO:tasks.workunit.client.0.vm05.stdout:2/579: fsync da/dd/ff 0 2026-03-09T15:01:39.385 INFO:tasks.workunit.client.0.vm05.stdout:6/550: readlink da/d17/l73 0 2026-03-09T15:01:39.396 INFO:tasks.workunit.client.0.vm05.stdout:8/618: mkdir d0/d1/d12/d1b/d66/dcc 0 2026-03-09T15:01:39.401 INFO:tasks.workunit.client.0.vm05.stdout:5/633: creat d1/d4/d34/d35/d3d/d38/d63/fda x:0 0 0 2026-03-09T15:01:39.401 INFO:tasks.workunit.client.0.vm05.stdout:5/634: fdatasync d1/da/fb9 0 2026-03-09T15:01:39.403 INFO:tasks.workunit.client.0.vm05.stdout:0/528: truncate d9/de/d25/d38/f55 387118 0 2026-03-09T15:01:39.410 INFO:tasks.workunit.client.0.vm05.stdout:3/593: truncate d3/df/d59/d79/fa8 15421 0 2026-03-09T15:01:39.410 INFO:tasks.workunit.client.0.vm05.stdout:3/594: stat d3/d29/d2d 0 2026-03-09T15:01:39.412 INFO:tasks.workunit.client.0.vm05.stdout:7/578: creat d1/d12/fb9 x:0 0 0 2026-03-09T15:01:39.417 
INFO:tasks.workunit.client.0.vm05.stdout:9/633: creat d2/d10/d22/d47/fe5 x:0 0 0 2026-03-09T15:01:39.428 INFO:tasks.workunit.client.0.vm05.stdout:2/580: rename da/d16/f72 to da/d29/d6a/db1/fba 0 2026-03-09T15:01:39.429 INFO:tasks.workunit.client.0.vm05.stdout:2/581: write da/d29/d6a/da0/f84 [874814,124061] 0 2026-03-09T15:01:39.429 INFO:tasks.workunit.client.0.vm05.stdout:2/582: fdatasync da/d29/d6a/d7f/fa8 0 2026-03-09T15:01:39.431 INFO:tasks.workunit.client.0.vm05.stdout:6/551: symlink da/d43/d7b/d89/la5 0 2026-03-09T15:01:39.433 INFO:tasks.workunit.client.0.vm05.stdout:2/583: dwrite da/dd/ff [4194304,4194304] 0 2026-03-09T15:01:39.438 INFO:tasks.workunit.client.0.vm05.stdout:9/634: sync 2026-03-09T15:01:39.456 INFO:tasks.workunit.client.0.vm05.stdout:5/635: chown d1/l1e 0 1 2026-03-09T15:01:39.456 INFO:tasks.workunit.client.0.vm05.stdout:5/636: chown d1/d4/d34/d35/d4e 338973 1 2026-03-09T15:01:39.465 INFO:tasks.workunit.client.0.vm05.stdout:0/529: rmdir d9/de/d12/d15 39 2026-03-09T15:01:39.548 INFO:tasks.workunit.client.0.vm05.stdout:3/595: mknod d3/df/d10/d19/db2/cd0 0 2026-03-09T15:01:39.550 INFO:tasks.workunit.client.0.vm05.stdout:7/579: write d1/d9/d23/d31/d51/f29 [2791875,119940] 0 2026-03-09T15:01:39.554 INFO:tasks.workunit.client.0.vm05.stdout:1/588: mkdir d9/d2f/d83/d98/d59/d49/dc2 0 2026-03-09T15:01:39.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:39 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr fail", "who": "vm05.lhsexd"}]: dispatch 2026-03-09T15:01:39.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:39 vm05.local ceph-mon[50611]: osdmap e38: 6 total, 6 up, 6 in 2026-03-09T15:01:39.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:39 vm05.local ceph-mon[50611]: from='mgr.14249 192.168.123.105:0/2' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "mgr fail", "who": "vm05.lhsexd"}]': finished 2026-03-09T15:01:39.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:01:39 vm05.local ceph-mon[50611]: mgrmap e20: vm09.cfuwdz(active, starting, since 0.0402879s) 2026-03-09T15:01:39.555 INFO:tasks.workunit.client.0.vm05.stdout:1/589: write d9/d2f/d55/fb0 [636945,74321] 0 2026-03-09T15:01:39.568 INFO:tasks.workunit.client.0.vm05.stdout:2/584: mknod da/d29/d6a/d7f/cbb 0 2026-03-09T15:01:39.568 INFO:tasks.workunit.client.0.vm05.stdout:2/585: chown da/dd/l77 986240553 1 2026-03-09T15:01:39.580 INFO:tasks.workunit.client.0.vm05.stdout:0/530: chown d9/de/d25/d38/f55 696 1 2026-03-09T15:01:39.588 INFO:tasks.workunit.client.0.vm05.stdout:4/566: creat d2/d4/fb6 x:0 0 0 2026-03-09T15:01:39.599 INFO:tasks.workunit.client.0.vm05.stdout:7/580: rename d1/d9/d72/d97/f9c to d1/d22/d3c/fba 0 2026-03-09T15:01:39.599 INFO:tasks.workunit.client.0.vm05.stdout:7/581: dread - d1/d12/fa8 zero size 2026-03-09T15:01:39.603 INFO:tasks.workunit.client.0.vm05.stdout:2/586: mknod da/d29/d6a/da0/d7c/cbc 0 2026-03-09T15:01:39.671 INFO:tasks.workunit.client.0.vm05.stdout:3/596: creat d3/df/d10/d19/dce/dc8/fd1 x:0 0 0 2026-03-09T15:01:39.672 INFO:tasks.workunit.client.0.vm05.stdout:3/597: read d3/df/d10/f28 [884202,127263] 0 2026-03-09T15:01:39.679 INFO:tasks.workunit.client.0.vm05.stdout:1/590: symlink d9/d2f/d83/d98/d59/d49/dc2/lc3 0 2026-03-09T15:01:39.685 INFO:tasks.workunit.client.0.vm05.stdout:9/635: creat d2/d10/fe6 x:0 0 0 2026-03-09T15:01:39.685 INFO:tasks.workunit.client.0.vm05.stdout:8/619: getdents d0/d24/d96 0 2026-03-09T15:01:39.686 INFO:tasks.workunit.client.0.vm05.stdout:9/636: write d2/f5 [4621605,105902] 0 2026-03-09T15:01:39.694 INFO:tasks.workunit.client.0.vm05.stdout:3/598: fsync d3/d29/d7f/f83 0 2026-03-09T15:01:39.695 INFO:tasks.workunit.client.0.vm05.stdout:3/599: truncate d3/df/d1e/d2c/d74/d9b/fc9 878001 0 2026-03-09T15:01:39.698 INFO:tasks.workunit.client.0.vm05.stdout:5/637: link d1/d5d/f81 d1/d4/d34/d6c/fdb 0 2026-03-09T15:01:39.700 INFO:tasks.workunit.client.0.vm05.stdout:9/637: mknod d2/d10/d22/da0/ce7 0 2026-03-09T15:01:39.700 
INFO:tasks.workunit.client.0.vm05.stdout:9/638: readlink d2/d10/d22/l50 0 2026-03-09T15:01:39.712 INFO:tasks.workunit.client.0.vm05.stdout:5/638: symlink d1/d4/d34/d35/d3d/d38/d63/ldc 0 2026-03-09T15:01:39.712 INFO:tasks.workunit.client.0.vm05.stdout:3/600: write d3/df/d1e/d2f/d52/f95 [3180453,86951] 0 2026-03-09T15:01:39.720 INFO:tasks.workunit.client.0.vm05.stdout:8/620: creat d0/d1/d12/d1b/d95/d42/da1/db9/fcd x:0 0 0 2026-03-09T15:01:39.722 INFO:tasks.workunit.client.0.vm05.stdout:9/639: sync 2026-03-09T15:01:39.723 INFO:tasks.workunit.client.0.vm05.stdout:9/640: fdatasync d2/d10/d22/d47/d95/f9d 0 2026-03-09T15:01:39.726 INFO:tasks.workunit.client.0.vm05.stdout:3/601: dread - d3/df/d10/d19/d44/fb0 zero size 2026-03-09T15:01:39.731 INFO:tasks.workunit.client.0.vm05.stdout:1/591: getdents d9/d2f/d37/d5f 0 2026-03-09T15:01:39.746 INFO:tasks.workunit.client.0.vm05.stdout:5/639: fsync d1/d4/d34/d56/d68/f8f 0 2026-03-09T15:01:39.757 INFO:tasks.workunit.client.0.vm05.stdout:1/592: fdatasync d9/d2f/d83/d98/d59/d49/d48/f25 0 2026-03-09T15:01:39.782 INFO:tasks.workunit.client.0.vm05.stdout:0/531: rename d9/de/f69 to d9/de/d12/d15/d2e/d32/d9f/fa9 0 2026-03-09T15:01:39.798 INFO:tasks.workunit.client.0.vm05.stdout:5/640: creat d1/d4/d19/d93/dcc/d91/fdd x:0 0 0 2026-03-09T15:01:39.817 INFO:tasks.workunit.client.0.vm05.stdout:1/593: dwrite d9/f7f [0,4194304] 0 2026-03-09T15:01:39.822 INFO:tasks.workunit.client.0.vm05.stdout:2/587: creat da/d29/d6a/da0/d91/dab/fbd x:0 0 0 2026-03-09T15:01:39.823 INFO:tasks.workunit.client.0.vm05.stdout:2/588: truncate da/d29/d3f/f9b 627011 0 2026-03-09T15:01:39.824 INFO:tasks.workunit.client.0.vm05.stdout:2/589: read da/fa2 [2897243,94554] 0 2026-03-09T15:01:39.846 INFO:tasks.workunit.client.0.vm05.stdout:2/590: write da/d29/d6a/db1/fba [368884,118532] 0 2026-03-09T15:01:39.846 INFO:tasks.workunit.client.0.vm05.stdout:3/602: getdents d3/df/d1e/d2c/d74/d9b 0 2026-03-09T15:01:39.847 INFO:tasks.workunit.client.0.vm05.stdout:3/603: write d3/df/f88 
[1333248,31451] 0 2026-03-09T15:01:39.850 INFO:tasks.workunit.client.0.vm05.stdout:8/621: rename d0/dc/c7d to d0/d1/d12/d3c/d8b/cce 0 2026-03-09T15:01:39.864 INFO:tasks.workunit.client.0.vm05.stdout:8/622: dread d0/f10 [0,4194304] 0 2026-03-09T15:01:39.869 INFO:tasks.workunit.client.0.vm05.stdout:3/604: mkdir d3/df/d10/d19/d44/dd2 0 2026-03-09T15:01:39.873 INFO:tasks.workunit.client.0.vm05.stdout:0/532: creat d9/faa x:0 0 0 2026-03-09T15:01:39.882 INFO:tasks.workunit.client.0.vm05.stdout:3/605: write d3/df/d59/d79/fc7 [148137,35654] 0 2026-03-09T15:01:39.887 INFO:tasks.workunit.client.0.vm05.stdout:0/533: read d9/de/d25/d38/f2f [1136733,74734] 0 2026-03-09T15:01:39.888 INFO:tasks.workunit.client.0.vm05.stdout:0/534: read - d9/de/d12/d15/d2e/f88 zero size 2026-03-09T15:01:39.888 INFO:tasks.workunit.client.0.vm05.stdout:0/535: chown d9/de/d25/d38/f55 428410 1 2026-03-09T15:01:39.893 INFO:tasks.workunit.client.0.vm05.stdout:3/606: truncate d3/df/d10/d34/f9d 453793 0 2026-03-09T15:01:39.893 INFO:tasks.workunit.client.0.vm05.stdout:3/607: chown d3/lc 45 1 2026-03-09T15:01:39.896 INFO:tasks.workunit.client.0.vm05.stdout:2/591: link da/dd/lad da/d29/d6a/da0/d91/dab/d2f/d35/d8a/lbe 0 2026-03-09T15:01:39.901 INFO:tasks.workunit.client.0.vm05.stdout:9/641: rename d2/d10/d22/d2c/d69/c38 to d2/d4e/d56/ce8 0 2026-03-09T15:01:39.904 INFO:tasks.workunit.client.0.vm05.stdout:4/567: write d2/d1d/d88/f8b [981003,65881] 0 2026-03-09T15:01:39.906 INFO:tasks.workunit.client.0.vm05.stdout:7/582: mkdir d1/d9/d23/d31/d32/d78/dbb 0 2026-03-09T15:01:39.909 INFO:tasks.workunit.client.0.vm05.stdout:0/536: symlink d9/de/d12/d15/lab 0 2026-03-09T15:01:39.910 INFO:tasks.workunit.client.0.vm05.stdout:2/592: truncate da/d16/f6e 842278 0 2026-03-09T15:01:39.916 INFO:tasks.workunit.client.0.vm05.stdout:4/568: symlink d2/d4/d8/d4a/lb7 0 2026-03-09T15:01:39.917 INFO:tasks.workunit.client.0.vm05.stdout:6/552: creat da/d43/fa6 x:0 0 0 2026-03-09T15:01:39.918 
INFO:tasks.workunit.client.0.vm05.stdout:7/583: creat d1/d22/fbc x:0 0 0 2026-03-09T15:01:39.919 INFO:tasks.workunit.client.0.vm05.stdout:0/537: mkdir d9/de/d25/dac 0 2026-03-09T15:01:39.920 INFO:tasks.workunit.client.0.vm05.stdout:2/593: read da/d16/d46/f73 [145073,124371] 0 2026-03-09T15:01:39.921 INFO:tasks.workunit.client.0.vm05.stdout:1/594: creat d9/d2f/d83/d98/d59/fc4 x:0 0 0 2026-03-09T15:01:39.927 INFO:tasks.workunit.client.0.vm05.stdout:6/553: symlink da/d17/d3b/d81/la7 0 2026-03-09T15:01:39.929 INFO:tasks.workunit.client.0.vm05.stdout:7/584: mkdir d1/d9/d23/d31/d8f/d93/dbd 0 2026-03-09T15:01:39.929 INFO:tasks.workunit.client.0.vm05.stdout:7/585: chown d1/d9/d23/d54/f6f 37 1 2026-03-09T15:01:39.930 INFO:tasks.workunit.client.0.vm05.stdout:5/641: getdents d1/d4/d19/d93/dcc/d91 0 2026-03-09T15:01:39.933 INFO:tasks.workunit.client.0.vm05.stdout:7/586: dread d1/d9/d23/d31/d32/f63 [0,4194304] 0 2026-03-09T15:01:39.933 INFO:tasks.workunit.client.0.vm05.stdout:7/587: write d1/f16 [1909678,7603] 0 2026-03-09T15:01:39.934 INFO:tasks.workunit.client.0.vm05.stdout:7/588: chown d1/d9/fc 4 1 2026-03-09T15:01:39.939 INFO:tasks.workunit.client.0.vm05.stdout:1/595: symlink d9/d17/lc5 0 2026-03-09T15:01:39.940 INFO:tasks.workunit.client.0.vm05.stdout:1/596: read d9/d2f/d83/d98/d59/d49/d92/d75/f76 [64234,10390] 0 2026-03-09T15:01:39.940 INFO:tasks.workunit.client.0.vm05.stdout:3/608: getdents d3/df/d10/d34/d8c/dbd/db3 0 2026-03-09T15:01:39.941 INFO:tasks.workunit.client.0.vm05.stdout:3/609: write d3/df/d10/d19/dce/dc8/fd1 [179583,126828] 0 2026-03-09T15:01:39.945 INFO:tasks.workunit.client.0.vm05.stdout:8/623: dwrite d0/d7/f20 [0,4194304] 0 2026-03-09T15:01:39.951 INFO:tasks.workunit.client.0.vm05.stdout:6/554: rmdir da/d17 39 2026-03-09T15:01:39.954 INFO:tasks.workunit.client.0.vm05.stdout:5/642: rename d1/d4/d27/d5b to d1/d4/d34/d35/d3d/dde 0 2026-03-09T15:01:39.958 INFO:tasks.workunit.client.0.vm05.stdout:9/642: dwrite d2/d10/d22/d2c/d3c/fbc [0,4194304] 0 
2026-03-09T15:01:39.973 INFO:tasks.workunit.client.0.vm05.stdout:2/594: dwrite da/f79 [0,4194304] 0 2026-03-09T15:01:39.976 INFO:tasks.workunit.client.0.vm05.stdout:2/595: dread - da/d16/d46/fb9 zero size 2026-03-09T15:01:39.992 INFO:tasks.workunit.client.0.vm05.stdout:1/597: fdatasync d9/d2f/d83/d98/f44 0 2026-03-09T15:01:39.993 INFO:tasks.workunit.client.0.vm05.stdout:4/569: creat d2/d4/d7/dc/fb8 x:0 0 0 2026-03-09T15:01:39.997 INFO:tasks.workunit.client.0.vm05.stdout:6/555: mkdir da/d43/d7b/d89/da8 0 2026-03-09T15:01:40.000 INFO:tasks.workunit.client.0.vm05.stdout:2/596: mkdir da/d29/d6a/da0/d91/dab/d9c/dbf 0 2026-03-09T15:01:40.010 INFO:tasks.workunit.client.0.vm05.stdout:9/643: truncate d2/f6 5112420 0 2026-03-09T15:01:40.014 INFO:tasks.workunit.client.0.vm05.stdout:2/597: creat da/d29/d6a/da0/d91/dab/d2f/d35/d8a/fc0 x:0 0 0 2026-03-09T15:01:40.015 INFO:tasks.workunit.client.0.vm05.stdout:5/643: sync 2026-03-09T15:01:40.018 INFO:tasks.workunit.client.0.vm05.stdout:4/570: creat d2/d4/d7/dc/fb9 x:0 0 0 2026-03-09T15:01:40.020 INFO:tasks.workunit.client.0.vm05.stdout:2/598: rmdir da/d29/d6a/da0/d91/dab/d2f 39 2026-03-09T15:01:40.023 INFO:tasks.workunit.client.0.vm05.stdout:0/538: write d9/de/d25/f2d [209807,14624] 0 2026-03-09T15:01:40.024 INFO:tasks.workunit.client.0.vm05.stdout:7/589: write d1/d9/d23/d31/d32/f63 [2345556,129483] 0 2026-03-09T15:01:40.025 INFO:tasks.workunit.client.0.vm05.stdout:2/599: sync 2026-03-09T15:01:40.026 INFO:tasks.workunit.client.0.vm05.stdout:2/600: chown da/f21 67539156 1 2026-03-09T15:01:40.030 INFO:tasks.workunit.client.0.vm05.stdout:3/610: dwrite d3/f1f [0,4194304] 0 2026-03-09T15:01:40.034 INFO:tasks.workunit.client.0.vm05.stdout:8/624: dwrite d0/d1/d12/d3c/f8c [0,4194304] 0 2026-03-09T15:01:40.037 INFO:tasks.workunit.client.0.vm05.stdout:1/598: dwrite d9/f21 [0,4194304] 0 2026-03-09T15:01:40.061 INFO:tasks.workunit.client.0.vm05.stdout:6/556: dwrite da/d17/f20 [0,4194304] 0 2026-03-09T15:01:40.066 
INFO:tasks.workunit.client.0.vm05.stdout:9/644: write d2/d10/d22/d2c/d69/f86 [248810,68388] 0 2026-03-09T15:01:40.066 INFO:tasks.workunit.client.0.vm05.stdout:9/645: chown d2/d8b/dae/ldf 26707596 1 2026-03-09T15:01:40.070 INFO:tasks.workunit.client.0.vm05.stdout:4/571: stat d2/d4/d1e/l38 0 2026-03-09T15:01:40.072 INFO:tasks.workunit.client.0.vm05.stdout:0/539: fdatasync d9/de/f5d 0 2026-03-09T15:01:40.104 INFO:tasks.workunit.client.0.vm05.stdout:3/611: creat d3/df/d59/d79/fd3 x:0 0 0 2026-03-09T15:01:40.123 INFO:tasks.workunit.client.0.vm05.stdout:5/644: truncate d1/d4/d34/d56/f6d 100932 0 2026-03-09T15:01:40.125 INFO:tasks.workunit.client.0.vm05.stdout:5/645: truncate d1/d4/d34/d35/dd0/fd7 882358 0 2026-03-09T15:01:40.128 INFO:tasks.workunit.client.0.vm05.stdout:9/646: unlink d2/d10/d22/d2c/f93 0 2026-03-09T15:01:40.133 INFO:tasks.workunit.client.0.vm05.stdout:4/572: mknod d2/d4/d7/d21/cba 0 2026-03-09T15:01:40.137 INFO:tasks.workunit.client.0.vm05.stdout:4/573: dwrite d2/d4/fb6 [0,4194304] 0 2026-03-09T15:01:40.144 INFO:tasks.workunit.client.0.vm05.stdout:2/601: dwrite da/d16/f6b [0,4194304] 0 2026-03-09T15:01:40.148 INFO:tasks.workunit.client.0.vm05.stdout:4/574: dwrite d2/d4/d7/dc/f54 [0,4194304] 0 2026-03-09T15:01:40.160 INFO:tasks.workunit.client.0.vm05.stdout:0/540: unlink d9/d64/c95 0 2026-03-09T15:01:40.183 INFO:tasks.workunit.client.0.vm05.stdout:1/599: dwrite d9/d2f/d83/d98/f39 [0,4194304] 0 2026-03-09T15:01:40.185 INFO:tasks.workunit.client.0.vm05.stdout:1/600: dread - d9/d2f/d83/d98/d59/fbc zero size 2026-03-09T15:01:40.185 INFO:tasks.workunit.client.0.vm05.stdout:1/601: dread - d9/d2f/d83/fa3 zero size 2026-03-09T15:01:40.211 INFO:tasks.workunit.client.0.vm05.stdout:6/557: write da/d19/f6a [1343392,111518] 0 2026-03-09T15:01:40.211 INFO:tasks.workunit.client.0.vm05.stdout:6/558: chown da/d17/d3b/l48 1049023418 1 2026-03-09T15:01:40.216 INFO:tasks.workunit.client.0.vm05.stdout:6/559: dread da/d17/f8d [0,4194304] 0 2026-03-09T15:01:40.225 
INFO:tasks.workunit.client.0.vm05.stdout:9/647: dread d2/d4e/f40 [0,4194304] 0 2026-03-09T15:01:40.232 INFO:tasks.workunit.client.0.vm05.stdout:3/612: dwrite d3/d29/d2d/d77/f9e [0,4194304] 0 2026-03-09T15:01:40.234 INFO:tasks.workunit.client.0.vm05.stdout:3/613: stat d3/df/d10/d34/f48 0 2026-03-09T15:01:40.234 INFO:tasks.workunit.client.0.vm05.stdout:2/602: rmdir da/d16 39 2026-03-09T15:01:40.235 INFO:tasks.workunit.client.0.vm05.stdout:2/603: chown da/dd/l77 5186 1 2026-03-09T15:01:40.241 INFO:tasks.workunit.client.0.vm05.stdout:4/575: mkdir d2/d43/dbb 0 2026-03-09T15:01:40.242 INFO:tasks.workunit.client.0.vm05.stdout:4/576: write d2/d4/d7/dc/fb9 [1039826,35923] 0 2026-03-09T15:01:40.248 INFO:tasks.workunit.client.0.vm05.stdout:0/541: truncate d9/de/d6a/f7c 565520 0 2026-03-09T15:01:40.251 INFO:tasks.workunit.client.0.vm05.stdout:0/542: dwrite d9/f2b [0,4194304] 0 2026-03-09T15:01:40.259 INFO:tasks.workunit.client.0.vm05.stdout:0/543: dwrite d9/de/d25/d38/f87 [0,4194304] 0 2026-03-09T15:01:40.278 INFO:tasks.workunit.client.0.vm05.stdout:5/646: getdents d1/da 0 2026-03-09T15:01:40.290 INFO:tasks.workunit.client.0.vm05.stdout:6/560: chown da/d17/f30 5428 1 2026-03-09T15:01:40.291 INFO:tasks.workunit.client.0.vm05.stdout:5/647: dread d1/d4/d34/d35/d3d/d96/fbe [0,4194304] 0 2026-03-09T15:01:40.299 INFO:tasks.workunit.client.0.vm05.stdout:7/590: getdents d1/d9/d23/d31/d32/d78/d7e 0 2026-03-09T15:01:40.303 INFO:tasks.workunit.client.0.vm05.stdout:7/591: dwrite d1/d22/fbc [0,4194304] 0 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: Active manager daemon vm09.cfuwdz restarted 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: Activating manager daemon vm09.cfuwdz 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.? 
192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/crt"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.? 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: osdmap e39: 6 total, 6 up, 6 in 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: mgrmap e21: vm09.cfuwdz(active, starting, since 0.0242373s) 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.nrocqt"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch 2026-03-09T15:01:40.305 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.rrcyql"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.jrhwzz"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mgr metadata", "who": "vm09.cfuwdz", "id": "vm09.cfuwdz"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 
vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/key"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: Manager daemon vm09.cfuwdz is now available 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:40.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:40.305 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:40 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:40.308 INFO:tasks.workunit.client.0.vm05.stdout:8/625: getdents d0/d1/d55 0 2026-03-09T15:01:40.312 INFO:tasks.workunit.client.0.vm05.stdout:6/561: mkdir da/d43/d7b/da9 0 2026-03-09T15:01:40.316 INFO:tasks.workunit.client.0.vm05.stdout:1/602: dwrite d9/d2f/d83/d98/f67 [0,4194304] 0 2026-03-09T15:01:40.322 INFO:tasks.workunit.client.0.vm05.stdout:9/648: write d2/d4e/d56/d37/d9c/d8e/f5f [377841,14739] 0 2026-03-09T15:01:40.323 INFO:tasks.workunit.client.0.vm05.stdout:9/649: dread - d2/d10/d22/d47/d73/f90 zero size 2026-03-09T15:01:40.328 INFO:tasks.workunit.client.0.vm05.stdout:3/614: dwrite d3/d29/d2d/d7b/fa3 [8388608,4194304] 0 2026-03-09T15:01:40.335 INFO:tasks.workunit.client.0.vm05.stdout:9/650: dwrite d2/d10/fe6 [0,4194304] 0 2026-03-09T15:01:40.341 INFO:tasks.workunit.client.0.vm05.stdout:2/604: mkdir da/d29/d64/dc1 0 2026-03-09T15:01:40.341 INFO:tasks.workunit.client.0.vm05.stdout:4/577: getdents d2/d4/d7/d79 0 2026-03-09T15:01:40.342 INFO:tasks.workunit.client.0.vm05.stdout:0/544: symlink d9/de/d12/d15/d2e/d32/d53/d6e/lad 0 2026-03-09T15:01:40.355 INFO:tasks.workunit.client.0.vm05.stdout:1/603: unlink d9/d2f/d37/c54 0 2026-03-09T15:01:40.361 INFO:tasks.workunit.client.0.vm05.stdout:1/604: dwrite d9/d2f/d83/d98/f67 [0,4194304] 0 2026-03-09T15:01:40.363 INFO:tasks.workunit.client.0.vm05.stdout:1/605: fdatasync d9/d2f/f3a 0 2026-03-09T15:01:40.371 INFO:tasks.workunit.client.0.vm05.stdout:3/615: dread d3/df/d10/d34/d8c/dbd/fa4 [0,4194304] 0 2026-03-09T15:01:40.376 INFO:tasks.workunit.client.0.vm05.stdout:7/592: write d1/d9/d23/d31/d32/d78/f88 [1703300,53552] 0 2026-03-09T15:01:40.376 INFO:tasks.workunit.client.0.vm05.stdout:5/648: write d1/d4/d19/f29 [1979584,4419] 0 2026-03-09T15:01:40.377 INFO:tasks.workunit.client.0.vm05.stdout:7/593: write d1/d9/d23/d54/f6f [307225,92025] 0 2026-03-09T15:01:40.383 
INFO:tasks.workunit.client.0.vm05.stdout:9/651: truncate d2/d4e/d56/d53/f66 6265644 0 2026-03-09T15:01:40.384 INFO:tasks.workunit.client.0.vm05.stdout:9/652: chown d2/d9e 3046059 1 2026-03-09T15:01:40.388 INFO:tasks.workunit.client.0.vm05.stdout:2/605: mknod da/d29/d6a/db1/cc2 0 2026-03-09T15:01:40.389 INFO:tasks.workunit.client.0.vm05.stdout:2/606: chown da/dd/l53 2380741 1 2026-03-09T15:01:40.390 INFO:tasks.workunit.client.0.vm05.stdout:4/578: symlink d2/d4/d7/d48/d6b/lbc 0 2026-03-09T15:01:40.391 INFO:tasks.workunit.client.0.vm05.stdout:4/579: chown d2/d4/d7/dc/f27 130388979 1 2026-03-09T15:01:40.394 INFO:tasks.workunit.client.0.vm05.stdout:4/580: dwrite d2/d7a/fb1 [0,4194304] 0 2026-03-09T15:01:40.396 INFO:tasks.workunit.client.0.vm05.stdout:0/545: mkdir d9/de/d25/dae 0 2026-03-09T15:01:40.396 INFO:tasks.workunit.client.0.vm05.stdout:0/546: chown d9/de/d6a/la7 210 1 2026-03-09T15:01:40.399 INFO:tasks.workunit.client.0.vm05.stdout:6/562: symlink da/d43/d66/laa 0 2026-03-09T15:01:40.403 INFO:tasks.workunit.client.0.vm05.stdout:5/649: symlink d1/d4/d34/dc0/ldf 0 2026-03-09T15:01:40.404 INFO:tasks.workunit.client.0.vm05.stdout:5/650: dread - d1/d4/d34/d56/da6/fd1 zero size 2026-03-09T15:01:40.404 INFO:tasks.workunit.client.0.vm05.stdout:5/651: fdatasync d1/d4/d34/d35/d3d/d38/d63/fda 0 2026-03-09T15:01:40.407 INFO:tasks.workunit.client.0.vm05.stdout:7/594: rename d1/d9/d23/d31/d32/d78/d7e/c96 to d1/d22/cbe 0 2026-03-09T15:01:40.414 INFO:tasks.workunit.client.0.vm05.stdout:9/653: dread d2/d10/f71 [0,4194304] 0 2026-03-09T15:01:40.417 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: Active manager daemon vm09.cfuwdz restarted 2026-03-09T15:01:40.417 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: Activating manager daemon vm09.cfuwdz 2026-03-09T15:01:40.417 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.? 
192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/crt"}]: dispatch 2026-03-09T15:01:40.417 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:01:40.417 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: osdmap e39: 6 total, 6 up, 6 in 2026-03-09T15:01:40.417 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: mgrmap e21: vm09.cfuwdz(active, starting, since 0.0242373s) 2026-03-09T15:01:40.417 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:01:40.417 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T15:01:40.417 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T15:01:40.417 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.nrocqt"}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch 2026-03-09T15:01:40.418 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.rrcyql"}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.jrhwzz"}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mgr metadata", "who": "vm09.cfuwdz", "id": "vm09.cfuwdz"}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 
vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/key"}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: Manager daemon vm09.cfuwdz is now available 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:40.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:40.418 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:40 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:40.422 INFO:tasks.workunit.client.0.vm05.stdout:1/606: symlink d9/d2f/d83/d98/d59/d49/d48/db2/lc6 0 2026-03-09T15:01:40.429 INFO:tasks.workunit.client.0.vm05.stdout:8/626: truncate d0/f10 2012243 0 2026-03-09T15:01:40.433 INFO:tasks.workunit.client.0.vm05.stdout:3/616: dwrite d3/f7 [4194304,4194304] 0 2026-03-09T15:01:40.439 INFO:tasks.workunit.client.0.vm05.stdout:3/617: dwrite d3/df/d10/d19/d44/d50/fc4 [0,4194304] 0 2026-03-09T15:01:40.449 INFO:tasks.workunit.client.0.vm05.stdout:5/652: dwrite d1/da/fe [4194304,4194304] 0 2026-03-09T15:01:40.449 INFO:tasks.workunit.client.0.vm05.stdout:5/653: readlink d1/d4/d34/d35/d4e/d6f/d7e/ld6 0 2026-03-09T15:01:40.466 INFO:tasks.workunit.client.0.vm05.stdout:2/607: readlink da/d16/l97 0 2026-03-09T15:01:40.469 INFO:tasks.workunit.client.0.vm05.stdout:9/654: mkdir d2/d4e/d56/d37/d99/de9 0 2026-03-09T15:01:40.474 INFO:tasks.workunit.client.0.vm05.stdout:4/581: symlink d2/d4/d50/d8a/lbd 0 2026-03-09T15:01:40.484 INFO:tasks.workunit.client.0.vm05.stdout:2/608: dread da/d29/d6a/da0/f8f [0,4194304] 0 2026-03-09T15:01:40.492 INFO:tasks.workunit.client.0.vm05.stdout:2/609: dread da/f21 [0,4194304] 0 2026-03-09T15:01:40.504 INFO:tasks.workunit.client.0.vm05.stdout:0/547: dwrite d9/de/d6a/f7c [0,4194304] 0 2026-03-09T15:01:40.506 INFO:tasks.workunit.client.0.vm05.stdout:0/548: chown d9/de/d25/d38 326862492 1 2026-03-09T15:01:40.519 INFO:tasks.workunit.client.0.vm05.stdout:5/654: creat d1/d4/d34/d35/d4e/fe0 x:0 0 0 2026-03-09T15:01:40.526 INFO:tasks.workunit.client.0.vm05.stdout:9/655: symlink d2/d4e/d56/d37/d99/lea 0 2026-03-09T15:01:40.527 INFO:tasks.workunit.client.0.vm05.stdout:6/563: creat da/fab x:0 0 0 2026-03-09T15:01:40.528 INFO:tasks.workunit.client.0.vm05.stdout:6/564: write da/f1a [1247729,34269] 0 2026-03-09T15:01:40.530 INFO:tasks.workunit.client.0.vm05.stdout:1/607: symlink 
d9/d2f/lc7 0 2026-03-09T15:01:40.539 INFO:tasks.workunit.client.0.vm05.stdout:4/582: dread d2/d43/f47 [0,4194304] 0 2026-03-09T15:01:40.540 INFO:tasks.workunit.client.0.vm05.stdout:2/610: rename da/dd/d4a to da/d29/d3f/dc3 0 2026-03-09T15:01:40.543 INFO:tasks.workunit.client.0.vm05.stdout:8/627: write d0/d1/d12/d1b/d95/f48 [1186363,28365] 0 2026-03-09T15:01:40.552 INFO:tasks.workunit.client.0.vm05.stdout:0/549: creat d9/de/d12/d15/d2e/d32/d53/d6e/faf x:0 0 0 2026-03-09T15:01:40.553 INFO:tasks.workunit.client.0.vm05.stdout:3/618: mknod d3/df/d1e/cd4 0 2026-03-09T15:01:40.564 INFO:tasks.workunit.client.0.vm05.stdout:1/608: mknod d9/d2f/d83/d98/d59/d49/d77/cc8 0 2026-03-09T15:01:40.566 INFO:tasks.workunit.client.0.vm05.stdout:5/655: dwrite d1/d4/d27/f4f [0,4194304] 0 2026-03-09T15:01:40.582 INFO:tasks.workunit.client.0.vm05.stdout:4/583: stat d2/d49/c78 0 2026-03-09T15:01:40.589 INFO:tasks.workunit.client.0.vm05.stdout:8/628: readlink d0/d1/d12/l9e 0 2026-03-09T15:01:40.598 INFO:tasks.workunit.client.0.vm05.stdout:0/550: stat d9/l29 0 2026-03-09T15:01:40.600 INFO:tasks.workunit.client.0.vm05.stdout:3/619: rmdir d3/df/d10/d19/db2 39 2026-03-09T15:01:40.601 INFO:tasks.workunit.client.0.vm05.stdout:7/595: getdents d1/d9/d72 0 2026-03-09T15:01:40.602 INFO:tasks.workunit.client.0.vm05.stdout:7/596: dwrite d1/d9/d23/d31/d8f/d93/fae [0,4194304] 0 2026-03-09T15:01:40.609 INFO:tasks.workunit.client.0.vm05.stdout:6/565: mknod da/d43/d7b/d89/da8/cac 0 2026-03-09T15:01:40.611 INFO:tasks.workunit.client.0.vm05.stdout:1/609: mkdir d9/d2f/d37/d5a/da9/dc9 0 2026-03-09T15:01:40.616 INFO:tasks.workunit.client.0.vm05.stdout:5/656: mknod d1/db5/ce1 0 2026-03-09T15:01:40.617 INFO:tasks.workunit.client.0.vm05.stdout:5/657: readlink d1/db5/l83 0 2026-03-09T15:01:40.617 INFO:tasks.workunit.client.0.vm05.stdout:5/658: fdatasync d1/da/fb7 0 2026-03-09T15:01:40.638 INFO:tasks.workunit.client.0.vm05.stdout:4/584: dwrite d2/d4/d7/dc/f8e [0,4194304] 0 2026-03-09T15:01:40.666 
INFO:tasks.workunit.client.0.vm05.stdout:4/585: dread d2/d49/f4d [0,4194304] 0 2026-03-09T15:01:40.670 INFO:tasks.workunit.client.0.vm05.stdout:4/586: stat d2/d4/d50 0 2026-03-09T15:01:40.673 INFO:tasks.workunit.client.0.vm05.stdout:4/587: dread d2/d4/d7/d21/d3d/f65 [0,4194304] 0 2026-03-09T15:01:40.676 INFO:tasks.workunit.client.0.vm05.stdout:4/588: dwrite d2/d4/d7/dc/f45 [0,4194304] 0 2026-03-09T15:01:40.681 INFO:tasks.workunit.client.0.vm05.stdout:4/589: dwrite d2/d4/d7/dc/f54 [0,4194304] 0 2026-03-09T15:01:40.716 INFO:tasks.workunit.client.0.vm05.stdout:2/611: read - da/d29/d6a/da0/d91/dab/d2f/f93 zero size 2026-03-09T15:01:40.724 INFO:tasks.workunit.client.0.vm05.stdout:8/629: truncate d0/dc/f4a 868838 0 2026-03-09T15:01:40.724 INFO:tasks.workunit.client.0.vm05.stdout:8/630: readlink d0/d1/d12/d1b/d95/d78/d86/la4 0 2026-03-09T15:01:40.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.730+0000 7f7df467c700 1 -- 192.168.123.105:0/1664793548 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7dec072360 msgr2=0x7f7dec0770e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:40.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.730+0000 7f7df467c700 1 --2- 192.168.123.105:0/1664793548 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7dec072360 0x7f7dec0770e0 secure :-1 s=READY pgs=315 cs=0 l=1 rev1=1 crypto rx=0x7f7de400d3f0 tx=0x7f7de400d700 comp rx=0 tx=0).stop 2026-03-09T15:01:40.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.731+0000 7f7df467c700 1 -- 192.168.123.105:0/1664793548 shutdown_connections 2026-03-09T15:01:40.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.731+0000 7f7df467c700 1 --2- 192.168.123.105:0/1664793548 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7dec072360 0x7f7dec0770e0 unknown :-1 s=CLOSED pgs=315 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:40.731 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.731+0000 7f7df467c700 1 --2- 192.168.123.105:0/1664793548 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7dec071980 0x7f7dec071d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:40.731 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.731+0000 7f7df467c700 1 -- 192.168.123.105:0/1664793548 >> 192.168.123.105:0/1664793548 conn(0x7f7dec06d1a0 msgr2=0x7f7dec06f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:40.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.731+0000 7f7df467c700 1 -- 192.168.123.105:0/1664793548 shutdown_connections 2026-03-09T15:01:40.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.731+0000 7f7df467c700 1 -- 192.168.123.105:0/1664793548 wait complete. 2026-03-09T15:01:40.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.731+0000 7f7df467c700 1 Processor -- start 2026-03-09T15:01:40.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.732+0000 7f7df467c700 1 -- start start 2026-03-09T15:01:40.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.732+0000 7f7df467c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7dec071980 0x7f7dec131390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:40.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.732+0000 7f7df467c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7dec1318d0 0x7f7dec07f530 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:40.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.732+0000 7f7df467c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7dec131dd0 con 0x7f7dec1318d0 2026-03-09T15:01:40.732 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.732+0000 7f7df467c700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f7dec131f40 con 0x7f7dec071980 2026-03-09T15:01:40.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.732+0000 7f7df2418700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7dec071980 0x7f7dec131390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:40.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.732+0000 7f7df2418700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7dec071980 0x7f7dec131390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:37000/0 (socket says 192.168.123.105:37000) 2026-03-09T15:01:40.732 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.732+0000 7f7df2418700 1 -- 192.168.123.105:0/3106820222 learned_addr learned my addr 192.168.123.105:0/3106820222 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:01:40.736 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.735+0000 7f7df1c17700 1 --2- 192.168.123.105:0/3106820222 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7dec1318d0 0x7f7dec07f530 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:40.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.735+0000 7f7df2418700 1 -- 192.168.123.105:0/3106820222 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7dec1318d0 msgr2=0x7f7dec07f530 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:40.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.735+0000 7f7df2418700 1 --2- 
192.168.123.105:0/3106820222 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7dec1318d0 0x7f7dec07f530 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:40.737 INFO:tasks.workunit.client.0.vm05.stdout:6/566: read da/d19/f5b [2236821,31831] 0 2026-03-09T15:01:40.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.735+0000 7f7df2418700 1 -- 192.168.123.105:0/3106820222 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f7de4007ed0 con 0x7f7dec071980 2026-03-09T15:01:40.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.737+0000 7f7df2418700 1 --2- 192.168.123.105:0/3106820222 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7dec071980 0x7f7dec131390 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f7de800eb10 tx=0x7f7de800eed0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:40.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.738+0000 7f7de37fe700 1 -- 192.168.123.105:0/3106820222 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7de800cca0 con 0x7f7dec071980 2026-03-09T15:01:40.740 INFO:tasks.workunit.client.0.vm05.stdout:0/551: dwrite d9/de/d12/f84 [0,4194304] 0 2026-03-09T15:01:40.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.738+0000 7f7df467c700 1 -- 192.168.123.105:0/3106820222 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f7dec07fad0 con 0x7f7dec071980 2026-03-09T15:01:40.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.739+0000 7f7df467c700 1 -- 192.168.123.105:0/3106820222 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f7dec07fff0 con 0x7f7dec071980 2026-03-09T15:01:40.741 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.742+0000 7f7de37fe700 1 -- 192.168.123.105:0/3106820222 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f7de800ce00 con 0x7f7dec071980 2026-03-09T15:01:40.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.742+0000 7f7de37fe700 1 -- 192.168.123.105:0/3106820222 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f7de8018910 con 0x7f7dec071980 2026-03-09T15:01:40.742 INFO:tasks.workunit.client.0.vm05.stdout:0/552: stat d9/de/l4e 0 2026-03-09T15:01:40.742 INFO:tasks.workunit.client.0.vm05.stdout:0/553: dread - d9/de/d25/d38/d78/f8e zero size 2026-03-09T15:01:40.743 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.743+0000 7f7df467c700 1 -- 192.168.123.105:0/3106820222 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f7dd0005320 con 0x7f7dec071980 2026-03-09T15:01:40.743 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.743+0000 7f7de37fe700 1 -- 192.168.123.105:0/3106820222 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 21) v1 ==== 49900+0+0 (secure 0 0 0) 0x7f7de8010c80 con 0x7f7dec071980 2026-03-09T15:01:40.743 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.743+0000 7f7de37fe700 1 -- 192.168.123.105:0/3106820222 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f7de8014070 con 0x7f7dec071980 2026-03-09T15:01:40.746 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.746+0000 7f7de37fe700 1 -- 192.168.123.105:0/3106820222 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+177933 (secure 0 0 0) 0x7f7de800f7f0 con 0x7f7dec071980 2026-03-09T15:01:40.753 INFO:tasks.workunit.client.0.vm05.stdout:0/554: dwrite d9/de/d12/d15/d2e/f3a [0,4194304] 0 
2026-03-09T15:01:40.756 INFO:tasks.workunit.client.0.vm05.stdout:0/555: chown d9/de/d12/d15/f9e 466743 1 2026-03-09T15:01:40.768 INFO:tasks.workunit.client.0.vm05.stdout:0/556: dread d9/de/d25/d38/fa6 [0,4194304] 0 2026-03-09T15:01:40.787 INFO:tasks.workunit.client.0.vm05.stdout:8/631: creat d0/d1/d12/d1b/d95/d42/d60/da7/fcf x:0 0 0 2026-03-09T15:01:40.788 INFO:tasks.workunit.client.0.vm05.stdout:3/620: creat d3/d29/d7f/dc3/fd5 x:0 0 0 2026-03-09T15:01:40.789 INFO:tasks.workunit.client.0.vm05.stdout:3/621: stat d3/df/d59/l69 0 2026-03-09T15:01:40.790 INFO:tasks.workunit.client.0.vm05.stdout:7/597: symlink d1/d9/d23/d31/d8f/d93/d95/lbf 0 2026-03-09T15:01:40.794 INFO:tasks.workunit.client.0.vm05.stdout:9/656: getdents d2/d4e/d56/d37/d9c 0 2026-03-09T15:01:40.796 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.796+0000 7f7de37fe700 1 -- 192.168.123.105:0/3106820222 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mgrmap(e 22) v1 ==== 50027+0+0 (secure 0 0 0) 0x7f7de8005550 con 0x7f7dec071980 2026-03-09T15:01:40.796 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.796+0000 7f7de37fe700 1 --2- 192.168.123.105:0/3106820222 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7f7dd8042890 0x7f7dd8044c70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:40.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.800+0000 7f7df1c17700 1 --2- 192.168.123.105:0/3106820222 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7f7dd8042890 0x7f7dd8044c70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:40.805 INFO:tasks.workunit.client.0.vm05.stdout:1/610: symlink d9/d2f/d83/lca 0 2026-03-09T15:01:40.805 INFO:tasks.workunit.client.0.vm05.stdout:1/611: chown f7 770438663 1 2026-03-09T15:01:40.809 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.809+0000 7f7df1c17700 1 --2- 192.168.123.105:0/3106820222 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7f7dd8042890 0x7f7dd8044c70 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f7de4007ea0 tx=0x7f7de4007e10 comp rx=0 tx=0).ready entity=mgr.24413 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:40.824 INFO:tasks.workunit.client.0.vm05.stdout:6/567: mkdir da/d43/d7b/d89/da8/dad 0 2026-03-09T15:01:40.832 INFO:tasks.workunit.client.0.vm05.stdout:2/612: dread da/f4e [0,4194304] 0 2026-03-09T15:01:40.835 INFO:tasks.workunit.client.0.vm05.stdout:2/613: dwrite da/d29/d6a/da0/d91/dab/d2f/fae [0,4194304] 0 2026-03-09T15:01:40.838 INFO:tasks.workunit.client.0.vm05.stdout:2/614: write da/d29/d3f/dc3/f9a [235260,44722] 0 2026-03-09T15:01:40.838 INFO:tasks.workunit.client.0.vm05.stdout:6/568: dread da/d17/f2a [0,4194304] 0 2026-03-09T15:01:40.911 INFO:tasks.workunit.client.0.vm05.stdout:2/615: dread da/d29/d6a/da0/d91/dab/f4b [0,4194304] 0 2026-03-09T15:01:40.912 INFO:tasks.workunit.client.0.vm05.stdout:2/616: chown da/d29/f2d 1705 1 2026-03-09T15:01:40.916 INFO:tasks.workunit.client.0.vm05.stdout:2/617: dwrite da/f9d [0,4194304] 0 2026-03-09T15:01:40.926 INFO:tasks.workunit.client.0.vm05.stdout:0/557: mknod d9/de/d6a/cb0 0 2026-03-09T15:01:40.933 INFO:tasks.workunit.client.0.vm05.stdout:8/632: symlink d0/d1/d12/d3c/ld0 0 2026-03-09T15:01:40.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.936+0000 7f7df467c700 1 -- 192.168.123.105:0/3106820222 --> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f7dd0000bf0 con 0x7f7dd8042890 2026-03-09T15:01:40.938 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.938+0000 7f7de37fe700 1 -- 192.168.123.105:0/3106820222 <== mgr.24413 v2:192.168.123.109:6828/2887506718 1 ==== 
mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+310 (secure 0 0 0) 0x7f7dd0000bf0 con 0x7f7dd8042890 2026-03-09T15:01:40.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.943+0000 7f7de17fa700 1 -- 192.168.123.105:0/3106820222 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7f7dd8042890 msgr2=0x7f7dd8044c70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:40.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.943+0000 7f7de17fa700 1 --2- 192.168.123.105:0/3106820222 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7f7dd8042890 0x7f7dd8044c70 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f7de4007ea0 tx=0x7f7de4007e10 comp rx=0 tx=0).stop 2026-03-09T15:01:40.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.943+0000 7f7de17fa700 1 -- 192.168.123.105:0/3106820222 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7dec071980 msgr2=0x7f7dec131390 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:40.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.943+0000 7f7de17fa700 1 --2- 192.168.123.105:0/3106820222 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7dec071980 0x7f7dec131390 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f7de800eb10 tx=0x7f7de800eed0 comp rx=0 tx=0).stop 2026-03-09T15:01:40.943 INFO:tasks.workunit.client.0.vm05.stdout:7/598: readlink d1/d9/d23/d31/d8f/la9 0 2026-03-09T15:01:40.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.945+0000 7f7de17fa700 1 -- 192.168.123.105:0/3106820222 shutdown_connections 2026-03-09T15:01:40.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.945+0000 7f7de17fa700 1 --2- 192.168.123.105:0/3106820222 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7f7dd8042890 0x7f7dd8044c70 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-09T15:01:40.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.945+0000 7f7de17fa700 1 --2- 192.168.123.105:0/3106820222 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f7dec071980 0x7f7dec131390 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:40.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.945+0000 7f7de17fa700 1 --2- 192.168.123.105:0/3106820222 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f7dec1318d0 0x7f7dec07f530 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:40.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.945+0000 7f7de17fa700 1 -- 192.168.123.105:0/3106820222 >> 192.168.123.105:0/3106820222 conn(0x7f7dec06d1a0 msgr2=0x7f7dec0764c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:40.946 INFO:tasks.workunit.client.0.vm05.stdout:1/612: fdatasync d9/d2f/d37/d5f/f73 0 2026-03-09T15:01:40.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.946+0000 7f7de17fa700 1 -- 192.168.123.105:0/3106820222 shutdown_connections 2026-03-09T15:01:40.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:40.946+0000 7f7de17fa700 1 -- 192.168.123.105:0/3106820222 wait complete. 
2026-03-09T15:01:40.957 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:01:40.962 INFO:tasks.workunit.client.0.vm05.stdout:1/613: read d9/d2f/d83/d98/d59/d49/d48/f8f [140984,47815] 0 2026-03-09T15:01:40.976 INFO:tasks.workunit.client.0.vm05.stdout:2/618: mknod da/d29/d45/cc4 0 2026-03-09T15:01:40.977 INFO:tasks.workunit.client.0.vm05.stdout:2/619: truncate da/d29/d6a/da0/d91/dab/d9c/fb2 381857 0 2026-03-09T15:01:40.980 INFO:tasks.workunit.client.0.vm05.stdout:2/620: dwrite da/d29/d6a/da0/d91/dab/d9c/fb2 [0,4194304] 0 2026-03-09T15:01:40.995 INFO:tasks.workunit.client.0.vm05.stdout:4/590: creat d2/d4/fbe x:0 0 0 2026-03-09T15:01:41.003 INFO:tasks.workunit.client.0.vm05.stdout:4/591: dread d2/d4/d7/dc/fb9 [0,4194304] 0 2026-03-09T15:01:41.003 INFO:tasks.workunit.client.0.vm05.stdout:4/592: write d2/f33 [3560553,66321] 0 2026-03-09T15:01:41.004 INFO:tasks.workunit.client.0.vm05.stdout:4/593: write d2/f33 [102369,24199] 0 2026-03-09T15:01:41.006 INFO:tasks.workunit.client.0.vm05.stdout:4/594: write d2/d1d/d88/f8b [2594169,117229] 0 2026-03-09T15:01:41.026 INFO:tasks.workunit.client.0.vm05.stdout:9/657: symlink d2/d4e/d56/d37/d9c/ddd/leb 0 2026-03-09T15:01:41.027 INFO:tasks.workunit.client.0.vm05.stdout:5/659: getdents d1/d4/d34/d35/d3d 0 2026-03-09T15:01:41.048 INFO:tasks.workunit.client.0.vm05.stdout:1/614: creat d9/d2f/d37/d5a/fcb x:0 0 0 2026-03-09T15:01:41.081 INFO:tasks.workunit.client.0.vm05.stdout:5/660: sync 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.088+0000 7fae27f6a700 1 -- 192.168.123.105:0/4211362901 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae200722f0 msgr2=0x7fae20077070 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.088+0000 7fae27f6a700 1 --2- 192.168.123.105:0/4211362901 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae200722f0 0x7fae20077070 secure :-1 
s=READY pgs=316 cs=0 l=1 rev1=1 crypto rx=0x7fae18008280 tx=0x7fae18008590 comp rx=0 tx=0).stop 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae27f6a700 1 -- 192.168.123.105:0/4211362901 shutdown_connections 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae27f6a700 1 --2- 192.168.123.105:0/4211362901 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae200722f0 0x7fae20077070 unknown :-1 s=CLOSED pgs=316 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae27f6a700 1 --2- 192.168.123.105:0/4211362901 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fae20071910 0x7fae20071d20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae27f6a700 1 -- 192.168.123.105:0/4211362901 >> 192.168.123.105:0/4211362901 conn(0x7fae2006d160 msgr2=0x7fae2006f5b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae27f6a700 1 -- 192.168.123.105:0/4211362901 shutdown_connections 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae27f6a700 1 -- 192.168.123.105:0/4211362901 wait complete. 
2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae27f6a700 1 Processor -- start 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae27f6a700 1 -- start start 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae27f6a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fae20071910 0x7fae200824a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae27f6a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae200829e0 0x7fae20082e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae27f6a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fae20083e50 con 0x7fae200829e0 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae27f6a700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fae201b2a90 con 0x7fae20071910 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae25505700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae200829e0 0x7fae20082e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae25505700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae200829e0 0x7fae20082e50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:54422/0 (socket says 192.168.123.105:54422) 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae25505700 1 -- 192.168.123.105:0/336473950 learned_addr learned my addr 192.168.123.105:0/336473950 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.089+0000 7fae25d06700 1 --2- 192.168.123.105:0/336473950 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fae20071910 0x7fae200824a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.090+0000 7fae25505700 1 -- 192.168.123.105:0/336473950 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fae20071910 msgr2=0x7fae200824a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.090+0000 7fae25505700 1 --2- 192.168.123.105:0/336473950 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fae20071910 0x7fae200824a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.090+0000 7fae25505700 1 -- 192.168.123.105:0/336473950 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fae18007ed0 con 0x7fae200829e0 2026-03-09T15:01:41.094 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.091+0000 7fae25505700 1 --2- 192.168.123.105:0/336473950 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae200829e0 0x7fae20082e50 secure :-1 s=READY pgs=317 cs=0 l=1 rev1=1 crypto rx=0x7fae18007590 tx=0x7fae18003f80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:01:41.094 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.094+0000 7fae16ffd700 1 -- 192.168.123.105:0/336473950 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fae18010280 con 0x7fae200829e0 2026-03-09T15:01:41.094 INFO:tasks.workunit.client.0.vm05.stdout:6/569: write da/d43/f56 [5074672,28468] 0 2026-03-09T15:01:41.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.094+0000 7fae27f6a700 1 -- 192.168.123.105:0/336473950 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fae20107d50 con 0x7fae200829e0 2026-03-09T15:01:41.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.094+0000 7fae27f6a700 1 -- 192.168.123.105:0/336473950 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fae201081f0 con 0x7fae200829e0 2026-03-09T15:01:41.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.095+0000 7fae16ffd700 1 -- 192.168.123.105:0/336473950 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fae180103e0 con 0x7fae200829e0 2026-03-09T15:01:41.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.095+0000 7fae16ffd700 1 -- 192.168.123.105:0/336473950 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fae1800a3f0 con 0x7fae200829e0 2026-03-09T15:01:41.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.096+0000 7fae27f6a700 1 -- 192.168.123.105:0/336473950 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fae04005320 con 0x7fae200829e0 2026-03-09T15:01:41.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.097+0000 7fae16ffd700 1 -- 192.168.123.105:0/336473950 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 22) v1 ==== 50027+0+0 (secure 0 0 0) 0x7fae18016020 con 
0x7fae200829e0 2026-03-09T15:01:41.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.097+0000 7fae16ffd700 1 --2- 192.168.123.105:0/336473950 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fae0c03db10 0x7fae0c03ffc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:41.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.097+0000 7fae16ffd700 1 -- 192.168.123.105:0/336473950 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5448+0+0 (secure 0 0 0) 0x7fae18011a80 con 0x7fae200829e0 2026-03-09T15:01:41.099 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.099+0000 7fae25d06700 1 --2- 192.168.123.105:0/336473950 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fae0c03db10 0x7fae0c03ffc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:41.099 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.099+0000 7fae25d06700 1 --2- 192.168.123.105:0/336473950 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fae0c03db10 0x7fae0c03ffc0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fae1c005950 tx=0x7fae1c0058e0 comp rx=0 tx=0).ready entity=mgr.24413 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:41.106 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.106+0000 7fae16ffd700 1 -- 192.168.123.105:0/336473950 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7fae18018850 con 0x7fae200829e0 2026-03-09T15:01:41.152 INFO:tasks.workunit.client.0.vm05.stdout:4/595: creat d2/d7a/fbf x:0 0 0 2026-03-09T15:01:41.152 INFO:tasks.workunit.client.0.vm05.stdout:7/599: creat d1/d9/d23/d31/d8f/d93/dbd/fc0 x:0 0 0 
2026-03-09T15:01:41.152 INFO:tasks.workunit.client.0.vm05.stdout:9/658: creat d2/d8b/dae/fec x:0 0 0 2026-03-09T15:01:41.154 INFO:tasks.workunit.client.0.vm05.stdout:7/600: write d1/d12/fb3 [957666,1739] 0 2026-03-09T15:01:41.154 INFO:tasks.workunit.client.0.vm05.stdout:7/601: readlink d1/d9/d23/l4b 0 2026-03-09T15:01:41.155 INFO:tasks.workunit.client.0.vm05.stdout:7/602: dread - d1/d12/fb9 zero size 2026-03-09T15:01:41.161 INFO:tasks.workunit.client.0.vm05.stdout:0/558: rename d9/l29 to d9/de/d12/d15/d2e/d32/d53/lb1 0 2026-03-09T15:01:41.164 INFO:tasks.workunit.client.0.vm05.stdout:9/659: dwrite d2/d10/d22/fb6 [0,4194304] 0 2026-03-09T15:01:41.164 INFO:tasks.workunit.client.0.vm05.stdout:8/633: creat d0/d7/fd1 x:0 0 0 2026-03-09T15:01:41.175 INFO:tasks.workunit.client.0.vm05.stdout:6/570: mkdir da/d17/d95/da2/dae 0 2026-03-09T15:01:41.175 INFO:tasks.workunit.client.0.vm05.stdout:7/603: dwrite d1/d9/d23/d31/d32/f5b [0,4194304] 0 2026-03-09T15:01:41.178 INFO:tasks.workunit.client.0.vm05.stdout:3/622: getdents d3/d29 0 2026-03-09T15:01:41.183 INFO:tasks.workunit.client.0.vm05.stdout:4/596: creat d2/d4/d50/fc0 x:0 0 0 2026-03-09T15:01:41.183 INFO:tasks.workunit.client.0.vm05.stdout:4/597: chown d2/d4/l6c 62490364 1 2026-03-09T15:01:41.185 INFO:tasks.workunit.client.0.vm05.stdout:7/604: sync 2026-03-09T15:01:41.185 INFO:tasks.workunit.client.0.vm05.stdout:7/605: chown d1/d12/f11 1 1 2026-03-09T15:01:41.186 INFO:tasks.workunit.client.0.vm05.stdout:7/606: write d1/d9/fc [4197186,125272] 0 2026-03-09T15:01:41.196 INFO:tasks.workunit.client.0.vm05.stdout:3/623: dread d3/df/d10/d34/f4c [0,4194304] 0 2026-03-09T15:01:41.199 INFO:tasks.workunit.client.0.vm05.stdout:7/607: dwrite d1/d9/d23/d31/d8f/d93/fa3 [4194304,4194304] 0 2026-03-09T15:01:41.202 INFO:tasks.workunit.client.0.vm05.stdout:1/615: mkdir d9/d2f/d83/d98/d59/d49/d78/dcc 0 2026-03-09T15:01:41.217 INFO:tasks.workunit.client.0.vm05.stdout:2/621: creat da/d29/d64/fc5 x:0 0 0 2026-03-09T15:01:41.228 
INFO:tasks.workunit.client.0.vm05.stdout:0/559: creat d9/de/d12/da3/fb2 x:0 0 0 2026-03-09T15:01:41.229 INFO:tasks.workunit.client.0.vm05.stdout:0/560: chown d9/de/d12/da3/fa4 14909220 1 2026-03-09T15:01:41.231 INFO:tasks.workunit.client.0.vm05.stdout:6/571: dread - da/d17/f64 zero size 2026-03-09T15:01:41.236 INFO:tasks.workunit.client.0.vm05.stdout:5/661: dread d1/d4/d34/d35/f52 [0,4194304] 0 2026-03-09T15:01:41.239 INFO:tasks.workunit.client.0.vm05.stdout:5/662: dwrite d1/d4/d34/d35/fc5 [0,4194304] 0 2026-03-09T15:01:41.245 INFO:tasks.workunit.client.0.vm05.stdout:8/634: mkdir d0/d1/d12/d1b/d95/d78/d86/dd2 0 2026-03-09T15:01:41.253 INFO:tasks.workunit.client.0.vm05.stdout:9/660: rmdir d2/d10/d22/d2c/d69/d5a 39 2026-03-09T15:01:41.266 INFO:tasks.workunit.client.0.vm05.stdout:4/598: symlink d2/d1d/lc1 0 2026-03-09T15:01:41.266 INFO:tasks.workunit.client.0.vm05.stdout:4/599: write d2/d7a/fb1 [2469295,53105] 0 2026-03-09T15:01:41.267 INFO:tasks.workunit.client.0.vm05.stdout:4/600: readlink d2/d4/d1e/l38 0 2026-03-09T15:01:41.267 INFO:tasks.workunit.client.0.vm05.stdout:4/601: dread - d2/d4/d7/f90 zero size 2026-03-09T15:01:41.268 INFO:tasks.workunit.client.0.vm05.stdout:4/602: truncate d2/d4/d7/dc/f27 1973433 0 2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: Migrating agent root cert to cert store 2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: Migrating agent root key to cert store 2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: Checking for cert/key for grafana.vm05 2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: Migrating grafana.vm05 cert to cert store 2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: Migrating grafana.vm05 key to cert store 
2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm09.cfuwdz/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm09.cfuwdz/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm09.cfuwdz/trash_purge_schedule"}]: dispatch 2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm09.cfuwdz/trash_purge_schedule"}]: dispatch 2026-03-09T15:01:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:41 vm05.local ceph-mon[50611]: mgrmap e22: vm09.cfuwdz(active, since 1.07771s) 2026-03-09T15:01:41.306 INFO:tasks.workunit.client.0.vm05.stdout:1/616: rename 
d9/d2f/d83/d98/d59/d49/d48 to d9/d2f/d37/d5a/da9/dc9/dcd 0 2026-03-09T15:01:41.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.309+0000 7fae27f6a700 1 -- 192.168.123.105:0/336473950 --> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fae04000bf0 con 0x7fae0c03db10 2026-03-09T15:01:41.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.311+0000 7fae16ffd700 1 -- 192.168.123.105:0/336473950 <== mgr.24413 v2:192.168.123.109:6828/2887506718 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+310 (secure 0 0 0) 0x7fae04000bf0 con 0x7fae0c03db10 2026-03-09T15:01:41.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.314+0000 7fae27f6a700 1 -- 192.168.123.105:0/336473950 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fae0c03db10 msgr2=0x7fae0c03ffc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:41.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.314+0000 7fae27f6a700 1 --2- 192.168.123.105:0/336473950 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fae0c03db10 0x7fae0c03ffc0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fae1c005950 tx=0x7fae1c0058e0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.314+0000 7fae27f6a700 1 -- 192.168.123.105:0/336473950 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae200829e0 msgr2=0x7fae20082e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:41.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.314+0000 7fae27f6a700 1 --2- 192.168.123.105:0/336473950 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae200829e0 0x7fae20082e50 secure :-1 s=READY pgs=317 cs=0 l=1 rev1=1 crypto rx=0x7fae18007590 tx=0x7fae18003f80 comp rx=0 
tx=0).stop 2026-03-09T15:01:41.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.314+0000 7fae27f6a700 1 -- 192.168.123.105:0/336473950 shutdown_connections 2026-03-09T15:01:41.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.314+0000 7fae27f6a700 1 --2- 192.168.123.105:0/336473950 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fae0c03db10 0x7fae0c03ffc0 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.314+0000 7fae27f6a700 1 --2- 192.168.123.105:0/336473950 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fae20071910 0x7fae200824a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.314+0000 7fae27f6a700 1 --2- 192.168.123.105:0/336473950 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fae200829e0 0x7fae20082e50 unknown :-1 s=CLOSED pgs=317 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.314+0000 7fae27f6a700 1 -- 192.168.123.105:0/336473950 >> 192.168.123.105:0/336473950 conn(0x7fae2006d160 msgr2=0x7fae200764b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:41.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.314+0000 7fae27f6a700 1 -- 192.168.123.105:0/336473950 shutdown_connections 2026-03-09T15:01:41.314 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.315+0000 7fae27f6a700 1 -- 192.168.123.105:0/336473950 wait complete. 
2026-03-09T15:01:41.323 INFO:tasks.workunit.client.0.vm05.stdout:6/572: read da/f16 [3551592,6181] 0 2026-03-09T15:01:41.334 INFO:tasks.workunit.client.0.vm05.stdout:9/661: fdatasync d2/d4e/f40 0 2026-03-09T15:01:41.344 INFO:tasks.workunit.client.0.vm05.stdout:7/608: unlink d1/d22/d3c/c5f 0 2026-03-09T15:01:41.349 INFO:tasks.workunit.client.0.vm05.stdout:2/622: rename da/d29/d6a/da0/d91/dab/d9c/fb2 to da/d16/d46/fc6 0 2026-03-09T15:01:41.350 INFO:tasks.workunit.client.0.vm05.stdout:2/623: chown da/d29/d6a/da0/d91/dab/d2f/db3 5362 1 2026-03-09T15:01:41.352 INFO:tasks.workunit.client.0.vm05.stdout:1/617: creat d9/d2f/d55/fce x:0 0 0 2026-03-09T15:01:41.352 INFO:tasks.workunit.client.0.vm05.stdout:1/618: truncate d9/d2f/d83/f9e 4299325 0 2026-03-09T15:01:41.363 INFO:tasks.workunit.client.0.vm05.stdout:0/561: dwrite d9/de/d12/d15/d2e/d32/d53/f5f [0,4194304] 0 2026-03-09T15:01:41.380 INFO:tasks.workunit.client.0.vm05.stdout:4/603: mknod d2/d4/d7/dc/da8/cc2 0 2026-03-09T15:01:41.391 INFO:tasks.workunit.client.0.vm05.stdout:8/635: rename d0/d1/d12/d1b/d95/d4b/l6d to d0/d1/d55/ld3 0 2026-03-09T15:01:41.393 INFO:tasks.workunit.client.0.vm05.stdout:5/663: truncate d1/d4/d34/d35/d3d/d38/f8a 1261002 0 2026-03-09T15:01:41.394 INFO:tasks.workunit.client.0.vm05.stdout:5/664: chown d1/d4/d34/d35/d3d/d38/d69/la1 0 1 2026-03-09T15:01:41.401 INFO:tasks.workunit.client.0.vm05.stdout:0/562: creat d9/de/d6a/fb3 x:0 0 0 2026-03-09T15:01:41.402 INFO:tasks.workunit.client.0.vm05.stdout:0/563: fsync d9/de/d12/d15/d2e/d32/d53/d6e/faf 0 2026-03-09T15:01:41.408 INFO:tasks.workunit.client.0.vm05.stdout:5/665: sync 2026-03-09T15:01:41.414 INFO:tasks.workunit.client.0.vm05.stdout:4/604: dwrite d2/d4/d7/d48/f5a [0,4194304] 0 2026-03-09T15:01:41.414 INFO:tasks.workunit.client.0.vm05.stdout:4/605: readlink d2/d4/d8/d4a/d6e/lb5 0 2026-03-09T15:01:41.419 INFO:tasks.workunit.client.0.vm05.stdout:3/624: unlink d3/df/d10/d19/db2/lbc 0 2026-03-09T15:01:41.436 
INFO:tasks.workunit.client.0.vm05.stdout:9/662: rename d2/d4e/d56/d37 to d2/d4e/d56/d53/d64/ded 0 2026-03-09T15:01:41.446 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.447+0000 7f2ac6844700 1 -- 192.168.123.105:0/2714058619 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2ac0102540 msgr2=0x7f2ac0102950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:41.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.447+0000 7f2ac6844700 1 --2- 192.168.123.105:0/2714058619 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2ac0102540 0x7f2ac0102950 secure :-1 s=READY pgs=318 cs=0 l=1 rev1=1 crypto rx=0x7f2abc009b00 tx=0x7f2abc009e10 comp rx=0 tx=0).stop 2026-03-09T15:01:41.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.448+0000 7f2ac6844700 1 -- 192.168.123.105:0/2714058619 shutdown_connections 2026-03-09T15:01:41.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.448+0000 7f2ac6844700 1 --2- 192.168.123.105:0/2714058619 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2ac0103740 0x7f2ac0103b90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.448+0000 7f2ac6844700 1 --2- 192.168.123.105:0/2714058619 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2ac0102540 0x7f2ac0102950 unknown :-1 s=CLOSED pgs=318 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.448+0000 7f2ac6844700 1 -- 192.168.123.105:0/2714058619 >> 192.168.123.105:0/2714058619 conn(0x7f2ac00fdaf0 msgr2=0x7f2ac00fff20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:41.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.449+0000 7f2ac6844700 1 -- 192.168.123.105:0/2714058619 shutdown_connections 2026-03-09T15:01:41.450 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.450+0000 7f2ac6844700 1 -- 192.168.123.105:0/2714058619 wait complete. 2026-03-09T15:01:41.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.451+0000 7f2ac6844700 1 Processor -- start 2026-03-09T15:01:41.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.451+0000 7f2ac6844700 1 -- start start 2026-03-09T15:01:41.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.451+0000 7f2ac6844700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2ac0102540 0x7f2ac0197e10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:41.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.451+0000 7f2ac6844700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2ac0103740 0x7f2ac0198350 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:41.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.451+0000 7f2ac6844700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2ac0198970 con 0x7f2ac0102540 2026-03-09T15:01:41.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.451+0000 7f2ac6844700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2ac0198ab0 con 0x7f2ac0103740 2026-03-09T15:01:41.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.451+0000 7f2ac5041700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2ac0103740 0x7f2ac0198350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:41.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.451+0000 7f2ac5041700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2ac0103740 0x7f2ac0198350 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:37044/0 (socket says 192.168.123.105:37044) 2026-03-09T15:01:41.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.451+0000 7f2ac5041700 1 -- 192.168.123.105:0/4148126879 learned_addr learned my addr 192.168.123.105:0/4148126879 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:01:41.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.452+0000 7f2ac5842700 1 --2- 192.168.123.105:0/4148126879 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2ac0102540 0x7f2ac0197e10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:41.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.452+0000 7f2ac5041700 1 -- 192.168.123.105:0/4148126879 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2ac0102540 msgr2=0x7f2ac0197e10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:41.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.452+0000 7f2ac5041700 1 --2- 192.168.123.105:0/4148126879 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2ac0102540 0x7f2ac0197e10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.452+0000 7f2ac5041700 1 -- 192.168.123.105:0/4148126879 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2abc0097e0 con 0x7f2ac0103740 2026-03-09T15:01:41.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.452+0000 7f2ac5041700 1 --2- 192.168.123.105:0/4148126879 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2ac0103740 0x7f2ac0198350 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto 
rx=0x7f2ab000d8d0 tx=0x7f2ab000dc90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:41.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.452+0000 7f2ab6ffd700 1 -- 192.168.123.105:0/4148126879 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2ab0009940 con 0x7f2ac0103740 2026-03-09T15:01:41.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.452+0000 7f2ac6844700 1 -- 192.168.123.105:0/4148126879 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f2ac019d560 con 0x7f2ac0103740 2026-03-09T15:01:41.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.452+0000 7f2ab6ffd700 1 -- 192.168.123.105:0/4148126879 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f2ab0010460 con 0x7f2ac0103740 2026-03-09T15:01:41.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.453+0000 7f2ab6ffd700 1 -- 192.168.123.105:0/4148126879 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f2ab000f5d0 con 0x7f2ac0103740 2026-03-09T15:01:41.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.453+0000 7f2ab6ffd700 1 -- 192.168.123.105:0/4148126879 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 22) v1 ==== 50027+0+0 (secure 0 0 0) 0x7f2ab000f7e0 con 0x7f2ac0103740 2026-03-09T15:01:41.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.453+0000 7f2ac6844700 1 -- 192.168.123.105:0/4148126879 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f2ac019dab0 con 0x7f2ac0103740 2026-03-09T15:01:41.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.454+0000 7f2ab6ffd700 1 --2- 192.168.123.105:0/4148126879 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7f2aac03d860 0x7f2aac03fd10 unknown :-1 s=NONE 
pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:41.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.454+0000 7f2ac6844700 1 -- 192.168.123.105:0/4148126879 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2ac004ea50 con 0x7f2ac0103740 2026-03-09T15:01:41.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.454+0000 7f2ac5842700 1 --2- 192.168.123.105:0/4148126879 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7f2aac03d860 0x7f2aac03fd10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:41.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.454+0000 7f2ab6ffd700 1 -- 192.168.123.105:0/4148126879 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5448+0+0 (secure 0 0 0) 0x7f2ab0009af0 con 0x7f2ac0103740 2026-03-09T15:01:41.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.458+0000 7f2ab6ffd700 1 -- 192.168.123.105:0/4148126879 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f2ab0058050 con 0x7f2ac0103740 2026-03-09T15:01:41.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.459+0000 7f2ac5842700 1 --2- 192.168.123.105:0/4148126879 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7f2aac03d860 0x7f2aac03fd10 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f2abc005340 tx=0x7f2abc005fb0 comp rx=0 tx=0).ready entity=mgr.24413 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:41.459 INFO:tasks.workunit.client.0.vm05.stdout:2/624: dwrite da/d16/f6e [0,4194304] 0 2026-03-09T15:01:41.465 INFO:tasks.workunit.client.0.vm05.stdout:0/564: creat d9/de/d25/d38/d41/fb4 x:0 0 0 
2026-03-09T15:01:41.474 INFO:tasks.workunit.client.0.vm05.stdout:4/606: creat d2/d4/d50/d8a/fc3 x:0 0 0 2026-03-09T15:01:41.475 INFO:tasks.workunit.client.0.vm05.stdout:7/609: link d1/c71 d1/d9/d23/d31/d32/d78/d7e/d81/cc1 0 2026-03-09T15:01:41.475 INFO:tasks.workunit.client.0.vm05.stdout:7/610: dread - d1/d9/d23/d31/d8f/d93/fb8 zero size 2026-03-09T15:01:41.489 INFO:tasks.workunit.client.0.vm05.stdout:2/625: rename da/d29/d6a/da0/d91/dab/d2f/f42 to da/d29/d6a/d7f/fc7 0 2026-03-09T15:01:41.492 INFO:tasks.workunit.client.0.vm05.stdout:2/626: readlink da/d29/d6a/da0/d91/dab/l31 0 2026-03-09T15:01:41.492 INFO:tasks.workunit.client.0.vm05.stdout:2/627: chown da/d29/d3f/lac 63 1 2026-03-09T15:01:41.495 INFO:tasks.workunit.client.0.vm05.stdout:2/628: dread da/d29/d6a/db1/fba [0,4194304] 0 2026-03-09T15:01:41.499 INFO:tasks.workunit.client.0.vm05.stdout:8/636: truncate d0/d1/d12/d1b/d95/d54/f64 4081115 0 2026-03-09T15:01:41.500 INFO:tasks.workunit.client.0.vm05.stdout:1/619: creat d9/d2f/d83/fcf x:0 0 0 2026-03-09T15:01:41.502 INFO:tasks.workunit.client.0.vm05.stdout:6/573: getdents da/d17 0 2026-03-09T15:01:41.509 INFO:tasks.workunit.client.0.vm05.stdout:5/666: link d1/f5e d1/d4/d34/fe2 0 2026-03-09T15:01:41.515 INFO:tasks.workunit.client.0.vm05.stdout:0/565: dread d9/de/d12/d15/d2e/f40 [0,4194304] 0 2026-03-09T15:01:41.517 INFO:tasks.workunit.client.0.vm05.stdout:5/667: sync 2026-03-09T15:01:41.529 INFO:tasks.workunit.client.0.vm05.stdout:3/625: truncate d3/df/d10/d34/f9d 948425 0 2026-03-09T15:01:41.536 INFO:tasks.workunit.client.0.vm05.stdout:7/611: creat d1/d9/d72/d97/fc2 x:0 0 0 2026-03-09T15:01:41.537 INFO:tasks.workunit.client.0.vm05.stdout:7/612: chown d1/d9/d23/d31/d8f/d93/cb1 5600792 1 2026-03-09T15:01:41.565 INFO:tasks.workunit.client.0.vm05.stdout:1/620: mkdir d9/d2f/d55/dd0 0 2026-03-09T15:01:41.570 INFO:tasks.workunit.client.0.vm05.stdout:6/574: mknod da/d17/d3b/d81/caf 0 2026-03-09T15:01:41.579 INFO:tasks.workunit.client.0.vm05.stdout:4/607: mkdir 
d2/d4/d7/dc4 0 2026-03-09T15:01:41.597 INFO:tasks.workunit.client.0.vm05.stdout:0/566: rename d9/de/d12/d15/d2e/d32/d53/d6e to d9/de/d6a/db5 0 2026-03-09T15:01:41.602 INFO:tasks.workunit.client.0.vm05.stdout:0/567: sync 2026-03-09T15:01:41.602 INFO:tasks.workunit.client.0.vm05.stdout:5/668: symlink d1/d5d/le3 0 2026-03-09T15:01:41.611 INFO:tasks.workunit.client.0.vm05.stdout:3/626: write d3/df/d59/f75 [4576321,38603] 0 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: Migrating agent root cert to cert store 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: Migrating agent root key to cert store 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: Checking for cert/key for grafana.vm05 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: Migrating grafana.vm05 cert to cert store 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: Migrating grafana.vm05 key to cert store 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/vm09.cfuwdz/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm09.cfuwdz/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm09.cfuwdz/trash_purge_schedule"}]: dispatch 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm09.cfuwdz/trash_purge_schedule"}]: dispatch 2026-03-09T15:01:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:41 vm09.local ceph-mon[59673]: mgrmap e22: vm09.cfuwdz(active, since 1.07771s) 2026-03-09T15:01:41.618 INFO:tasks.workunit.client.0.vm05.stdout:7/613: mknod d1/d22/d3c/cc3 0 2026-03-09T15:01:41.618 INFO:tasks.workunit.client.0.vm05.stdout:7/614: stat d1/f15 0 2026-03-09T15:01:41.636 INFO:tasks.workunit.client.0.vm05.stdout:9/663: link d2/d10/d22/dc2/db1/fb8 d2/d4e/d56/d53/d64/ded/d9c/d94/fee 0 2026-03-09T15:01:41.638 INFO:tasks.workunit.client.0.vm05.stdout:7/615: dread d1/d12/fb7 [0,4194304] 0 2026-03-09T15:01:41.638 INFO:tasks.workunit.client.0.vm05.stdout:7/616: fdatasync d1/d9/d23/d31/d51/f29 0 2026-03-09T15:01:41.650 INFO:tasks.workunit.client.0.vm05.stdout:2/629: mkdir da/d29/d6a/da0/d91/dab/d9c/dbf/dc8 0 2026-03-09T15:01:41.660 INFO:tasks.workunit.client.0.vm05.stdout:4/608: mkdir d2/d4/d1e/da2/dc5 0 2026-03-09T15:01:41.674 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.673+0000 7f2ac6844700 1 -- 192.168.123.105:0/4148126879 --> 
[v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f2ac0108090 con 0x7f2aac03d860 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.679+0000 7f2ab6ffd700 1 -- 192.168.123.105:0/4148126879 <== mgr.24413 v2:192.168.123.109:6828/2887506718 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3528 (secure 0 0 0) 0x7f2ac0108090 con 0x7f2aac03d860 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (4m) 3m ago 5m 22.6M - 0.25.0 c8568f914cd2 35e160b8d1de 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (5m) 3m ago 5m 8061k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (4m) 6s ago 4m 8632k - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (5m) 3m ago 5m 7411k - 18.2.0 dc2bc1663786 1c577d7a0de0 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (4m) 6s ago 4m 7402k - 18.2.0 dc2bc1663786 9e4961442551 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (4m) 3m ago 5m 82.7M - 9.4.7 954c08fa6188 46e00e5e5b38 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (3m) 3m ago 3m 16.0M - 18.2.0 dc2bc1663786 ea3dca51957f 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (3m) 3m ago 3m 12.8M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (3m) 6s ago 3m 15.0M - 18.2.0 
dc2bc1663786 6c77fb591d5a 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (3m) 6s ago 3m 282M - 18.2.0 dc2bc1663786 b5ad1c71089a 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:9283,8765,8443 running (6m) 3m ago 6m 499M - 18.2.0 dc2bc1663786 528c75e7c581 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (9s) 6s ago 4m 57.9M - 19.2.3-678-ge911bdeb 654f31e6858e 9e4386df1493 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (6m) 3m ago 6m 49.2M 2048M 18.2.0 dc2bc1663786 c83e96b62251 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (4m) 6s ago 4m 36.9M 2048M 18.2.0 dc2bc1663786 7963792b5376 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (5m) 3m ago 5m 13.9M - 1.5.0 0da6a335fe13 925d94d1da6f 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (4m) 6s ago 4m 14.5M - 1.5.0 0da6a335fe13 e0b25e3a046e 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (4m) 3m ago 4m 45.7M 4096M 18.2.0 dc2bc1663786 50f3ca995318 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (4m) 3m ago 4m 46.9M 4096M 18.2.0 dc2bc1663786 23e35bdafe50 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (4m) 3m ago 4m 45.8M 4096M 18.2.0 dc2bc1663786 75097dc12979 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (4m) 6s ago 4m 319M 4096M 18.2.0 dc2bc1663786 e79644a0564f 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (3m) 6s ago 3m 259M 4096M 18.2.0 dc2bc1663786 4239752204df 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (3m) 6s ago 3m 250M 4096M 18.2.0 
dc2bc1663786 85fde149396e 2026-03-09T15:01:41.679 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (4m) 3m ago 5m 38.9M - 2.43.0 a07b618ecd1d c36363ff6641 2026-03-09T15:01:41.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.682+0000 7f2ac6844700 1 -- 192.168.123.105:0/4148126879 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7f2aac03d860 msgr2=0x7f2aac03fd10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:41.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.682+0000 7f2ac6844700 1 --2- 192.168.123.105:0/4148126879 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7f2aac03d860 0x7f2aac03fd10 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f2abc005340 tx=0x7f2abc005fb0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.682+0000 7f2ac6844700 1 -- 192.168.123.105:0/4148126879 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2ac0103740 msgr2=0x7f2ac0198350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:41.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.682+0000 7f2ac6844700 1 --2- 192.168.123.105:0/4148126879 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2ac0103740 0x7f2ac0198350 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f2ab000d8d0 tx=0x7f2ab000dc90 comp rx=0 tx=0).stop 2026-03-09T15:01:41.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.682+0000 7f2ac6844700 1 -- 192.168.123.105:0/4148126879 shutdown_connections 2026-03-09T15:01:41.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.682+0000 7f2ac6844700 1 --2- 192.168.123.105:0/4148126879 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7f2aac03d860 0x7f2aac03fd10 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:01:41.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.682+0000 7f2ac6844700 1 --2- 192.168.123.105:0/4148126879 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2ac0102540 0x7f2ac0197e10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.682+0000 7f2ac6844700 1 --2- 192.168.123.105:0/4148126879 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2ac0103740 0x7f2ac0198350 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.682+0000 7f2ac6844700 1 -- 192.168.123.105:0/4148126879 >> 192.168.123.105:0/4148126879 conn(0x7f2ac00fdaf0 msgr2=0x7f2ac0106970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:41.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.683+0000 7f2ac6844700 1 -- 192.168.123.105:0/4148126879 shutdown_connections 2026-03-09T15:01:41.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.683+0000 7f2ac6844700 1 -- 192.168.123.105:0/4148126879 wait complete. 
2026-03-09T15:01:41.739 INFO:tasks.workunit.client.0.vm05.stdout:9/664: dread - d2/d10/d22/d47/d73/f81 zero size 2026-03-09T15:01:41.739 INFO:tasks.workunit.client.0.vm05.stdout:7/617: creat d1/d9/d72/fc4 x:0 0 0 2026-03-09T15:01:41.740 INFO:tasks.workunit.client.0.vm05.stdout:9/665: chown d2/d10/d22/d47/l78 1011502 1 2026-03-09T15:01:41.744 INFO:tasks.workunit.client.0.vm05.stdout:9/666: dwrite d2/d10/d22/d2c/d69/fd8 [0,4194304] 0 2026-03-09T15:01:41.749 INFO:tasks.workunit.client.0.vm05.stdout:6/575: symlink da/d43/lb0 0 2026-03-09T15:01:41.749 INFO:tasks.workunit.client.0.vm05.stdout:4/609: mknod d2/d49/d69/cc6 0 2026-03-09T15:01:41.754 INFO:tasks.workunit.client.0.vm05.stdout:5/669: dread d1/d4/d19/d93/dcc/d91/fb0 [0,4194304] 0 2026-03-09T15:01:41.754 INFO:tasks.workunit.client.0.vm05.stdout:5/670: write d1/db5/fc3 [120719,108818] 0 2026-03-09T15:01:41.761 INFO:tasks.workunit.client.0.vm05.stdout:9/667: sync 2026-03-09T15:01:41.775 INFO:tasks.workunit.client.0.vm05.stdout:8/637: getdents d0/d1/d12/d1b/d6e 0 2026-03-09T15:01:41.788 INFO:tasks.workunit.client.0.vm05.stdout:4/610: symlink d2/d1d/d88/lc7 0 2026-03-09T15:01:41.793 INFO:tasks.workunit.client.0.vm05.stdout:6/576: mknod da/d43/d7b/d89/da8/cb1 0 2026-03-09T15:01:41.795 INFO:tasks.workunit.client.0.vm05.stdout:3/627: creat d3/df/d10/fd6 x:0 0 0 2026-03-09T15:01:41.796 INFO:tasks.workunit.client.0.vm05.stdout:3/628: chown d3/df/d10 620 1 2026-03-09T15:01:41.797 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.797+0000 7fccd3c8d700 1 -- 192.168.123.105:0/468134310 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcccc072360 msgr2=0x7fcccc0770e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:41.797 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.797+0000 7fccd3c8d700 1 --2- 192.168.123.105:0/468134310 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcccc072360 0x7fcccc0770e0 secure :-1 s=READY pgs=319 cs=0 l=1 rev1=1 
crypto rx=0x7fccc400cd40 tx=0x7fccc400a320 comp rx=0 tx=0).stop 2026-03-09T15:01:41.797 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd3c8d700 1 -- 192.168.123.105:0/468134310 shutdown_connections 2026-03-09T15:01:41.797 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd3c8d700 1 --2- 192.168.123.105:0/468134310 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcccc072360 0x7fcccc0770e0 unknown :-1 s=CLOSED pgs=319 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.797 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd3c8d700 1 --2- 192.168.123.105:0/468134310 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcccc071980 0x7fcccc071d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.798 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd3c8d700 1 -- 192.168.123.105:0/468134310 >> 192.168.123.105:0/468134310 conn(0x7fcccc06d1a0 msgr2=0x7fcccc06f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd3c8d700 1 -- 192.168.123.105:0/468134310 shutdown_connections 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd3c8d700 1 -- 192.168.123.105:0/468134310 wait complete. 
2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd3c8d700 1 Processor -- start 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd3c8d700 1 -- start start 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd3c8d700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcccc071980 0x7fcccc082510 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd3c8d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcccc082a50 0x7fcccc082ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd3c8d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcccc1b2a90 con 0x7fcccc082a50 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd3c8d700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcccc1b2bd0 con 0x7fcccc071980 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd1228700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcccc082a50 0x7fcccc082ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd1228700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcccc082a50 0x7fcccc082ec0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:54470/0 (socket says 192.168.123.105:54470) 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.798+0000 7fccd1228700 1 -- 192.168.123.105:0/1326738389 learned_addr learned my addr 192.168.123.105:0/1326738389 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.799+0000 7fccd1228700 1 -- 192.168.123.105:0/1326738389 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcccc071980 msgr2=0x7fcccc082510 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.799+0000 7fccd1228700 1 --2- 192.168.123.105:0/1326738389 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcccc071980 0x7fcccc082510 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.799+0000 7fccd1228700 1 -- 192.168.123.105:0/1326738389 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fccc400c9f0 con 0x7fcccc082a50 2026-03-09T15:01:41.799 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.799+0000 7fccd1228700 1 --2- 192.168.123.105:0/1326738389 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcccc082a50 0x7fcccc082ec0 secure :-1 s=READY pgs=320 cs=0 l=1 rev1=1 crypto rx=0x7fccc400bb40 tx=0x7fccc400bc20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:41.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.800+0000 7fccc2ffd700 1 -- 192.168.123.105:0/1326738389 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fccc400dea0 con 0x7fcccc082a50 2026-03-09T15:01:41.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.800+0000 7fccc2ffd700 1 -- 
192.168.123.105:0/1326738389 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fccc4009d70 con 0x7fcccc082a50 2026-03-09T15:01:41.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.800+0000 7fccc2ffd700 1 -- 192.168.123.105:0/1326738389 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fccc401c970 con 0x7fcccc082a50 2026-03-09T15:01:41.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.800+0000 7fccd3c8d700 1 -- 192.168.123.105:0/1326738389 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcccc1b2d10 con 0x7fcccc082a50 2026-03-09T15:01:41.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.800+0000 7fccd3c8d700 1 -- 192.168.123.105:0/1326738389 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcccc1b3200 con 0x7fcccc082a50 2026-03-09T15:01:41.800 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.800+0000 7fccd3c8d700 1 -- 192.168.123.105:0/1326738389 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcccc07c8d0 con 0x7fcccc082a50 2026-03-09T15:01:41.801 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.801+0000 7fccc2ffd700 1 -- 192.168.123.105:0/1326738389 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 22) v1 ==== 50027+0+0 (secure 0 0 0) 0x7fccc40074e0 con 0x7fcccc082a50 2026-03-09T15:01:41.801 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.802+0000 7fccc2ffd700 1 --2- 192.168.123.105:0/1326738389 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fccb803db10 0x7fccb803ffc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:41.802 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.802+0000 7fccc2ffd700 1 -- 
192.168.123.105:0/1326738389 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5448+0+0 (secure 0 0 0) 0x7fccc4022030 con 0x7fcccc082a50 2026-03-09T15:01:41.802 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.802+0000 7fccd1a29700 1 --2- 192.168.123.105:0/1326738389 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fccb803db10 0x7fccb803ffc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:41.802 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.802+0000 7fccd1a29700 1 --2- 192.168.123.105:0/1326738389 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fccb803db10 0x7fccb803ffc0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fccc80098a0 tx=0x7fccc8006d90 comp rx=0 tx=0).ready entity=mgr.24413 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:41.805 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:41.805+0000 7fccc2ffd700 1 -- 192.168.123.105:0/1326738389 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7fccc40078e0 con 0x7fcccc082a50 2026-03-09T15:01:41.816 INFO:tasks.workunit.client.0.vm05.stdout:9/668: fdatasync d2/d10/f48 0 2026-03-09T15:01:41.819 INFO:tasks.workunit.client.0.vm05.stdout:9/669: dwrite d2/d8b/dae/fec [0,4194304] 0 2026-03-09T15:01:41.825 INFO:tasks.workunit.client.0.vm05.stdout:9/670: read d2/d4e/d56/d53/d64/ded/d9c/d8e/f5f [292919,127557] 0 2026-03-09T15:01:41.829 INFO:tasks.workunit.client.0.vm05.stdout:1/621: getdents d9/d2f/d83/d98/d59/d49/d78/d94 0 2026-03-09T15:01:41.831 INFO:tasks.workunit.client.0.vm05.stdout:4/611: fsync d2/d4/d7/f7b 0 2026-03-09T15:01:41.833 INFO:tasks.workunit.client.0.vm05.stdout:7/618: creat d1/d49/d4a/d77/fc5 x:0 0 0 2026-03-09T15:01:41.837 
INFO:tasks.workunit.client.0.vm05.stdout:2/630: getdents da/d29/d3f 0 2026-03-09T15:01:41.839 INFO:tasks.workunit.client.0.vm05.stdout:8/638: mkdir d0/d1/d12/d1b/d66/dcc/dd4 0 2026-03-09T15:01:41.840 INFO:tasks.workunit.client.0.vm05.stdout:8/639: readlink d0/dc/l22 0 2026-03-09T15:01:41.843 INFO:tasks.workunit.client.0.vm05.stdout:7/619: sync 2026-03-09T15:01:41.844 INFO:tasks.workunit.client.0.vm05.stdout:7/620: readlink d1/d9/d23/d31/d32/d78/la6 0 2026-03-09T15:01:41.853 INFO:tasks.workunit.client.0.vm05.stdout:7/621: dwrite d1/d9/d23/d31/d51/f29 [4194304,4194304] 0 2026-03-09T15:01:41.861 INFO:tasks.workunit.client.0.vm05.stdout:4/612: creat d2/d4/d7/d48/fc8 x:0 0 0 2026-03-09T15:01:41.862 INFO:tasks.workunit.client.0.vm05.stdout:4/613: dread - d2/d4/d8/d4a/fa9 zero size 2026-03-09T15:01:41.863 INFO:tasks.workunit.client.0.vm05.stdout:4/614: fsync d2/d4/d7/dc/da8/fab 0 2026-03-09T15:01:41.872 INFO:tasks.workunit.client.0.vm05.stdout:7/622: dwrite d1/f15 [0,4194304] 0 2026-03-09T15:01:41.875 INFO:tasks.workunit.client.0.vm05.stdout:0/568: dwrite d9/d64/f8c [0,4194304] 0 2026-03-09T15:01:41.876 INFO:tasks.workunit.client.0.vm05.stdout:4/615: dwrite d2/d4/d50/d8a/fc3 [0,4194304] 0 2026-03-09T15:01:41.882 INFO:tasks.workunit.client.0.vm05.stdout:5/671: write d1/ff [4828561,7712] 0 2026-03-09T15:01:41.884 INFO:tasks.workunit.client.0.vm05.stdout:6/577: link da/d17/f33 da/d17/d3b/fb2 0 2026-03-09T15:01:41.890 INFO:tasks.workunit.client.0.vm05.stdout:9/671: mkdir d2/d4e/d56/d53/d64/dd9/def 0 2026-03-09T15:01:41.890 INFO:tasks.workunit.client.0.vm05.stdout:8/640: rename d0/d1/d12/d1b/l28 to d0/d1/d12/d1b/d95/d4b/ld5 0 2026-03-09T15:01:41.890 INFO:tasks.workunit.client.0.vm05.stdout:8/641: dread - d0/d1/d12/d1b/d6e/fc2 zero size 2026-03-09T15:01:41.891 INFO:tasks.workunit.client.0.vm05.stdout:9/672: write d2/d10/d22/d47/fe5 [597524,45266] 0 2026-03-09T15:01:41.891 INFO:tasks.workunit.client.0.vm05.stdout:9/673: readlink d2/d10/d22/d2c/l4c 0 2026-03-09T15:01:41.895 
INFO:tasks.workunit.client.0.vm05.stdout:9/674: fdatasync d2/d4e/d56/d53/d64/ded/d9c/d8e/dcb/fd6 0 2026-03-09T15:01:41.901 INFO:tasks.workunit.client.0.vm05.stdout:1/622: mknod d9/d2f/d83/d98/d59/d49/d78/dbd/cd1 0 2026-03-09T15:01:41.902 INFO:tasks.workunit.client.0.vm05.stdout:1/623: readlink d9/d17/l1c 0 2026-03-09T15:01:41.903 INFO:tasks.workunit.client.0.vm05.stdout:4/616: unlink d2/d4/d7/dc/c5b 0 2026-03-09T15:01:41.906 INFO:tasks.workunit.client.0.vm05.stdout:7/623: mknod d1/d9/d23/d31/d32/d78/cc6 0 2026-03-09T15:01:41.916 INFO:tasks.workunit.client.0.vm05.stdout:6/578: rename da/d17/d3b/d81 to da/d43/d7b/db3 0 2026-03-09T15:01:41.934 INFO:tasks.workunit.client.0.vm05.stdout:5/672: dread d1/db5/f5a [0,4194304] 0 2026-03-09T15:01:41.939 INFO:tasks.workunit.client.0.vm05.stdout:9/675: stat d2/d10/d22/c32 0 2026-03-09T15:01:41.949 INFO:tasks.workunit.client.0.vm05.stdout:4/617: symlink d2/d4/d7/d79/lc9 0 2026-03-09T15:01:41.949 INFO:tasks.workunit.client.0.vm05.stdout:4/618: fdatasync d2/d4/d7/d21/d3d/f65 0 2026-03-09T15:01:41.952 INFO:tasks.workunit.client.0.vm05.stdout:4/619: dwrite d2/f14 [0,4194304] 0 2026-03-09T15:01:41.953 INFO:tasks.workunit.client.0.vm05.stdout:4/620: chown d2/d7a/fbf 185446 1 2026-03-09T15:01:41.954 INFO:tasks.workunit.client.0.vm05.stdout:4/621: write d2/d4/d8/d4a/d6e/f8d [2525926,52560] 0 2026-03-09T15:01:41.968 INFO:tasks.workunit.client.0.vm05.stdout:1/624: creat d9/d2f/d83/d98/d59/d49/d92/fd2 x:0 0 0 2026-03-09T15:01:41.983 INFO:tasks.workunit.client.0.vm05.stdout:7/624: rename d1/d9/d23/fb4 to d1/d9/d23/d31/d32/fc7 0 2026-03-09T15:01:41.990 INFO:tasks.workunit.client.0.vm05.stdout:5/673: creat d1/d4/d34/d35/d3d/d38/fe4 x:0 0 0 2026-03-09T15:01:41.990 INFO:tasks.workunit.client.0.vm05.stdout:5/674: dread d1/d4/d34/d35/dd0/fd7 [0,4194304] 0 2026-03-09T15:01:41.997 INFO:tasks.workunit.client.0.vm05.stdout:9/676: mkdir d2/d4e/d56/d53/d64/ded/d9c/df0 0 2026-03-09T15:01:41.998 INFO:tasks.workunit.client.0.vm05.stdout:9/677: truncate 
d2/d10/d22/dc2/fde 467690 0 2026-03-09T15:01:42.001 INFO:tasks.workunit.client.0.vm05.stdout:9/678: dwrite d2/d8b/dae/fec [0,4194304] 0 2026-03-09T15:01:42.007 INFO:tasks.workunit.client.0.vm05.stdout:3/629: write d3/d29/d2d/f31 [3451304,63359] 0 2026-03-09T15:01:42.017 INFO:tasks.workunit.client.0.vm05.stdout:6/579: truncate da/d43/f5c 1062326 0 2026-03-09T15:01:42.027 INFO:tasks.workunit.client.0.vm05.stdout:5/675: creat d1/d4/d34/dc0/fe5 x:0 0 0 2026-03-09T15:01:42.031 INFO:tasks.workunit.client.0.vm05.stdout:9/679: fdatasync d2/d4e/d56/d53/d64/ded/f36 0 2026-03-09T15:01:42.034 INFO:tasks.workunit.client.0.vm05.stdout:2/631: write da/d29/d6a/da0/d91/dab/f8c [1199594,78726] 0 2026-03-09T15:01:42.035 INFO:tasks.workunit.client.0.vm05.stdout:2/632: chown da/fa2 5 1 2026-03-09T15:01:42.051 INFO:tasks.workunit.client.0.vm05.stdout:4/622: rename d2/d4/d8/c75 to d2/d4/d8/d4a/d94/cca 0 2026-03-09T15:01:42.052 INFO:tasks.workunit.client.0.vm05.stdout:4/623: chown d2/d4/d1e/d71 79848 1 2026-03-09T15:01:42.052 INFO:tasks.workunit.client.0.vm05.stdout:4/624: chown d2/d4/d7/dc/f45 15 1 2026-03-09T15:01:42.063 INFO:tasks.workunit.client.0.vm05.stdout:0/569: dwrite d9/f82 [0,4194304] 0 2026-03-09T15:01:42.070 INFO:tasks.workunit.client.0.vm05.stdout:8/642: dwrite d0/d1/d12/d1b/d95/d54/f85 [0,4194304] 0 2026-03-09T15:01:42.083 INFO:tasks.workunit.client.0.vm05.stdout:6/580: write da/d17/f8d [1387601,79] 0 2026-03-09T15:01:42.091 INFO:tasks.workunit.client.0.vm05.stdout:7/625: dwrite d1/d9/d23/fac [0,4194304] 0 2026-03-09T15:01:42.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.096+0000 7fccd3c8d700 1 -- 192.168.123.105:0/1326738389 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fcccc04ea50 con 0x7fcccc082a50 2026-03-09T15:01:42.098 INFO:tasks.workunit.client.0.vm05.stdout:5/676: creat d1/da/fe6 x:0 0 0 2026-03-09T15:01:42.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.098+0000 
7fccc2ffd700 1 -- 192.168.123.105:0/1326738389 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+679 (secure 0 0 0) 0x7fccc40263e0 con 0x7fcccc082a50 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 6 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 12, 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:01:42.101 INFO:teuthology.orchestra.run.vm05.stdout:} 
2026-03-09T15:01:42.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.101+0000 7fccd3c8d700 1 -- 192.168.123.105:0/1326738389 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fccb803db10 msgr2=0x7fccb803ffc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:42.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.101+0000 7fccd3c8d700 1 --2- 192.168.123.105:0/1326738389 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fccb803db10 0x7fccb803ffc0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fccc80098a0 tx=0x7fccc8006d90 comp rx=0 tx=0).stop 2026-03-09T15:01:42.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.101+0000 7fccd3c8d700 1 -- 192.168.123.105:0/1326738389 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcccc082a50 msgr2=0x7fcccc082ec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:42.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.101+0000 7fccd3c8d700 1 --2- 192.168.123.105:0/1326738389 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcccc082a50 0x7fcccc082ec0 secure :-1 s=READY pgs=320 cs=0 l=1 rev1=1 crypto rx=0x7fccc400bb40 tx=0x7fccc400bc20 comp rx=0 tx=0).stop 2026-03-09T15:01:42.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.102+0000 7fccd3c8d700 1 -- 192.168.123.105:0/1326738389 shutdown_connections 2026-03-09T15:01:42.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.102+0000 7fccd3c8d700 1 --2- 192.168.123.105:0/1326738389 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fccb803db10 0x7fccb803ffc0 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:42.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.102+0000 7fccd3c8d700 1 --2- 192.168.123.105:0/1326738389 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcccc071980 0x7fcccc082510 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:42.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.102+0000 7fccd3c8d700 1 --2- 192.168.123.105:0/1326738389 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcccc082a50 0x7fcccc082ec0 secure :-1 s=CLOSED pgs=320 cs=0 l=1 rev1=1 crypto rx=0x7fccc400bb40 tx=0x7fccc400bc20 comp rx=0 tx=0).stop 2026-03-09T15:01:42.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.102+0000 7fccd3c8d700 1 -- 192.168.123.105:0/1326738389 >> 192.168.123.105:0/1326738389 conn(0x7fcccc06d1a0 msgr2=0x7fcccc076420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:42.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.102+0000 7fccd3c8d700 1 -- 192.168.123.105:0/1326738389 shutdown_connections 2026-03-09T15:01:42.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.103+0000 7fccd3c8d700 1 -- 192.168.123.105:0/1326738389 wait complete. 
2026-03-09T15:01:42.108 INFO:tasks.workunit.client.0.vm05.stdout:4/625: mkdir d2/d4/d7/d21/dcb 0 2026-03-09T15:01:42.109 INFO:tasks.workunit.client.0.vm05.stdout:4/626: read d2/d49/f4d [1964148,104283] 0 2026-03-09T15:01:42.121 INFO:tasks.workunit.client.0.vm05.stdout:0/570: rmdir d9/d64 39 2026-03-09T15:01:42.121 INFO:tasks.workunit.client.0.vm05.stdout:0/571: chown d9/de/d12/da3 30722452 1 2026-03-09T15:01:42.138 INFO:tasks.workunit.client.0.vm05.stdout:7/626: rmdir d1/d9/d23/d31/d8f/d93 39 2026-03-09T15:01:42.141 INFO:tasks.workunit.client.0.vm05.stdout:5/677: mknod d1/d4/d34/d35/d3d/d96/ce7 0 2026-03-09T15:01:42.143 INFO:tasks.workunit.client.0.vm05.stdout:2/633: mkdir da/d29/d6a/da0/d91/dab/d2f/d35/db0/dc9 0 2026-03-09T15:01:42.144 INFO:tasks.workunit.client.0.vm05.stdout:9/680: unlink d2/d10/f39 0 2026-03-09T15:01:42.145 INFO:tasks.workunit.client.0.vm05.stdout:3/630: getdents d3/df/d10/d19/db5 0 2026-03-09T15:01:42.148 INFO:tasks.workunit.client.0.vm05.stdout:1/625: getdents d9/d2f/d55 0 2026-03-09T15:01:42.148 INFO:tasks.workunit.client.0.vm05.stdout:1/626: stat d9/d17 0 2026-03-09T15:01:42.152 INFO:tasks.workunit.client.0.vm05.stdout:1/627: dwrite d9/d2f/d37/d5a/da9/dc9/dcd/f6f [0,4194304] 0 2026-03-09T15:01:42.164 INFO:tasks.workunit.client.0.vm05.stdout:0/572: rename d9/de/d12/d15/l7e to d9/d59/lb6 0 2026-03-09T15:01:42.168 INFO:tasks.workunit.client.0.vm05.stdout:8/643: creat d0/d1/d12/d1b/d66/db7/dbe/fd6 x:0 0 0 2026-03-09T15:01:42.188 INFO:tasks.workunit.client.0.vm05.stdout:7/627: symlink d1/d9/d23/d54/d7b/lc8 0 2026-03-09T15:01:42.188 INFO:tasks.workunit.client.0.vm05.stdout:1/628: dread d9/d2f/d83/d98/f4e [0,4194304] 0 2026-03-09T15:01:42.213 INFO:tasks.workunit.client.0.vm05.stdout:9/681: read d2/f46 [37277,83096] 0 2026-03-09T15:01:42.218 INFO:tasks.workunit.client.0.vm05.stdout:3/631: fsync d3/df/d10/d34/d8c/f6d 0 2026-03-09T15:01:42.219 INFO:tasks.workunit.client.0.vm05.stdout:2/634: dread da/d16/f1f [4194304,4194304] 0 2026-03-09T15:01:42.229 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.229+0000 7fa9a98ab700 1 -- 192.168.123.105:0/2625098275 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa9a4071950 msgr2=0x7fa9a4071d60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.229+0000 7fa9a98ab700 1 --2- 192.168.123.105:0/2625098275 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa9a4071950 0x7fa9a4071d60 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7fa994007780 tx=0x7fa99400c050 comp rx=0 tx=0).stop 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.229+0000 7fa9a98ab700 1 -- 192.168.123.105:0/2625098275 shutdown_connections 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.229+0000 7fa9a98ab700 1 --2- 192.168.123.105:0/2625098275 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa9a4072330 0x7fa9a40770b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.229+0000 7fa9a98ab700 1 --2- 192.168.123.105:0/2625098275 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa9a4071950 0x7fa9a4071d60 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.229+0000 7fa9a98ab700 1 -- 192.168.123.105:0/2625098275 >> 192.168.123.105:0/2625098275 conn(0x7fa9a406d1a0 msgr2=0x7fa9a406f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.229+0000 7fa9a98ab700 1 -- 192.168.123.105:0/2625098275 shutdown_connections 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.229+0000 7fa9a98ab700 1 -- 192.168.123.105:0/2625098275 wait complete. 
2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.230+0000 7fa9a98ab700 1 Processor -- start 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.230+0000 7fa9a98ab700 1 -- start start 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.230+0000 7fa9a98ab700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa9a4072330 0x7fa9a4131320 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.230+0000 7fa9a98ab700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa9a4131860 0x7fa9a407f4e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.230+0000 7fa9a98ab700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa9a4131d60 con 0x7fa9a4131860 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.230+0000 7fa9a98ab700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa9a4131ed0 con 0x7fa9a4072330 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.230+0000 7fa9a88a9700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa9a4072330 0x7fa9a4131320 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.230+0000 7fa9a88a9700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa9a4072330 0x7fa9a4131320 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:37074/0 (socket says 192.168.123.105:37074) 2026-03-09T15:01:42.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.230+0000 7fa9a88a9700 1 -- 192.168.123.105:0/3357252233 learned_addr learned my addr 192.168.123.105:0/3357252233 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:01:42.231 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:42 vm05.local ceph-mon[50611]: pgmap v3: 65 pgs: 65 active+clean; 2.4 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-09T15:01:42.231 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:42 vm05.local ceph-mon[50611]: Deploying cephadm binary to vm09 2026-03-09T15:01:42.231 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:42 vm05.local ceph-mon[50611]: from='client.24437 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:42.231 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:42 vm05.local ceph-mon[50611]: from='client.14640 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:42.231 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:42 vm05.local ceph-mon[50611]: [09/Mar/2026:15:01:41] ENGINE Bus STARTING 2026-03-09T15:01:42.231 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:42 vm05.local ceph-mon[50611]: from='client.? 
192.168.123.105:0/1326738389' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:01:42.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.231+0000 7fa9a88a9700 1 -- 192.168.123.105:0/3357252233 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa9a4131860 msgr2=0x7fa9a407f4e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:42.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.231+0000 7fa9a88a9700 1 --2- 192.168.123.105:0/3357252233 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa9a4131860 0x7fa9a407f4e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:42.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.231+0000 7fa9a88a9700 1 -- 192.168.123.105:0/3357252233 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa994007430 con 0x7fa9a4072330 2026-03-09T15:01:42.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.232+0000 7fa9a88a9700 1 --2- 192.168.123.105:0/3357252233 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa9a4072330 0x7fa9a4131320 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7fa994007fd0 tx=0x7fa99400da70 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:42.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.232+0000 7fa9a1ffb700 1 -- 192.168.123.105:0/3357252233 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa99400f040 con 0x7fa9a4072330 2026-03-09T15:01:42.233 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.232+0000 7fa9a98ab700 1 -- 192.168.123.105:0/3357252233 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa9a407fa20 con 0x7fa9a4072330 2026-03-09T15:01:42.233 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.232+0000 7fa9a98ab700 1 -- 192.168.123.105:0/3357252233 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa9a407fee0 con 0x7fa9a4072330 2026-03-09T15:01:42.233 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.233+0000 7fa9a1ffb700 1 -- 192.168.123.105:0/3357252233 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa99400a5b0 con 0x7fa9a4072330 2026-03-09T15:01:42.233 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.233+0000 7fa9a1ffb700 1 -- 192.168.123.105:0/3357252233 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa9940085c0 con 0x7fa9a4072330 2026-03-09T15:01:42.234 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.234+0000 7fa9a1ffb700 1 -- 192.168.123.105:0/3357252233 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 23) v1 ==== 50327+0+0 (secure 0 0 0) 0x7fa99401a070 con 0x7fa9a4072330 2026-03-09T15:01:42.234 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.234+0000 7fa9a1ffb700 1 --2- 192.168.123.105:0/3357252233 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fa98c03dea0 0x7fa98c040350 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:01:42.234 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.234+0000 7fa9a1ffb700 1 -- 192.168.123.105:0/3357252233 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(39..39 src has 1..39) v4 ==== 5448+0+0 (secure 0 0 0) 0x7fa994053c60 con 0x7fa9a4072330 2026-03-09T15:01:42.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.236+0000 7fa9a3fff700 1 --2- 192.168.123.105:0/3357252233 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fa98c03dea0 0x7fa98c040350 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:01:42.236 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.236+0000 7fa9a3fff700 1 --2- 192.168.123.105:0/3357252233 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fa98c03dea0 0x7fa98c040350 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fa9a4072fc0 tx=0x7fa99c009250 comp rx=0 tx=0).ready entity=mgr.24413 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:01:42.236 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.236+0000 7fa9a98ab700 1 -- 192.168.123.105:0/3357252233 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa990005320 con 0x7fa9a4072330 2026-03-09T15:01:42.241 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.241+0000 7fa9a1ffb700 1 -- 192.168.123.105:0/3357252233 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7fa994018080 con 0x7fa9a4072330 2026-03-09T15:01:42.247 INFO:tasks.workunit.client.0.vm05.stdout:4/627: dwrite d2/d49/d69/f9b [0,4194304] 0 2026-03-09T15:01:42.262 INFO:tasks.workunit.client.0.vm05.stdout:2/635: dread da/dd/f9e [0,4194304] 0 2026-03-09T15:01:42.264 INFO:tasks.workunit.client.0.vm05.stdout:2/636: dread - da/d29/d6a/da0/fa1 zero size 2026-03-09T15:01:42.278 INFO:tasks.workunit.client.0.vm05.stdout:6/581: truncate da/d43/f96 3997032 0 2026-03-09T15:01:42.279 INFO:tasks.workunit.client.0.vm05.stdout:6/582: chown da/f3d 31972 1 2026-03-09T15:01:42.283 INFO:tasks.workunit.client.0.vm05.stdout:5/678: dwrite d1/d5d/f82 [0,4194304] 0 2026-03-09T15:01:42.284 INFO:tasks.workunit.client.0.vm05.stdout:2/637: dread da/dd/ff [0,4194304] 0 2026-03-09T15:01:42.285 INFO:tasks.workunit.client.0.vm05.stdout:2/638: chown da/d29/d6a/da0/d7c/cbc 116962485 1 2026-03-09T15:01:42.290 
INFO:tasks.workunit.client.0.vm05.stdout:6/583: dwrite da/d43/fa6 [0,4194304] 0 2026-03-09T15:01:42.329 INFO:tasks.workunit.client.0.vm05.stdout:8/644: write d0/d1/d12/d1b/f27 [4133223,91150] 0 2026-03-09T15:01:42.330 INFO:tasks.workunit.client.0.vm05.stdout:8/645: readlink d0/d1/d12/d1b/d21/l2f 0 2026-03-09T15:01:42.331 INFO:tasks.workunit.client.0.vm05.stdout:1/629: dwrite d9/d2f/d37/d5a/da9/dc9/dcd/f8f [0,4194304] 0 2026-03-09T15:01:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:42 vm09.local ceph-mon[59673]: pgmap v3: 65 pgs: 65 active+clean; 2.4 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-09T15:01:42.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:42 vm09.local ceph-mon[59673]: Deploying cephadm binary to vm09 2026-03-09T15:01:42.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:42 vm09.local ceph-mon[59673]: from='client.24437 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:42.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:42 vm09.local ceph-mon[59673]: from='client.14640 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:42.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:42 vm09.local ceph-mon[59673]: [09/Mar/2026:15:01:41] ENGINE Bus STARTING 2026-03-09T15:01:42.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:42 vm09.local ceph-mon[59673]: from='client.? 
192.168.123.105:0/1326738389' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:01:42.384 INFO:tasks.workunit.client.0.vm05.stdout:4/628: rmdir d2/d4/d7/d48/d6b 39 2026-03-09T15:01:42.390 INFO:tasks.workunit.client.0.vm05.stdout:0/573: mkdir d9/de/d12/d15/d2e/d32/d9f/da0/db7 0 2026-03-09T15:01:42.394 INFO:tasks.workunit.client.0.vm05.stdout:0/574: dwrite d9/de/d12/d15/d2e/d32/d53/f91 [0,4194304] 0 2026-03-09T15:01:42.395 INFO:tasks.workunit.client.0.vm05.stdout:0/575: write d9/de/f1e [660212,105710] 0 2026-03-09T15:01:42.416 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.415+0000 7fa9a98ab700 1 -- 192.168.123.105:0/3357252233 --> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fa990000bf0 con 0x7fa98c03dea0 2026-03-09T15:01:42.424 INFO:tasks.workunit.client.0.vm05.stdout:5/679: unlink d1/d4/d34/d35/d3d/d38/f4b 0 2026-03-09T15:01:42.434 INFO:tasks.workunit.client.0.vm05.stdout:2/639: symlink da/d29/d6a/d7f/lca 0 2026-03-09T15:01:42.440 INFO:tasks.workunit.client.0.vm05.stdout:9/682: symlink d2/d4e/d56/lf1 0 2026-03-09T15:01:42.440 INFO:tasks.workunit.client.0.vm05.stdout:9/683: readlink d2/d10/d22/d2c/l4c 0 2026-03-09T15:01:42.445 INFO:tasks.workunit.client.0.vm05.stdout:4/629: mkdir d2/d1d/d88/dcc 0 2026-03-09T15:01:42.446 INFO:tasks.workunit.client.0.vm05.stdout:0/576: chown d9/d64/f8c 27 1 2026-03-09T15:01:42.448 INFO:tasks.workunit.client.0.vm05.stdout:5/680: dread - d1/d4/d34/fc1 zero size 2026-03-09T15:01:42.453 INFO:tasks.workunit.client.0.vm05.stdout:8/646: unlink d0/d1/d12/d1b/d66/c76 0 2026-03-09T15:01:42.453 INFO:tasks.workunit.client.0.vm05.stdout:9/684: chown d2/d10/d22/d47/la6 1706475 1 2026-03-09T15:01:42.454 INFO:tasks.workunit.client.0.vm05.stdout:8/647: write d0/d1/d12/d1b/d95/d78/d86/fc9 [611383,28682] 0 2026-03-09T15:01:42.455 INFO:tasks.workunit.client.0.vm05.stdout:9/685: dread - 
d2/d4e/d56/d53/d64/ded/d9c/d8e/fe4 zero size 2026-03-09T15:01:42.455 INFO:tasks.workunit.client.0.vm05.stdout:9/686: stat d2/d8b/de3 0 2026-03-09T15:01:42.460 INFO:tasks.workunit.client.0.vm05.stdout:6/584: dwrite da/d43/d66/f6e [0,4194304] 0 2026-03-09T15:01:42.461 INFO:tasks.workunit.client.0.vm05.stdout:7/628: truncate d1/d9/d23/d31/d32/d78/f88 3238923 0 2026-03-09T15:01:42.464 INFO:tasks.workunit.client.0.vm05.stdout:3/632: rename d3/lca to d3/df/d10/d19/d44/dd2/ld7 0 2026-03-09T15:01:42.467 INFO:tasks.workunit.client.0.vm05.stdout:0/577: creat d9/de/d12/d15/d2e/d6b/fb8 x:0 0 0 2026-03-09T15:01:42.475 INFO:tasks.workunit.client.0.vm05.stdout:0/578: fdatasync d9/de/f1e 0 2026-03-09T15:01:42.475 INFO:tasks.workunit.client.0.vm05.stdout:0/579: fdatasync d9/de/d25/d38/f87 0 2026-03-09T15:01:42.484 INFO:tasks.workunit.client.0.vm05.stdout:9/687: creat d2/d10/d22/d47/d73/ff2 x:0 0 0 2026-03-09T15:01:42.488 INFO:tasks.workunit.client.0.vm05.stdout:7/629: mkdir d1/d9/d72/d97/dc9 0 2026-03-09T15:01:42.491 INFO:tasks.workunit.client.0.vm05.stdout:4/630: rename d2/d4/d7/d21 to d2/d4/d8/d4a/d8f/dcd 0 2026-03-09T15:01:42.492 INFO:tasks.workunit.client.0.vm05.stdout:5/681: sync 2026-03-09T15:01:42.492 INFO:tasks.workunit.client.0.vm05.stdout:3/633: sync 2026-03-09T15:01:42.493 INFO:tasks.workunit.client.0.vm05.stdout:5/682: chown d1/d4/d34/d35/d4e/d6f/l94 25529811 1 2026-03-09T15:01:42.495 INFO:tasks.workunit.client.0.vm05.stdout:9/688: mknod d2/d10/d22/da0/cf3 0 2026-03-09T15:01:42.496 INFO:tasks.workunit.client.0.vm05.stdout:9/689: write d2/d10/d22/d2c/d69/fd8 [3058793,94708] 0 2026-03-09T15:01:42.500 INFO:tasks.workunit.client.0.vm05.stdout:4/631: mknod d2/d4/d50/cce 0 2026-03-09T15:01:42.501 INFO:tasks.workunit.client.0.vm05.stdout:8/648: rename d0/d1/d12/d1b/d95/d78/d86 to d0/d1/d12/d1b/d95/dd7 0 2026-03-09T15:01:42.507 INFO:tasks.workunit.client.0.vm05.stdout:2/640: dwrite da/f3c [0,4194304] 0 2026-03-09T15:01:42.511 INFO:tasks.workunit.client.0.vm05.stdout:1/630: 
write d9/d2f/d83/d98/d59/d49/d78/d7e/fb3 [1457868,125665] 0 2026-03-09T15:01:42.518 INFO:tasks.workunit.client.0.vm05.stdout:9/690: truncate d2/d4e/f40 3875216 0 2026-03-09T15:01:42.520 INFO:tasks.workunit.client.0.vm05.stdout:4/632: mknod d2/d4/d8/ccf 0 2026-03-09T15:01:42.521 INFO:tasks.workunit.client.0.vm05.stdout:6/585: rename da/d43/d7b/db3/caf to da/d17/d95/da2/cb4 0 2026-03-09T15:01:42.522 INFO:tasks.workunit.client.0.vm05.stdout:6/586: read da/d19/f6a [2139346,99952] 0 2026-03-09T15:01:42.532 INFO:tasks.workunit.client.0.vm05.stdout:9/691: symlink d2/d10/d22/d52/lf4 0 2026-03-09T15:01:42.537 INFO:tasks.workunit.client.0.vm05.stdout:7/630: creat d1/d49/d4a/d77/fca x:0 0 0 2026-03-09T15:01:42.537 INFO:tasks.workunit.client.0.vm05.stdout:5/683: dread d1/da/f2f [0,4194304] 0 2026-03-09T15:01:42.537 INFO:tasks.workunit.client.0.vm05.stdout:4/633: creat d2/d1d/fd0 x:0 0 0 2026-03-09T15:01:42.540 INFO:tasks.workunit.client.0.vm05.stdout:4/634: dwrite d2/d7a/fb1 [4194304,4194304] 0 2026-03-09T15:01:42.541 INFO:tasks.workunit.client.0.vm05.stdout:4/635: stat d2/d4/d50/cce 0 2026-03-09T15:01:42.544 INFO:tasks.workunit.client.0.vm05.stdout:0/580: write d9/de/d12/d15/d2e/d32/d9f/fa9 [239012,116257] 0 2026-03-09T15:01:42.548 INFO:tasks.workunit.client.0.vm05.stdout:8/649: write d0/d1/d12/d1b/d66/f56 [100147,42312] 0 2026-03-09T15:01:42.551 INFO:tasks.workunit.client.0.vm05.stdout:3/634: truncate d3/df/d1e/f2b 2264031 0 2026-03-09T15:01:42.554 INFO:tasks.workunit.client.0.vm05.stdout:2/641: getdents da/d29/d6a/da0/d91/dab/d2f/db3 0 2026-03-09T15:01:42.556 INFO:tasks.workunit.client.0.vm05.stdout:1/631: mkdir d9/d2f/d83/d98/d59/d49/d78/dcc/dd3 0 2026-03-09T15:01:42.558 INFO:tasks.workunit.client.0.vm05.stdout:2/642: dwrite da/f10 [0,4194304] 0 2026-03-09T15:01:42.562 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.562+0000 7fa9a1ffb700 1 -- 192.168.123.105:0/3357252233 <== mgr.24413 v2:192.168.123.109:6828/2887506718 1 ==== mgr_command_reply(tid 0: 0 ) v1 
==== 8+0+310 (secure 0 0 0) 0x7fa990000bf0 con 0x7fa98c03dea0 2026-03-09T15:01:42.565 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:01:42.565 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T15:01:42.565 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-09T15:01:42.565 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) mgr", 2026-03-09T15:01:42.565 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [], 2026-03-09T15:01:42.565 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "1/2 daemons upgraded", 2026-03-09T15:01:42.565 INFO:teuthology.orchestra.run.vm05.stdout: "message": "", 2026-03-09T15:01:42.565 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:01:42.565 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:01:42.566 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.566+0000 7fa9a98ab700 1 -- 192.168.123.105:0/3357252233 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fa98c03dea0 msgr2=0x7fa98c040350 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:42.566 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.566+0000 7fa9a98ab700 1 --2- 192.168.123.105:0/3357252233 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fa98c03dea0 0x7fa98c040350 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fa9a4072fc0 tx=0x7fa99c009250 comp rx=0 tx=0).stop 2026-03-09T15:01:42.566 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.566+0000 7fa9a98ab700 1 -- 192.168.123.105:0/3357252233 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa9a4072330 msgr2=0x7fa9a4131320 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:01:42.566 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.566+0000 7fa9a98ab700 1 --2- 192.168.123.105:0/3357252233 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa9a4072330 0x7fa9a4131320 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7fa994007fd0 tx=0x7fa99400da70 comp rx=0 tx=0).stop 2026-03-09T15:01:42.566 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.566+0000 7fa9a98ab700 1 -- 192.168.123.105:0/3357252233 shutdown_connections 2026-03-09T15:01:42.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.566+0000 7fa9a98ab700 1 --2- 192.168.123.105:0/3357252233 >> [v2:192.168.123.109:6828/2887506718,v1:192.168.123.109:6829/2887506718] conn(0x7fa98c03dea0 0x7fa98c040350 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:42.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.566+0000 7fa9a98ab700 1 --2- 192.168.123.105:0/3357252233 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa9a4072330 0x7fa9a4131320 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:42.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.566+0000 7fa9a98ab700 1 --2- 192.168.123.105:0/3357252233 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa9a4131860 0x7fa9a407f4e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:01:42.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.566+0000 7fa9a98ab700 1 -- 192.168.123.105:0/3357252233 >> 192.168.123.105:0/3357252233 conn(0x7fa9a406d1a0 msgr2=0x7fa9a4076490 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:01:42.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.567+0000 7fa9a98ab700 1 -- 192.168.123.105:0/3357252233 shutdown_connections 2026-03-09T15:01:42.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:01:42.567+0000 7fa9a98ab700 1 -- 
192.168.123.105:0/3357252233 wait complete. 2026-03-09T15:01:42.567 INFO:tasks.workunit.client.0.vm05.stdout:2/643: dwrite da/d29/d6a/d7f/fa8 [0,4194304] 0 2026-03-09T15:01:42.569 INFO:tasks.workunit.client.0.vm05.stdout:2/644: fdatasync da/f2c 0 2026-03-09T15:01:42.574 INFO:tasks.workunit.client.0.vm05.stdout:2/645: dread da/dd/f9e [0,4194304] 0 2026-03-09T15:01:42.574 INFO:tasks.workunit.client.0.vm05.stdout:2/646: chown da/d29/d3f/l7a 839 1 2026-03-09T15:01:42.578 INFO:tasks.workunit.client.0.vm05.stdout:9/692: mknod d2/d8b/cf5 0 2026-03-09T15:01:42.587 INFO:tasks.workunit.client.0.vm05.stdout:0/581: dread - d9/de/f8d zero size 2026-03-09T15:01:42.595 INFO:tasks.workunit.client.0.vm05.stdout:4/636: dread d2/f3e [0,4194304] 0 2026-03-09T15:01:42.600 INFO:tasks.workunit.client.0.vm05.stdout:6/587: mknod da/cb5 0 2026-03-09T15:01:42.603 INFO:tasks.workunit.client.0.vm05.stdout:4/637: dwrite d2/d1d/fd0 [0,4194304] 0 2026-03-09T15:01:42.620 INFO:tasks.workunit.client.0.vm05.stdout:2/647: rename da/f3c to da/d29/d6a/da0/d91/dab/d2f/db3/fcb 0 2026-03-09T15:01:42.647 INFO:tasks.workunit.client.0.vm05.stdout:7/631: dwrite d1/d9/d23/d31/d51/f6a [0,4194304] 0 2026-03-09T15:01:42.655 INFO:tasks.workunit.client.0.vm05.stdout:8/650: dwrite d0/d7/f14 [0,4194304] 0 2026-03-09T15:01:42.657 INFO:tasks.workunit.client.0.vm05.stdout:8/651: write d0/d1/d12/d3c/f77 [3230884,74253] 0 2026-03-09T15:01:42.665 INFO:tasks.workunit.client.0.vm05.stdout:5/684: dwrite d1/d5d/f81 [0,4194304] 0 2026-03-09T15:01:42.695 INFO:tasks.workunit.client.0.vm05.stdout:0/582: dwrite d9/de/d12/d15/d2e/d32/d53/f68 [0,4194304] 0 2026-03-09T15:01:42.746 INFO:tasks.workunit.client.0.vm05.stdout:4/638: dread - d2/d4/d8/d4a/d8f/dcd/d3d/f7f zero size 2026-03-09T15:01:42.757 INFO:tasks.workunit.client.0.vm05.stdout:2/648: rmdir da/d29/d6a/da0/d91/dab/d2f/d35/d8a 39 2026-03-09T15:01:42.761 INFO:tasks.workunit.client.0.vm05.stdout:9/693: truncate d2/f11 5554310 0 2026-03-09T15:01:42.780 
INFO:tasks.workunit.client.0.vm05.stdout:9/694: sync 2026-03-09T15:01:42.791 INFO:tasks.workunit.client.0.vm05.stdout:5/685: mkdir d1/d4/d34/d35/d3d/d38/d69/de8 0 2026-03-09T15:01:42.792 INFO:tasks.workunit.client.0.vm05.stdout:5/686: dread - d1/d4/d34/dc0/fd8 zero size 2026-03-09T15:01:42.799 INFO:tasks.workunit.client.0.vm05.stdout:0/583: unlink d9/de/d25/f48 0 2026-03-09T15:01:42.799 INFO:tasks.workunit.client.0.vm05.stdout:1/632: creat d9/d2f/d83/d98/d59/fd4 x:0 0 0 2026-03-09T15:01:42.803 INFO:tasks.workunit.client.0.vm05.stdout:2/649: mknod da/d29/d6a/da0/d91/dab/d2f/d35/db0/ccc 0 2026-03-09T15:01:42.805 INFO:tasks.workunit.client.0.vm05.stdout:9/695: rmdir d2/d10/d22/dc1/dc3 39 2026-03-09T15:01:42.805 INFO:tasks.workunit.client.0.vm05.stdout:7/632: symlink d1/lcb 0 2026-03-09T15:01:42.806 INFO:tasks.workunit.client.0.vm05.stdout:8/652: mkdir d0/d1/d12/d1b/d95/dd7/dd2/dd8 0 2026-03-09T15:01:42.807 INFO:tasks.workunit.client.0.vm05.stdout:9/696: readlink d2/d10/d22/d2c/l4c 0 2026-03-09T15:01:42.809 INFO:tasks.workunit.client.0.vm05.stdout:5/687: dread - d1/d4/d19/d93/dcc/d91/fcd zero size 2026-03-09T15:01:42.815 INFO:tasks.workunit.client.0.vm05.stdout:0/584: dwrite d9/d64/f96 [0,4194304] 0 2026-03-09T15:01:42.815 INFO:tasks.workunit.client.0.vm05.stdout:5/688: chown d1/db5/ce1 973988 1 2026-03-09T15:01:42.816 INFO:tasks.workunit.client.0.vm05.stdout:3/635: dread d3/d29/d2d/d77/d4d/fa7 [0,4194304] 0 2026-03-09T15:01:42.819 INFO:tasks.workunit.client.0.vm05.stdout:5/689: sync 2026-03-09T15:01:42.820 INFO:tasks.workunit.client.0.vm05.stdout:5/690: write d1/ff [1180597,125342] 0 2026-03-09T15:01:42.824 INFO:tasks.workunit.client.0.vm05.stdout:4/639: mknod d2/d1d/d88/d92/cd1 0 2026-03-09T15:01:42.824 INFO:tasks.workunit.client.0.vm05.stdout:2/650: rename da/d16/d46/c47 to da/d29/d6a/da0/d7c/ccd 0 2026-03-09T15:01:42.831 INFO:tasks.workunit.client.0.vm05.stdout:9/697: mkdir d2/d9e/df6 0 2026-03-09T15:01:42.832 INFO:tasks.workunit.client.0.vm05.stdout:0/585: write 
d9/de/f3d [2557176,109490] 0 2026-03-09T15:01:42.833 INFO:tasks.workunit.client.0.vm05.stdout:6/588: getdents da/d43/d7b/d89 0 2026-03-09T15:01:42.834 INFO:tasks.workunit.client.0.vm05.stdout:0/586: write d9/de/d12/d15/d2e/d32/d53/f91 [3268843,5249] 0 2026-03-09T15:01:42.848 INFO:tasks.workunit.client.0.vm05.stdout:8/653: write d0/d1/d12/d1b/d95/d42/d60/f9c [3236017,49309] 0 2026-03-09T15:01:42.850 INFO:tasks.workunit.client.0.vm05.stdout:4/640: fsync d2/d4/d7/f9 0 2026-03-09T15:01:42.854 INFO:tasks.workunit.client.0.vm05.stdout:2/651: rmdir da/d29/d6a/da0/d91/dab/d9c 39 2026-03-09T15:01:42.854 INFO:tasks.workunit.client.0.vm05.stdout:1/633: dwrite d9/d2f/d83/d98/d59/f42 [0,4194304] 0 2026-03-09T15:01:42.854 INFO:tasks.workunit.client.0.vm05.stdout:3/636: dread d3/d29/d2d/f31 [0,4194304] 0 2026-03-09T15:01:42.869 INFO:tasks.workunit.client.0.vm05.stdout:2/652: dwrite da/d29/d6a/da0/d91/dab/d2f/fae [0,4194304] 0 2026-03-09T15:01:42.870 INFO:tasks.workunit.client.0.vm05.stdout:5/691: write d1/da/f2f [2537425,32864] 0 2026-03-09T15:01:42.872 INFO:tasks.workunit.client.0.vm05.stdout:9/698: fdatasync d2/d10/d8c/fa8 0 2026-03-09T15:01:42.873 INFO:tasks.workunit.client.0.vm05.stdout:0/587: creat d9/de/d12/da3/fb9 x:0 0 0 2026-03-09T15:01:42.885 INFO:tasks.workunit.client.0.vm05.stdout:8/654: symlink d0/d1/d12/d1b/d95/d78/ld9 0 2026-03-09T15:01:42.891 INFO:tasks.workunit.client.0.vm05.stdout:1/634: unlink d9/d2f/d83/d98/f95 0 2026-03-09T15:01:42.891 INFO:tasks.workunit.client.0.vm05.stdout:8/655: read d0/d1/d12/d3c/f8c [3208220,36887] 0 2026-03-09T15:01:42.900 INFO:tasks.workunit.client.0.vm05.stdout:3/637: rename d3/d29/d2d/c9c to d3/df/d10/d34/d8c/dbd/cd8 0 2026-03-09T15:01:42.908 INFO:tasks.workunit.client.0.vm05.stdout:8/656: dwrite d0/d2a/f2e [0,4194304] 0 2026-03-09T15:01:42.913 INFO:tasks.workunit.client.0.vm05.stdout:3/638: dwrite d3/df/d59/d79/fc7 [0,4194304] 0 2026-03-09T15:01:42.931 INFO:tasks.workunit.client.0.vm05.stdout:6/589: read da/d17/f69 [107172,69415] 0 
2026-03-09T15:01:42.931 INFO:tasks.workunit.client.0.vm05.stdout:6/590: write da/d19/f7e [338957,11560] 0 2026-03-09T15:01:42.934 INFO:tasks.workunit.client.0.vm05.stdout:5/692: fsync d1/f14 0 2026-03-09T15:01:42.939 INFO:tasks.workunit.client.0.vm05.stdout:5/693: fsync d1/da/fb7 0 2026-03-09T15:01:42.948 INFO:tasks.workunit.client.0.vm05.stdout:9/699: rmdir d2/d8b 39 2026-03-09T15:01:42.951 INFO:tasks.workunit.client.0.vm05.stdout:7/633: dwrite d1/d9/d23/d31/d8f/d93/f82 [0,4194304] 0 2026-03-09T15:01:42.951 INFO:tasks.workunit.client.0.vm05.stdout:7/634: fdatasync d1/d9/d23/d54/d7b/f7f 0 2026-03-09T15:01:42.954 INFO:tasks.workunit.client.0.vm05.stdout:4/641: symlink d2/ld2 0 2026-03-09T15:01:42.972 INFO:tasks.workunit.client.0.vm05.stdout:4/642: sync 2026-03-09T15:01:42.973 INFO:tasks.workunit.client.0.vm05.stdout:9/700: dread d2/f8 [0,4194304] 0 2026-03-09T15:01:42.976 INFO:tasks.workunit.client.0.vm05.stdout:3/639: rmdir d3/df/d10/d7c 39 2026-03-09T15:01:42.977 INFO:tasks.workunit.client.0.vm05.stdout:6/591: dread - da/f80 zero size 2026-03-09T15:01:42.979 INFO:tasks.workunit.client.0.vm05.stdout:6/592: fdatasync da/d19/f7e 0 2026-03-09T15:01:42.979 INFO:tasks.workunit.client.0.vm05.stdout:6/593: stat da/d19/l32 0 2026-03-09T15:01:42.990 INFO:tasks.workunit.client.0.vm05.stdout:0/588: dwrite d9/de/f8d [0,4194304] 0 2026-03-09T15:01:43.010 INFO:tasks.workunit.client.0.vm05.stdout:5/694: write d1/d4/d34/f6a [1682018,72519] 0 2026-03-09T15:01:43.019 INFO:tasks.workunit.client.0.vm05.stdout:7/635: rename d1/d9/d23/d31/d32/f58 to d1/d49/d4a/fcc 0 2026-03-09T15:01:43.023 INFO:tasks.workunit.client.0.vm05.stdout:2/653: creat da/fce x:0 0 0 2026-03-09T15:01:43.024 INFO:tasks.workunit.client.0.vm05.stdout:1/635: dread f5 [0,4194304] 0 2026-03-09T15:01:43.027 INFO:tasks.workunit.client.0.vm05.stdout:1/636: chown d9/d2f/d83/d98/d59/d49/d92/lad 16298217 1 2026-03-09T15:01:43.036 INFO:tasks.workunit.client.0.vm05.stdout:4/643: fsync d2/d43/f47 0 2026-03-09T15:01:43.046 
INFO:tasks.workunit.client.0.vm05.stdout:2/654: dread da/d29/d6a/da0/f41 [0,4194304] 0 2026-03-09T15:01:43.055 INFO:tasks.workunit.client.0.vm05.stdout:8/657: symlink d0/d1/d12/d1b/d66/dcc/dd4/lda 0 2026-03-09T15:01:43.056 INFO:tasks.workunit.client.0.vm05.stdout:6/594: creat da/d17/d95/da2/fb6 x:0 0 0 2026-03-09T15:01:43.056 INFO:tasks.workunit.client.0.vm05.stdout:6/595: read da/d43/d7b/f97 [424374,80567] 0 2026-03-09T15:01:43.064 INFO:tasks.workunit.client.0.vm05.stdout:5/695: creat d1/d4/d34/d56/d68/fe9 x:0 0 0 2026-03-09T15:01:43.071 INFO:tasks.workunit.client.0.vm05.stdout:0/589: write d9/de/f6c [344278,12963] 0 2026-03-09T15:01:43.073 INFO:tasks.workunit.client.0.vm05.stdout:1/637: creat d9/d2f/d83/d98/d59/d49/dc2/fd5 x:0 0 0 2026-03-09T15:01:43.073 INFO:tasks.workunit.client.0.vm05.stdout:9/701: mkdir d2/d4e/d56/d53/d64/dd9/def/df7 0 2026-03-09T15:01:43.077 INFO:tasks.workunit.client.0.vm05.stdout:1/638: write d9/d2f/d83/d98/d59/fc4 [1013581,2693] 0 2026-03-09T15:01:43.077 INFO:tasks.workunit.client.0.vm05.stdout:2/655: rmdir da/d29/d6a/da0/d91 39 2026-03-09T15:01:43.078 INFO:tasks.workunit.client.0.vm05.stdout:2/656: stat da/d29/d64 0 2026-03-09T15:01:43.078 INFO:tasks.workunit.client.0.vm05.stdout:2/657: chown da/d29/d6a/f81 224912718 1 2026-03-09T15:01:43.087 INFO:tasks.workunit.client.0.vm05.stdout:7/636: dwrite d1/d12/fb7 [0,4194304] 0 2026-03-09T15:01:43.089 INFO:tasks.workunit.client.0.vm05.stdout:8/658: dread d0/d1/d12/d3c/f51 [0,4194304] 0 2026-03-09T15:01:43.117 INFO:tasks.workunit.client.0.vm05.stdout:4/644: rename d2/d43/dbb to d2/d4/d7/d48/d6b/dd3 0 2026-03-09T15:01:43.126 INFO:tasks.workunit.client.0.vm05.stdout:4/645: sync 2026-03-09T15:01:43.126 INFO:tasks.workunit.client.0.vm05.stdout:3/640: creat d3/df/fd9 x:0 0 0 2026-03-09T15:01:43.134 INFO:tasks.workunit.client.0.vm05.stdout:4/646: dwrite d2/d1d/f7d [0,4194304] 0 2026-03-09T15:01:43.162 INFO:tasks.workunit.client.0.vm05.stdout:6/596: mkdir da/d43/d7b/da9/db7 0 2026-03-09T15:01:43.172 
INFO:tasks.workunit.client.0.vm05.stdout:1/639: write d9/d2f/d83/d98/f56 [981591,78808] 0 2026-03-09T15:01:43.176 INFO:tasks.workunit.client.0.vm05.stdout:0/590: dwrite d9/de/d25/d38/f55 [0,4194304] 0 2026-03-09T15:01:43.177 INFO:tasks.workunit.client.0.vm05.stdout:1/640: chown d9/d2f/d37/d5a/da9/dc9/dcd/f8f 25986 1 2026-03-09T15:01:43.206 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:43 vm05.local ceph-mon[50611]: Deploying cephadm binary to vm05 2026-03-09T15:01:43.206 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:43 vm05.local ceph-mon[50611]: [09/Mar/2026:15:01:41] ENGINE Serving on https://192.168.123.109:7150 2026-03-09T15:01:43.206 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:43 vm05.local ceph-mon[50611]: [09/Mar/2026:15:01:41] ENGINE Client ('192.168.123.109', 59704) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:01:43.207 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:43 vm05.local ceph-mon[50611]: from='client.24445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:43.207 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:43 vm05.local ceph-mon[50611]: [09/Mar/2026:15:01:41] ENGINE Serving on http://192.168.123.109:8765 2026-03-09T15:01:43.207 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:43 vm05.local ceph-mon[50611]: [09/Mar/2026:15:01:41] ENGINE Bus STARTED 2026-03-09T15:01:43.207 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:43 vm05.local ceph-mon[50611]: pgmap v4: 65 pgs: 65 active+clean; 2.4 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-09T15:01:43.207 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:43 vm05.local ceph-mon[50611]: mgrmap e23: vm09.cfuwdz(active, since 2s) 2026-03-09T15:01:43.207 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:43 vm05.local ceph-mon[50611]: from='client.24449 -' entity='client.admin' 
cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:43.207 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:43 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:43.207 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:43 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:43.211 INFO:tasks.workunit.client.0.vm05.stdout:7/637: fdatasync d1/d9/f59 0 2026-03-09T15:01:43.219 INFO:tasks.workunit.client.0.vm05.stdout:9/702: fsync d2/d4e/f40 0 2026-03-09T15:01:43.240 INFO:tasks.workunit.client.0.vm05.stdout:4/647: creat d2/d4/d7/d79/fd4 x:0 0 0 2026-03-09T15:01:43.268 INFO:tasks.workunit.client.0.vm05.stdout:6/597: truncate da/d17/f30 1749 0 2026-03-09T15:01:43.285 INFO:tasks.workunit.client.0.vm05.stdout:0/591: dread d9/de/d12/d15/d2e/f76 [0,4194304] 0 2026-03-09T15:01:43.304 INFO:tasks.workunit.client.0.vm05.stdout:5/696: rmdir d1/d4/d34/d35/d3d/d38/d69/de8 0 2026-03-09T15:01:43.308 INFO:tasks.workunit.client.0.vm05.stdout:9/703: unlink d2/d10/f48 0 2026-03-09T15:01:43.323 INFO:tasks.workunit.client.0.vm05.stdout:3/641: write d3/df/d59/d79/fa8 [392703,51943] 0 2026-03-09T15:01:43.323 INFO:tasks.workunit.client.0.vm05.stdout:4/648: write d2/d43/fa0 [1264175,39684] 0 2026-03-09T15:01:43.324 INFO:tasks.workunit.client.0.vm05.stdout:2/658: truncate da/f2c 3141216 0 2026-03-09T15:01:43.328 INFO:tasks.workunit.client.0.vm05.stdout:6/598: mkdir da/d43/d7b/d89/db8 0 2026-03-09T15:01:43.330 INFO:tasks.workunit.client.0.vm05.stdout:2/659: dwrite da/d16/f6e [4194304,4194304] 0 2026-03-09T15:01:43.333 INFO:tasks.workunit.client.0.vm05.stdout:1/641: link d9/f7f d9/d2f/d83/d98/d59/d49/d78/d94/fd6 0 2026-03-09T15:01:43.370 INFO:tasks.workunit.client.0.vm05.stdout:5/697: mkdir d1/d4/d34/d56/da6/dea 0 2026-03-09T15:01:43.407 INFO:tasks.workunit.client.0.vm05.stdout:3/642: mknod d3/df/d1e/d2c/d74/d9b/cda 0 2026-03-09T15:01:43.410 
INFO:tasks.workunit.client.0.vm05.stdout:3/643: dwrite d3/d29/d7f/fa1 [0,4194304] 0 2026-03-09T15:01:43.413 INFO:tasks.workunit.client.0.vm05.stdout:3/644: dread d3/df/d1e/d2c/d74/d9b/fc9 [0,4194304] 0 2026-03-09T15:01:43.414 INFO:tasks.workunit.client.0.vm05.stdout:3/645: chown d3/df/d10/d19/d44/f60 1025160 1 2026-03-09T15:01:43.416 INFO:tasks.workunit.client.0.vm05.stdout:6/599: mknod da/d43/d7b/db3/cb9 0 2026-03-09T15:01:43.417 INFO:tasks.workunit.client.0.vm05.stdout:1/642: creat d9/d2f/d37/d5a/da9/dc9/dcd/db2/fd7 x:0 0 0 2026-03-09T15:01:43.417 INFO:tasks.workunit.client.0.vm05.stdout:8/659: getdents d0/d1/d12 0 2026-03-09T15:01:43.426 INFO:tasks.workunit.client.0.vm05.stdout:4/649: dwrite d2/d4/d7/dc/f64 [4194304,4194304] 0 2026-03-09T15:01:43.427 INFO:tasks.workunit.client.0.vm05.stdout:7/638: rmdir d1/d9/d72/d97/dc9 0 2026-03-09T15:01:43.427 INFO:tasks.workunit.client.0.vm05.stdout:4/650: chown d2/d4/d7/f7b 41208 1 2026-03-09T15:01:43.456 INFO:tasks.workunit.client.0.vm05.stdout:0/592: truncate d9/de/d12/d15/d2e/f3a 3893027 0 2026-03-09T15:01:43.457 INFO:tasks.workunit.client.0.vm05.stdout:2/660: dread da/fa2 [0,4194304] 0 2026-03-09T15:01:43.457 INFO:tasks.workunit.client.0.vm05.stdout:5/698: write d1/d4/d19/d93/dcc/d91/fb1 [282670,64479] 0 2026-03-09T15:01:43.458 INFO:tasks.workunit.client.0.vm05.stdout:2/661: write da/f9d [3099619,29944] 0 2026-03-09T15:01:43.484 INFO:tasks.workunit.client.0.vm05.stdout:8/660: creat d0/d24/d96/fdb x:0 0 0 2026-03-09T15:01:43.484 INFO:tasks.workunit.client.0.vm05.stdout:8/661: stat d0/d2a/l45 0 2026-03-09T15:01:43.499 INFO:tasks.workunit.client.0.vm05.stdout:9/704: link d2/d4e/d56/d53/d64/ded/d99/ce2 d2/d4e/d56/d53/d64/ded/d9c/db2/cf8 0 2026-03-09T15:01:43.518 INFO:tasks.workunit.client.0.vm05.stdout:5/699: mknod d1/d4/d34/d35/d3d/d38/d69/ceb 0 2026-03-09T15:01:43.519 INFO:tasks.workunit.client.0.vm05.stdout:3/646: dread d3/df/d10/d19/d44/d50/f65 [0,4194304] 0 2026-03-09T15:01:43.520 
INFO:tasks.workunit.client.0.vm05.stdout:7/639: dwrite d1/f62 [0,4194304] 0 2026-03-09T15:01:43.524 INFO:tasks.workunit.client.0.vm05.stdout:2/662: unlink da/d29/d6a/da0/f41 0 2026-03-09T15:01:43.525 INFO:tasks.workunit.client.0.vm05.stdout:7/640: write d1/d9/d23/d31/d51/f9b [326555,26827] 0 2026-03-09T15:01:43.528 INFO:tasks.workunit.client.0.vm05.stdout:8/662: mkdir d0/d1/d12/d1b/d66/dcc/dd4/ddc 0 2026-03-09T15:01:43.528 INFO:tasks.workunit.client.0.vm05.stdout:4/651: symlink d2/d4/ld5 0 2026-03-09T15:01:43.531 INFO:tasks.workunit.client.0.vm05.stdout:9/705: creat d2/d4e/d56/d53/d64/ded/d9c/db2/ff9 x:0 0 0 2026-03-09T15:01:43.536 INFO:tasks.workunit.client.0.vm05.stdout:9/706: dread - d2/d10/d22/d52/fd7 zero size 2026-03-09T15:01:43.541 INFO:tasks.workunit.client.0.vm05.stdout:2/663: dread da/d29/d6a/da0/f34 [0,4194304] 0 2026-03-09T15:01:43.551 INFO:tasks.workunit.client.0.vm05.stdout:6/600: rename da/d17/d3b/la1 to da/lba 0 2026-03-09T15:01:43.552 INFO:tasks.workunit.client.0.vm05.stdout:3/647: dread d3/df/d10/d19/f26 [0,4194304] 0 2026-03-09T15:01:43.558 INFO:tasks.workunit.client.0.vm05.stdout:4/652: unlink d2/d4/l60 0 2026-03-09T15:01:43.562 INFO:tasks.workunit.client.0.vm05.stdout:7/641: write d1/d9/d23/d31/d32/d78/f9e [1810186,87438] 0 2026-03-09T15:01:43.569 INFO:tasks.workunit.client.0.vm05.stdout:0/593: write d9/de/d12/f3c [4378630,28783] 0 2026-03-09T15:01:43.592 INFO:tasks.workunit.client.0.vm05.stdout:5/700: write d1/d4/d34/d35/d3d/d96/fbe [4797977,38736] 0 2026-03-09T15:01:43.606 INFO:tasks.workunit.client.0.vm05.stdout:1/643: rename d9/d17/laf to d9/d2f/d37/ld8 0 2026-03-09T15:01:43.614 INFO:tasks.workunit.client.0.vm05.stdout:2/664: dwrite da/d29/d6a/da0/d91/dab/fbd [0,4194304] 0 2026-03-09T15:01:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:43 vm09.local ceph-mon[59673]: Deploying cephadm binary to vm05 2026-03-09T15:01:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:43 vm09.local ceph-mon[59673]: 
[09/Mar/2026:15:01:41] ENGINE Serving on https://192.168.123.109:7150 2026-03-09T15:01:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:43 vm09.local ceph-mon[59673]: [09/Mar/2026:15:01:41] ENGINE Client ('192.168.123.109', 59704) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:01:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:43 vm09.local ceph-mon[59673]: from='client.24445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:43 vm09.local ceph-mon[59673]: [09/Mar/2026:15:01:41] ENGINE Serving on http://192.168.123.109:8765 2026-03-09T15:01:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:43 vm09.local ceph-mon[59673]: [09/Mar/2026:15:01:41] ENGINE Bus STARTED 2026-03-09T15:01:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:43 vm09.local ceph-mon[59673]: pgmap v4: 65 pgs: 65 active+clean; 2.4 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-09T15:01:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:43 vm09.local ceph-mon[59673]: mgrmap e23: vm09.cfuwdz(active, since 2s) 2026-03-09T15:01:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:43 vm09.local ceph-mon[59673]: from='client.24449 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:01:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:43 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:43 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:43.626 INFO:tasks.workunit.client.0.vm05.stdout:2/665: stat da/d29/d6a/da0/d91/dab/l19 0 2026-03-09T15:01:43.626 INFO:tasks.workunit.client.0.vm05.stdout:6/601: dwrite 
da/d17/f58 [0,4194304] 0 2026-03-09T15:01:43.627 INFO:tasks.workunit.client.0.vm05.stdout:7/642: mkdir d1/d9/d23/d31/d32/d78/d7e/d81/dcd 0 2026-03-09T15:01:43.628 INFO:tasks.workunit.client.0.vm05.stdout:7/643: chown d1/d9/d72/d97 1 1 2026-03-09T15:01:43.640 INFO:tasks.workunit.client.0.vm05.stdout:4/653: dwrite d2/d4/d7/d48/d6b/f9f [0,4194304] 0 2026-03-09T15:01:43.642 INFO:tasks.workunit.client.0.vm05.stdout:4/654: write d2/f67 [2881594,86340] 0 2026-03-09T15:01:43.689 INFO:tasks.workunit.client.0.vm05.stdout:0/594: write d9/de/d6a/f75 [351433,49379] 0 2026-03-09T15:01:43.759 INFO:tasks.workunit.client.0.vm05.stdout:5/701: creat d1/d4/d34/d35/d4e/dc8/fec x:0 0 0 2026-03-09T15:01:43.768 INFO:tasks.workunit.client.0.vm05.stdout:8/663: rename d0/c18 to d0/d1/d12/d1b/d95/dd7/dd2/dd8/cdd 0 2026-03-09T15:01:43.769 INFO:tasks.workunit.client.0.vm05.stdout:1/644: mknod d9/d2f/d83/d98/d87/cd9 0 2026-03-09T15:01:43.789 INFO:tasks.workunit.client.0.vm05.stdout:2/666: truncate da/dd/f5d 730343 0 2026-03-09T15:01:43.802 INFO:tasks.workunit.client.0.vm05.stdout:4/655: mkdir d2/d43/dd6 0 2026-03-09T15:01:43.811 INFO:tasks.workunit.client.0.vm05.stdout:0/595: mkdir d9/de/d12/d15/d2e/d32/d9f/dba 0 2026-03-09T15:01:43.855 INFO:tasks.workunit.client.0.vm05.stdout:3/648: rename d3/df/d1e/d2c/d74/d9b/lc1 to d3/df/d10/d34/d8c/dbd/dc2/ldb 0 2026-03-09T15:01:43.855 INFO:tasks.workunit.client.0.vm05.stdout:8/664: mkdir d0/d1/dde 0 2026-03-09T15:01:43.861 INFO:tasks.workunit.client.0.vm05.stdout:2/667: fsync da/d29/f2d 0 2026-03-09T15:01:43.872 INFO:tasks.workunit.client.0.vm05.stdout:6/602: creat da/d17/d95/da2/dae/fbb x:0 0 0 2026-03-09T15:01:43.897 INFO:tasks.workunit.client.0.vm05.stdout:8/665: unlink d0/d1/d12/d3c/f99 0 2026-03-09T15:01:43.898 INFO:tasks.workunit.client.0.vm05.stdout:4/656: creat d2/d1d/d88/dcc/fd7 x:0 0 0 2026-03-09T15:01:43.898 INFO:tasks.workunit.client.0.vm05.stdout:9/707: rename d2/d4e/d56/d53/d64/ded/d99/ce2 to d2/d10/d22/d2c/d69/d5a/cfa 0 
2026-03-09T15:01:43.898 INFO:tasks.workunit.client.0.vm05.stdout:3/649: creat d3/df/d10/d34/d8c/fdc x:0 0 0 2026-03-09T15:01:43.900 INFO:tasks.workunit.client.0.vm05.stdout:8/666: chown d0/d1/d12/d1b/d66/dcc/dd4/lda 6124116 1 2026-03-09T15:01:43.905 INFO:tasks.workunit.client.0.vm05.stdout:2/668: dwrite da/d29/f2d [0,4194304] 0 2026-03-09T15:01:43.906 INFO:tasks.workunit.client.0.vm05.stdout:2/669: fdatasync da/d29/d3f/f9b 0 2026-03-09T15:01:43.908 INFO:tasks.workunit.client.0.vm05.stdout:2/670: dread da/f10 [0,4194304] 0 2026-03-09T15:01:43.925 INFO:tasks.workunit.client.0.vm05.stdout:0/596: link d9/de/d12/da3/fb2 d9/de/d12/d15/fbb 0 2026-03-09T15:01:43.927 INFO:tasks.workunit.client.0.vm05.stdout:7/644: rename d1/d22/f67 to d1/d22/d3c/fce 0 2026-03-09T15:01:43.935 INFO:tasks.workunit.client.0.vm05.stdout:9/708: creat d2/d4e/d56/d53/d64/ded/d9c/d8e/dcb/ffb x:0 0 0 2026-03-09T15:01:43.937 INFO:tasks.workunit.client.0.vm05.stdout:3/650: creat d3/df/d1e/d2c/d74/d78/fdd x:0 0 0 2026-03-09T15:01:43.938 INFO:tasks.workunit.client.0.vm05.stdout:9/709: write d2/d10/d22/fb6 [75912,27027] 0 2026-03-09T15:01:43.954 INFO:tasks.workunit.client.0.vm05.stdout:0/597: mkdir d9/de/d12/da3/dbc 0 2026-03-09T15:01:43.960 INFO:tasks.workunit.client.0.vm05.stdout:5/702: rename d1/d4/d34/fc1 to d1/d4/d27/d75/fed 0 2026-03-09T15:01:43.961 INFO:tasks.workunit.client.0.vm05.stdout:5/703: read d1/d4/d34/d35/d3d/f32 [3233921,16374] 0 2026-03-09T15:01:43.962 INFO:tasks.workunit.client.0.vm05.stdout:5/704: write d1/d4/d34/d35/f44 [6108563,60180] 0 2026-03-09T15:01:43.967 INFO:tasks.workunit.client.0.vm05.stdout:5/705: dwrite d1/da/f2f [0,4194304] 0 2026-03-09T15:01:43.978 INFO:tasks.workunit.client.0.vm05.stdout:4/657: creat d2/d4/d8/d4a/d8f/dcd/dcb/fd8 x:0 0 0 2026-03-09T15:01:43.978 INFO:tasks.workunit.client.0.vm05.stdout:4/658: dread - d2/d4/fbe zero size 2026-03-09T15:01:43.992 INFO:tasks.workunit.client.0.vm05.stdout:9/710: symlink d2/d4e/d56/d53/d64/ded/d9c/d8e/dcb/lfc 0 
2026-03-09T15:01:44.000 INFO:tasks.workunit.client.0.vm05.stdout:0/598: rmdir d9/de/d12/d15/d49 39 2026-03-09T15:01:44.000 INFO:tasks.workunit.client.0.vm05.stdout:0/599: write d9/de/d12/da3/fb9 [534869,90483] 0 2026-03-09T15:01:44.012 INFO:tasks.workunit.client.0.vm05.stdout:5/706: rmdir d1/d4/d34/d35/d3d 39 2026-03-09T15:01:44.024 INFO:tasks.workunit.client.0.vm05.stdout:9/711: rmdir d2/d10/d22/dc2 39 2026-03-09T15:01:44.031 INFO:tasks.workunit.client.0.vm05.stdout:8/667: link d0/d1/d12/l9e d0/d1/d12/d1b/d6e/ldf 0 2026-03-09T15:01:44.033 INFO:tasks.workunit.client.0.vm05.stdout:2/671: link da/d16/f69 da/d29/d6a/da0/d91/dab/fcf 0 2026-03-09T15:01:44.037 INFO:tasks.workunit.client.0.vm05.stdout:0/600: dread - d9/de/d25/d38/d41/f98 zero size 2026-03-09T15:01:44.038 INFO:tasks.workunit.client.0.vm05.stdout:7/645: link d1/d9/d23/d31/d32/l6c d1/d9/d23/d31/d32/d78/dbb/lcf 0 2026-03-09T15:01:44.043 INFO:tasks.workunit.client.0.vm05.stdout:1/645: rename d9/d2f/d83/d98/d59/d49/d92/l9c to d9/d2f/d37/d5a/da9/dc9/lda 0 2026-03-09T15:01:44.043 INFO:tasks.workunit.client.0.vm05.stdout:8/668: dwrite d0/d1/d12/d3c/f77 [0,4194304] 0 2026-03-09T15:01:44.048 INFO:tasks.workunit.client.0.vm05.stdout:8/669: chown d0/d1/c5d 310756885 1 2026-03-09T15:01:44.049 INFO:tasks.workunit.client.0.vm05.stdout:8/670: chown d0/d1/d12/d1b/d21/fb8 372 1 2026-03-09T15:01:44.050 INFO:tasks.workunit.client.0.vm05.stdout:8/671: write d0/d24/d96/fdb [396583,88762] 0 2026-03-09T15:01:44.100 INFO:tasks.workunit.client.0.vm05.stdout:0/601: mkdir d9/d64/dbd 0 2026-03-09T15:01:44.100 INFO:tasks.workunit.client.0.vm05.stdout:0/602: chown d9/de/d6a/db5 26 1 2026-03-09T15:01:44.100 INFO:tasks.workunit.client.0.vm05.stdout:0/603: write d9/d64/f96 [4685262,17313] 0 2026-03-09T15:01:44.106 INFO:tasks.workunit.client.0.vm05.stdout:4/659: write d2/d43/f47 [5062262,63848] 0 2026-03-09T15:01:44.107 INFO:tasks.workunit.client.0.vm05.stdout:4/660: write d2/d4/d7/d48/fc8 [1010241,46990] 0 2026-03-09T15:01:44.110 
INFO:tasks.workunit.client.0.vm05.stdout:4/661: write d2/f67 [4348557,67349] 0 2026-03-09T15:01:44.118 INFO:tasks.workunit.client.0.vm05.stdout:9/712: write d2/f12 [900464,11093] 0 2026-03-09T15:01:44.118 INFO:tasks.workunit.client.0.vm05.stdout:8/672: truncate d0/d1/d12/d1b/d95/f3e 2433516 0 2026-03-09T15:01:44.119 INFO:tasks.workunit.client.0.vm05.stdout:8/673: write d0/d7/fd1 [406384,64845] 0 2026-03-09T15:01:44.128 INFO:tasks.workunit.client.0.vm05.stdout:2/672: dwrite da/d16/d46/f92 [0,4194304] 0 2026-03-09T15:01:44.157 INFO:tasks.workunit.client.0.vm05.stdout:6/603: rename da/d43/f56 to da/d43/d7b/d89/fbc 0 2026-03-09T15:01:44.202 INFO:tasks.workunit.client.0.vm05.stdout:7/646: truncate d1/d9/d23/d31/d8f/d93/fa3 6860668 0 2026-03-09T15:01:44.203 INFO:tasks.workunit.client.0.vm05.stdout:7/647: stat d1/d9/l66 0 2026-03-09T15:01:44.204 INFO:tasks.workunit.client.0.vm05.stdout:0/604: dwrite d9/de/d12/d15/f50 [0,4194304] 0 2026-03-09T15:01:44.205 INFO:tasks.workunit.client.0.vm05.stdout:0/605: write d9/de/d12/d15/d2e/d6b/fb8 [31453,6466] 0 2026-03-09T15:01:44.206 INFO:tasks.workunit.client.0.vm05.stdout:0/606: chown d9/de/d12/d15/d2e/d32/d53/f91 479882360 1 2026-03-09T15:01:44.209 INFO:tasks.workunit.client.0.vm05.stdout:0/607: chown d9/de/d12/d15/d2e/d32/d9f/fa9 40846708 1 2026-03-09T15:01:44.216 INFO:tasks.workunit.client.0.vm05.stdout:1/646: creat d9/d2f/d55/dd0/fdb x:0 0 0 2026-03-09T15:01:44.216 INFO:tasks.workunit.client.0.vm05.stdout:8/674: symlink d0/d1/d12/d1b/d95/d42/da1/db9/le0 0 2026-03-09T15:01:44.217 INFO:tasks.workunit.client.0.vm05.stdout:8/675: dread - d0/d1/d12/f4f zero size 2026-03-09T15:01:44.246 INFO:tasks.workunit.client.0.vm05.stdout:5/707: creat d1/d4/d34/d35/fee x:0 0 0 2026-03-09T15:01:44.251 INFO:tasks.workunit.client.0.vm05.stdout:6/604: mkdir da/d17/d3b/dbd 0 2026-03-09T15:01:44.256 INFO:tasks.workunit.client.0.vm05.stdout:7/648: fdatasync d1/d9/d23/d31/d51/f39 0 2026-03-09T15:01:44.265 INFO:tasks.workunit.client.0.vm05.stdout:8/676: 
fdatasync d0/d1/f49 0 2026-03-09T15:01:44.271 INFO:tasks.workunit.client.0.vm05.stdout:1/647: dread d9/d17/f22 [0,4194304] 0 2026-03-09T15:01:44.273 INFO:tasks.workunit.client.0.vm05.stdout:1/648: truncate d9/d2f/d83/d98/d59/d49/d92/d75/f76 753775 0 2026-03-09T15:01:44.275 INFO:tasks.workunit.client.0.vm05.stdout:3/651: rename d3/df/d10/d34/d8c/d90/fa6 to d3/df/d10/d34/d8c/dbd/fde 0 2026-03-09T15:01:44.281 INFO:tasks.workunit.client.0.vm05.stdout:4/662: creat d2/d4/d7/d48/fd9 x:0 0 0 2026-03-09T15:01:44.284 INFO:tasks.workunit.client.0.vm05.stdout:8/677: dread d0/d1/d12/d1b/d95/d42/d60/f9c [0,4194304] 0 2026-03-09T15:01:44.296 INFO:tasks.workunit.client.0.vm05.stdout:1/649: rename d9/d2f/la7 to d9/d2f/d37/ldc 0 2026-03-09T15:01:44.296 INFO:tasks.workunit.client.0.vm05.stdout:1/650: fsync d9/d2f/d83/fcf 0 2026-03-09T15:01:44.299 INFO:tasks.workunit.client.0.vm05.stdout:0/608: rmdir d9/de/d25/dac 0 2026-03-09T15:01:44.307 INFO:tasks.workunit.client.0.vm05.stdout:6/605: dread da/f5d [0,4194304] 0 2026-03-09T15:01:44.307 INFO:tasks.workunit.client.0.vm05.stdout:7/649: unlink d1/d12/l4f 0 2026-03-09T15:01:44.307 INFO:tasks.workunit.client.0.vm05.stdout:7/650: dwrite d1/d22/d3c/f70 [0,4194304] 0 2026-03-09T15:01:44.308 INFO:tasks.workunit.client.0.vm05.stdout:8/678: stat d0/d1/d12/d1b/d95/d4b/ld5 0 2026-03-09T15:01:44.312 INFO:tasks.workunit.client.0.vm05.stdout:0/609: dwrite d9/de/d6a/f7c [0,4194304] 0 2026-03-09T15:01:44.312 INFO:tasks.workunit.client.0.vm05.stdout:0/610: write d9/faa [542932,65000] 0 2026-03-09T15:01:44.333 INFO:tasks.workunit.client.0.vm05.stdout:3/652: mknod d3/df/d10/cdf 0 2026-03-09T15:01:44.336 INFO:tasks.workunit.client.0.vm05.stdout:3/653: chown d3/df/d59/fcc 0 1 2026-03-09T15:01:44.341 INFO:tasks.workunit.client.0.vm05.stdout:4/663: rename d2/d4/d7/dc/d2b/c3c to d2/d49/d69/cda 0 2026-03-09T15:01:44.341 INFO:tasks.workunit.client.0.vm05.stdout:4/664: chown d2/d4/d1e/l38 11592457 1 2026-03-09T15:01:44.351 
INFO:tasks.workunit.client.0.vm05.stdout:1/651: symlink d9/d17/ldd 0 2026-03-09T15:01:44.357 INFO:tasks.workunit.client.0.vm05.stdout:2/673: getdents da/d29/d6a/da0/d91/dab/d9c 0 2026-03-09T15:01:44.368 INFO:tasks.workunit.client.0.vm05.stdout:8/679: symlink d0/d1/d97/le1 0 2026-03-09T15:01:44.384 INFO:tasks.workunit.client.0.vm05.stdout:9/713: link d2/d10/d22/dc2/f7c d2/d10/d22/dc1/dc3/ffd 0 2026-03-09T15:01:44.384 INFO:tasks.workunit.client.0.vm05.stdout:9/714: dwrite d2/d4e/d56/d53/d64/ded/d9c/d8e/dcb/fd6 [0,4194304] 0 2026-03-09T15:01:44.385 INFO:tasks.workunit.client.0.vm05.stdout:9/715: stat d2/d10/d22/d2c/d3c 0 2026-03-09T15:01:44.385 INFO:tasks.workunit.client.0.vm05.stdout:3/654: mkdir d3/df/d1e/d2c/d74/de0 0 2026-03-09T15:01:44.385 INFO:tasks.workunit.client.0.vm05.stdout:4/665: mkdir d2/d4/d7/d48/d6b/ddb 0 2026-03-09T15:01:44.385 INFO:tasks.workunit.client.0.vm05.stdout:4/666: write d2/d4/fbe [226818,24999] 0 2026-03-09T15:01:44.390 INFO:tasks.workunit.client.0.vm05.stdout:3/655: dread d3/df/d1e/d2f/d52/f61 [0,4194304] 0 2026-03-09T15:01:44.391 INFO:tasks.workunit.client.0.vm05.stdout:3/656: chown d3/df/d10/d34/d8c/dbd/fa4 4 1 2026-03-09T15:01:44.397 INFO:tasks.workunit.client.0.vm05.stdout:2/674: rename da/d29/d6a/da0/f5a to da/d29/d3f/fd0 0 2026-03-09T15:01:44.404 INFO:tasks.workunit.client.0.vm05.stdout:2/675: dread - da/d29/d64/fc5 zero size 2026-03-09T15:01:44.404 INFO:tasks.workunit.client.0.vm05.stdout:0/611: creat d9/de/d12/da3/dbc/fbe x:0 0 0 2026-03-09T15:01:44.404 INFO:tasks.workunit.client.0.vm05.stdout:3/657: fsync d3/df/f11 0 2026-03-09T15:01:44.406 INFO:tasks.workunit.client.0.vm05.stdout:1/652: read d9/d2f/d83/d98/d59/d49/f2c [2862361,115010] 0 2026-03-09T15:01:44.406 INFO:tasks.workunit.client.0.vm05.stdout:6/606: dread da/d19/f6a [0,4194304] 0 2026-03-09T15:01:44.407 INFO:tasks.workunit.client.0.vm05.stdout:6/607: truncate da/d43/d7b/d89/fa4 945530 0 2026-03-09T15:01:44.408 INFO:tasks.workunit.client.0.vm05.stdout:7/651: creat d1/d9/fd0 
x:0 0 0 2026-03-09T15:01:44.408 INFO:tasks.workunit.client.0.vm05.stdout:8/680: getdents d0/d1/d12/d1b/d6e/d93/d9f/dad 0 2026-03-09T15:01:44.416 INFO:tasks.workunit.client.0.vm05.stdout:0/612: mkdir d9/de/d12/d15/d2e/d6b/dbf 0 2026-03-09T15:01:44.417 INFO:tasks.workunit.client.0.vm05.stdout:4/667: mknod d2/d4/d7/cdc 0 2026-03-09T15:01:44.418 INFO:tasks.workunit.client.0.vm05.stdout:3/658: mknod d3/df/d1e/d2c/ce1 0 2026-03-09T15:01:44.418 INFO:tasks.workunit.client.0.vm05.stdout:3/659: fdatasync d3/df/f4a 0 2026-03-09T15:01:44.419 INFO:tasks.workunit.client.0.vm05.stdout:3/660: truncate d3/d29/f92 1137701 0 2026-03-09T15:01:44.419 INFO:tasks.workunit.client.0.vm05.stdout:1/653: fdatasync d9/d2f/d37/d5a/f5b 0 2026-03-09T15:01:44.423 INFO:tasks.workunit.client.0.vm05.stdout:7/652: mkdir d1/d9/d23/d31/d8f/d93/dbd/dd1 0 2026-03-09T15:01:44.426 INFO:tasks.workunit.client.0.vm05.stdout:7/653: fdatasync d1/d9/d72/fc4 0 2026-03-09T15:01:44.426 INFO:tasks.workunit.client.0.vm05.stdout:4/668: stat d2/d4/d1e/d71/c85 0 2026-03-09T15:01:44.426 INFO:tasks.workunit.client.0.vm05.stdout:4/669: read - d2/d4/d8/d4a/d8f/dcd/dcb/fd8 zero size 2026-03-09T15:01:44.426 INFO:tasks.workunit.client.0.vm05.stdout:4/670: chown d2/d4/d1e/da2/fac 6179072 1 2026-03-09T15:01:44.458 INFO:tasks.workunit.client.0.vm05.stdout:5/708: dwrite d1/d4/d34/d56/f6d [0,4194304] 0 2026-03-09T15:01:44.467 INFO:tasks.workunit.client.0.vm05.stdout:9/716: getdents d2/d4e/d56/d53/d64/ded/d9c/d94 0 2026-03-09T15:01:44.513 INFO:tasks.workunit.client.0.vm05.stdout:4/671: sync 2026-03-09T15:01:44.525 INFO:tasks.workunit.client.0.vm05.stdout:2/676: dread da/d29/d3f/fd0 [0,4194304] 0 2026-03-09T15:01:44.544 INFO:tasks.workunit.client.0.vm05.stdout:4/672: dread d2/d4/d8/d4a/d8f/dcd/d3d/f65 [0,4194304] 0 2026-03-09T15:01:44.573 INFO:tasks.workunit.client.0.vm05.stdout:3/661: rename d3/df/d10/d34 to d3/df/d10/d19/dce/dc8/de2 0 2026-03-09T15:01:44.590 INFO:tasks.workunit.client.0.vm05.stdout:7/654: mknod d1/d49/cd2 0 
2026-03-09T15:01:44.590 INFO:tasks.workunit.client.0.vm05.stdout:7/655: chown d1/d9/d23/d31 137410 1 2026-03-09T15:01:44.591 INFO:tasks.workunit.client.0.vm05.stdout:7/656: fsync d1/d9/d23/d31/d32/f5b 0 2026-03-09T15:01:44.602 INFO:tasks.workunit.client.0.vm05.stdout:8/681: dwrite d0/d1/d12/f4f [0,4194304] 0 2026-03-09T15:01:44.619 INFO:tasks.workunit.client.0.vm05.stdout:1/654: write d9/d2f/d83/d98/d59/d49/f82 [790418,68872] 0 2026-03-09T15:01:44.620 INFO:tasks.workunit.client.0.vm05.stdout:1/655: dread - d9/d2f/d83/fcf zero size 2026-03-09T15:01:44.624 INFO:tasks.workunit.client.0.vm05.stdout:9/717: dwrite d2/d8b/dae/fec [0,4194304] 0 2026-03-09T15:01:44.627 INFO:tasks.workunit.client.0.vm05.stdout:2/677: write da/dd/f9e [508024,47784] 0 2026-03-09T15:01:44.634 INFO:tasks.workunit.client.0.vm05.stdout:4/673: rename d2/d4/d50/caf to d2/d4/d7/dc/d2b/d97/cdd 0 2026-03-09T15:01:44.635 INFO:tasks.workunit.client.0.vm05.stdout:6/608: dwrite da/d43/f72 [0,4194304] 0 2026-03-09T15:01:44.641 INFO:tasks.workunit.client.0.vm05.stdout:2/678: write da/d29/d45/f7b [2115537,26659] 0 2026-03-09T15:01:44.641 INFO:tasks.workunit.client.0.vm05.stdout:3/662: creat d3/d29/d2d/d7b/fe3 x:0 0 0 2026-03-09T15:01:44.652 INFO:tasks.workunit.client.0.vm05.stdout:1/656: dread d9/d2f/d83/d98/d59/d49/d4b/f8e [0,4194304] 0 2026-03-09T15:01:44.653 INFO:tasks.workunit.client.0.vm05.stdout:4/674: dwrite d2/d4/d7/dc/da8/fab [0,4194304] 0 2026-03-09T15:01:44.666 INFO:tasks.workunit.client.0.vm05.stdout:4/675: dwrite d2/d4/d8/d4a/d8f/dcd/dcb/fd8 [0,4194304] 0 2026-03-09T15:01:44.735 INFO:tasks.workunit.client.0.vm05.stdout:5/709: rename d1/d4/d34/d35/d3d/cbd to d1/d4/d34/d35/dd0/cef 0 2026-03-09T15:01:44.737 INFO:tasks.workunit.client.0.vm05.stdout:2/679: creat da/d16/d46/fd1 x:0 0 0 2026-03-09T15:01:44.737 INFO:tasks.workunit.client.0.vm05.stdout:2/680: chown da/d16/d46 8 1 2026-03-09T15:01:44.738 INFO:tasks.workunit.client.0.vm05.stdout:5/710: read d1/d4/d34/d35/d3d/d38/f8a [1182157,115310] 0 
2026-03-09T15:01:44.741 INFO:tasks.workunit.client.0.vm05.stdout:6/609: rename da/d43/d7b/db3/l9a to da/d43/d7b/db3/lbe 0 2026-03-09T15:01:44.747 INFO:tasks.workunit.client.0.vm05.stdout:1/657: creat d9/d2f/d83/d98/d59/d49/d78/dcc/fde x:0 0 0 2026-03-09T15:01:44.748 INFO:tasks.workunit.client.0.vm05.stdout:1/658: chown d9/d2f/d55/f5e 35339 1 2026-03-09T15:01:44.749 INFO:tasks.workunit.client.0.vm05.stdout:7/657: symlink d1/d9/d23/ld3 0 2026-03-09T15:01:44.752 INFO:tasks.workunit.client.0.vm05.stdout:4/676: write d2/d43/f51 [2286655,10096] 0 2026-03-09T15:01:44.763 INFO:tasks.workunit.client.0.vm05.stdout:6/610: unlink da/d19/l39 0 2026-03-09T15:01:44.766 INFO:tasks.workunit.client.0.vm05.stdout:1/659: symlink d9/d2f/d37/d5a/da9/dc9/dcd/ldf 0 2026-03-09T15:01:44.769 INFO:tasks.workunit.client.0.vm05.stdout:7/658: creat d1/d9/d23/d31/d51/fd4 x:0 0 0 2026-03-09T15:01:44.775 INFO:tasks.workunit.client.0.vm05.stdout:1/660: dread d9/d2f/d83/d98/f6e [0,4194304] 0 2026-03-09T15:01:44.775 INFO:tasks.workunit.client.0.vm05.stdout:1/661: readlink d9/d2f/d83/d98/l6c 0 2026-03-09T15:01:44.780 INFO:tasks.workunit.client.0.vm05.stdout:1/662: dwrite d9/d2f/d37/d5a/da9/dc9/dcd/f6f [0,4194304] 0 2026-03-09T15:01:44.804 INFO:tasks.workunit.client.0.vm05.stdout:2/681: link da/l52 da/d29/d6a/db1/ld2 0 2026-03-09T15:01:44.804 INFO:tasks.workunit.client.0.vm05.stdout:2/682: readlink da/d29/d64/l88 0 2026-03-09T15:01:44.809 INFO:tasks.workunit.client.0.vm05.stdout:5/711: link d1/d4/d19/d93/dcc/d91/fb0 d1/d4/d34/d35/d3d/d38/d63/ff0 0 2026-03-09T15:01:44.814 INFO:tasks.workunit.client.0.vm05.stdout:7/659: symlink d1/d9/d23/d31/d8f/ld5 0 2026-03-09T15:01:44.816 INFO:tasks.workunit.client.0.vm05.stdout:2/683: mkdir da/d29/d6a/da0/d91/dab/d9c/dd3 0 2026-03-09T15:01:44.826 INFO:tasks.workunit.client.0.vm05.stdout:5/712: truncate d1/da/f4a 433697 0 2026-03-09T15:01:44.827 INFO:tasks.workunit.client.0.vm05.stdout:7/660: unlink d1/d12/fb9 0 2026-03-09T15:01:44.829 
INFO:tasks.workunit.client.0.vm05.stdout:2/684: unlink da/d29/d6a/da0/d91/dab/d2f/d35/c86 0 2026-03-09T15:01:44.829 INFO:tasks.workunit.client.0.vm05.stdout:2/685: stat da/lb 0 2026-03-09T15:01:44.857 INFO:tasks.workunit.client.0.vm05.stdout:2/686: sync 2026-03-09T15:01:44.857 INFO:tasks.workunit.client.0.vm05.stdout:2/687: readlink da/d29/d64/l7e 0 2026-03-09T15:01:44.860 INFO:tasks.workunit.client.0.vm05.stdout:2/688: dwrite da/f79 [4194304,4194304] 0 2026-03-09T15:01:44.869 INFO:tasks.workunit.client.0.vm05.stdout:2/689: dread da/d29/f39 [0,4194304] 0 2026-03-09T15:01:44.871 INFO:tasks.workunit.client.0.vm05.stdout:2/690: dread da/d29/f2d [0,4194304] 0 2026-03-09T15:01:44.873 INFO:tasks.workunit.client.0.vm05.stdout:2/691: chown da/d29/d6a/da0/d7c/ccd 45 1 2026-03-09T15:01:44.877 INFO:tasks.workunit.client.0.vm05.stdout:2/692: mkdir da/d29/d6a/da0/d91/dd4 0 2026-03-09T15:01:44.878 INFO:tasks.workunit.client.0.vm05.stdout:2/693: chown da/d16/l4d 72 1 2026-03-09T15:01:44.882 INFO:tasks.workunit.client.0.vm05.stdout:2/694: link da/f21 da/d29/d64/da6/fd5 0 2026-03-09T15:01:44.890 INFO:tasks.workunit.client.0.vm05.stdout:2/695: sync 2026-03-09T15:01:44.891 INFO:tasks.workunit.client.0.vm05.stdout:2/696: getdents da/d29/d3f/dc3 0 2026-03-09T15:01:44.893 INFO:tasks.workunit.client.0.vm05.stdout:2/697: mkdir da/d29/d6a/da0/d91/dab/dd6 0 2026-03-09T15:01:44.894 INFO:tasks.workunit.client.0.vm05.stdout:2/698: dread da/d29/f39 [0,4194304] 0 2026-03-09T15:01:44.896 INFO:tasks.workunit.client.0.vm05.stdout:2/699: truncate da/d29/d6a/da0/d91/dab/d2f/d35/f57 24606 0 2026-03-09T15:01:44.899 INFO:tasks.workunit.client.0.vm05.stdout:2/700: mknod da/d29/d6a/da0/d91/dab/d2f/d35/db0/dc9/cd7 0 2026-03-09T15:01:44.901 INFO:tasks.workunit.client.0.vm05.stdout:5/713: mknod d1/d4/d27/cf1 0 2026-03-09T15:01:44.907 INFO:tasks.workunit.client.0.vm05.stdout:5/714: getdents d1/d5d 0 2026-03-09T15:01:44.908 INFO:tasks.workunit.client.0.vm05.stdout:5/715: dread - d1/d4/d34/d35/d4e/dc8/fec zero 
size 2026-03-09T15:01:44.908 INFO:tasks.workunit.client.0.vm05.stdout:5/716: chown d1/d4/d34/d35/d3d/d38/d63/ca0 132471104 1 2026-03-09T15:01:44.912 INFO:tasks.workunit.client.0.vm05.stdout:5/717: dwrite d1/da/f2f [0,4194304] 0 2026-03-09T15:01:44.917 INFO:tasks.workunit.client.0.vm05.stdout:5/718: fdatasync d1/d4/d34/d35/d3d/d38/d63/ff0 0 2026-03-09T15:01:44.944 INFO:tasks.workunit.client.0.vm05.stdout:0/613: creat d9/fc0 x:0 0 0 2026-03-09T15:01:44.945 INFO:tasks.workunit.client.0.vm05.stdout:0/614: write d9/de/d25/f47 [1277266,81796] 0 2026-03-09T15:01:44.946 INFO:tasks.workunit.client.0.vm05.stdout:0/615: read - d9/de/d6a/fb3 zero size 2026-03-09T15:01:44.946 INFO:tasks.workunit.client.0.vm05.stdout:0/616: dread - d9/d59/f79 zero size 2026-03-09T15:01:44.949 INFO:tasks.workunit.client.0.vm05.stdout:0/617: getdents d9/de/d12/d15/d2e/d6b/dbf 0 2026-03-09T15:01:44.950 INFO:tasks.workunit.client.0.vm05.stdout:0/618: creat d9/d64/fc1 x:0 0 0 2026-03-09T15:01:44.955 INFO:tasks.workunit.client.0.vm05.stdout:4/677: link d2/d4/d8/d4a/c58 d2/cde 0 2026-03-09T15:01:44.957 INFO:tasks.workunit.client.0.vm05.stdout:4/678: creat d2/d4/d1e/d71/fdf x:0 0 0 2026-03-09T15:01:44.959 INFO:tasks.workunit.client.0.vm05.stdout:4/679: mkdir d2/de0 0 2026-03-09T15:01:45.003 INFO:tasks.workunit.client.0.vm05.stdout:8/682: rename d0/d1/d12/d1b/d66/d6f/d80 to d0/d1/de2 0 2026-03-09T15:01:45.006 INFO:tasks.workunit.client.0.vm05.stdout:9/718: rename d2/d4e/d56/d53/d64/ded/d9c/db2/cf8 to d2/d10/d22/da0/cfe 0 2026-03-09T15:01:45.011 INFO:tasks.workunit.client.0.vm05.stdout:6/611: rename da/d43/d7b/db3/cb9 to da/d17/d95/cbf 0 2026-03-09T15:01:45.012 INFO:tasks.workunit.client.0.vm05.stdout:3/663: symlink d3/df/le4 0 2026-03-09T15:01:45.023 INFO:tasks.workunit.client.0.vm05.stdout:7/661: rename d1/d12/fb3 to d1/d9/d23/d31/d32/d78/d7e/fd6 0 2026-03-09T15:01:45.023 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:44 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 
2026-03-09T15:01:45.023 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:44 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:45.023 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:44 vm05.local ceph-mon[50611]: pgmap v5: 65 pgs: 65 active+clean; 2.4 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-09T15:01:45.023 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:44 vm05.local ceph-mon[50611]: mgrmap e24: vm09.cfuwdz(active, since 4s) 2026-03-09T15:01:45.024 INFO:tasks.workunit.client.0.vm05.stdout:7/662: truncate d1/d22/d3c/fa7 993206 0 2026-03-09T15:01:45.025 INFO:tasks.workunit.client.0.vm05.stdout:7/663: chown d1/d9/d23/d31/d51/l86 464678661 1 2026-03-09T15:01:45.025 INFO:tasks.workunit.client.0.vm05.stdout:7/664: stat d1/d49/l9f 0 2026-03-09T15:01:45.027 INFO:tasks.workunit.client.0.vm05.stdout:6/612: creat da/d17/d95/da2/dae/fc0 x:0 0 0 2026-03-09T15:01:45.027 INFO:tasks.workunit.client.0.vm05.stdout:7/665: chown d1/d9/d23/d31/d8f/d93 627 1 2026-03-09T15:01:45.030 INFO:tasks.workunit.client.0.vm05.stdout:9/719: mkdir d2/d10/d22/d2c/d69/dff 0 2026-03-09T15:01:45.030 INFO:tasks.workunit.client.0.vm05.stdout:5/719: rename d1/d4/d34 to d1/d4/d34/d56/df2 22 2026-03-09T15:01:45.030 INFO:tasks.workunit.client.0.vm05.stdout:8/683: getdents d0/d1/de2 0 2026-03-09T15:01:45.032 INFO:tasks.workunit.client.0.vm05.stdout:5/720: dread - d1/d4/d34/d35/d3d/d38/fe4 zero size 2026-03-09T15:01:45.042 INFO:tasks.workunit.client.0.vm05.stdout:7/666: creat d1/d22/fd7 x:0 0 0 2026-03-09T15:01:45.042 INFO:tasks.workunit.client.0.vm05.stdout:9/720: stat d2/d4e/d56/d53/f66 0 2026-03-09T15:01:45.051 INFO:tasks.workunit.client.0.vm05.stdout:1/663: write d9/d2f/d83/d98/f50 [3502186,3014] 0 2026-03-09T15:01:45.055 INFO:tasks.workunit.client.0.vm05.stdout:1/664: chown d9/f12 20697 1 2026-03-09T15:01:45.056 INFO:tasks.workunit.client.0.vm05.stdout:8/684: dwrite d0/d1/d12/d3c/f8c [0,4194304] 0 2026-03-09T15:01:45.077 
INFO:tasks.workunit.client.0.vm05.stdout:7/667: dread d1/d9/d23/d31/d32/f5b [0,4194304] 0 2026-03-09T15:01:45.078 INFO:tasks.workunit.client.0.vm05.stdout:7/668: readlink d1/d9/d23/d31/d51/l57 0 2026-03-09T15:01:45.081 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:44 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:45.082 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:44 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:45.082 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:44 vm09.local ceph-mon[59673]: pgmap v5: 65 pgs: 65 active+clean; 2.4 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-09T15:01:45.082 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:44 vm09.local ceph-mon[59673]: mgrmap e24: vm09.cfuwdz(active, since 4s) 2026-03-09T15:01:45.085 INFO:tasks.workunit.client.0.vm05.stdout:8/685: dread d0/d24/f2c [0,4194304] 0 2026-03-09T15:01:45.092 INFO:tasks.workunit.client.0.vm05.stdout:8/686: unlink d0/dc/f7a 0 2026-03-09T15:01:45.098 INFO:tasks.workunit.client.0.vm05.stdout:1/665: rename d9/d2f/d37/d5a/da9/dc9/dcd/c29 to d9/d2f/d83/ce0 0 2026-03-09T15:01:45.103 INFO:tasks.workunit.client.0.vm05.stdout:7/669: link d1/d9/l66 d1/d9/d23/d31/d51/ld8 0 2026-03-09T15:01:45.107 INFO:tasks.workunit.client.0.vm05.stdout:7/670: dwrite d1/d12/fa8 [0,4194304] 0 2026-03-09T15:01:45.137 INFO:tasks.workunit.client.0.vm05.stdout:1/666: rmdir d9/d2f/d83/d98/d59/d49/d78 39 2026-03-09T15:01:45.137 INFO:tasks.workunit.client.0.vm05.stdout:1/667: fsync d9/d2f/d83/d98/d59/fc4 0 2026-03-09T15:01:45.138 INFO:tasks.workunit.client.0.vm05.stdout:8/687: dread d0/d1/d12/d1b/d95/d4b/faa [0,4194304] 0 2026-03-09T15:01:45.149 INFO:tasks.workunit.client.0.vm05.stdout:7/671: fsync d1/d22/d3c/fba 0 2026-03-09T15:01:45.152 INFO:tasks.workunit.client.0.vm05.stdout:7/672: dwrite d1/d9/fc [0,4194304] 0 2026-03-09T15:01:45.169 INFO:tasks.workunit.client.0.vm05.stdout:0/619: dwrite 
d9/de/d12/f23 [0,4194304] 0 2026-03-09T15:01:45.170 INFO:tasks.workunit.client.0.vm05.stdout:2/701: dwrite da/d29/d6a/da0/d91/dab/d2f/d35/f3a [0,4194304] 0 2026-03-09T15:01:45.185 INFO:tasks.workunit.client.0.vm05.stdout:4/680: getdents d2 0 2026-03-09T15:01:45.189 INFO:tasks.workunit.client.0.vm05.stdout:0/620: dwrite d9/de/d25/f2d [0,4194304] 0 2026-03-09T15:01:45.211 INFO:tasks.workunit.client.0.vm05.stdout:4/681: dwrite d2/d4/fbe [0,4194304] 0 2026-03-09T15:01:45.220 INFO:tasks.workunit.client.0.vm05.stdout:8/688: fsync d0/d1/d12/d1b/d95/d42/d60/d73/f74 0 2026-03-09T15:01:45.221 INFO:tasks.workunit.client.0.vm05.stdout:8/689: readlink d0/d24/l44 0 2026-03-09T15:01:45.230 INFO:tasks.workunit.client.0.vm05.stdout:7/673: rename d1/f76 to d1/d9/d23/d31/d51/fd9 0 2026-03-09T15:01:45.241 INFO:tasks.workunit.client.0.vm05.stdout:2/702: mkdir da/d29/d6a/da0/d7c/dd8 0 2026-03-09T15:01:45.255 INFO:tasks.workunit.client.0.vm05.stdout:3/664: truncate d3/df/d1e/daf/fc6 402865 0 2026-03-09T15:01:45.275 INFO:tasks.workunit.client.0.vm05.stdout:6/613: write da/d43/f68 [102057,52512] 0 2026-03-09T15:01:45.275 INFO:tasks.workunit.client.0.vm05.stdout:1/668: symlink d9/d2f/d83/d98/d59/d49/d78/dbd/le1 0 2026-03-09T15:01:45.277 INFO:tasks.workunit.client.0.vm05.stdout:1/669: chown d9/d2f/d37/d5a/da9/dc9/dcd/db2 10436 1 2026-03-09T15:01:45.283 INFO:tasks.workunit.client.0.vm05.stdout:3/665: sync 2026-03-09T15:01:45.286 INFO:tasks.workunit.client.0.vm05.stdout:6/614: dwrite da/d43/f72 [0,4194304] 0 2026-03-09T15:01:45.292 INFO:tasks.workunit.client.0.vm05.stdout:2/703: unlink da/d16/d46/fb9 0 2026-03-09T15:01:45.292 INFO:tasks.workunit.client.0.vm05.stdout:2/704: dread - da/d16/d46/fd1 zero size 2026-03-09T15:01:45.298 INFO:tasks.workunit.client.0.vm05.stdout:5/721: write d1/f9 [3903361,19336] 0 2026-03-09T15:01:45.300 INFO:tasks.workunit.client.0.vm05.stdout:1/670: dwrite d9/f21 [0,4194304] 0 2026-03-09T15:01:45.317 INFO:tasks.workunit.client.0.vm05.stdout:8/690: dread d0/d24/f30 
[0,4194304] 0 2026-03-09T15:01:45.323 INFO:tasks.workunit.client.0.vm05.stdout:6/615: creat da/d17/d95/fc1 x:0 0 0 2026-03-09T15:01:45.347 INFO:tasks.workunit.client.0.vm05.stdout:5/722: creat d1/d4/d34/ff3 x:0 0 0 2026-03-09T15:01:45.357 INFO:tasks.workunit.client.0.vm05.stdout:9/721: dwrite d2/d10/d22/d2c/d3c/f55 [0,4194304] 0 2026-03-09T15:01:45.358 INFO:tasks.workunit.client.0.vm05.stdout:8/691: stat d0/d7/c8d 0 2026-03-09T15:01:45.359 INFO:tasks.workunit.client.0.vm05.stdout:8/692: chown d0/d1/d12/c62 5680 1 2026-03-09T15:01:45.362 INFO:tasks.workunit.client.0.vm05.stdout:0/621: rename d9/de/d6a/f75 to d9/de/d12/d15/d2e/fc2 0 2026-03-09T15:01:45.362 INFO:tasks.workunit.client.0.vm05.stdout:5/723: mknod d1/d4/d34/d35/d3d/d96/cf4 0 2026-03-09T15:01:45.363 INFO:tasks.workunit.client.0.vm05.stdout:6/616: creat da/d43/d7b/d89/db8/fc2 x:0 0 0 2026-03-09T15:01:45.363 INFO:tasks.workunit.client.0.vm05.stdout:9/722: read d2/d8b/dae/fbb [112010,372] 0 2026-03-09T15:01:45.363 INFO:tasks.workunit.client.0.vm05.stdout:9/723: readlink d2/l34 0 2026-03-09T15:01:45.364 INFO:tasks.workunit.client.0.vm05.stdout:8/693: write d0/d1/d12/f4f [3469217,20857] 0 2026-03-09T15:01:45.365 INFO:tasks.workunit.client.0.vm05.stdout:8/694: stat d0/d1/d12/d1b/d95/d4b/fb0 0 2026-03-09T15:01:45.380 INFO:tasks.workunit.client.0.vm05.stdout:5/724: symlink d1/d4/d34/d35/d3d/dde/lf5 0 2026-03-09T15:01:45.389 INFO:tasks.workunit.client.0.vm05.stdout:6/617: creat da/d17/d95/da2/dae/fc3 x:0 0 0 2026-03-09T15:01:45.404 INFO:tasks.workunit.client.0.vm05.stdout:7/674: rename d1/l2e to d1/d9/d72/d97/lda 0 2026-03-09T15:01:45.405 INFO:tasks.workunit.client.0.vm05.stdout:2/705: getdents da/d16 0 2026-03-09T15:01:45.416 INFO:tasks.workunit.client.0.vm05.stdout:4/682: write d2/d4/d7/f7b [4424341,84051] 0 2026-03-09T15:01:45.418 INFO:tasks.workunit.client.0.vm05.stdout:3/666: write d3/df/d10/d19/dce/dc8/de2/f5f [115858,119151] 0 2026-03-09T15:01:45.424 INFO:tasks.workunit.client.0.vm05.stdout:1/671: dwrite 
d9/f15 [0,4194304] 0 2026-03-09T15:01:45.432 INFO:tasks.workunit.client.0.vm05.stdout:8/695: unlink d0/d1/d12/d1b/f67 0 2026-03-09T15:01:45.434 INFO:tasks.workunit.client.0.vm05.stdout:8/696: readlink d0/d1/d12/d1b/l52 0 2026-03-09T15:01:45.437 INFO:tasks.workunit.client.0.vm05.stdout:6/618: rmdir da/d17 39 2026-03-09T15:01:45.443 INFO:tasks.workunit.client.0.vm05.stdout:5/725: dread d1/d4/d34/d35/f36 [0,4194304] 0 2026-03-09T15:01:45.445 INFO:tasks.workunit.client.0.vm05.stdout:7/675: mkdir d1/d49/d4a/d94/ddb 0 2026-03-09T15:01:45.455 INFO:tasks.workunit.client.0.vm05.stdout:9/724: symlink d2/d10/d22/d47/dc4/l100 0 2026-03-09T15:01:45.473 INFO:tasks.workunit.client.0.vm05.stdout:8/697: creat d0/d24/fe3 x:0 0 0 2026-03-09T15:01:45.480 INFO:tasks.workunit.client.0.vm05.stdout:6/619: stat da/d43/f5c 0 2026-03-09T15:01:45.480 INFO:tasks.workunit.client.0.vm05.stdout:6/620: chown da/d43/fa6 16176808 1 2026-03-09T15:01:45.481 INFO:tasks.workunit.client.0.vm05.stdout:6/621: stat da/d43/d7b/db3 0 2026-03-09T15:01:45.489 INFO:tasks.workunit.client.0.vm05.stdout:8/698: dread d0/d7/f14 [0,4194304] 0 2026-03-09T15:01:45.495 INFO:tasks.workunit.client.0.vm05.stdout:5/726: fsync d1/f5e 0 2026-03-09T15:01:45.500 INFO:tasks.workunit.client.0.vm05.stdout:7/676: mkdir d1/d9/d23/d31/d32/ddc 0 2026-03-09T15:01:45.504 INFO:tasks.workunit.client.0.vm05.stdout:0/622: rmdir d9/de/d12/d15/d2e/d32/d9f/dba 0 2026-03-09T15:01:45.504 INFO:tasks.workunit.client.0.vm05.stdout:0/623: chown d9/f22 124274718 1 2026-03-09T15:01:45.510 INFO:tasks.workunit.client.0.vm05.stdout:2/706: mkdir da/d29/d6a/da0/dd9 0 2026-03-09T15:01:45.512 INFO:tasks.workunit.client.0.vm05.stdout:0/624: dwrite d9/de/d25/f47 [0,4194304] 0 2026-03-09T15:01:45.519 INFO:tasks.workunit.client.0.vm05.stdout:0/625: fdatasync d9/de/d25/f2d 0 2026-03-09T15:01:45.530 INFO:tasks.workunit.client.0.vm05.stdout:4/683: fdatasync d2/d4/fb4 0 2026-03-09T15:01:45.533 INFO:tasks.workunit.client.0.vm05.stdout:1/672: creat d9/db9/fe2 x:0 0 0 
2026-03-09T15:01:45.545 INFO:tasks.workunit.client.0.vm05.stdout:9/725: dread d2/d10/d22/d52/fb0 [0,4194304] 0 2026-03-09T15:01:45.551 INFO:tasks.workunit.client.0.vm05.stdout:8/699: mknod d0/d1/d12/d1b/d95/d54/ce4 0 2026-03-09T15:01:45.566 INFO:tasks.workunit.client.0.vm05.stdout:8/700: sync 2026-03-09T15:01:45.566 INFO:tasks.workunit.client.0.vm05.stdout:7/677: dwrite d1/d9/d23/d31/d32/fc7 [0,4194304] 0 2026-03-09T15:01:45.570 INFO:tasks.workunit.client.0.vm05.stdout:2/707: truncate da/d16/f1e 5093541 0 2026-03-09T15:01:45.573 INFO:tasks.workunit.client.0.vm05.stdout:6/622: dread da/d17/f42 [0,4194304] 0 2026-03-09T15:01:45.579 INFO:tasks.workunit.client.0.vm05.stdout:5/727: dread d1/d4/d34/d35/d3d/d38/f6e [0,4194304] 0 2026-03-09T15:01:45.589 INFO:tasks.workunit.client.0.vm05.stdout:4/684: fdatasync d2/d4/d8/d4a/fae 0 2026-03-09T15:01:45.595 INFO:tasks.workunit.client.0.vm05.stdout:5/728: dread d1/d4/d34/d6c/fdb [0,4194304] 0 2026-03-09T15:01:45.598 INFO:tasks.workunit.client.0.vm05.stdout:1/673: creat d9/d2f/d37/fe3 x:0 0 0 2026-03-09T15:01:45.603 INFO:tasks.workunit.client.0.vm05.stdout:5/729: truncate d1/da/fe6 369267 0 2026-03-09T15:01:45.609 INFO:tasks.workunit.client.0.vm05.stdout:4/685: dread d2/d4/d7/dc/f45 [0,4194304] 0 2026-03-09T15:01:45.610 INFO:tasks.workunit.client.0.vm05.stdout:9/726: dread d2/d10/d22/dc2/db1/fb9 [0,4194304] 0 2026-03-09T15:01:45.610 INFO:tasks.workunit.client.0.vm05.stdout:4/686: read - d2/d1d/d88/dcc/fd7 zero size 2026-03-09T15:01:45.611 INFO:tasks.workunit.client.0.vm05.stdout:4/687: write d2/f67 [4897938,43360] 0 2026-03-09T15:01:45.625 INFO:tasks.workunit.client.0.vm05.stdout:7/678: fdatasync d1/d12/f56 0 2026-03-09T15:01:45.626 INFO:tasks.workunit.client.0.vm05.stdout:2/708: creat da/d29/d6a/fda x:0 0 0 2026-03-09T15:01:45.635 INFO:tasks.workunit.client.0.vm05.stdout:6/623: creat da/d19/fc4 x:0 0 0 2026-03-09T15:01:45.643 INFO:tasks.workunit.client.0.vm05.stdout:0/626: mkdir d9/de/d12/d8a/dc3 0 2026-03-09T15:01:45.649 
INFO:tasks.workunit.client.0.vm05.stdout:3/667: getdents d3/d29/d7f/dc3 0 2026-03-09T15:01:45.649 INFO:tasks.workunit.client.0.vm05.stdout:3/668: fdatasync d3/df/d10/d19/dce/dc8/de2/f5f 0 2026-03-09T15:01:45.694 INFO:tasks.workunit.client.0.vm05.stdout:9/727: mkdir d2/d10/d22/d2c/d3c/d101 0 2026-03-09T15:01:45.699 INFO:tasks.workunit.client.0.vm05.stdout:7/679: truncate d1/d12/f11 2820216 0 2026-03-09T15:01:45.699 INFO:tasks.workunit.client.0.vm05.stdout:4/688: chown d2/c91 556 1 2026-03-09T15:01:45.700 INFO:tasks.workunit.client.0.vm05.stdout:0/627: creat d9/de/d12/da3/dbc/fc4 x:0 0 0 2026-03-09T15:01:45.704 INFO:tasks.workunit.client.0.vm05.stdout:3/669: dread d3/d29/d7f/fa1 [0,4194304] 0 2026-03-09T15:01:45.706 INFO:tasks.workunit.client.0.vm05.stdout:9/728: chown d2/f6 3 1 2026-03-09T15:01:45.722 INFO:tasks.workunit.client.0.vm05.stdout:1/674: dwrite d9/d97/fbf [0,4194304] 0 2026-03-09T15:01:45.725 INFO:tasks.workunit.client.0.vm05.stdout:5/730: write d1/d4/d34/d35/f4d [1197686,87397] 0 2026-03-09T15:01:45.733 INFO:tasks.workunit.client.0.vm05.stdout:2/709: dwrite da/d29/d6a/da0/d91/dab/d2f/d35/f57 [0,4194304] 0 2026-03-09T15:01:45.743 INFO:tasks.workunit.client.0.vm05.stdout:6/624: truncate da/d43/d7b/d89/fbc 180993 0 2026-03-09T15:01:45.751 INFO:tasks.workunit.client.0.vm05.stdout:9/729: creat d2/d4e/d56/d53/d64/ded/d9c/f102 x:0 0 0 2026-03-09T15:01:45.751 INFO:tasks.workunit.client.0.vm05.stdout:4/689: symlink d2/de0/le1 0 2026-03-09T15:01:45.752 INFO:tasks.workunit.client.0.vm05.stdout:8/701: getdents d0/d1/d12/d1b/d95/dd7/dd2 0 2026-03-09T15:01:45.755 INFO:tasks.workunit.client.0.vm05.stdout:0/628: mknod d9/de/d12/d15/d2e/d6b/dbf/cc5 0 2026-03-09T15:01:45.760 INFO:tasks.workunit.client.0.vm05.stdout:1/675: truncate d9/d2f/d83/d98/fa4 800581 0 2026-03-09T15:01:45.764 INFO:tasks.workunit.client.0.vm05.stdout:2/710: rmdir da/d29/d6a/da0/d91/dab/d2f/db3 39 2026-03-09T15:01:45.771 INFO:tasks.workunit.client.0.vm05.stdout:6/625: creat da/d17/d95/da2/fc5 x:0 0 0 
2026-03-09T15:01:45.771 INFO:tasks.workunit.client.0.vm05.stdout:4/690: creat d2/d4/d1e/da2/fe2 x:0 0 0
2026-03-09T15:01:45.775 INFO:tasks.workunit.client.0.vm05.stdout:8/702: dread d0/d1/d12/d1b/d95/d42/d60/f9c [0,4194304] 0
2026-03-09T15:01:45.785 INFO:tasks.workunit.client.0.vm05.stdout:0/629: creat d9/de/d12/d8a/fc6 x:0 0 0
2026-03-09T15:01:45.788 INFO:tasks.workunit.client.0.vm05.stdout:1/676: mkdir d9/d2f/d83/d98/d59/d49/d92/d75/de4 0
2026-03-09T15:01:45.791 INFO:tasks.workunit.client.0.vm05.stdout:3/670: link d3/df/d10/d19/dce/dc8/de2/c3c d3/df/d10/d19/dce/dc8/de2/d8c/ce5 0
2026-03-09T15:01:45.795 INFO:tasks.workunit.client.0.vm05.stdout:6/626: fsync da/d17/d3b/f85 0
2026-03-09T15:01:45.799 INFO:tasks.workunit.client.0.vm05.stdout:4/691: creat d2/d4/d8/d4a/d8f/dcd/dcb/fe3 x:0 0 0
2026-03-09T15:01:45.800 INFO:tasks.workunit.client.0.vm05.stdout:4/692: fdatasync d2/d4/d7/dc/da8/fab 0
2026-03-09T15:01:45.801 INFO:tasks.workunit.client.0.vm05.stdout:4/693: chown d2/d4/d1e/da2 97229 1
2026-03-09T15:01:45.801 INFO:tasks.workunit.client.0.vm05.stdout:5/731: dread d1/d4/d27/f4f [0,4194304] 0
2026-03-09T15:01:45.810 INFO:tasks.workunit.client.0.vm05.stdout:9/730: creat d2/d4e/d56/d53/d64/ded/d9c/df0/f103 x:0 0 0
2026-03-09T15:01:45.814 INFO:tasks.workunit.client.0.vm05.stdout:7/680: getdents d1/d22/d3c 0
2026-03-09T15:01:45.814 INFO:tasks.workunit.client.0.vm05.stdout:0/630: dread - d9/de/d12/d15/d2e/f88 zero size
2026-03-09T15:01:45.835 INFO:tasks.workunit.client.0.vm05.stdout:2/711: mknod da/d29/d64/dc1/cdb 0
2026-03-09T15:01:45.851 INFO:tasks.workunit.client.0.vm05.stdout:6/627: mkdir da/d17/d7c/dc6 0
2026-03-09T15:01:45.851 INFO:tasks.workunit.client.0.vm05.stdout:4/694: unlink d2/d4/d8/d4a/d6e/f8d 0
2026-03-09T15:01:45.851 INFO:tasks.workunit.client.0.vm05.stdout:6/628: chown da/f82 21864692 1
2026-03-09T15:01:45.852 INFO:tasks.workunit.client.0.vm05.stdout:4/695: readlink d2/ld2 0
2026-03-09T15:01:45.852 INFO:tasks.workunit.client.0.vm05.stdout:4/696: chown d2/d4/d8/d4a/d94/la6 2059634961 1
2026-03-09T15:01:45.861 INFO:tasks.workunit.client.0.vm05.stdout:1/677: symlink d9/d2f/d83/le5 0
2026-03-09T15:01:45.863 INFO:tasks.workunit.client.0.vm05.stdout:2/712: mknod da/d29/d64/dc1/cdc 0
2026-03-09T15:01:45.865 INFO:tasks.workunit.client.0.vm05.stdout:2/713: dread da/d29/d3f/dc3/f9a [0,4194304] 0
2026-03-09T15:01:45.875 INFO:tasks.workunit.client.0.vm05.stdout:9/731: dread d2/d10/d22/d2c/fbd [0,4194304] 0
2026-03-09T15:01:45.895 INFO:tasks.workunit.client.0.vm05.stdout:6/629: truncate da/d43/f46 1058652 0
2026-03-09T15:01:45.900 INFO:tasks.workunit.client.0.vm05.stdout:8/703: link d0/d1/d12/c6c d0/d1/d12/d1b/d95/d42/da1/ce5 0
2026-03-09T15:01:45.902 INFO:tasks.workunit.client.0.vm05.stdout:7/681: dwrite d1/d9/d23/d31/d32/f38 [0,4194304] 0
2026-03-09T15:01:45.904 INFO:tasks.workunit.client.0.vm05.stdout:0/631: symlink d9/de/d12/d15/d49/lc7 0
2026-03-09T15:01:45.908 INFO:tasks.workunit.client.0.vm05.stdout:1/678: unlink d9/d2f/d37/d5a/fa0 0
2026-03-09T15:01:45.908 INFO:tasks.workunit.client.0.vm05.stdout:0/632: chown d9/de/d12/d15/d49 19897100 1
2026-03-09T15:01:45.912 INFO:tasks.workunit.client.0.vm05.stdout:2/714: creat da/d16/fdd x:0 0 0
2026-03-09T15:01:45.915 INFO:tasks.workunit.client.0.vm05.stdout:9/732: creat d2/d9e/f104 x:0 0 0
2026-03-09T15:01:45.917 INFO:tasks.workunit.client.0.vm05.stdout:3/671: truncate d3/df/d10/d7c/f94 38211 0
2026-03-09T15:01:45.919 INFO:tasks.workunit.client.0.vm05.stdout:6/630: mknod da/d43/d7b/db3/cc7 0
2026-03-09T15:01:45.929 INFO:tasks.workunit.client.0.vm05.stdout:5/732: creat d1/d4/d27/ff6 x:0 0 0
2026-03-09T15:01:45.929 INFO:tasks.workunit.client.0.vm05.stdout:5/733: chown d1/d4/d34/d35/d3d/f37 1588013048 1
2026-03-09T15:01:45.930 INFO:tasks.workunit.client.0.vm05.stdout:5/734: read d1/d4/d34/d56/f6d [670723,55158] 0
2026-03-09T15:01:45.935 INFO:tasks.workunit.client.0.vm05.stdout:4/697: symlink d2/d43/dd6/le4 0
2026-03-09T15:01:45.940 INFO:tasks.workunit.client.0.vm05.stdout:8/704: creat d0/d1/d12/d1b/d66/dcc/fe6 x:0 0 0
2026-03-09T15:01:45.941 INFO:tasks.workunit.client.0.vm05.stdout:7/682: mkdir d1/d9/d23/d31/d32/d78/ddd 0
2026-03-09T15:01:45.942 INFO:tasks.workunit.client.0.vm05.stdout:7/683: chown d1/c71 0 1
2026-03-09T15:01:45.946 INFO:tasks.workunit.client.0.vm05.stdout:1/679: sync
2026-03-09T15:01:45.946 INFO:tasks.workunit.client.0.vm05.stdout:4/698: sync
2026-03-09T15:01:45.958 INFO:tasks.workunit.client.0.vm05.stdout:9/733: rmdir d2/d4e/d56/d53/d64 39
2026-03-09T15:01:45.959 INFO:tasks.workunit.client.0.vm05.stdout:9/734: write d2/d10/d22/fb6 [1580593,119458] 0
2026-03-09T15:01:45.963 INFO:tasks.workunit.client.0.vm05.stdout:4/699: dwrite d2/d4/d8/d4a/d8f/dcd/dcb/fe3 [0,4194304] 0
2026-03-09T15:01:45.984 INFO:tasks.workunit.client.0.vm05.stdout:3/672: symlink d3/df/d10/d19/d44/dd2/le6 0
2026-03-09T15:01:45.985 INFO:tasks.workunit.client.0.vm05.stdout:6/631: creat da/d43/d7b/fc8 x:0 0 0
2026-03-09T15:01:45.994 INFO:tasks.workunit.client.0.vm05.stdout:1/680: dread d9/d2f/d37/d5f/f80 [0,4194304] 0
2026-03-09T15:01:45.994 INFO:tasks.workunit.client.0.vm05.stdout:1/681: chown c3 592108110 1
2026-03-09T15:01:46.019 INFO:tasks.workunit.client.0.vm05.stdout:8/705: write d0/d1/d12/d1b/d95/d42/d60/f8f [373318,35497] 0
2026-03-09T15:01:46.022 INFO:tasks.workunit.client.0.vm05.stdout:8/706: write d0/d1/d12/d3c/f77 [5100396,79805] 0
2026-03-09T15:01:46.027 INFO:tasks.workunit.client.0.vm05.stdout:0/633: symlink d9/de/d12/d8a/dc3/lc8 0
2026-03-09T15:01:46.041 INFO:tasks.workunit.client.0.vm05.stdout:2/715: dwrite da/d16/d46/f92 [0,4194304] 0
2026-03-09T15:01:46.041 INFO:tasks.workunit.client.0.vm05.stdout:3/673: chown d3/df/d10/d19/c8d 0 1
2026-03-09T15:01:46.046 INFO:tasks.workunit.client.0.vm05.stdout:2/716: read - da/d29/d6a/da0/d91/dab/d2f/f93 zero size
2026-03-09T15:01:46.066 INFO:tasks.workunit.client.0.vm05.stdout:4/700: write d2/d1d/f36 [3350507,48528] 0
2026-03-09T15:01:46.070 INFO:tasks.workunit.client.0.vm05.stdout:1/682: dwrite d9/d2f/d83/d98/f6e [0,4194304] 0
2026-03-09T15:01:46.077 INFO:tasks.workunit.client.0.vm05.stdout:7/684: mknod d1/d9/d23/d31/d8f/d93/dbd/dd1/cde 0
2026-03-09T15:01:46.090 INFO:tasks.workunit.client.0.vm05.stdout:0/634: mkdir d9/de/d25/d38/d78/dc9 0
2026-03-09T15:01:46.102 INFO:tasks.workunit.client.0.vm05.stdout:6/632: getdents da/d9e 0
2026-03-09T15:01:46.102 INFO:tasks.workunit.client.0.vm05.stdout:5/735: creat d1/d4/d34/d35/ff7 x:0 0 0
2026-03-09T15:01:46.129 INFO:tasks.workunit.client.0.vm05.stdout:9/735: truncate d2/d10/d22/d47/fc7 303158 0
2026-03-09T15:01:46.134 INFO:tasks.workunit.client.0.vm05.stdout:7/685: mknod d1/d9/d23/d31/d8f/d93/d95/cdf 0
2026-03-09T15:01:46.134 INFO:tasks.workunit.client.0.vm05.stdout:7/686: readlink d1/lcb 0
2026-03-09T15:01:46.140 INFO:tasks.workunit.client.0.vm05.stdout:8/707: symlink d0/d1/d12/d1b/d95/d42/le7 0
2026-03-09T15:01:46.140 INFO:tasks.workunit.client.0.vm05.stdout:0/635: symlink d9/de/d12/d15/d49/lca 0
2026-03-09T15:01:46.171 INFO:tasks.workunit.client.0.vm05.stdout:2/717: fsync da/d29/d6a/da0/d91/dab/d2f/db3/fcb 0
2026-03-09T15:01:46.193 INFO:tasks.workunit.client.0.vm05.stdout:3/674: dwrite d3/df/d10/d19/dce/dc8/de2/f9d [0,4194304] 0
2026-03-09T15:01:46.195 INFO:tasks.workunit.client.0.vm05.stdout:3/675: chown d3/df/d10/d19/dce/dc8/de2/d8c/d90 3783979 1
2026-03-09T15:01:46.215 INFO:tasks.workunit.client.0.vm05.stdout:9/736: creat d2/d10/d22/dc1/dc3/f105 x:0 0 0
2026-03-09T15:01:46.215 INFO:tasks.workunit.client.0.vm05.stdout:8/708: creat d0/d1/d12/d1b/d66/fe8 x:0 0 0
2026-03-09T15:01:46.230 INFO:tasks.workunit.client.0.vm05.stdout:5/736: dwrite d1/d4/d34/d35/d3d/f32 [0,4194304] 0
2026-03-09T15:01:46.236 INFO:tasks.workunit.client.0.vm05.stdout:5/737: chown d1/d4/d34/d35/d3d/dde/c7b 25 1
2026-03-09T15:01:46.243 INFO:tasks.workunit.client.0.vm05.stdout:6/633: fsync da/d43/f54 0
2026-03-09T15:01:46.252 INFO:tasks.workunit.client.0.vm05.stdout:6/634: dwrite da/d43/d66/f6e [4194304,4194304] 0
2026-03-09T15:01:46.256 INFO:tasks.workunit.client.0.vm05.stdout:6/635: truncate da/d43/d7b/d89/fa4 1240934 0
2026-03-09T15:01:46.257 INFO:tasks.workunit.client.0.vm05.stdout:0/636: dwrite d9/de/d12/d15/d2e/d32/f7d [0,4194304] 0
2026-03-09T15:01:46.276 INFO:tasks.workunit.client.0.vm05.stdout:4/701: link d2/d4/d7/d48/d6b/lbc d2/d4/d7/d48/d6b/le5 0
2026-03-09T15:01:46.277 INFO:tasks.workunit.client.0.vm05.stdout:1/683: creat d9/d2f/d83/d98/d59/fe6 x:0 0 0
2026-03-09T15:01:46.277 INFO:tasks.workunit.client.0.vm05.stdout:9/737: unlink d2/d8b/cd5 0
2026-03-09T15:01:46.289 INFO:tasks.workunit.client.0.vm05.stdout:2/718: dread da/d29/d6a/f81 [0,4194304] 0
2026-03-09T15:01:46.293 INFO:tasks.workunit.client.0.vm05.stdout:5/738: creat d1/d4/d27/d75/ff8 x:0 0 0
2026-03-09T15:01:46.308 INFO:tasks.workunit.client.0.vm05.stdout:8/709: dread d0/f4 [0,4194304] 0
2026-03-09T15:01:46.309 INFO:tasks.workunit.client.0.vm05.stdout:0/637: creat d9/de/d25/d38/d78/fcb x:0 0 0
2026-03-09T15:01:46.313 INFO:tasks.workunit.client.0.vm05.stdout:3/676: getdents d3/df/dbe 0
2026-03-09T15:01:46.313 INFO:tasks.workunit.client.0.vm05.stdout:3/677: readlink d3/lc 0
2026-03-09T15:01:46.319 INFO:tasks.workunit.client.0.vm05.stdout:7/687: link d1/d9/d23/d31/d32/c36 d1/d9/d23/d31/d8f/d93/dbd/ce0 0
2026-03-09T15:01:46.320 INFO:tasks.workunit.client.0.vm05.stdout:7/688: fsync d1/d9/d23/d31/d51/f29 0
2026-03-09T15:01:46.326 INFO:tasks.workunit.client.0.vm05.stdout:5/739: creat d1/d5d/ff9 x:0 0 0
2026-03-09T15:01:46.339 INFO:tasks.workunit.client.0.vm05.stdout:6/636: write da/d43/d7b/f9f [465092,76210] 0
2026-03-09T15:01:46.341 INFO:tasks.workunit.client.0.vm05.stdout:1/684: write d9/d2f/d55/f5e [86518,120430] 0
2026-03-09T15:01:46.345 INFO:tasks.workunit.client.0.vm05.stdout:1/685: fdatasync d9/d2f/d83/d98/d59/d49/f82 0
2026-03-09T15:01:46.351 INFO:tasks.workunit.client.0.vm05.stdout:2/719: dwrite da/d29/d6a/db1/fba [0,4194304] 0
2026-03-09T15:01:46.361 INFO:tasks.workunit.client.0.vm05.stdout:8/710: write d0/d1/d12/d1b/d95/d78/db5/fbb [569237,73135] 0
2026-03-09T15:01:46.384 INFO:tasks.workunit.client.0.vm05.stdout:5/740: dwrite d1/d4/d27/d75/d9c/fc4 [0,4194304] 0
2026-03-09T15:01:46.390 INFO:tasks.workunit.client.0.vm05.stdout:0/638: truncate d9/de/f3d 2929398 0
2026-03-09T15:01:46.399 INFO:tasks.workunit.client.0.vm05.stdout:9/738: creat d2/d10/d22/d2c/d69/f106 x:0 0 0
2026-03-09T15:01:46.408 INFO:tasks.workunit.client.0.vm05.stdout:4/702: creat d2/d4/d7/d48/fe6 x:0 0 0
2026-03-09T15:01:46.409 INFO:tasks.workunit.client.0.vm05.stdout:0/639: dwrite d9/d64/f96 [0,4194304] 0
2026-03-09T15:01:46.427 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:46 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz'
2026-03-09T15:01:46.427 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:46 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz'
2026-03-09T15:01:46.427 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:46 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch
2026-03-09T15:01:46.427 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:46 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch
2026-03-09T15:01:46.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:46 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz'
2026-03-09T15:01:46.428 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:46 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz'
2026-03-09T15:01:46.439 INFO:tasks.workunit.client.0.vm05.stdout:8/711: truncate d0/d1/d12/d1b/d95/d42/d60/da7/db3/fba 46729 0
2026-03-09T15:01:46.439 INFO:tasks.workunit.client.0.vm05.stdout:2/720: write da/d29/f39 [682273,12578] 0
2026-03-09T15:01:46.439 INFO:tasks.workunit.client.0.vm05.stdout:2/721: chown da/d29/d3f/f5b 104081172 1
2026-03-09T15:01:46.448 INFO:tasks.workunit.client.0.vm05.stdout:7/689: creat d1/d9/d23/d31/d32/d78/d7e/d81/dab/fe1 x:0 0 0
2026-03-09T15:01:46.449 INFO:tasks.workunit.client.0.vm05.stdout:7/690: chown d1/d12/f56 252335567 1
2026-03-09T15:01:46.449 INFO:tasks.workunit.client.0.vm05.stdout:7/691: chown d1/d9/d23/d31/d8f/d93/dbd 1 1
2026-03-09T15:01:46.449 INFO:tasks.workunit.client.0.vm05.stdout:7/692: fsync d1/d22/d3c/fba 0
2026-03-09T15:01:46.461 INFO:tasks.workunit.client.0.vm05.stdout:5/741: stat d1/d4/d34/d35/dd0/cef 0
2026-03-09T15:01:46.484 INFO:tasks.workunit.client.0.vm05.stdout:9/739: symlink d2/da9/l107 0
2026-03-09T15:01:46.487 INFO:tasks.workunit.client.0.vm05.stdout:1/686: rename d9/l16 to d9/d2f/d83/d98/d59/d49/d78/d94/le7 0
2026-03-09T15:01:46.487 INFO:tasks.workunit.client.0.vm05.stdout:1/687: readlink d9/d17/l1c 0
2026-03-09T15:01:46.508 INFO:tasks.workunit.client.0.vm05.stdout:7/693: rmdir d1/d9/d23/d31/d51 39
2026-03-09T15:01:46.517 INFO:tasks.workunit.client.0.vm05.stdout:2/722: write da/d29/f7d [4965049,18677] 0
2026-03-09T15:01:46.520 INFO:tasks.workunit.client.0.vm05.stdout:5/742: mkdir d1/d4/d34/d6c/dfa 0
2026-03-09T15:01:46.520 INFO:tasks.workunit.client.0.vm05.stdout:5/743: readlink d1/d4/d27/l8e 0
2026-03-09T15:01:46.522 INFO:tasks.workunit.client.0.vm05.stdout:3/678: getdents d3/df/d59/d79 0
2026-03-09T15:01:46.526 INFO:tasks.workunit.client.0.vm05.stdout:6/637: link da/d19/f22 da/d43/d7b/da9/fc9 0
2026-03-09T15:01:46.527 INFO:tasks.workunit.client.0.vm05.stdout:6/638: chown da/d17/d95/da2/dae/fc3 250064 1
2026-03-09T15:01:46.537 INFO:tasks.workunit.client.0.vm05.stdout:9/740: rename d2/d10/d22/d2c/d69/la1 to d2/d10/d22/d9f/l108 0
2026-03-09T15:01:46.550 INFO:tasks.workunit.client.0.vm05.stdout:5/744: sync
2026-03-09T15:01:46.555 INFO:tasks.workunit.client.0.vm05.stdout:5/745: dread d1/d4/d34/d35/d3d/d38/f8a [0,4194304] 0
2026-03-09T15:01:46.555 INFO:tasks.workunit.client.0.vm05.stdout:5/746: chown d1/db5/lcb 1045 1
2026-03-09T15:01:46.560 INFO:tasks.workunit.client.0.vm05.stdout:5/747: sync
2026-03-09T15:01:46.561 INFO:tasks.workunit.client.0.vm05.stdout:0/640: mknod d9/d59/d93/ccc 0
2026-03-09T15:01:46.564 INFO:tasks.workunit.client.0.vm05.stdout:8/712: getdents d0/d2a 0
2026-03-09T15:01:46.573 INFO:tasks.workunit.client.0.vm05.stdout:1/688: dwrite d9/d2f/d83/d98/d59/d49/d4b/f8e [0,4194304] 0
2026-03-09T15:01:46.577 INFO:tasks.workunit.client.0.vm05.stdout:7/694: readlink d1/d9/d72/d97/lda 0
2026-03-09T15:01:46.579 INFO:tasks.workunit.client.0.vm05.stdout:2/723: fsync da/d29/d6a/da0/d7c/f80 0
2026-03-09T15:01:46.579 INFO:tasks.workunit.client.0.vm05.stdout:1/689: write d9/d2f/d83/d98/d59/fc4 [1952387,126271] 0
2026-03-09T15:01:46.582 INFO:tasks.workunit.client.0.vm05.stdout:6/639: read da/d43/f72 [8739,63026] 0
2026-03-09T15:01:46.594 INFO:tasks.workunit.client.0.vm05.stdout:4/703: link d2/d4/d50/lb2 d2/d4/d1e/da2/dc5/le7 0
2026-03-09T15:01:46.602 INFO:tasks.workunit.client.0.vm05.stdout:5/748: unlink d1/d4/d34/d35/d3d/d38/d69/ceb 0
2026-03-09T15:01:46.603 INFO:tasks.workunit.client.0.vm05.stdout:3/679: write d3/df/d1e/d2f/d52/f95 [4544247,42972] 0
2026-03-09T15:01:46.616 INFO:tasks.workunit.client.0.vm05.stdout:8/713: write d0/f3b [1267549,23793] 0
2026-03-09T15:01:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:46 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz'
2026-03-09T15:01:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:46 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz'
2026-03-09T15:01:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:46 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch
2026-03-09T15:01:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:46 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch
2026-03-09T15:01:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:46 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz'
2026-03-09T15:01:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:46 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz'
2026-03-09T15:01:46.627 INFO:tasks.workunit.client.0.vm05.stdout:7/695: write d1/d9/f59 [4273925,58067] 0
2026-03-09T15:01:46.639 INFO:tasks.workunit.client.0.vm05.stdout:2/724: rmdir da/d29/d64/dc1 39
2026-03-09T15:01:46.639 INFO:tasks.workunit.client.0.vm05.stdout:1/690: unlink d9/d17/l1c 0
2026-03-09T15:01:46.654 INFO:tasks.workunit.client.0.vm05.stdout:4/704: creat d2/d4/d8/d4a/d8f/fe8 x:0 0 0
2026-03-09T15:01:46.657 INFO:tasks.workunit.client.0.vm05.stdout:6/640: dwrite da/d17/d3b/f85 [0,4194304] 0
2026-03-09T15:01:46.657 INFO:tasks.workunit.client.0.vm05.stdout:4/705: dread d2/d43/f47 [4194304,4194304] 0
2026-03-09T15:01:46.659 INFO:tasks.workunit.client.0.vm05.stdout:6/641: chown da/d19/f5b 80 1
2026-03-09T15:01:46.659 INFO:tasks.workunit.client.0.vm05.stdout:4/706: write d2/d4/d7/dc/da8/fab [4816466,78384] 0
2026-03-09T15:01:46.661 INFO:tasks.workunit.client.0.vm05.stdout:9/741: rename d2/d4e/d56/d53/d64/ded/d9c/f6f to d2/d10/d22/d2c/d3c/f109 0
2026-03-09T15:01:46.736 INFO:tasks.workunit.client.0.vm05.stdout:3/680: write d3/df/d1e/d2f/fbf [242854,39566] 0
2026-03-09T15:01:46.742 INFO:tasks.workunit.client.0.vm05.stdout:5/749: dwrite d1/d4/d19/d93/dcc/d91/f9f [0,4194304] 0
2026-03-09T15:01:46.745 INFO:tasks.workunit.client.0.vm05.stdout:0/641: unlink d9/de/d12/d15/d2e/f3a 0
2026-03-09T15:01:46.745 INFO:tasks.workunit.client.0.vm05.stdout:0/642: stat d9/de/l4e 0
2026-03-09T15:01:46.748 INFO:tasks.workunit.client.0.vm05.stdout:8/714: truncate d0/d1/d12/d1b/d6e/d93/d9f/fbf 346204 0
2026-03-09T15:01:46.748 INFO:tasks.workunit.client.0.vm05.stdout:0/643: read d9/de/d12/da3/fb9 [510495,9213] 0
2026-03-09T15:01:46.756 INFO:tasks.workunit.client.0.vm05.stdout:7/696: creat d1/d22/d3c/fe2 x:0 0 0
2026-03-09T15:01:46.756 INFO:tasks.workunit.client.0.vm05.stdout:7/697: chown d1/d9/d23/d31/d32/f90 32542391 1
2026-03-09T15:01:46.763 INFO:tasks.workunit.client.0.vm05.stdout:2/725: unlink da/d29/d64/c66 0
2026-03-09T15:01:46.763 INFO:tasks.workunit.client.0.vm05.stdout:2/726: stat da/d29/d6a/da0/d91/dab/d9c 0
2026-03-09T15:01:46.765 INFO:tasks.workunit.client.0.vm05.stdout:1/691: creat d9/d2f/d37/d5a/da9/dc9/fe8 x:0 0 0
2026-03-09T15:01:46.780 INFO:tasks.workunit.client.0.vm05.stdout:6/642: write da/f3d [3065188,36820] 0
2026-03-09T15:01:46.780 INFO:tasks.workunit.client.0.vm05.stdout:6/643: stat da/d43/d66/c77 0
2026-03-09T15:01:46.795 INFO:tasks.workunit.client.0.vm05.stdout:3/681: fdatasync d3/df/d10/d19/d44/d50/fc4 0
2026-03-09T15:01:46.798 INFO:tasks.workunit.client.0.vm05.stdout:4/707: dread d2/d43/f4f [0,4194304] 0
2026-03-09T15:01:46.816 INFO:tasks.workunit.client.0.vm05.stdout:5/750: dread - d1/d4/d34/d6c/faf zero size
2026-03-09T15:01:46.817 INFO:tasks.workunit.client.0.vm05.stdout:5/751: chown d1/d4/d34/d35/d4e/d6f/d7e/l87 4825837 1
2026-03-09T15:01:46.824 INFO:tasks.workunit.client.0.vm05.stdout:0/644: fsync d9/de/d25/d38/f2f 0
2026-03-09T15:01:46.841 INFO:tasks.workunit.client.0.vm05.stdout:2/727: dwrite da/d29/d6a/da0/d91/dab/d2f/f93 [0,4194304] 0
2026-03-09T15:01:46.844 INFO:tasks.workunit.client.0.vm05.stdout:6/644: mknod da/d43/d7b/cca 0
2026-03-09T15:01:46.844 INFO:tasks.workunit.client.0.vm05.stdout:9/742: mknod d2/d8b/de3/c10a 0
2026-03-09T15:01:46.844 INFO:tasks.workunit.client.0.vm05.stdout:9/743: chown d2/d4e/c96 10868 1
2026-03-09T15:01:46.844 INFO:tasks.workunit.client.0.vm05.stdout:2/728: chown da/d29/d6a/db1/db7 19 1
2026-03-09T15:01:46.849 INFO:tasks.workunit.client.0.vm05.stdout:4/708: creat d2/d4/d1e/d71/fe9 x:0 0 0
2026-03-09T15:01:46.850 INFO:tasks.workunit.client.0.vm05.stdout:5/752: symlink d1/d4/d34/d35/d4e/dc8/lfb 0
2026-03-09T15:01:46.864 INFO:tasks.workunit.client.0.vm05.stdout:8/715: getdents d0/d1/d12/d1b/d66/dcc/dd4/ddc 0
2026-03-09T15:01:46.881 INFO:tasks.workunit.client.0.vm05.stdout:2/729: read da/f10 [1002262,127167] 0
2026-03-09T15:01:46.884 INFO:tasks.workunit.client.0.vm05.stdout:6/645: symlink da/d43/d7b/d89/lcb 0
2026-03-09T15:01:46.884 INFO:tasks.workunit.client.0.vm05.stdout:6/646: chown da/fe 167759 1
2026-03-09T15:01:46.889 INFO:tasks.workunit.client.0.vm05.stdout:8/716: sync
2026-03-09T15:01:46.892 INFO:tasks.workunit.client.0.vm05.stdout:8/717: dread - d0/d1/d12/d1b/d66/dcc/fe6 zero size
2026-03-09T15:01:46.893 INFO:tasks.workunit.client.0.vm05.stdout:2/730: dwrite da/d29/d6a/da0/d91/dab/d2f/d35/f57 [0,4194304] 0
2026-03-09T15:01:46.895 INFO:tasks.workunit.client.0.vm05.stdout:8/718: chown d0/d1/d12/d1b/d95/d42/f4e 6174 1
2026-03-09T15:01:46.895 INFO:tasks.workunit.client.0.vm05.stdout:5/753: fdatasync d1/d4/d34/d35/d3d/d38/f6e 0
2026-03-09T15:01:46.895 INFO:tasks.workunit.client.0.vm05.stdout:6/647: dwrite da/d17/d95/fc1 [0,4194304] 0
2026-03-09T15:01:46.927 INFO:tasks.workunit.client.0.vm05.stdout:0/645: truncate d9/de/d12/d15/d2e/d32/d53/f68 2806764 0
2026-03-09T15:01:46.929 INFO:tasks.workunit.client.0.vm05.stdout:7/698: creat d1/d9/fe3 x:0 0 0
2026-03-09T15:01:46.944 INFO:tasks.workunit.client.0.vm05.stdout:1/692: mknod d9/d2f/d83/d98/d59/d49/d4b/ce9 0
2026-03-09T15:01:46.946 INFO:tasks.workunit.client.0.vm05.stdout:3/682: link d3/df/d59/lcd d3/df/d1e/daf/le7 0
2026-03-09T15:01:46.953 INFO:tasks.workunit.client.0.vm05.stdout:9/744: symlink d2/d10/d22/d2c/d3c/d101/l10b 0
2026-03-09T15:01:46.956 INFO:tasks.workunit.client.0.vm05.stdout:4/709: mknod d2/d4/d1e/da2/dc5/cea 0
2026-03-09T15:01:46.972 INFO:tasks.workunit.client.0.vm05.stdout:2/731: write da/d29/d6a/f71 [2980445,95498] 0
2026-03-09T15:01:46.986 INFO:tasks.workunit.client.0.vm05.stdout:0/646: rename d9/d59/la8 to d9/d64/dbd/lcd 0
2026-03-09T15:01:46.992 INFO:tasks.workunit.client.0.vm05.stdout:3/683: symlink d3/df/d10/d19/dce/le8 0
2026-03-09T15:01:46.996 INFO:tasks.workunit.client.0.vm05.stdout:1/693: write f5 [6541296,13629] 0
2026-03-09T15:01:47.011 INFO:tasks.workunit.client.0.vm05.stdout:8/719: mknod d0/d1/d12/d1b/ce9 0
2026-03-09T15:01:47.015 INFO:tasks.workunit.client.0.vm05.stdout:6/648: symlink da/d9e/lcc 0
2026-03-09T15:01:47.018 INFO:tasks.workunit.client.0.vm05.stdout:2/732: symlink da/d29/d6a/da0/d91/dab/d2f/d35/db0/dc9/lde 0
2026-03-09T15:01:47.022 INFO:tasks.workunit.client.0.vm05.stdout:5/754: mkdir d1/d4/d34/d56/da6/dea/dfc 0
2026-03-09T15:01:47.031 INFO:tasks.workunit.client.0.vm05.stdout:1/694: fsync d9/d2f/d55/f64 0
2026-03-09T15:01:47.034 INFO:tasks.workunit.client.0.vm05.stdout:9/745: mknod d2/d10/d22/dc1/c10c 0
2026-03-09T15:01:47.040 INFO:tasks.workunit.client.0.vm05.stdout:4/710: fsync d2/d4/d8/d4a/d8f/dcd/f61 0
2026-03-09T15:01:47.053 INFO:tasks.workunit.client.0.vm05.stdout:5/755: symlink d1/d4/d34/d35/d3d/d38/d63/lfd 0
2026-03-09T15:01:47.055 INFO:tasks.workunit.client.0.vm05.stdout:2/733: dwrite da/d29/d64/da6/fb4 [0,4194304] 0
2026-03-09T15:01:47.079 INFO:tasks.workunit.client.0.vm05.stdout:4/711: rmdir d2/d4/d8/d4a 39
2026-03-09T15:01:47.080 INFO:tasks.workunit.client.0.vm05.stdout:1/695: write d9/d2f/d83/d98/f67 [5208837,57637] 0
2026-03-09T15:01:47.091 INFO:tasks.workunit.client.0.vm05.stdout:8/720: write d0/d1/d12/d1b/d95/f3e [860497,104369] 0
2026-03-09T15:01:47.095 INFO:tasks.workunit.client.0.vm05.stdout:5/756: unlink d1/d4/d27/d75/d9c/lae 0
2026-03-09T15:01:47.141 INFO:tasks.workunit.client.0.vm05.stdout:7/699: rename d1/d9/d23/d31/d32/d78/d7e/d81/dab to d1/de4 0
2026-03-09T15:01:47.144 INFO:tasks.workunit.client.0.vm05.stdout:7/700: dread d1/d9/d23/d31/d32/f38 [0,4194304] 0
2026-03-09T15:01:47.150 INFO:tasks.workunit.client.0.vm05.stdout:0/647: link d9/de/d12/d15/d2e/d32/d53/d61/l63 d9/de/d25/lce 0
2026-03-09T15:01:47.156 INFO:tasks.workunit.client.0.vm05.stdout:1/696: rmdir d9 39
2026-03-09T15:01:47.169 INFO:tasks.workunit.client.0.vm05.stdout:3/684: rename d3/f42 to d3/d29/d2d/d77/d4d/fe9 0
2026-03-09T15:01:47.171 INFO:tasks.workunit.client.0.vm05.stdout:8/721: write d0/d7/f14 [1691384,76419] 0
2026-03-09T15:01:47.175 INFO:tasks.workunit.client.0.vm05.stdout:5/757: dwrite d1/d4/d19/d93/dcc/d91/fdd [0,4194304] 0
2026-03-09T15:01:47.185 INFO:tasks.workunit.client.0.vm05.stdout:5/758: dwrite d1/d4/d34/d56/d68/fe9 [0,4194304] 0
2026-03-09T15:01:47.214 INFO:tasks.workunit.client.0.vm05.stdout:0/648: rmdir d9/de/d12/d8a/dc3 39
2026-03-09T15:01:47.214 INFO:tasks.workunit.client.0.vm05.stdout:0/649: dread - d9/de/d12/da3/fb2 zero size
2026-03-09T15:01:47.219 INFO:tasks.workunit.client.0.vm05.stdout:4/712: creat d2/d4/d8/d4a/d8f/dcd/feb x:0 0 0
2026-03-09T15:01:47.219 INFO:tasks.workunit.client.0.vm05.stdout:4/713: fsync d2/d43/fa0 0
2026-03-09T15:01:47.220 INFO:tasks.workunit.client.0.vm05.stdout:4/714: readlink d2/d1d/da5/lb3 0
2026-03-09T15:01:47.227 INFO:tasks.workunit.client.0.vm05.stdout:7/701: dwrite d1/fb0 [0,4194304] 0
2026-03-09T15:01:47.241 INFO:tasks.workunit.client.0.vm05.stdout:6/649: getdents da/d17/d3b 0
2026-03-09T15:01:47.256 INFO:tasks.workunit.client.0.vm05.stdout:2/734: creat da/d16/fdf x:0 0 0
2026-03-09T15:01:47.258 INFO:tasks.workunit.client.0.vm05.stdout:9/746: rename d2/f6 to d2/da9/f10d 0
2026-03-09T15:01:47.266 INFO:tasks.workunit.client.0.vm05.stdout:4/715: rmdir d2/d4/d7/d48 39
2026-03-09T15:01:47.268 INFO:tasks.workunit.client.0.vm05.stdout:1/697: unlink d9/d2f/d83/d98/d59/f42 0
2026-03-09T15:01:47.271 INFO:tasks.workunit.client.0.vm05.stdout:2/735: truncate da/d29/d3f/fd0 4837847 0
2026-03-09T15:01:47.272 INFO:tasks.workunit.client.0.vm05.stdout:9/747: fdatasync d2/d4e/f6a 0
2026-03-09T15:01:47.279 INFO:tasks.workunit.client.0.vm05.stdout:3/685: rename d3/df/d10/d19/d44/d50/fc4 to d3/fea 0
2026-03-09T15:01:47.286 INFO:tasks.workunit.client.0.vm05.stdout:8/722: dwrite d0/d1/d12/d1b/d95/f48 [0,4194304] 0
2026-03-09T15:01:47.293 INFO:tasks.workunit.client.0.vm05.stdout:2/736: dread da/dd/f6f [0,4194304] 0
2026-03-09T15:01:47.305 INFO:tasks.workunit.client.0.vm05.stdout:1/698: chown d9/d2f/d83/d98/fa4 0 1
2026-03-09T15:01:47.306 INFO:tasks.workunit.client.0.vm05.stdout:7/702: mknod d1/d9/d23/d31/d32/ddc/ce5 0
2026-03-09T15:01:47.309 INFO:tasks.workunit.client.0.vm05.stdout:6/650: mknod da/d17/d7c/dc6/ccd 0
2026-03-09T15:01:47.316 INFO:tasks.workunit.client.0.vm05.stdout:3/686: creat d3/feb x:0 0 0
2026-03-09T15:01:47.320 INFO:tasks.workunit.client.0.vm05.stdout:0/650: rename d9/d64 to d9/de/d25/dcf 0
2026-03-09T15:01:47.322 INFO:tasks.workunit.client.0.vm05.stdout:9/748: write d2/d10/d22/d47/f7b [3128676,15561] 0
2026-03-09T15:01:47.335 INFO:tasks.workunit.client.0.vm05.stdout:3/687: dread d3/df/d10/d19/dce/dc8/de2/f4c [0,4194304] 0
2026-03-09T15:01:47.336 INFO:tasks.workunit.client.0.vm05.stdout:8/723: dread - d0/dc/f7e zero size
2026-03-09T15:01:47.337 INFO:tasks.workunit.client.0.vm05.stdout:8/724: readlink d0/d1/d12/d1b/d95/l58 0
2026-03-09T15:01:47.342 INFO:tasks.workunit.client.0.vm05.stdout:5/759: rmdir d1/d4/d27/dd4 0
2026-03-09T15:01:47.346 INFO:tasks.workunit.client.0.vm05.stdout:2/737: rename da/l6c to da/d29/d6a/db1/le0 0
2026-03-09T15:01:47.362 INFO:tasks.workunit.client.0.vm05.stdout:9/749: rename d2/d4e/d56/d53/d64/ded/d9c/d8e/dcb/fe0 to d2/d4e/d56/d53/d64/ded/d9c/d8e/dcb/f10e 0
2026-03-09T15:01:47.363 INFO:tasks.workunit.client.0.vm05.stdout:5/760: dwrite d1/d4/d19/d93/dcc/d91/fb1 [0,4194304] 0
2026-03-09T15:01:47.365 INFO:tasks.workunit.client.0.vm05.stdout:5/761: fsync d1/d4/d34/d56/d68/fe9 0
2026-03-09T15:01:47.378 INFO:tasks.workunit.client.0.vm05.stdout:4/716: dwrite d2/d4/d7/d48/f5a [4194304,4194304] 0
2026-03-09T15:01:47.393 INFO:tasks.workunit.client.0.vm05.stdout:3/688: chown d3/d29/d2d/d77/d4d/fe9 24867812 1
2026-03-09T15:01:47.397 INFO:tasks.workunit.client.0.vm05.stdout:0/651: dwrite d9/f2b [4194304,4194304] 0
2026-03-09T15:01:47.408 INFO:tasks.workunit.client.0.vm05.stdout:1/699: dread d9/d2f/f4f [0,4194304] 0
2026-03-09T15:01:47.416 INFO:tasks.workunit.client.0.vm05.stdout:8/725: mkdir d0/d1/d12/d1b/d95/d78/dea 0
2026-03-09T15:01:47.436 INFO:tasks.workunit.client.0.vm05.stdout:3/689: sync
2026-03-09T15:01:47.444 INFO:tasks.workunit.client.0.vm05.stdout:2/738: chown da/d16/f6b 4050139 1
2026-03-09T15:01:47.461 INFO:tasks.workunit.client.0.vm05.stdout:0/652: read d9/de/f3e [6176053,110519] 0
2026-03-09T15:01:47.463 INFO:tasks.workunit.client.0.vm05.stdout:6/651: link da/d43/d66/f6d da/d17/d95/fce 0
2026-03-09T15:01:47.468 INFO:tasks.workunit.client.0.vm05.stdout:5/762: mknod d1/cfe 0
2026-03-09T15:01:47.469 INFO:tasks.workunit.client.0.vm05.stdout:5/763: chown d1/da/l25 1783797 1
2026-03-09T15:01:47.473 INFO:tasks.workunit.client.0.vm05.stdout:7/703: link d1/d9/d23/d54/d7b/lc8 d1/d9/le6 0
2026-03-09T15:01:47.484 INFO:tasks.workunit.client.0.vm05.stdout:6/652: creat da/d17/d7c/fcf x:0 0 0
2026-03-09T15:01:47.484 INFO:tasks.workunit.client.0.vm05.stdout:3/690: dwrite d3/df/d10/d19/dce/dc8/de2/f8e [0,4194304] 0
2026-03-09T15:01:47.487 INFO:tasks.workunit.client.0.vm05.stdout:0/653: write d9/de/d12/d15/d2e/f76 [448057,88042] 0
2026-03-09T15:01:47.493 INFO:tasks.workunit.client.0.vm05.stdout:9/750: creat d2/d4e/f10f x:0 0 0
2026-03-09T15:01:47.493 INFO:tasks.workunit.client.0.vm05.stdout:4/717: dread d2/d4/d7/dc/f8e [0,4194304] 0
2026-03-09T15:01:47.494 INFO:tasks.workunit.client.0.vm05.stdout:4/718: dread - d2/d4/d1e/d71/fe9 zero size
2026-03-09T15:01:47.497 INFO:tasks.workunit.client.0.vm05.stdout:4/719: readlink d2/d1d/da5/lb3 0
2026-03-09T15:01:47.498 INFO:tasks.workunit.client.0.vm05.stdout:1/700: rename d9/d2f/d55/f64 to d9/fea 0
2026-03-09T15:01:47.507 INFO:tasks.workunit.client.0.vm05.stdout:4/720: dwrite d2/d1d/fd0 [0,4194304] 0
2026-03-09T15:01:47.528 INFO:tasks.workunit.client.0.vm05.stdout:7/704: creat d1/de4/fe7 x:0 0 0
2026-03-09T15:01:47.534 INFO:tasks.workunit.client.0.vm05.stdout:3/691: chown d3/df/d59/l7d 1 1
2026-03-09T15:01:47.536 INFO:tasks.workunit.client.0.vm05.stdout:3/692: stat d3/d29/f41 0
2026-03-09T15:01:47.536 INFO:tasks.workunit.client.0.vm05.stdout:3/693: readlink d3/df/l55 0
2026-03-09T15:01:47.551 INFO:tasks.workunit.client.0.vm05.stdout:6/653: dwrite da/f7a [0,4194304] 0
2026-03-09T15:01:47.554 INFO:tasks.workunit.client.0.vm05.stdout:2/739: rmdir da/d29/d6a/da0/d7c/dd8 0
2026-03-09T15:01:47.569 INFO:tasks.workunit.client.0.vm05.stdout:9/751: symlink d2/d4e/d56/d53/d64/ded/d9c/db2/l110 0
2026-03-09T15:01:47.574 INFO:tasks.workunit.client.0.vm05.stdout:8/726: rename d0/d24/d96/cc7 to d0/d1/d12/d1b/d95/d42/d60/d73/dac/ceb 0
2026-03-09T15:01:47.576 INFO:tasks.workunit.client.0.vm05.stdout:1/701: fdatasync d9/d2f/d37/d5a/da9/dc9/dcd/f96 0
2026-03-09T15:01:47.584 INFO:tasks.workunit.client.0.vm05.stdout:4/721: rmdir d2/d43/dd6 39
2026-03-09T15:01:47.593 INFO:tasks.workunit.client.0.vm05.stdout:1/702: dread d9/d2f/d55/f5e [0,4194304] 0
2026-03-09T15:01:47.597 INFO:tasks.workunit.client.0.vm05.stdout:3/694: readlink d3/df/d10/d19/l3e 0
2026-03-09T15:01:47.598 INFO:tasks.workunit.client.0.vm05.stdout:3/695: chown d3/df/d1e/d2f/f9a 0 1
2026-03-09T15:01:47.603 INFO:tasks.workunit.client.0.vm05.stdout:2/740: creat da/d29/d6a/da0/d7c/fe1 x:0 0 0
2026-03-09T15:01:47.608 INFO:tasks.workunit.client.0.vm05.stdout:0/654: creat d9/de/d12/d8a/dc3/fd0 x:0 0 0
2026-03-09T15:01:47.609 INFO:tasks.workunit.client.0.vm05.stdout:8/727: dread - d0/d1/d12/d1b/fbd zero size
2026-03-09T15:01:47.609 INFO:tasks.workunit.client.0.vm05.stdout:4/722: fsync d2/f98 0
2026-03-09T15:01:47.609 INFO:tasks.workunit.client.0.vm05.stdout:7/705: symlink d1/d49/d4a/d77/le8 0
2026-03-09T15:01:47.610 INFO:tasks.workunit.client.0.vm05.stdout:8/728: chown d0/d7/c50 382 1
2026-03-09T15:01:47.610 INFO:tasks.workunit.client.0.vm05.stdout:1/703: symlink d9/d2f/d37/d5a/da9/dc9/dcd/leb 0
2026-03-09T15:01:47.614 INFO:tasks.workunit.client.0.vm05.stdout:3/696: rmdir d3/d29/d2d/d77 39
2026-03-09T15:01:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:47 vm09.local ceph-mon[59673]: pgmap v6: 65 pgs: 65 active+clean; 2.4 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail
2026-03-09T15:01:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:47 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz'
2026-03-09T15:01:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:47 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz'
2026-03-09T15:01:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:47 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-09T15:01:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:47 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-09T15:01:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:47 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:01:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:47 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:01:47.620 INFO:tasks.workunit.client.0.vm05.stdout:6/654: dwrite da/fab [0,4194304] 0
2026-03-09T15:01:47.667 INFO:tasks.workunit.client.0.vm05.stdout:9/752: fdatasync d2/d10/d22/d47/fc7 0
2026-03-09T15:01:47.670 INFO:tasks.workunit.client.0.vm05.stdout:0/655: creat d9/d59/d93/fd1 x:0 0 0
2026-03-09T15:01:47.694 INFO:tasks.workunit.client.0.vm05.stdout:7/706: rename d1/d12/f18 to d1/d12/fe9 0 2026-03-09T15:01:47.695 INFO:tasks.workunit.client.0.vm05.stdout:8/729: mkdir d0/d1/d12/d1b/d95/d42/d60/da7/db3/dec 0 2026-03-09T15:01:47.697 INFO:tasks.workunit.client.0.vm05.stdout:6/655: creat da/d17/d7c/dc6/fd0 x:0 0 0 2026-03-09T15:01:47.697 INFO:tasks.workunit.client.0.vm05.stdout:3/697: chown d3/df/c46 4710 1 2026-03-09T15:01:47.704 INFO:tasks.workunit.client.0.vm05.stdout:5/764: rename d1/d4/d34/d35/d3d/d38/lb2 to d1/d4/d34/d35/lff 0 2026-03-09T15:01:47.705 INFO:tasks.workunit.client.0.vm05.stdout:0/656: mknod d9/de/d6a/cd2 0 2026-03-09T15:01:47.711 INFO:tasks.workunit.client.0.vm05.stdout:8/730: read - d0/d1/d12/d1b/d66/db7/dbe/fd6 zero size 2026-03-09T15:01:47.713 INFO:tasks.workunit.client.0.vm05.stdout:7/707: creat d1/d9/d23/d31/d8f/d93/dbd/fea x:0 0 0 2026-03-09T15:01:47.714 INFO:tasks.workunit.client.0.vm05.stdout:3/698: read - d3/d29/d7f/fcf zero size 2026-03-09T15:01:47.715 INFO:tasks.workunit.client.0.vm05.stdout:3/699: chown d3/df/d59/d79/fc7 329012406 1 2026-03-09T15:01:47.716 INFO:tasks.workunit.client.0.vm05.stdout:4/723: rename d2/d4/d8/d4a/d8f/dcd to d2/d4/d1e/da2/dec 0 2026-03-09T15:01:47.718 INFO:tasks.workunit.client.0.vm05.stdout:0/657: dread - d9/de/f7f zero size 2026-03-09T15:01:47.719 INFO:tasks.workunit.client.0.vm05.stdout:6/656: sync 2026-03-09T15:01:47.720 INFO:tasks.workunit.client.0.vm05.stdout:6/657: chown da/d9e/lcc 627 1 2026-03-09T15:01:47.726 INFO:tasks.workunit.client.0.vm05.stdout:7/708: dwrite d1/d9/d23/d31/d51/f9b [0,4194304] 0 2026-03-09T15:01:47.728 INFO:tasks.workunit.client.0.vm05.stdout:1/704: rename d9/d2f/d37/d5a/da9/dc9/dcd/la1 to d9/d2f/d83/d98/d59/d49/d78/d94/lec 0 2026-03-09T15:01:47.730 INFO:tasks.workunit.client.0.vm05.stdout:1/705: write d9/d2f/d37/f66 [2285980,69114] 0 2026-03-09T15:01:47.733 INFO:tasks.workunit.client.0.vm05.stdout:5/765: dread d1/d4/f55 [0,4194304] 0 2026-03-09T15:01:47.744 
INFO:tasks.workunit.client.0.vm05.stdout:8/731: dwrite d0/d1/d12/d1b/d66/f56 [0,4194304] 0 2026-03-09T15:01:47.748 INFO:tasks.workunit.client.0.vm05.stdout:3/700: dwrite d3/f1f [0,4194304] 0 2026-03-09T15:01:47.769 INFO:tasks.workunit.client.0.vm05.stdout:2/741: getdents da/d29/d6a/da0 0 2026-03-09T15:01:47.773 INFO:tasks.workunit.client.0.vm05.stdout:0/658: mkdir d9/de/d12/d15/d2e/d32/d9f/dd3 0 2026-03-09T15:01:47.777 INFO:tasks.workunit.client.0.vm05.stdout:6/658: fdatasync da/d17/d3b/f3f 0 2026-03-09T15:01:47.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:47 vm05.local ceph-mon[50611]: pgmap v6: 65 pgs: 65 active+clean; 2.4 GiB data, 8.2 GiB used, 112 GiB / 120 GiB avail 2026-03-09T15:01:47.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:47 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:47.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:47 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:47.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:47 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:01:47.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:47 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:01:47.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:47 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:01:47.787 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:47 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-09T15:01:47.793 INFO:tasks.workunit.client.0.vm05.stdout:4/724: dwrite d2/f33 [4194304,4194304] 0 2026-03-09T15:01:47.795 INFO:tasks.workunit.client.0.vm05.stdout:9/753: rename d2/d10/d22/d2c/d69/fd8 to d2/d10/d22/d47/dc4/f111 0 2026-03-09T15:01:47.798 INFO:tasks.workunit.client.0.vm05.stdout:1/706: rmdir d9/d2f/d37/d5a/da9/dc9/dcd 39 2026-03-09T15:01:47.801 INFO:tasks.workunit.client.0.vm05.stdout:5/766: mkdir d1/d4/d34/d35/dd0/d100 0 2026-03-09T15:01:47.806 INFO:tasks.workunit.client.0.vm05.stdout:8/732: dread d0/d1/d12/d1b/d21/f92 [0,4194304] 0 2026-03-09T15:01:47.808 INFO:tasks.workunit.client.0.vm05.stdout:3/701: mknod d3/d29/d7f/cec 0 2026-03-09T15:01:47.814 INFO:tasks.workunit.client.0.vm05.stdout:7/709: symlink d1/leb 0 2026-03-09T15:01:47.814 INFO:tasks.workunit.client.0.vm05.stdout:7/710: chown d1/d22/c26 1 1 2026-03-09T15:01:47.818 INFO:tasks.workunit.client.0.vm05.stdout:9/754: truncate d2/d8b/dae/fbb 10 0 2026-03-09T15:01:47.821 INFO:tasks.workunit.client.0.vm05.stdout:1/707: mkdir d9/d2f/d37/ded 0 2026-03-09T15:01:47.825 INFO:tasks.workunit.client.0.vm05.stdout:5/767: symlink d1/d4/d34/d35/d4e/l101 0 2026-03-09T15:01:47.833 INFO:tasks.workunit.client.0.vm05.stdout:8/733: dwrite d0/d1/d97/fae [0,4194304] 0 2026-03-09T15:01:47.843 INFO:tasks.workunit.client.0.vm05.stdout:0/659: unlink d9/de/d12/d15/d2e/d32/d53/f68 0 2026-03-09T15:01:47.847 INFO:tasks.workunit.client.0.vm05.stdout:7/711: chown d1/d9/d23/d31/d32/l6c 0 1 2026-03-09T15:01:47.854 INFO:tasks.workunit.client.0.vm05.stdout:4/725: mkdir d2/d43/ded 0 2026-03-09T15:01:47.854 INFO:tasks.workunit.client.0.vm05.stdout:4/726: write d2/d4/d7/dc/f27 [344974,7104] 0 2026-03-09T15:01:47.856 INFO:tasks.workunit.client.0.vm05.stdout:9/755: fsync d2/d92/fa3 0 2026-03-09T15:01:47.860 INFO:tasks.workunit.client.0.vm05.stdout:1/708: truncate d9/d2f/d83/d98/f39 507325 0 2026-03-09T15:01:47.863 INFO:tasks.workunit.client.0.vm05.stdout:5/768: mkdir d1/d4/d34/d6c/d102 0 2026-03-09T15:01:47.864 
INFO:tasks.workunit.client.0.vm05.stdout:5/769: write d1/d4/d34/f6a [2998619,130419] 0 2026-03-09T15:01:47.872 INFO:tasks.workunit.client.0.vm05.stdout:3/702: link d3/df/d59/d79/fc7 d3/d29/d7f/fed 0 2026-03-09T15:01:47.887 INFO:tasks.workunit.client.0.vm05.stdout:8/734: dwrite d0/d1/d12/d1b/d95/d4b/fb0 [0,4194304] 0 2026-03-09T15:01:47.889 INFO:tasks.workunit.client.0.vm05.stdout:8/735: dread d0/d1/d12/d1b/d21/f92 [0,4194304] 0 2026-03-09T15:01:47.905 INFO:tasks.workunit.client.0.vm05.stdout:4/727: creat d2/d4/d8/d4a/d94/fee x:0 0 0 2026-03-09T15:01:47.906 INFO:tasks.workunit.client.0.vm05.stdout:6/659: rename da/d17/c26 to da/d43/cd1 0 2026-03-09T15:01:47.907 INFO:tasks.workunit.client.0.vm05.stdout:2/742: rename da/d16 to da/d16/d46/de2 22 2026-03-09T15:01:47.910 INFO:tasks.workunit.client.0.vm05.stdout:9/756: truncate d2/d10/d22/d47/d95/f9d 56198 0 2026-03-09T15:01:47.910 INFO:tasks.workunit.client.0.vm05.stdout:6/660: dread da/d17/f2d [0,4194304] 0 2026-03-09T15:01:47.915 INFO:tasks.workunit.client.0.vm05.stdout:8/736: mknod d0/d1/d12/d1b/d95/d42/d60/da7/ced 0 2026-03-09T15:01:47.921 INFO:tasks.workunit.client.0.vm05.stdout:9/757: dread - d2/d4e/d56/d53/d64/ded/d9c/d8e/dcb/f10e zero size 2026-03-09T15:01:47.921 INFO:tasks.workunit.client.0.vm05.stdout:3/703: dread d3/df/d10/d19/dce/dc8/de2/f48 [0,4194304] 0 2026-03-09T15:01:47.928 INFO:tasks.workunit.client.0.vm05.stdout:1/709: creat d9/d2f/d37/d5a/da9/dc9/dcd/fee x:0 0 0 2026-03-09T15:01:47.928 INFO:tasks.workunit.client.0.vm05.stdout:6/661: dread da/d43/f86 [0,4194304] 0 2026-03-09T15:01:47.929 INFO:tasks.workunit.client.0.vm05.stdout:5/770: mkdir d1/d4/d27/d103 0 2026-03-09T15:01:47.930 INFO:tasks.workunit.client.0.vm05.stdout:5/771: chown d1/d4/d34/d35/d3d/d38/d69 51 1 2026-03-09T15:01:47.934 INFO:tasks.workunit.client.0.vm05.stdout:1/710: dread - d9/d2f/d55/fce zero size 2026-03-09T15:01:47.934 INFO:tasks.workunit.client.0.vm05.stdout:6/662: chown da/d43/f59 387 1 2026-03-09T15:01:47.935 
INFO:tasks.workunit.client.0.vm05.stdout:6/663: chown da/d17/f58 4 1 2026-03-09T15:01:47.937 INFO:tasks.workunit.client.0.vm05.stdout:5/772: mkdir d1/d4/d34/d6c/d104 0 2026-03-09T15:01:47.938 INFO:tasks.workunit.client.0.vm05.stdout:5/773: chown d1/d4/d34/dc0/fe5 4362 1 2026-03-09T15:01:47.938 INFO:tasks.workunit.client.0.vm05.stdout:5/774: write d1/d4/d34/f65 [9628205,128746] 0 2026-03-09T15:01:47.939 INFO:tasks.workunit.client.0.vm05.stdout:5/775: write d1/ff [3946574,68912] 0 2026-03-09T15:01:47.940 INFO:tasks.workunit.client.0.vm05.stdout:9/758: sync 2026-03-09T15:01:47.941 INFO:tasks.workunit.client.0.vm05.stdout:0/660: link d9/de/d12/d15/d2e/c46 d9/de/d25/dae/cd4 0 2026-03-09T15:01:47.945 INFO:tasks.workunit.client.0.vm05.stdout:9/759: dwrite d2/d8b/dae/fec [0,4194304] 0 2026-03-09T15:01:47.953 INFO:tasks.workunit.client.0.vm05.stdout:8/737: write d0/dc/f4a [756037,54500] 0 2026-03-09T15:01:47.954 INFO:tasks.workunit.client.0.vm05.stdout:4/728: link d2/d4/l1a d2/d1d/da5/lef 0 2026-03-09T15:01:47.956 INFO:tasks.workunit.client.0.vm05.stdout:7/712: rename d1/d9/d72/d97 to d1/d49/dec 0 2026-03-09T15:01:47.959 INFO:tasks.workunit.client.0.vm05.stdout:3/704: mknod d3/df/d10/d19/db5/cee 0 2026-03-09T15:01:47.962 INFO:tasks.workunit.client.0.vm05.stdout:1/711: dread d9/d17/f26 [0,4194304] 0 2026-03-09T15:01:47.971 INFO:tasks.workunit.client.0.vm05.stdout:1/712: chown d9/d2f/d55/fce 24537 1 2026-03-09T15:01:47.971 INFO:tasks.workunit.client.0.vm05.stdout:5/776: truncate d1/d4/d34/d35/d3d/d38/f6e 4965572 0 2026-03-09T15:01:47.971 INFO:tasks.workunit.client.0.vm05.stdout:9/760: mknod d2/d10/d22/dc1/dc3/c112 0 2026-03-09T15:01:47.971 INFO:tasks.workunit.client.0.vm05.stdout:9/761: chown d2/d10/d22/da0/cf3 170 1 2026-03-09T15:01:47.971 INFO:tasks.workunit.client.0.vm05.stdout:8/738: mknod d0/d1/d12/d1b/d95/d78/db5/cee 0 2026-03-09T15:01:47.971 INFO:tasks.workunit.client.0.vm05.stdout:2/743: rename da/f4e to da/dd/fe3 0 2026-03-09T15:01:47.972 
INFO:tasks.workunit.client.0.vm05.stdout:2/744: stat da/c33 0 2026-03-09T15:01:47.972 INFO:tasks.workunit.client.0.vm05.stdout:7/713: mknod d1/d22/ced 0 2026-03-09T15:01:47.972 INFO:tasks.workunit.client.0.vm05.stdout:0/661: read d9/de/d12/d15/d2e/f9a [136760,72892] 0 2026-03-09T15:01:47.972 INFO:tasks.workunit.client.0.vm05.stdout:4/729: sync 2026-03-09T15:01:47.974 INFO:tasks.workunit.client.0.vm05.stdout:6/664: creat da/d43/d7b/da9/db7/fd2 x:0 0 0 2026-03-09T15:01:47.974 INFO:tasks.workunit.client.0.vm05.stdout:6/665: read - da/d17/d95/da2/dae/fbb zero size 2026-03-09T15:01:47.975 INFO:tasks.workunit.client.0.vm05.stdout:6/666: read - da/d17/f64 zero size 2026-03-09T15:01:47.976 INFO:tasks.workunit.client.0.vm05.stdout:1/713: symlink d9/d2f/d37/lef 0 2026-03-09T15:01:47.981 INFO:tasks.workunit.client.0.vm05.stdout:9/762: creat d2/d10/d22/d47/dc4/f113 x:0 0 0 2026-03-09T15:01:47.982 INFO:tasks.workunit.client.0.vm05.stdout:8/739: creat d0/d1/d12/d1b/d95/dd7/dd2/fef x:0 0 0 2026-03-09T15:01:47.987 INFO:tasks.workunit.client.0.vm05.stdout:5/777: dwrite d1/f14 [0,4194304] 0 2026-03-09T15:01:47.995 INFO:tasks.workunit.client.0.vm05.stdout:5/778: dwrite d1/da/fb7 [0,4194304] 0 2026-03-09T15:01:47.997 INFO:tasks.workunit.client.0.vm05.stdout:3/705: unlink d3/df/d10/d7c/f94 0 2026-03-09T15:01:48.000 INFO:tasks.workunit.client.0.vm05.stdout:4/730: truncate d2/d4/d7/dc/fb9 1773975 0 2026-03-09T15:01:48.005 INFO:tasks.workunit.client.0.vm05.stdout:4/731: dread d2/d4/d1e/da2/dec/d3d/f65 [0,4194304] 0 2026-03-09T15:01:48.009 INFO:tasks.workunit.client.0.vm05.stdout:3/706: dread d3/df/f11 [0,4194304] 0 2026-03-09T15:01:48.010 INFO:tasks.workunit.client.0.vm05.stdout:1/714: dwrite d9/f15 [4194304,4194304] 0 2026-03-09T15:01:48.013 INFO:tasks.workunit.client.0.vm05.stdout:1/715: truncate d9/d2f/d83/d98/d59/fd4 641747 0 2026-03-09T15:01:48.016 INFO:tasks.workunit.client.0.vm05.stdout:1/716: chown d9/d2f/d83/fa3 235517858 1 2026-03-09T15:01:48.017 
INFO:tasks.workunit.client.0.vm05.stdout:1/717: write d9/d2f/d83/d98/d59/d49/d92/d75/f76 [1105999,55651] 0 2026-03-09T15:01:48.019 INFO:tasks.workunit.client.0.vm05.stdout:1/718: write d9/d2f/d83/d98/d59/d49/d92/d75/f76 [297013,34636] 0 2026-03-09T15:01:48.020 INFO:tasks.workunit.client.0.vm05.stdout:1/719: write d9/d2f/d83/d98/d59/fd4 [1153990,34864] 0 2026-03-09T15:01:48.024 INFO:tasks.workunit.client.0.vm05.stdout:1/720: write d9/d2f/d37/d5a/da9/dc9/dcd/fee [471431,115886] 0 2026-03-09T15:01:48.024 INFO:tasks.workunit.client.0.vm05.stdout:1/721: chown d9/d2f/d37/d5f/f80 551740 1 2026-03-09T15:01:48.031 INFO:tasks.workunit.client.0.vm05.stdout:9/763: rename d2/d10/d22/d2c/fc8 to d2/d8b/f114 0 2026-03-09T15:01:48.039 INFO:tasks.workunit.client.0.vm05.stdout:2/745: creat da/d29/d6a/da0/d91/dab/d9c/dd3/fe4 x:0 0 0 2026-03-09T15:01:48.039 INFO:tasks.workunit.client.0.vm05.stdout:6/667: write da/d19/f5b [3352969,122098] 0 2026-03-09T15:01:48.040 INFO:tasks.workunit.client.0.vm05.stdout:7/714: mknod d1/d9/d23/d31/d8f/cee 0 2026-03-09T15:01:48.059 INFO:tasks.workunit.client.0.vm05.stdout:5/779: creat d1/da/f105 x:0 0 0 2026-03-09T15:01:48.080 INFO:tasks.workunit.client.0.vm05.stdout:4/732: write d2/d4/d8/d4a/fa9 [846610,34689] 0 2026-03-09T15:01:48.084 INFO:tasks.workunit.client.0.vm05.stdout:3/707: dwrite d3/df/d10/f28 [0,4194304] 0 2026-03-09T15:01:48.094 INFO:tasks.workunit.client.0.vm05.stdout:3/708: read d3/df/f1b [7691466,50929] 0 2026-03-09T15:01:48.094 INFO:tasks.workunit.client.0.vm05.stdout:8/740: rename d0/d1/d12/d1b/d95/d42/d60/d73/f74 to d0/d7/da8/ff0 0 2026-03-09T15:01:48.104 INFO:tasks.workunit.client.0.vm05.stdout:9/764: unlink d2/d10/d22/d47/l78 0 2026-03-09T15:01:48.106 INFO:tasks.workunit.client.0.vm05.stdout:9/765: dread - d2/d10/d22/dc1/dc3/f105 zero size 2026-03-09T15:01:48.107 INFO:tasks.workunit.client.0.vm05.stdout:9/766: readlink d2/d4e/d56/d53/d64/ded/d9c/l25 0 2026-03-09T15:01:48.147 INFO:tasks.workunit.client.0.vm05.stdout:6/668: write 
da/d43/f86 [3712143,130921] 0 2026-03-09T15:01:48.147 INFO:tasks.workunit.client.0.vm05.stdout:6/669: fdatasync da/d17/d95/da2/dae/fc0 0 2026-03-09T15:01:48.148 INFO:tasks.workunit.client.0.vm05.stdout:6/670: dread - da/d43/d7b/da9/db7/fd2 zero size 2026-03-09T15:01:48.181 INFO:tasks.workunit.client.0.vm05.stdout:7/715: write d1/d49/d4a/fcc [521249,48610] 0 2026-03-09T15:01:48.195 INFO:tasks.workunit.client.0.vm05.stdout:5/780: rmdir d1/d4/d27/d75/d9c 39 2026-03-09T15:01:48.200 INFO:tasks.workunit.client.0.vm05.stdout:1/722: dread d9/d2f/f58 [0,4194304] 0 2026-03-09T15:01:48.216 INFO:tasks.workunit.client.0.vm05.stdout:1/723: sync 2026-03-09T15:01:48.224 INFO:tasks.workunit.client.0.vm05.stdout:4/733: dwrite d2/d4/d50/d8a/fc3 [0,4194304] 0 2026-03-09T15:01:48.238 INFO:tasks.workunit.client.0.vm05.stdout:3/709: mknod d3/df/d10/d19/dce/dc8/de2/d8c/cef 0 2026-03-09T15:01:48.256 INFO:tasks.workunit.client.0.vm05.stdout:6/671: write da/d17/f42 [2528713,39817] 0 2026-03-09T15:01:48.258 INFO:tasks.workunit.client.0.vm05.stdout:6/672: dread - da/d17/d95/da2/fc5 zero size 2026-03-09T15:01:48.267 INFO:tasks.workunit.client.0.vm05.stdout:5/781: dwrite d1/d4/d34/d6c/fdb [0,4194304] 0 2026-03-09T15:01:48.284 INFO:tasks.workunit.client.0.vm05.stdout:0/662: link d9/de/d12/d15/d2e/d32/d53/d61/f62 d9/de/d12/d15/fd5 0 2026-03-09T15:01:48.292 INFO:tasks.workunit.client.0.vm05.stdout:1/724: dwrite d9/d2f/d83/d98/d59/d49/d92/fd2 [0,4194304] 0 2026-03-09T15:01:48.304 INFO:tasks.workunit.client.0.vm05.stdout:3/710: symlink d3/df/d10/d19/dce/dc8/lf0 0 2026-03-09T15:01:48.304 INFO:tasks.workunit.client.0.vm05.stdout:9/767: mknod d2/d10/d22/dc1/dc3/dc6/c115 0 2026-03-09T15:01:48.305 INFO:tasks.workunit.client.0.vm05.stdout:2/746: creat da/d29/d6a/da0/d91/dab/d2f/d35/fe5 x:0 0 0 2026-03-09T15:01:48.305 INFO:tasks.workunit.client.0.vm05.stdout:6/673: truncate da/d17/d3b/f6b 4470155 0 2026-03-09T15:01:48.305 INFO:tasks.workunit.client.0.vm05.stdout:5/782: creat d1/d4/d34/dc0/f106 x:0 0 0 
2026-03-09T15:01:48.312 INFO:tasks.workunit.client.0.vm05.stdout:1/725: write d9/d97/fbf [3460373,102561] 0 2026-03-09T15:01:48.316 INFO:tasks.workunit.client.0.vm05.stdout:9/768: creat d2/d10/d22/da0/f116 x:0 0 0 2026-03-09T15:01:48.321 INFO:tasks.workunit.client.0.vm05.stdout:3/711: creat d3/df/d1e/d2c/d74/d78/ff1 x:0 0 0 2026-03-09T15:01:48.327 INFO:tasks.workunit.client.0.vm05.stdout:4/734: write d2/d4/fb4 [1102805,45059] 0 2026-03-09T15:01:48.332 INFO:tasks.workunit.client.0.vm05.stdout:8/741: getdents d0/dc 0 2026-03-09T15:01:48.350 INFO:tasks.workunit.client.0.vm05.stdout:1/726: mknod d9/d2f/d37/d5a/cf0 0 2026-03-09T15:01:48.359 INFO:tasks.workunit.client.0.vm05.stdout:0/663: dwrite d9/de/d6a/db5/d85/f9b [0,4194304] 0 2026-03-09T15:01:48.359 INFO:tasks.workunit.client.0.vm05.stdout:9/769: creat d2/d4e/d56/d53/d64/ded/d9c/d8e/dcb/f117 x:0 0 0 2026-03-09T15:01:48.371 INFO:tasks.workunit.client.0.vm05.stdout:6/674: write da/d17/f2c [5081381,122465] 0 2026-03-09T15:01:48.371 INFO:tasks.workunit.client.0.vm05.stdout:7/716: getdents d1/d49/d4a/d94 0 2026-03-09T15:01:48.377 INFO:tasks.workunit.client.0.vm05.stdout:0/664: dread d9/de/d12/d15/d2e/f40 [0,4194304] 0 2026-03-09T15:01:48.394 INFO:tasks.workunit.client.0.vm05.stdout:0/665: dread d9/de/d6a/f7c [0,4194304] 0 2026-03-09T15:01:48.395 INFO:tasks.workunit.client.0.vm05.stdout:0/666: write d9/de/d6a/db5/d85/f9b [2573514,35426] 0 2026-03-09T15:01:48.396 INFO:tasks.workunit.client.0.vm05.stdout:0/667: stat d9/de/d12/d15 0 2026-03-09T15:01:48.404 INFO:tasks.workunit.client.0.vm05.stdout:1/727: truncate d9/d17/f26 3082277 0 2026-03-09T15:01:48.408 INFO:tasks.workunit.client.0.vm05.stdout:8/742: dwrite d0/d1/f49 [0,4194304] 0 2026-03-09T15:01:48.432 INFO:tasks.workunit.client.0.vm05.stdout:6/675: creat da/d43/d7b/da9/fd3 x:0 0 0 2026-03-09T15:01:48.432 INFO:tasks.workunit.client.0.vm05.stdout:6/676: chown da/d17/d95/da2/fa3 12398 1 2026-03-09T15:01:48.450 INFO:tasks.workunit.client.0.vm05.stdout:5/783: link d1/d4/cca 
d1/db5/c107 0 2026-03-09T15:01:48.450 INFO:tasks.workunit.client.0.vm05.stdout:4/735: mkdir d2/d4/d7/d48/df0 0 2026-03-09T15:01:48.455 INFO:tasks.workunit.client.0.vm05.stdout:0/668: creat d9/de/fd6 x:0 0 0 2026-03-09T15:01:48.469 INFO:tasks.workunit.client.0.vm05.stdout:1/728: fdatasync d9/d2f/d37/d5f/f80 0 2026-03-09T15:01:48.478 INFO:tasks.workunit.client.0.vm05.stdout:9/770: mknod d2/d4e/d56/d53/d64/ded/d99/de9/c118 0 2026-03-09T15:01:48.492 INFO:tasks.workunit.client.0.vm05.stdout:6/677: creat da/d17/d3b/fd4 x:0 0 0 2026-03-09T15:01:48.499 INFO:tasks.workunit.client.0.vm05.stdout:6/678: dread da/d19/f6a [0,4194304] 0 2026-03-09T15:01:48.505 INFO:tasks.workunit.client.0.vm05.stdout:7/717: mkdir d1/d9/d23/d31/d32/d78/ddd/def 0 2026-03-09T15:01:48.533 INFO:tasks.workunit.client.0.vm05.stdout:2/747: getdents da/d29/d6a/da0/d91/dab/d2f/d35/db0 0 2026-03-09T15:01:48.538 INFO:tasks.workunit.client.0.vm05.stdout:4/736: fdatasync d2/f7e 0 2026-03-09T15:01:48.550 INFO:tasks.workunit.client.0.vm05.stdout:0/669: dwrite d9/d59/f83 [0,4194304] 0 2026-03-09T15:01:48.568 INFO:tasks.workunit.client.0.vm05.stdout:9/771: mkdir d2/d10/d22/d9f/d119 0 2026-03-09T15:01:48.576 INFO:tasks.workunit.client.0.vm05.stdout:3/712: link d3/df/d59/c76 d3/df/d10/d19/d44/da2/cf2 0 2026-03-09T15:01:48.597 INFO:tasks.workunit.client.0.vm05.stdout:5/784: link d1/d4/d34/dc0/fd8 d1/d4/d27/d103/f108 0 2026-03-09T15:01:48.597 INFO:tasks.workunit.client.0.vm05.stdout:3/713: sync 2026-03-09T15:01:48.608 INFO:tasks.workunit.client.0.vm05.stdout:0/670: truncate d9/de/d12/d15/f5e 2047342 0 2026-03-09T15:01:48.609 INFO:tasks.workunit.client.0.vm05.stdout:5/785: dwrite d1/da/f2f [0,4194304] 0 2026-03-09T15:01:48.610 INFO:tasks.workunit.client.0.vm05.stdout:5/786: stat d1/d4/d34/d35/f4d 0 2026-03-09T15:01:48.626 INFO:tasks.workunit.client.0.vm05.stdout:1/729: mkdir d9/d2f/d83/d98/d59/d49/d78/dcc/dd3/df1 0 2026-03-09T15:01:48.633 INFO:tasks.workunit.client.0.vm05.stdout:1/730: chown 
d9/d2f/d83/d98/d59/d49/d78/d94 3182673 1 2026-03-09T15:01:48.633 INFO:tasks.workunit.client.0.vm05.stdout:8/743: link d0/d1/d12/d1b/d95/d42/d60/f9c d0/d1/d12/d1b/d95/d78/dca/ff1 0 2026-03-09T15:01:48.633 INFO:tasks.workunit.client.0.vm05.stdout:8/744: dread - d0/d1/d12/d1b/d6e/fc2 zero size 2026-03-09T15:01:48.636 INFO:tasks.workunit.client.0.vm05.stdout:8/745: write d0/d7/f14 [481150,55566] 0 2026-03-09T15:01:48.637 INFO:tasks.workunit.client.0.vm05.stdout:9/772: creat d2/d4e/d56/d53/d64/f11a x:0 0 0 2026-03-09T15:01:48.637 INFO:tasks.workunit.client.0.vm05.stdout:8/746: write d0/dc/f4a [1197153,44358] 0 2026-03-09T15:01:48.655 INFO:tasks.workunit.client.0.vm05.stdout:2/748: fdatasync da/dd/f5d 0 2026-03-09T15:01:48.655 INFO:tasks.workunit.client.0.vm05.stdout:2/749: readlink da/le 0 2026-03-09T15:01:48.657 INFO:tasks.workunit.client.0.vm05.stdout:4/737: symlink d2/d4/d7/d48/lf1 0 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: 
Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.690 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:48 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.773 INFO:tasks.workunit.client.0.vm05.stdout:0/671: truncate d9/de/d12/d15/fd5 2774063 0 2026-03-09T15:01:48.773 INFO:tasks.workunit.client.0.vm05.stdout:5/787: creat d1/d5d/f109 x:0 0 0 2026-03-09T15:01:48.774 INFO:tasks.workunit.client.0.vm05.stdout:1/731: symlink 
d9/d2f/d83/d98/d59/d49/d77/lf2 0 2026-03-09T15:01:48.774 INFO:tasks.workunit.client.0.vm05.stdout:9/773: mkdir d2/d10/d22/d2c/d3c/d11b 0 2026-03-09T15:01:48.774 INFO:tasks.workunit.client.0.vm05.stdout:1/732: stat d9/d97/cbb 0 2026-03-09T15:01:48.775 INFO:tasks.workunit.client.0.vm05.stdout:9/774: chown d2/d10/d22/d47 578326938 1 2026-03-09T15:01:48.775 INFO:tasks.workunit.client.0.vm05.stdout:8/747: symlink d0/d1/d12/d1b/d95/dd7/lf2 0 2026-03-09T15:01:48.777 INFO:tasks.workunit.client.0.vm05.stdout:6/679: link da/d17/f3c da/d17/d95/fd5 0 2026-03-09T15:01:48.782 INFO:tasks.workunit.client.0.vm05.stdout:8/748: dread d0/d1/d12/d1b/d95/d54/f64 [0,4194304] 0 2026-03-09T15:01:48.794 INFO:tasks.workunit.client.0.vm05.stdout:2/750: mknod da/dd/ce6 0 2026-03-09T15:01:48.799 INFO:tasks.workunit.client.0.vm05.stdout:4/738: mknod d2/d4/d1e/da2/dec/dcb/cf2 0 2026-03-09T15:01:48.802 INFO:tasks.workunit.client.0.vm05.stdout:3/714: truncate d3/df/d10/d19/dce/dc8/de2/d8c/dbd/fde 1114918 0 2026-03-09T15:01:48.813 INFO:tasks.workunit.client.0.vm05.stdout:5/788: mknod d1/d4/d34/d35/d4e/d6f/d7e/c10a 0 2026-03-09T15:01:48.838 INFO:tasks.workunit.client.0.vm05.stdout:7/718: getdents d1/d9/d23/d31/d8f 0 2026-03-09T15:01:48.839 INFO:tasks.workunit.client.0.vm05.stdout:7/719: fdatasync d1/d9/fe3 0 2026-03-09T15:01:48.850 INFO:tasks.workunit.client.0.vm05.stdout:4/739: creat d2/d4/d8/d4a/d94/ff3 x:0 0 0 2026-03-09T15:01:48.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T15:01:48.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:01:48.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T15:01:48.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: Updating 
vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T15:01:48.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:01:48.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:01:48.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T15:01:48.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T15:01:48.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: 
from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:48 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:48.876 INFO:tasks.workunit.client.0.vm05.stdout:5/789: unlink d1/d4/ld9 0 2026-03-09T15:01:48.876 INFO:tasks.workunit.client.0.vm05.stdout:5/790: write d1/d4/d19/d93/dcc/d91/f9f [3166870,5841] 0 2026-03-09T15:01:48.892 INFO:tasks.workunit.client.0.vm05.stdout:5/791: sync 2026-03-09T15:01:48.895 INFO:tasks.workunit.client.0.vm05.stdout:5/792: dread d1/da/f2f [0,4194304] 0 2026-03-09T15:01:48.897 INFO:tasks.workunit.client.0.vm05.stdout:9/775: fsync d2/d10/d22/d47/d95/f9d 0 2026-03-09T15:01:48.898 INFO:tasks.workunit.client.0.vm05.stdout:9/776: truncate d2/d4e/d56/d53/d64/ded/d9c/d8e/dcb/ffb 191362 0 2026-03-09T15:01:48.907 INFO:tasks.workunit.client.0.vm05.stdout:8/749: creat d0/dc6/ff3 x:0 0 0 2026-03-09T15:01:48.912 INFO:tasks.workunit.client.0.vm05.stdout:7/720: creat d1/d49/d4a/d94/ff0 x:0 0 0 2026-03-09T15:01:48.913 INFO:tasks.workunit.client.0.vm05.stdout:7/721: chown d1/d12/cf 11959242 1 2026-03-09T15:01:48.916 INFO:tasks.workunit.client.0.vm05.stdout:2/751: mkdir da/d29/d6a/da0/d91/dab/d2f/de7 0 2026-03-09T15:01:48.919 INFO:tasks.workunit.client.0.vm05.stdout:0/672: write d9/f22 [4427803,108945] 0 2026-03-09T15:01:48.928 INFO:tasks.workunit.client.0.vm05.stdout:6/680: write da/f80 [707126,40125] 0 2026-03-09T15:01:48.930 INFO:tasks.workunit.client.0.vm05.stdout:9/777: read d2/d10/d22/d2c/f44 [1339585,97954] 0 2026-03-09T15:01:48.933 INFO:tasks.workunit.client.0.vm05.stdout:2/752: rmdir da/d29/d6a/da0/d7c 39 2026-03-09T15:01:48.934 INFO:tasks.workunit.client.0.vm05.stdout:2/753: dread - da/d29/d64/fc5 zero size 2026-03-09T15:01:48.934 INFO:tasks.workunit.client.0.vm05.stdout:2/754: readlink da/d29/d3f/dc3/l98 0 2026-03-09T15:01:48.944 INFO:tasks.workunit.client.0.vm05.stdout:3/715: dwrite d3/df/d59/d79/fd3 [0,4194304] 0 2026-03-09T15:01:48.954 
INFO:tasks.workunit.client.0.vm05.stdout:1/733: truncate f5 7624185 0 2026-03-09T15:01:48.955 INFO:tasks.workunit.client.0.vm05.stdout:1/734: write d9/d2f/d83/d98/d59/d49/d92/d75/f76 [378892,24475] 0 2026-03-09T15:01:48.966 INFO:tasks.workunit.client.0.vm05.stdout:8/750: creat d0/d1/d12/d1b/d6e/d93/d9f/dad/ff4 x:0 0 0 2026-03-09T15:01:48.999 INFO:tasks.workunit.client.0.vm05.stdout:7/722: write d1/d9/d23/d31/d32/d78/f88 [1616609,38719] 0 2026-03-09T15:01:49.002 INFO:tasks.workunit.client.0.vm05.stdout:5/793: link d1/d4/f20 d1/d4/d34/d6c/d104/f10b 0 2026-03-09T15:01:49.003 INFO:tasks.workunit.client.0.vm05.stdout:5/794: write d1/d4/d19/d93/dcc/d91/f9f [2957438,9807] 0 2026-03-09T15:01:49.004 INFO:tasks.workunit.client.0.vm05.stdout:5/795: fdatasync d1/d4/d34/d56/d68/fe9 0 2026-03-09T15:01:49.008 INFO:tasks.workunit.client.0.vm05.stdout:6/681: rename da/d17/d3b/f6b to da/d43/d7b/fd6 0 2026-03-09T15:01:49.011 INFO:tasks.workunit.client.0.vm05.stdout:3/716: dwrite d3/df/d59/f75 [4194304,4194304] 0 2026-03-09T15:01:49.012 INFO:tasks.workunit.client.0.vm05.stdout:9/778: symlink d2/d10/d22/l11c 0 2026-03-09T15:01:49.021 INFO:tasks.workunit.client.0.vm05.stdout:4/740: getdents d2/d4/d1e/d71 0 2026-03-09T15:01:49.024 INFO:tasks.workunit.client.0.vm05.stdout:4/741: dwrite d2/f33 [8388608,4194304] 0 2026-03-09T15:01:49.033 INFO:tasks.workunit.client.0.vm05.stdout:4/742: chown d2/d4/d1e/da2/dec/dcb/cf2 13 1 2026-03-09T15:01:49.033 INFO:tasks.workunit.client.0.vm05.stdout:4/743: chown d2/d4/l6c 96720 1 2026-03-09T15:01:49.034 INFO:tasks.workunit.client.0.vm05.stdout:7/723: creat d1/d9/d23/d31/d8f/d93/dbd/ff1 x:0 0 0 2026-03-09T15:01:49.037 INFO:tasks.workunit.client.0.vm05.stdout:1/735: mknod d9/d97/daa/cf3 0 2026-03-09T15:01:49.040 INFO:tasks.workunit.client.0.vm05.stdout:6/682: mkdir da/d19/dd7 0 2026-03-09T15:01:49.043 INFO:tasks.workunit.client.0.vm05.stdout:8/751: symlink d0/lf5 0 2026-03-09T15:01:49.045 INFO:tasks.workunit.client.0.vm05.stdout:8/752: dread 
d0/d1/d12/d1b/d95/d78/db5/fbb [0,4194304] 0 2026-03-09T15:01:49.048 INFO:tasks.workunit.client.0.vm05.stdout:2/755: mknod da/d29/d6a/da0/d91/dab/d2f/d35/db0/ce8 0 2026-03-09T15:01:49.048 INFO:tasks.workunit.client.0.vm05.stdout:0/673: link d9/de/d25/dae/cd4 d9/de/d6a/cd7 0 2026-03-09T15:01:49.049 INFO:tasks.workunit.client.0.vm05.stdout:0/674: chown d9/de/d12/d15/d2e/d32/d9f/da0/db7 75698357 1 2026-03-09T15:01:49.058 INFO:tasks.workunit.client.0.vm05.stdout:4/744: creat d2/d4/d8/d4a/d6e/ff4 x:0 0 0 2026-03-09T15:01:49.063 INFO:tasks.workunit.client.0.vm05.stdout:7/724: rename d1/d9/d23/d31/d32/f5b to d1/d9/d23/d31/d51/ff2 0 2026-03-09T15:01:49.071 INFO:tasks.workunit.client.0.vm05.stdout:7/725: dread d1/d9/d23/d31/d51/f29 [4194304,4194304] 0 2026-03-09T15:01:49.075 INFO:tasks.workunit.client.0.vm05.stdout:1/736: unlink d9/d2f/d83/d98/l9f 0 2026-03-09T15:01:49.076 INFO:tasks.workunit.client.0.vm05.stdout:1/737: write d9/d2f/d37/d5a/da9/dc9/dcd/fee [756158,16138] 0 2026-03-09T15:01:49.085 INFO:tasks.workunit.client.0.vm05.stdout:3/717: mknod d3/d29/d2d/cf3 0 2026-03-09T15:01:49.093 INFO:tasks.workunit.client.0.vm05.stdout:5/796: dwrite d1/d4/d27/f57 [0,4194304] 0 2026-03-09T15:01:49.095 INFO:tasks.workunit.client.0.vm05.stdout:5/797: readlink d1/d4/d34/dc0/ldf 0 2026-03-09T15:01:49.101 INFO:tasks.workunit.client.0.vm05.stdout:8/753: mknod d0/d1/d12/d1b/d66/cf6 0 2026-03-09T15:01:49.101 INFO:tasks.workunit.client.0.vm05.stdout:8/754: fsync d0/d1/f49 0 2026-03-09T15:01:49.105 INFO:tasks.workunit.client.0.vm05.stdout:9/779: write d2/d10/d22/d47/fe5 [636851,55718] 0 2026-03-09T15:01:49.106 INFO:tasks.workunit.client.0.vm05.stdout:9/780: chown d2/d4e/d56/d53/d64/ded/d9c/d8e/dcb/fcc 2081109 1 2026-03-09T15:01:49.109 INFO:tasks.workunit.client.0.vm05.stdout:4/745: rename d2/d4/d7/dc/d2b to d2/d1d/df5 0 2026-03-09T15:01:49.119 INFO:tasks.workunit.client.0.vm05.stdout:7/726: dread d1/d9/d23/d31/d51/fd9 [0,4194304] 0 2026-03-09T15:01:49.122 
INFO:tasks.workunit.client.0.vm05.stdout:3/718: creat d3/df/d59/d79/ff4 x:0 0 0 2026-03-09T15:01:49.130 INFO:tasks.workunit.client.0.vm05.stdout:5/798: truncate d1/f6 116899 0 2026-03-09T15:01:49.133 INFO:tasks.workunit.client.0.vm05.stdout:8/755: mknod d0/d1/d12/d1b/d21/cf7 0 2026-03-09T15:01:49.139 INFO:tasks.workunit.client.0.vm05.stdout:0/675: write d9/de/f5d [620202,87126] 0 2026-03-09T15:01:49.141 INFO:tasks.workunit.client.0.vm05.stdout:4/746: unlink d2/d4/f15 0 2026-03-09T15:01:49.146 INFO:tasks.workunit.client.0.vm05.stdout:7/727: dread - d1/d9/f75 zero size 2026-03-09T15:01:49.154 INFO:tasks.workunit.client.0.vm05.stdout:6/683: write da/d43/d7b/fd6 [1404712,102855] 0 2026-03-09T15:01:49.154 INFO:tasks.workunit.client.0.vm05.stdout:6/684: dread - da/d43/d7b/da9/db7/fd2 zero size 2026-03-09T15:01:49.164 INFO:tasks.workunit.client.0.vm05.stdout:2/756: write da/d29/d3f/fd0 [4142974,105597] 0 2026-03-09T15:01:49.164 INFO:tasks.workunit.client.0.vm05.stdout:2/757: dread - da/d16/fdf zero size 2026-03-09T15:01:49.166 INFO:tasks.workunit.client.0.vm05.stdout:3/719: write d3/df/d1e/d2f/d52/f57 [1134681,128990] 0 2026-03-09T15:01:49.172 INFO:tasks.workunit.client.0.vm05.stdout:9/781: symlink d2/d10/d22/d2c/d69/l11d 0 2026-03-09T15:01:49.175 INFO:tasks.workunit.client.0.vm05.stdout:5/799: dwrite d1/d4/d19/d93/dcc/d91/fb0 [0,4194304] 0 2026-03-09T15:01:49.187 INFO:tasks.workunit.client.0.vm05.stdout:2/758: sync 2026-03-09T15:01:49.195 INFO:tasks.workunit.client.0.vm05.stdout:0/676: truncate d9/de/d12/d15/d2e/f40 1126728 0 2026-03-09T15:01:49.195 INFO:tasks.workunit.client.0.vm05.stdout:0/677: stat d9/de/d12/d15/d2e/f76 0 2026-03-09T15:01:49.196 INFO:tasks.workunit.client.0.vm05.stdout:0/678: dread - d9/de/d12/da3/fb2 zero size 2026-03-09T15:01:49.200 INFO:tasks.workunit.client.0.vm05.stdout:8/756: dwrite d0/d1/d12/d1b/d95/d78/db5/fbb [0,4194304] 0 2026-03-09T15:01:49.212 INFO:tasks.workunit.client.0.vm05.stdout:4/747: write d2/d4/d8/d4a/d6e/f9e [365412,107743] 0 
2026-03-09T15:01:49.216 INFO:tasks.workunit.client.0.vm05.stdout:4/748: dwrite d2/d4/d1e/da2/dec/dcb/fd8 [0,4194304] 0 2026-03-09T15:01:49.241 INFO:tasks.workunit.client.0.vm05.stdout:6/685: fdatasync da/d19/f35 0 2026-03-09T15:01:49.245 INFO:tasks.workunit.client.0.vm05.stdout:6/686: dwrite da/d17/d3b/f85 [0,4194304] 0 2026-03-09T15:01:49.248 INFO:tasks.workunit.client.0.vm05.stdout:3/720: mknod d3/df/d10/d19/dce/dc8/de2/cf5 0 2026-03-09T15:01:49.264 INFO:tasks.workunit.client.0.vm05.stdout:2/759: symlink da/d29/d64/da6/le9 0 2026-03-09T15:01:49.269 INFO:tasks.workunit.client.0.vm05.stdout:0/679: creat d9/de/d6a/db5/fd8 x:0 0 0 2026-03-09T15:01:49.276 INFO:tasks.workunit.client.0.vm05.stdout:0/680: read d9/faa [529374,93722] 0 2026-03-09T15:01:49.280 INFO:tasks.workunit.client.0.vm05.stdout:8/757: creat d0/d1/d12/d1b/d6e/d93/d9f/dad/ff8 x:0 0 0 2026-03-09T15:01:49.280 INFO:tasks.workunit.client.0.vm05.stdout:8/758: readlink d0/d1/de2/l8e 0 2026-03-09T15:01:49.284 INFO:tasks.workunit.client.0.vm05.stdout:8/759: dwrite d0/d24/fe3 [0,4194304] 0 2026-03-09T15:01:49.287 INFO:tasks.workunit.client.0.vm05.stdout:1/738: getdents d9/d2f/d37/d5a/da9 0 2026-03-09T15:01:49.296 INFO:tasks.workunit.client.0.vm05.stdout:4/749: symlink d2/d4/d8/d4a/d8f/lf6 0 2026-03-09T15:01:49.298 INFO:tasks.workunit.client.0.vm05.stdout:7/728: creat d1/d22/da4/ff3 x:0 0 0 2026-03-09T15:01:49.304 INFO:tasks.workunit.client.0.vm05.stdout:6/687: truncate da/d17/d95/da2/fa3 320720 0 2026-03-09T15:01:49.304 INFO:tasks.workunit.client.0.vm05.stdout:6/688: stat da/d17/d3b/c9d 0 2026-03-09T15:01:49.305 INFO:tasks.workunit.client.0.vm05.stdout:3/721: rmdir d3/df/d10/d19/d44/dd2 39 2026-03-09T15:01:49.307 INFO:tasks.workunit.client.0.vm05.stdout:9/782: unlink d2/d10/d22/da0/cfe 0 2026-03-09T15:01:49.317 INFO:tasks.workunit.client.0.vm05.stdout:2/760: mkdir da/d29/d6a/db1/db7/dea 0 2026-03-09T15:01:49.323 INFO:tasks.workunit.client.0.vm05.stdout:5/800: truncate d1/d4/d19/d93/dcc/d91/fb0 3098992 0 
2026-03-09T15:01:49.328 INFO:tasks.workunit.client.0.vm05.stdout:8/760: creat d0/d1/d12/d3c/d8b/ff9 x:0 0 0 2026-03-09T15:01:49.331 INFO:tasks.workunit.client.0.vm05.stdout:4/750: rename d2/d1d/df5/l39 to d2/d4/d1e/da2/lf7 0 2026-03-09T15:01:49.335 INFO:tasks.workunit.client.0.vm05.stdout:0/681: write d9/de/d25/f97 [926501,73625] 0 2026-03-09T15:01:49.344 INFO:tasks.workunit.client.0.vm05.stdout:9/783: truncate d2/d4e/d56/fcd 144007 0 2026-03-09T15:01:49.347 INFO:tasks.workunit.client.0.vm05.stdout:3/722: write d3/df/f23 [1523285,46840] 0 2026-03-09T15:01:49.356 INFO:tasks.workunit.client.0.vm05.stdout:2/761: dwrite da/d16/d46/f82 [0,4194304] 0 2026-03-09T15:01:49.358 INFO:tasks.workunit.client.0.vm05.stdout:5/801: dwrite d1/d4/d34/d35/d4e/d6f/fa3 [0,4194304] 0 2026-03-09T15:01:49.370 INFO:tasks.workunit.client.0.vm05.stdout:7/729: rename d1/d49/l91 to d1/d9/d23/d31/d32/d78/d7e/d81/lf4 0 2026-03-09T15:01:49.377 INFO:tasks.workunit.client.0.vm05.stdout:0/682: creat d9/de/d25/d38/d78/fd9 x:0 0 0 2026-03-09T15:01:49.385 INFO:tasks.workunit.client.0.vm05.stdout:2/762: rmdir da/d29/d6a/da0/d91/dab/d2f/db3 39 2026-03-09T15:01:49.398 INFO:tasks.workunit.client.0.vm05.stdout:5/802: dread d1/f30 [0,4194304] 0 2026-03-09T15:01:49.400 INFO:tasks.workunit.client.0.vm05.stdout:8/761: link d0/dc6/ff3 d0/d1/d12/d3c/d8b/ffa 0 2026-03-09T15:01:49.403 INFO:tasks.workunit.client.0.vm05.stdout:1/739: link d9/d2f/d37/lb4 d9/d2f/d83/d98/d59/d49/d77/lf4 0 2026-03-09T15:01:49.407 INFO:tasks.workunit.client.0.vm05.stdout:1/740: dwrite d9/d2f/d83/d98/d59/d49/d92/d75/f76 [0,4194304] 0 2026-03-09T15:01:49.418 INFO:tasks.workunit.client.0.vm05.stdout:6/689: truncate da/d19/f22 2405435 0 2026-03-09T15:01:49.418 INFO:tasks.workunit.client.0.vm05.stdout:3/723: write d3/f17 [3610635,42821] 0 2026-03-09T15:01:49.418 INFO:tasks.workunit.client.0.vm05.stdout:3/724: chown d3/f1f 3 1 2026-03-09T15:01:49.420 INFO:tasks.workunit.client.0.vm05.stdout:3/725: stat d3/df/d10/d19/dce/dc8/de2/cb1 0 
2026-03-09T15:01:49.426 INFO:tasks.workunit.client.0.vm05.stdout:6/690: dread da/f5d [0,4194304] 0 2026-03-09T15:01:49.427 INFO:tasks.workunit.client.0.vm05.stdout:7/730: fsync d1/d9/d23/d54/f6f 0 2026-03-09T15:01:49.430 INFO:tasks.workunit.client.0.vm05.stdout:0/683: mkdir d9/d59/d70/dda 0 2026-03-09T15:01:49.438 INFO:tasks.workunit.client.0.vm05.stdout:5/803: creat d1/d4/d34/d35/d3d/dde/f10c x:0 0 0 2026-03-09T15:01:49.440 INFO:tasks.workunit.client.0.vm05.stdout:8/762: mknod d0/d1/d12/d1b/d95/d4b/cfb 0 2026-03-09T15:01:49.444 INFO:tasks.workunit.client.0.vm05.stdout:1/741: creat d9/d2f/d83/d98/d87/ff5 x:0 0 0 2026-03-09T15:01:49.445 INFO:tasks.workunit.client.0.vm05.stdout:1/742: write d9/d2f/d83/d98/f6e [2789329,9856] 0 2026-03-09T15:01:49.447 INFO:tasks.workunit.client.0.vm05.stdout:6/691: fdatasync da/d17/f44 0 2026-03-09T15:01:49.451 INFO:tasks.workunit.client.0.vm05.stdout:3/726: sync 2026-03-09T15:01:49.454 INFO:tasks.workunit.client.0.vm05.stdout:7/731: creat d1/d9/d23/d31/d32/d78/d7e/ff5 x:0 0 0 2026-03-09T15:01:49.466 INFO:tasks.workunit.client.0.vm05.stdout:2/763: mkdir da/d29/d6a/da0/d91/dab/d2f/db3/deb 0 2026-03-09T15:01:49.470 INFO:tasks.workunit.client.0.vm05.stdout:5/804: rename d1/d4/d19/d93/dcc to d1/d4/d34/d56/da6/d10d 0 2026-03-09T15:01:49.470 INFO:tasks.workunit.client.0.vm05.stdout:5/805: readlink d1/d5d/le3 0 2026-03-09T15:01:49.490 INFO:tasks.workunit.client.0.vm05.stdout:6/692: dread da/d17/f20 [0,4194304] 0 2026-03-09T15:01:49.492 INFO:tasks.workunit.client.0.vm05.stdout:6/693: read da/d17/d3b/f4a [1731705,114735] 0 2026-03-09T15:01:49.527 INFO:tasks.workunit.client.0.vm05.stdout:7/732: creat d1/d12/ff6 x:0 0 0 2026-03-09T15:01:49.530 INFO:tasks.workunit.client.0.vm05.stdout:4/751: getdents d2/d4/d50 0 2026-03-09T15:01:49.532 INFO:tasks.workunit.client.0.vm05.stdout:0/684: mknod d9/de/d25/dae/cdb 0 2026-03-09T15:01:49.535 INFO:tasks.workunit.client.0.vm05.stdout:9/784: getdents d2/d10/d22/d2c/d69 0 2026-03-09T15:01:49.550 
INFO:tasks.workunit.client.0.vm05.stdout:8/763: mknod d0/d1/d12/d1b/d95/d42/d60/da7/db3/dec/cfc 0 2026-03-09T15:01:49.555 INFO:tasks.workunit.client.0.vm05.stdout:0/685: creat d9/de/d6a/db5/fdc x:0 0 0 2026-03-09T15:01:49.557 INFO:tasks.workunit.client.0.vm05.stdout:9/785: rename d2/d4e/d56/d53/d64/dd9/def/df7 to d2/d4e/d56/d53/d64/ded/d9c/ddd/d11e 0 2026-03-09T15:01:49.588 INFO:tasks.workunit.client.0.vm05.stdout:0/686: readlink d9/de/d25/dcf/dbd/lcd 0 2026-03-09T15:01:49.590 INFO:tasks.workunit.client.0.vm05.stdout:9/786: dread - d2/d4e/d56/d53/d64/ded/d9c/db2/ff9 zero size 2026-03-09T15:01:49.593 INFO:tasks.workunit.client.0.vm05.stdout:9/787: dwrite d2/d10/d22/da0/f116 [0,4194304] 0 2026-03-09T15:01:49.595 INFO:tasks.workunit.client.0.vm05.stdout:5/806: creat d1/d4/d27/f10e x:0 0 0 2026-03-09T15:01:49.596 INFO:tasks.workunit.client.0.vm05.stdout:5/807: fsync d1/d4/d34/f6a 0 2026-03-09T15:01:49.605 INFO:tasks.workunit.client.0.vm05.stdout:8/764: mknod d0/cfd 0 2026-03-09T15:01:49.611 INFO:tasks.workunit.client.0.vm05.stdout:7/733: link d1/d9/d23/d31/d51/fd9 d1/d9/d23/d54/d7b/ff7 0 2026-03-09T15:01:49.621 INFO:tasks.workunit.client.0.vm05.stdout:5/808: chown d1/d4/d27/d75/d9c 0 1 2026-03-09T15:01:49.664 INFO:tasks.workunit.client.0.vm05.stdout:9/788: truncate d2/d10/d22/d2c/d3c/f109 131069 0 2026-03-09T15:01:49.664 INFO:tasks.workunit.client.0.vm05.stdout:9/789: stat d2/d10/d22/da0/ce7 0 2026-03-09T15:01:49.671 INFO:tasks.workunit.client.0.vm05.stdout:3/727: write d3/df/d10/d19/f26 [863672,45235] 0 2026-03-09T15:01:49.682 INFO:tasks.workunit.client.0.vm05.stdout:9/790: fdatasync d2/d10/d22/d47/dc4/f111 0 2026-03-09T15:01:49.690 INFO:tasks.workunit.client.0.vm05.stdout:8/765: link d0/d1/d12/d1b/d95/d42/d60/cbc d0/d1/d12/d1b/d95/dd7/dd2/cfe 0 2026-03-09T15:01:49.699 INFO:tasks.workunit.client.0.vm05.stdout:9/791: dwrite d2/d4e/d56/d53/d64/ded/d9c/d8e/f68 [4194304,4194304] 0 2026-03-09T15:01:49.702 INFO:tasks.workunit.client.0.vm05.stdout:8/766: creat d0/d1/fff x:0 
0 0 2026-03-09T15:01:49.716 INFO:tasks.workunit.client.0.vm05.stdout:6/694: dwrite da/d17/f20 [4194304,4194304] 0 2026-03-09T15:01:49.717 INFO:tasks.workunit.client.0.vm05.stdout:6/695: chown da/d43/d66/c77 54 1 2026-03-09T15:01:49.732 INFO:tasks.workunit.client.0.vm05.stdout:4/752: dwrite d2/d1d/d88/f8b [0,4194304] 0 2026-03-09T15:01:49.733 INFO:tasks.workunit.client.0.vm05.stdout:4/753: stat d2/d4/d7/dc/f8e 0 2026-03-09T15:01:49.733 INFO:tasks.workunit.client.0.vm05.stdout:9/792: getdents d2/d10/d22/d9f/d119 0 2026-03-09T15:01:49.742 INFO:tasks.workunit.client.0.vm05.stdout:6/696: truncate da/d17/d95/fd5 4403855 0 2026-03-09T15:01:49.746 INFO:tasks.workunit.client.0.vm05.stdout:6/697: dwrite da/d17/d95/da2/dae/fc3 [0,4194304] 0 2026-03-09T15:01:49.748 INFO:tasks.workunit.client.0.vm05.stdout:4/754: mkdir d2/d1d/df5/df8 0 2026-03-09T15:01:49.749 INFO:tasks.workunit.client.0.vm05.stdout:4/755: write d2/f67 [289543,52351] 0 2026-03-09T15:01:49.751 INFO:tasks.workunit.client.0.vm05.stdout:2/764: dwrite da/d29/d6a/da0/d7c/f80 [0,4194304] 0 2026-03-09T15:01:49.795 INFO:tasks.workunit.client.0.vm05.stdout:4/756: creat d2/d4/d1e/da2/dec/ff9 x:0 0 0 2026-03-09T15:01:49.810 INFO:tasks.workunit.client.0.vm05.stdout:9/793: getdents d2/d10/d22/d47/d73 0 2026-03-09T15:01:49.810 INFO:tasks.workunit.client.0.vm05.stdout:9/794: fsync d2/d4e/d56/d53/f66 0 2026-03-09T15:01:49.811 INFO:tasks.workunit.client.0.vm05.stdout:9/795: chown d2/d8b/dae/ldf 3363 1 2026-03-09T15:01:49.818 INFO:tasks.workunit.client.0.vm05.stdout:9/796: dread d2/d4e/d56/d53/f66 [4194304,4194304] 0 2026-03-09T15:01:49.821 INFO:tasks.workunit.client.0.vm05.stdout:9/797: truncate d2/d10/d22/d47/d73/f81 971525 0 2026-03-09T15:01:49.821 INFO:tasks.workunit.client.0.vm05.stdout:9/798: stat d2/d8b/de3 0 2026-03-09T15:01:49.822 INFO:tasks.workunit.client.0.vm05.stdout:9/799: chown d2/d10/d22/d2c/f3a 1853099 1 2026-03-09T15:01:49.825 INFO:tasks.workunit.client.0.vm05.stdout:4/757: sync 2026-03-09T15:01:49.827 
INFO:tasks.workunit.client.0.vm05.stdout:9/800: dread d2/d10/d22/d2c/d69/f4f [0,4194304] 0 2026-03-09T15:01:49.834 INFO:tasks.workunit.client.0.vm05.stdout:4/758: mkdir d2/d1d/dfa 0 2026-03-09T15:01:49.845 INFO:tasks.workunit.client.0.vm05.stdout:5/809: dwrite d1/db5/f5a [0,4194304] 0 2026-03-09T15:01:49.845 INFO:tasks.workunit.client.0.vm05.stdout:5/810: stat d1/db5/l83 0 2026-03-09T15:01:49.846 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:49 vm05.local ceph-mon[50611]: pgmap v7: 65 pgs: 65 active+clean; 2.5 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 25 MiB/s rd, 52 MiB/s wr, 138 op/s 2026-03-09T15:01:49.846 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:49 vm05.local ceph-mon[50611]: Reconfiguring prometheus.vm05 (dependencies changed)... 2026-03-09T15:01:49.846 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:49 vm05.local ceph-mon[50611]: Reconfiguring daemon prometheus.vm05 on vm05 2026-03-09T15:01:49.852 INFO:tasks.workunit.client.0.vm05.stdout:9/801: link d2/d10/d22/d2c/d3c/l3f d2/d8b/dae/l11f 0 2026-03-09T15:01:49.853 INFO:tasks.workunit.client.0.vm05.stdout:9/802: readlink d2/d4e/d56/d53/d64/ded/d9c/d8e/lb7 0 2026-03-09T15:01:49.853 INFO:tasks.workunit.client.0.vm05.stdout:9/803: chown d2/d4e/d56/d53/d64/ded/d9c/d8e/fe4 136265 1 2026-03-09T15:01:49.855 INFO:tasks.workunit.client.0.vm05.stdout:5/811: mkdir d1/da/d10f 0 2026-03-09T15:01:49.855 INFO:tasks.workunit.client.0.vm05.stdout:5/812: dread - d1/d5d/f109 zero size 2026-03-09T15:01:49.861 INFO:tasks.workunit.client.0.vm05.stdout:4/759: getdents d2/d4/d50 0 2026-03-09T15:01:49.864 INFO:tasks.workunit.client.0.vm05.stdout:9/804: mkdir d2/d4e/d56/d84/d120 0 2026-03-09T15:01:49.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:49 vm09.local ceph-mon[59673]: pgmap v7: 65 pgs: 65 active+clean; 2.5 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 25 MiB/s rd, 52 MiB/s wr, 138 op/s 2026-03-09T15:01:49.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:49 vm09.local 
ceph-mon[59673]: Reconfiguring prometheus.vm05 (dependencies changed)... 2026-03-09T15:01:49.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:49 vm09.local ceph-mon[59673]: Reconfiguring daemon prometheus.vm05 on vm05 2026-03-09T15:01:49.870 INFO:tasks.workunit.client.0.vm05.stdout:5/813: rename d1/d4/d27/cf1 to d1/d4/d34/d35/d3d/d96/c110 0 2026-03-09T15:01:49.874 INFO:tasks.workunit.client.0.vm05.stdout:9/805: unlink d2/d10/d22/d2c/d69/la5 0 2026-03-09T15:01:49.875 INFO:tasks.workunit.client.0.vm05.stdout:9/806: chown d2/d4e/d56/fce 1109 1 2026-03-09T15:01:49.882 INFO:tasks.workunit.client.0.vm05.stdout:4/760: mkdir d2/d1d/dfa/dfb 0 2026-03-09T15:01:49.890 INFO:tasks.workunit.client.0.vm05.stdout:4/761: stat d2/d4/d8/d4a/d6e/f9e 0 2026-03-09T15:01:49.891 INFO:tasks.workunit.client.0.vm05.stdout:4/762: read d2/d4/d7/dc/f64 [4629849,96571] 0 2026-03-09T15:01:49.891 INFO:tasks.workunit.client.0.vm05.stdout:4/763: chown d2/d4/d1e/da2/dec/feb 5626 1 2026-03-09T15:01:49.894 INFO:tasks.workunit.client.0.vm05.stdout:9/807: creat d2/d9e/df6/f121 x:0 0 0 2026-03-09T15:01:49.901 INFO:tasks.workunit.client.0.vm05.stdout:4/764: creat d2/d4/d7/dc/da8/ffc x:0 0 0 2026-03-09T15:01:49.904 INFO:tasks.workunit.client.0.vm05.stdout:9/808: write d2/d10/d22/d2c/f8d [549011,64653] 0 2026-03-09T15:01:49.915 INFO:tasks.workunit.client.0.vm05.stdout:9/809: getdents d2/d8b/de3 0 2026-03-09T15:01:49.916 INFO:tasks.workunit.client.0.vm05.stdout:9/810: write d2/d10/d22/d47/dc4/f113 [93495,79385] 0 2026-03-09T15:01:49.923 INFO:tasks.workunit.client.0.vm05.stdout:9/811: unlink d2/d4e/d56/d53/d64/ded/d9c/df0/f103 0 2026-03-09T15:01:49.933 INFO:tasks.workunit.client.0.vm05.stdout:1/743: creat d9/d2f/ff6 x:0 0 0 2026-03-09T15:01:49.938 INFO:tasks.workunit.client.0.vm05.stdout:3/728: write d3/df/fd9 [920211,115976] 0 2026-03-09T15:01:49.938 INFO:tasks.workunit.client.0.vm05.stdout:3/729: write d3/f17 [2803096,103245] 0 2026-03-09T15:01:49.939 INFO:tasks.workunit.client.0.vm05.stdout:3/730: 
stat d3/d29/d2d/d7b/fe3 0 2026-03-09T15:01:49.945 INFO:tasks.workunit.client.0.vm05.stdout:1/744: truncate d9/d2f/d37/d5a/f8c 1999472 0 2026-03-09T15:01:49.952 INFO:tasks.workunit.client.0.vm05.stdout:1/745: fsync d9/db9/fe2 0 2026-03-09T15:01:49.963 INFO:tasks.workunit.client.0.vm05.stdout:1/746: rename d9/d97/daa to d9/d2f/d37/d5a/da9/dc9/dcd/df7 0 2026-03-09T15:01:49.964 INFO:tasks.workunit.client.0.vm05.stdout:0/687: symlink d9/de/d12/ldd 0 2026-03-09T15:01:49.967 INFO:tasks.workunit.client.0.vm05.stdout:1/747: rmdir d9/d2f/d83 39 2026-03-09T15:01:49.969 INFO:tasks.workunit.client.0.vm05.stdout:0/688: symlink d9/de/d25/d38/d78/lde 0 2026-03-09T15:01:49.970 INFO:tasks.workunit.client.0.vm05.stdout:7/734: mknod d1/d9/d23/d31/cf8 0 2026-03-09T15:01:49.972 INFO:tasks.workunit.client.0.vm05.stdout:0/689: dwrite d9/de/d6a/db5/fd8 [0,4194304] 0 2026-03-09T15:01:49.974 INFO:tasks.workunit.client.0.vm05.stdout:0/690: chown d9/de/l2a 774 1 2026-03-09T15:01:49.990 INFO:tasks.workunit.client.0.vm05.stdout:8/767: write d0/d1/d12/d1b/d95/d54/f64 [3288077,69564] 0 2026-03-09T15:01:49.993 INFO:tasks.workunit.client.0.vm05.stdout:0/691: dread d9/f2b [4194304,4194304] 0 2026-03-09T15:01:49.996 INFO:tasks.workunit.client.0.vm05.stdout:3/731: getdents d3/df/d1e/d2f/d52 0 2026-03-09T15:01:49.998 INFO:tasks.workunit.client.0.vm05.stdout:3/732: read d3/df/d10/d19/dce/dc8/de2/f4c [2758314,66985] 0 2026-03-09T15:01:50.026 INFO:tasks.workunit.client.0.vm05.stdout:7/735: mkdir d1/d9/d23/d31/d51/df9 0 2026-03-09T15:01:50.027 INFO:tasks.workunit.client.0.vm05.stdout:1/748: dwrite d9/d2f/d83/d98/d59/d49/d78/dcc/fde [0,4194304] 0 2026-03-09T15:01:50.028 INFO:tasks.workunit.client.0.vm05.stdout:7/736: write d1/d12/fa8 [3600588,103120] 0 2026-03-09T15:01:50.057 INFO:tasks.workunit.client.0.vm05.stdout:2/765: write da/d29/d6a/da0/d91/dab/fcf [3693398,129545] 0 2026-03-09T15:01:50.058 INFO:tasks.workunit.client.0.vm05.stdout:2/766: chown da/d29/d6a/da0/d91/dab/d2f/d35/f3a 644 1 
2026-03-09T15:01:50.061 INFO:tasks.workunit.client.0.vm05.stdout:2/767: chown da/d29/d6a/da0/d91/dab/f4b 290 1 2026-03-09T15:01:50.064 INFO:tasks.workunit.client.0.vm05.stdout:2/768: read da/d29/d45/f7b [2654809,54909] 0 2026-03-09T15:01:50.065 INFO:tasks.workunit.client.0.vm05.stdout:2/769: write da/d16/fdf [351906,4021] 0 2026-03-09T15:01:50.066 INFO:tasks.workunit.client.0.vm05.stdout:6/698: dwrite da/f41 [0,4194304] 0 2026-03-09T15:01:50.090 INFO:tasks.workunit.client.0.vm05.stdout:7/737: rename d1/d49/d4a/d77/fca to d1/de4/ffa 0 2026-03-09T15:01:50.095 INFO:tasks.workunit.client.0.vm05.stdout:5/814: write d1/d4/d34/d56/da6/d10d/d91/fb1 [4338428,17252] 0 2026-03-09T15:01:50.098 INFO:tasks.workunit.client.0.vm05.stdout:8/768: mkdir d0/d100 0 2026-03-09T15:01:50.106 INFO:tasks.workunit.client.0.vm05.stdout:4/765: dwrite d2/d1d/f7d [0,4194304] 0 2026-03-09T15:01:50.112 INFO:tasks.workunit.client.0.vm05.stdout:2/770: mknod da/d29/d6a/db1/db7/cec 0 2026-03-09T15:01:50.114 INFO:tasks.workunit.client.0.vm05.stdout:9/812: dwrite d2/d4e/d56/d53/d64/ded/d9c/f6e [0,4194304] 0 2026-03-09T15:01:50.117 INFO:tasks.workunit.client.0.vm05.stdout:6/699: mknod da/d19/cd8 0 2026-03-09T15:01:50.118 INFO:tasks.workunit.client.0.vm05.stdout:6/700: write da/d17/f42 [2927004,84113] 0 2026-03-09T15:01:50.118 INFO:tasks.workunit.client.0.vm05.stdout:5/815: dread d1/ff [4194304,4194304] 0 2026-03-09T15:01:50.125 INFO:tasks.workunit.client.0.vm05.stdout:6/701: dread da/d17/f20 [0,4194304] 0 2026-03-09T15:01:50.130 INFO:tasks.workunit.client.0.vm05.stdout:0/692: mkdir d9/de/d12/d15/ddf 0 2026-03-09T15:01:50.130 INFO:tasks.workunit.client.0.vm05.stdout:0/693: write d9/f22 [6871400,94793] 0 2026-03-09T15:01:50.131 INFO:tasks.workunit.client.0.vm05.stdout:0/694: write d9/de/d12/d15/d2e/f76 [558021,14306] 0 2026-03-09T15:01:50.142 INFO:tasks.workunit.client.0.vm05.stdout:1/749: mkdir d9/d2f/d83/d98/d59/df8 0 2026-03-09T15:01:50.155 INFO:tasks.workunit.client.0.vm05.stdout:2/771: mknod 
da/d29/d6a/da0/d91/dab/d2f/d35/db0/dc9/ced 0 2026-03-09T15:01:50.168 INFO:tasks.workunit.client.0.vm05.stdout:9/813: fdatasync d2/d4e/d56/d53/d64/ded/d9c/d94/fee 0 2026-03-09T15:01:50.177 INFO:tasks.workunit.client.0.vm05.stdout:5/816: dread d1/db5/fc3 [0,4194304] 0 2026-03-09T15:01:50.177 INFO:tasks.workunit.client.0.vm05.stdout:5/817: chown d1/d4/d34/d35/dd0/cef 31988749 1 2026-03-09T15:01:50.199 INFO:tasks.workunit.client.0.vm05.stdout:0/695: chown d9/l37 82040233 1 2026-03-09T15:01:50.206 INFO:tasks.workunit.client.0.vm05.stdout:3/733: link d3/df/d10/fb4 d3/d29/d7f/dc3/ff6 0 2026-03-09T15:01:50.223 INFO:tasks.workunit.client.0.vm05.stdout:0/696: sync 2026-03-09T15:01:50.223 INFO:tasks.workunit.client.0.vm05.stdout:0/697: dread - d9/de/d6a/fb3 zero size 2026-03-09T15:01:50.225 INFO:tasks.workunit.client.0.vm05.stdout:0/698: sync 2026-03-09T15:01:50.227 INFO:tasks.workunit.client.0.vm05.stdout:4/766: symlink d2/d4/d7/d48/d6b/ddb/lfd 0 2026-03-09T15:01:50.267 INFO:tasks.workunit.client.0.vm05.stdout:1/750: dwrite d9/d2f/d55/fce [0,4194304] 0 2026-03-09T15:01:50.273 INFO:tasks.workunit.client.0.vm05.stdout:8/769: truncate d0/d1/d12/d1b/d95/d54/f64 152254 0 2026-03-09T15:01:50.274 INFO:tasks.workunit.client.0.vm05.stdout:2/772: write da/d29/d3f/f9b [342541,71222] 0 2026-03-09T15:01:50.287 INFO:tasks.workunit.client.0.vm05.stdout:2/773: dwrite da/d16/d46/fd1 [0,4194304] 0 2026-03-09T15:01:50.302 INFO:tasks.workunit.client.0.vm05.stdout:3/734: write d3/df/d10/f2a [4676660,117933] 0 2026-03-09T15:01:50.318 INFO:tasks.workunit.client.0.vm05.stdout:0/699: write d9/de/d12/d15/f50 [790645,65402] 0 2026-03-09T15:01:50.318 INFO:tasks.workunit.client.0.vm05.stdout:4/767: write d2/d4/d7/f90 [496681,118840] 0 2026-03-09T15:01:50.321 INFO:tasks.workunit.client.0.vm05.stdout:0/700: write d9/de/d12/d15/d2e/d32/f7d [849269,47059] 0 2026-03-09T15:01:50.342 INFO:tasks.workunit.client.0.vm05.stdout:1/751: creat d9/d2f/d83/d98/d59/d49/dc2/ff9 x:0 0 0 2026-03-09T15:01:50.342 
INFO:tasks.workunit.client.0.vm05.stdout:7/738: getdents d1/d9/d23/d31/d8f/d93/dbd/dd1 0 2026-03-09T15:01:50.348 INFO:tasks.workunit.client.0.vm05.stdout:8/770: write d0/d7/f8 [7490448,14438] 0 2026-03-09T15:01:50.356 INFO:tasks.workunit.client.0.vm05.stdout:3/735: mknod d3/df/d1e/d2f/cf7 0 2026-03-09T15:01:50.358 INFO:tasks.workunit.client.0.vm05.stdout:5/818: creat d1/f111 x:0 0 0 2026-03-09T15:01:50.358 INFO:tasks.workunit.client.0.vm05.stdout:6/702: getdents da/d17/d95 0 2026-03-09T15:01:50.390 INFO:tasks.workunit.client.0.vm05.stdout:4/768: write d2/d49/d69/f9b [4107034,126161] 0 2026-03-09T15:01:50.396 INFO:tasks.workunit.client.0.vm05.stdout:0/701: mknod d9/de/d12/da3/ce0 0 2026-03-09T15:01:50.409 INFO:tasks.workunit.client.0.vm05.stdout:0/702: dwrite d9/de/d6a/fb3 [0,4194304] 0 2026-03-09T15:01:50.419 INFO:tasks.workunit.client.0.vm05.stdout:2/774: dread da/dd/fe3 [0,4194304] 0 2026-03-09T15:01:50.419 INFO:tasks.workunit.client.0.vm05.stdout:8/771: truncate d0/d7/fd1 790374 0 2026-03-09T15:01:50.422 INFO:tasks.workunit.client.0.vm05.stdout:9/814: getdents d2/d10/d8c 0 2026-03-09T15:01:50.426 INFO:tasks.workunit.client.0.vm05.stdout:5/819: symlink d1/d4/d34/d35/d3d/dde/l112 0 2026-03-09T15:01:50.429 INFO:tasks.workunit.client.0.vm05.stdout:6/703: truncate da/d19/f7e 1374912 0 2026-03-09T15:01:50.433 INFO:tasks.workunit.client.0.vm05.stdout:5/820: dwrite d1/d5d/f81 [0,4194304] 0 2026-03-09T15:01:50.446 INFO:tasks.workunit.client.0.vm05.stdout:1/752: dread d9/d97/fbf [0,4194304] 0 2026-03-09T15:01:50.462 INFO:tasks.workunit.client.0.vm05.stdout:4/769: rename d2/d4/d1e/da2/dec/feb to d2/d4/d1e/da2/dec/d3d/ffe 0 2026-03-09T15:01:50.462 INFO:tasks.workunit.client.0.vm05.stdout:7/739: creat d1/d9/d23/d31/d32/d78/ddd/def/ffb x:0 0 0 2026-03-09T15:01:50.462 INFO:tasks.workunit.client.0.vm05.stdout:7/740: chown d1/d9/d23/d31/d32 2224798 1 2026-03-09T15:01:50.464 INFO:tasks.workunit.client.0.vm05.stdout:7/741: truncate d1/d12/ff6 533488 0 2026-03-09T15:01:50.464 
INFO:tasks.workunit.client.0.vm05.stdout:2/775: mkdir da/d29/d6a/da0/d91/dab/d2f/db3/dee 0 2026-03-09T15:01:50.465 INFO:tasks.workunit.client.0.vm05.stdout:7/742: chown d1/d9/d23/d31/d32/fc7 12 1 2026-03-09T15:01:50.466 INFO:tasks.workunit.client.0.vm05.stdout:0/703: write d9/f2b [1992189,55613] 0 2026-03-09T15:01:50.476 INFO:tasks.workunit.client.0.vm05.stdout:4/770: dread d2/d4/d7/d48/f5a [4194304,4194304] 0 2026-03-09T15:01:50.482 INFO:tasks.workunit.client.0.vm05.stdout:8/772: write d0/d1/d12/d1b/d95/d54/f5b [30566,130833] 0 2026-03-09T15:01:50.485 INFO:tasks.workunit.client.0.vm05.stdout:3/736: truncate d3/d29/d2d/d77/d4d/f80 869292 0 2026-03-09T15:01:50.486 INFO:tasks.workunit.client.0.vm05.stdout:9/815: write d2/f46 [221750,64522] 0 2026-03-09T15:01:50.496 INFO:tasks.workunit.client.0.vm05.stdout:5/821: creat d1/d4/d34/d56/da6/f113 x:0 0 0 2026-03-09T15:01:50.511 INFO:tasks.workunit.client.0.vm05.stdout:1/753: rmdir d9/d2f 39 2026-03-09T15:01:50.517 INFO:tasks.workunit.client.0.vm05.stdout:2/776: read - da/d29/d6a/da0/fa7 zero size 2026-03-09T15:01:50.518 INFO:tasks.workunit.client.0.vm05.stdout:7/743: creat d1/d22/da4/ffc x:0 0 0 2026-03-09T15:01:50.528 INFO:tasks.workunit.client.0.vm05.stdout:3/737: mkdir d3/df/d10/d19/d44/da2/df8 0 2026-03-09T15:01:50.531 INFO:tasks.workunit.client.0.vm05.stdout:6/704: unlink c8 0 2026-03-09T15:01:50.556 INFO:tasks.workunit.client.0.vm05.stdout:5/822: dread d1/d4/d34/d35/d3d/d38/f6e [0,4194304] 0 2026-03-09T15:01:50.563 INFO:tasks.workunit.client.0.vm05.stdout:9/816: dwrite d2/f17 [4194304,4194304] 0 2026-03-09T15:01:50.573 INFO:tasks.workunit.client.0.vm05.stdout:2/777: read da/d29/d3f/f5f [493524,106209] 0 2026-03-09T15:01:50.592 INFO:tasks.workunit.client.0.vm05.stdout:8/773: truncate d0/d7/da8/ff0 1543119 0 2026-03-09T15:01:50.592 INFO:tasks.workunit.client.0.vm05.stdout:8/774: dread - d0/d1/d12/d1b/fbd zero size 2026-03-09T15:01:50.596 INFO:tasks.workunit.client.0.vm05.stdout:3/738: creat d3/df/d1e/d2c/d74/ff9 x:0 0 
0 2026-03-09T15:01:50.600 INFO:tasks.workunit.client.0.vm05.stdout:6/705: mkdir da/d17/d95/da2/dae/dd9 0 2026-03-09T15:01:50.602 INFO:tasks.workunit.client.0.vm05.stdout:1/754: unlink d9/d2f/ff6 0 2026-03-09T15:01:50.604 INFO:tasks.workunit.client.0.vm05.stdout:5/823: mkdir d1/d4/d34/d56/d68/d114 0 2026-03-09T15:01:50.604 INFO:tasks.workunit.client.0.vm05.stdout:7/744: write d1/d9/d23/d54/f6f [34118,57100] 0 2026-03-09T15:01:50.607 INFO:tasks.workunit.client.0.vm05.stdout:9/817: fsync d2/d10/d22/dc2/db1/fb8 0 2026-03-09T15:01:50.617 INFO:tasks.workunit.client.0.vm05.stdout:1/755: sync 2026-03-09T15:01:50.617 INFO:tasks.workunit.client.0.vm05.stdout:8/775: sync 2026-03-09T15:01:50.618 INFO:tasks.workunit.client.0.vm05.stdout:8/776: stat d0/d1/d12/d1b/d95/dd7/lf2 0 2026-03-09T15:01:50.645 INFO:tasks.workunit.client.0.vm05.stdout:5/824: rename d1/d4/d27/lc7 to d1/d4/d34/l115 0 2026-03-09T15:01:50.645 INFO:tasks.workunit.client.0.vm05.stdout:7/745: mknod d1/d9/d72/cfd 0 2026-03-09T15:01:50.649 INFO:tasks.workunit.client.0.vm05.stdout:3/739: write d3/df/d10/d19/dce/dc8/de2/d8c/f85 [1644453,56103] 0 2026-03-09T15:01:50.660 INFO:tasks.workunit.client.0.vm05.stdout:9/818: write d2/d4e/d56/d53/d64/ded/d9c/db2/fd2 [92726,45666] 0 2026-03-09T15:01:50.666 INFO:tasks.workunit.client.0.vm05.stdout:6/706: dread da/d17/f44 [0,4194304] 0 2026-03-09T15:01:50.726 INFO:tasks.workunit.client.0.vm05.stdout:1/756: truncate d9/d17/fb1 40571 0 2026-03-09T15:01:50.731 INFO:tasks.workunit.client.0.vm05.stdout:0/704: getdents d9/de/d12/d15/d2e/d32/d53 0 2026-03-09T15:01:50.736 INFO:tasks.workunit.client.0.vm05.stdout:8/777: mknod d0/d1/de2/c101 0 2026-03-09T15:01:50.746 INFO:tasks.workunit.client.0.vm05.stdout:8/778: dwrite d0/d1/d12/d1b/d95/d4b/fb0 [0,4194304] 0 2026-03-09T15:01:50.747 INFO:tasks.workunit.client.0.vm05.stdout:8/779: dread - d0/d1/d12/d3c/f4c zero size 2026-03-09T15:01:50.757 INFO:tasks.workunit.client.0.vm05.stdout:4/771: link d2/d4/d1e/da2/dec/cba d2/d4/d7/d48/cff 0 
2026-03-09T15:01:50.760 INFO:tasks.workunit.client.0.vm05.stdout:7/746: rmdir d1/d9/d23/d31/d32/d78/d7e/d81 39 2026-03-09T15:01:50.761 INFO:tasks.workunit.client.0.vm05.stdout:3/740: mknod d3/d29/d2d/d77/d4d/cfa 0 2026-03-09T15:01:50.762 INFO:tasks.workunit.client.0.vm05.stdout:2/778: truncate da/d29/d6a/da0/d91/dab/d2f/db3/fcb 4101918 0 2026-03-09T15:01:50.762 INFO:tasks.workunit.client.0.vm05.stdout:2/779: stat da/d29/d6a/da0/d91/dab/d9c/dd3/fe4 0 2026-03-09T15:01:50.764 INFO:tasks.workunit.client.0.vm05.stdout:6/707: creat da/d17/d95/fda x:0 0 0 2026-03-09T15:01:50.766 INFO:tasks.workunit.client.0.vm05.stdout:0/705: truncate d9/de/d12/f3c 2998593 0 2026-03-09T15:01:50.776 INFO:tasks.workunit.client.0.vm05.stdout:0/706: dread d9/de/d12/d15/f50 [0,4194304] 0 2026-03-09T15:01:50.776 INFO:tasks.workunit.client.0.vm05.stdout:0/707: chown d9/de/fd6 9 1 2026-03-09T15:01:50.777 INFO:tasks.workunit.client.0.vm05.stdout:0/708: chown d9/f22 686022393 1 2026-03-09T15:01:50.777 INFO:tasks.workunit.client.0.vm05.stdout:6/708: sync 2026-03-09T15:01:50.786 INFO:tasks.workunit.client.0.vm05.stdout:4/772: read d2/d4/d7/dc/fb9 [1082663,128852] 0 2026-03-09T15:01:50.793 INFO:tasks.workunit.client.0.vm05.stdout:5/825: mknod d1/d4/d34/d35/c116 0 2026-03-09T15:01:50.797 INFO:tasks.workunit.client.0.vm05.stdout:7/747: mknod d1/d22/cfe 0 2026-03-09T15:01:50.805 INFO:tasks.workunit.client.0.vm05.stdout:3/741: truncate d3/df/d1e/d2f/d52/f93 410889 0 2026-03-09T15:01:50.807 INFO:tasks.workunit.client.0.vm05.stdout:2/780: rmdir da/d29/d6a/da0/d91/dab 39 2026-03-09T15:01:50.823 INFO:tasks.workunit.client.0.vm05.stdout:1/757: creat d9/d2f/d83/d98/d59/df8/ffa x:0 0 0 2026-03-09T15:01:50.841 INFO:tasks.workunit.client.0.vm05.stdout:0/709: write d9/de/d12/d15/fbb [99230,104773] 0 2026-03-09T15:01:50.845 INFO:tasks.workunit.client.0.vm05.stdout:6/709: creat da/d19/fdb x:0 0 0 2026-03-09T15:01:50.849 INFO:tasks.workunit.client.0.vm05.stdout:0/710: sync 2026-03-09T15:01:50.849 
INFO:tasks.workunit.client.0.vm05.stdout:5/826: mkdir d1/d4/d34/d35/d4e/d6f/d117 0 2026-03-09T15:01:50.849 INFO:tasks.workunit.client.0.vm05.stdout:4/773: dread - d2/d7a/fbf zero size 2026-03-09T15:01:50.849 INFO:tasks.workunit.client.0.vm05.stdout:6/710: chown da/d43/d7b/d89/l8e 5 1 2026-03-09T15:01:50.852 INFO:tasks.workunit.client.0.vm05.stdout:7/748: rename d1/d9/d23/d31/d32/d78/d7e/fd6 to d1/d9/d23/d31/d32/d78/dbb/fff 0 2026-03-09T15:01:50.853 INFO:tasks.workunit.client.0.vm05.stdout:7/749: write d1/d9/f59 [28127,32443] 0 2026-03-09T15:01:50.866 INFO:tasks.workunit.client.0.vm05.stdout:2/781: creat da/d29/d6a/db1/db7/fef x:0 0 0 2026-03-09T15:01:50.870 INFO:tasks.workunit.client.0.vm05.stdout:9/819: link d2/d10/d22/d47/fc7 d2/d10/d22/dc2/db1/f122 0 2026-03-09T15:01:50.881 INFO:tasks.workunit.client.0.vm05.stdout:7/750: unlink d1/d9/d23/ld3 0 2026-03-09T15:01:50.881 INFO:tasks.workunit.client.0.vm05.stdout:7/751: chown d1/c4e 20 1 2026-03-09T15:01:50.896 INFO:tasks.workunit.client.0.vm05.stdout:1/758: symlink d9/lfb 0 2026-03-09T15:01:50.896 INFO:tasks.workunit.client.0.vm05.stdout:6/711: dread da/fab [0,4194304] 0 2026-03-09T15:01:50.896 INFO:tasks.workunit.client.0.vm05.stdout:8/780: link d0/d7/fd1 d0/d1/d12/d1b/d66/dcc/dd4/f102 0 2026-03-09T15:01:50.898 INFO:tasks.workunit.client.0.vm05.stdout:4/774: symlink d2/d1d/dfa/dfb/l100 0 2026-03-09T15:01:50.899 INFO:tasks.workunit.client.0.vm05.stdout:8/781: chown d0/d1/d12/d1b/d95/d54/f85 4886791 1 2026-03-09T15:01:50.900 INFO:tasks.workunit.client.0.vm05.stdout:8/782: readlink d0/d1/d12/d1b/d21/l2f 0 2026-03-09T15:01:50.904 INFO:tasks.workunit.client.0.vm05.stdout:4/775: dwrite d2/d4/d8/d4a/d94/fee [0,4194304] 0 2026-03-09T15:01:50.905 INFO:tasks.workunit.client.0.vm05.stdout:4/776: truncate d2/f67 5956146 0 2026-03-09T15:01:50.919 INFO:tasks.workunit.client.0.vm05.stdout:2/782: write da/d29/d3f/dc3/f89 [257428,101010] 0 2026-03-09T15:01:50.919 INFO:tasks.workunit.client.0.vm05.stdout:7/752: write 
d1/d9/d23/d31/d32/f63 [204058,121682] 0 2026-03-09T15:01:50.925 INFO:tasks.workunit.client.0.vm05.stdout:5/827: dwrite d1/d4/d34/d56/da6/d10d/d91/fcd [0,4194304] 0 2026-03-09T15:01:50.932 INFO:tasks.workunit.client.0.vm05.stdout:9/820: dwrite d2/d10/d22/dc2/db1/fb8 [0,4194304] 0 2026-03-09T15:01:50.936 INFO:tasks.workunit.client.0.vm05.stdout:3/742: dwrite d3/df/f14 [0,4194304] 0 2026-03-09T15:01:50.936 INFO:tasks.workunit.client.0.vm05.stdout:6/712: rename da/d19/l32 to da/d19/ldc 0 2026-03-09T15:01:50.938 INFO:tasks.workunit.client.0.vm05.stdout:0/711: dwrite d9/de/d12/f4c [0,4194304] 0 2026-03-09T15:01:50.948 INFO:tasks.workunit.client.0.vm05.stdout:8/783: fdatasync d0/d1/d12/d1b/d95/d42/da1/fcb 0 2026-03-09T15:01:50.950 INFO:tasks.workunit.client.0.vm05.stdout:1/759: dwrite d9/d97/fbf [0,4194304] 0 2026-03-09T15:01:50.951 INFO:tasks.workunit.client.0.vm05.stdout:8/784: truncate d0/d1/d12/d1b/d66/fe8 476552 0 2026-03-09T15:01:50.951 INFO:tasks.workunit.client.0.vm05.stdout:8/785: chown d0/dc/l3a 4 1 2026-03-09T15:01:50.963 INFO:tasks.workunit.client.0.vm05.stdout:7/753: unlink d1/d9/l46 0 2026-03-09T15:01:50.974 INFO:tasks.workunit.client.0.vm05.stdout:2/783: dread da/d29/f39 [0,4194304] 0 2026-03-09T15:01:51.005 INFO:tasks.workunit.client.0.vm05.stdout:4/777: dwrite d2/d4/d1e/da2/dec/f34 [0,4194304] 0 2026-03-09T15:01:51.011 INFO:tasks.workunit.client.0.vm05.stdout:3/743: mknod d3/df/d59/cfb 0 2026-03-09T15:01:51.019 INFO:tasks.workunit.client.0.vm05.stdout:9/821: mknod d2/d4e/d56/d53/d64/ded/d99/de9/c123 0 2026-03-09T15:01:51.019 INFO:tasks.workunit.client.0.vm05.stdout:9/822: chown d2/d8b/dae/ccf 43545049 1 2026-03-09T15:01:51.019 INFO:tasks.workunit.client.0.vm05.stdout:9/823: chown d2/f17 2 1 2026-03-09T15:01:51.034 INFO:tasks.workunit.client.0.vm05.stdout:1/760: rmdir d9/d97 39 2026-03-09T15:01:51.040 INFO:tasks.workunit.client.0.vm05.stdout:8/786: truncate d0/d1/d12/d1b/d66/db7/dbe/fd6 521338 0 2026-03-09T15:01:51.057 
INFO:tasks.workunit.client.0.vm05.stdout:9/824: creat d2/d4e/d56/d53/d64/ded/d9c/ddd/d11e/f124 x:0 0 0 2026-03-09T15:01:51.072 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: pgmap v8: 65 pgs: 65 active+clean; 2.5 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 19 MiB/s rd, 40 MiB/s wr, 107 op/s 2026-03-09T15:01:51.072 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:51.072 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:51.072 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:01:51.073 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:01:51.073 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:01:51.073 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: Upgrade: Need to upgrade myself (mgr.vm09.cfuwdz) 2026-03-09T15:01:51.073 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: Upgrade: Need to upgrade myself (mgr.vm09.cfuwdz) 2026-03-09T15:01:51.073 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:51.073 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.lhsexd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:01:51.073 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.lhsexd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:01:51.073 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:01:51.073 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:50 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:01:51.073 INFO:tasks.workunit.client.0.vm05.stdout:3/744: dwrite d3/df/f4a [0,4194304] 0 2026-03-09T15:01:51.080 
INFO:tasks.workunit.client.0.vm05.stdout:4/778: dread d2/d43/f47 [0,4194304] 0 2026-03-09T15:01:51.087 INFO:tasks.workunit.client.0.vm05.stdout:8/787: creat d0/d1/d12/d1b/d6e/d93/d9f/f103 x:0 0 0 2026-03-09T15:01:51.090 INFO:tasks.workunit.client.0.vm05.stdout:0/712: creat d9/de/d12/d15/fe1 x:0 0 0 2026-03-09T15:01:51.090 INFO:tasks.workunit.client.0.vm05.stdout:6/713: link da/f5d da/d17/d95/da2/dae/fdd 0 2026-03-09T15:01:51.090 INFO:tasks.workunit.client.0.vm05.stdout:7/754: dread d1/d9/d23/d31/d8f/d93/fae [0,4194304] 0 2026-03-09T15:01:51.090 INFO:tasks.workunit.client.0.vm05.stdout:7/755: readlink d1/d9/d23/d31/d8f/la9 0 2026-03-09T15:01:51.092 INFO:tasks.workunit.client.0.vm05.stdout:2/784: creat da/d29/ff0 x:0 0 0 2026-03-09T15:01:51.096 INFO:tasks.workunit.client.0.vm05.stdout:3/745: creat d3/d29/d2d/d7b/ffc x:0 0 0 2026-03-09T15:01:51.101 INFO:tasks.workunit.client.0.vm05.stdout:4/779: mkdir d2/d4/d50/d8a/d101 0 2026-03-09T15:01:51.101 INFO:tasks.workunit.client.0.vm05.stdout:0/713: truncate d9/faa 350789 0 2026-03-09T15:01:51.102 INFO:tasks.workunit.client.0.vm05.stdout:0/714: chown d9/de/d12/d15/d2e/d6b/fb8 1 1 2026-03-09T15:01:51.105 INFO:tasks.workunit.client.0.vm05.stdout:8/788: mkdir d0/d1/d12/d1b/d95/d42/d60/d73/dac/d104 0 2026-03-09T15:01:51.114 INFO:tasks.workunit.client.0.vm05.stdout:3/746: read d3/df/d10/d19/dce/dc8/de2/d8c/dbd/fa4 [1110105,87811] 0 2026-03-09T15:01:51.120 INFO:tasks.workunit.client.0.vm05.stdout:3/747: dwrite d3/df/d10/d19/dce/dc8/de2/f9d [0,4194304] 0 2026-03-09T15:01:51.125 INFO:tasks.workunit.client.0.vm05.stdout:6/714: creat da/d17/d3b/fde x:0 0 0 2026-03-09T15:01:51.125 INFO:tasks.workunit.client.0.vm05.stdout:6/715: dread - da/d17/d95/da2/fc5 zero size 2026-03-09T15:01:51.165 INFO:tasks.workunit.client.0.vm05.stdout:4/780: creat d2/d1d/da5/f102 x:0 0 0 2026-03-09T15:01:51.165 INFO:tasks.workunit.client.0.vm05.stdout:4/781: fsync d2/d4/d50/d8a/fc3 0 2026-03-09T15:01:51.178 INFO:tasks.workunit.client.0.vm05.stdout:5/828: 
dread d1/d4/d34/d35/d3d/dde/f7d [0,4194304] 0 2026-03-09T15:01:51.220 INFO:tasks.workunit.client.0.vm05.stdout:9/825: dwrite d2/d10/d22/dc1/dc3/ffd [0,4194304] 0 2026-03-09T15:01:51.270 INFO:tasks.workunit.client.0.vm05.stdout:8/789: write d0/dc/f7e [1039202,104012] 0 2026-03-09T15:01:51.274 INFO:tasks.workunit.client.0.vm05.stdout:3/748: truncate d3/f7 5360031 0 2026-03-09T15:01:51.276 INFO:tasks.workunit.client.0.vm05.stdout:3/749: dread d3/df/d1e/d2c/d74/d9b/fc9 [0,4194304] 0 2026-03-09T15:01:51.276 INFO:tasks.workunit.client.0.vm05.stdout:3/750: write d3/df/d1e/d2f/d52/f95 [1273419,66308] 0 2026-03-09T15:01:51.277 INFO:tasks.workunit.client.0.vm05.stdout:3/751: fdatasync d3/df/d10/d19/dce/dc8/de2/f8e 0 2026-03-09T15:01:51.295 INFO:tasks.workunit.client.0.vm05.stdout:7/756: mknod d1/d9/c100 0 2026-03-09T15:01:51.300 INFO:tasks.workunit.client.0.vm05.stdout:2/785: mkdir da/d29/d6a/da0/d91/dab/d2f/db3/df1 0 2026-03-09T15:01:51.307 INFO:tasks.workunit.client.0.vm05.stdout:1/761: link d9/f21 d9/d2f/d83/d98/d59/d49/d77/ffc 0 2026-03-09T15:01:51.307 INFO:tasks.workunit.client.0.vm05.stdout:1/762: stat d9/d2f/d83/d98/d59/d49/cb8 0 2026-03-09T15:01:51.359 INFO:tasks.workunit.client.0.vm05.stdout:5/829: dread d1/d4/d34/d35/d4e/f8d [0,4194304] 0 2026-03-09T15:01:51.360 INFO:tasks.workunit.client.0.vm05.stdout:5/830: fdatasync d1/d4/d34/d56/da6/d10d/d91/fb1 0 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:50 vm09.local ceph-mon[59673]: pgmap v8: 65 pgs: 65 active+clean; 2.5 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 19 MiB/s rd, 40 MiB/s wr, 107 op/s 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:50 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:50 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:50 
vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:50 vm09.local ceph-mon[59673]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:50 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:50 vm09.local ceph-mon[59673]: Upgrade: Need to upgrade myself (mgr.vm09.cfuwdz) 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:50 vm09.local ceph-mon[59673]: Upgrade: Need to upgrade myself (mgr.vm09.cfuwdz) 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:50 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:50 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.lhsexd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:50 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.lhsexd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:50 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:01:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 
09 15:01:50 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:01:51.380 INFO:tasks.workunit.client.0.vm05.stdout:8/790: truncate d0/d1/d12/d1b/d95/d42/d60/f9c 2476315 0 2026-03-09T15:01:51.393 INFO:tasks.workunit.client.0.vm05.stdout:7/757: fdatasync d1/d9/d23/d31/d32/f3a 0 2026-03-09T15:01:51.405 INFO:tasks.workunit.client.0.vm05.stdout:6/716: symlink da/d43/ldf 0 2026-03-09T15:01:51.405 INFO:tasks.workunit.client.0.vm05.stdout:6/717: fsync da/fe 0 2026-03-09T15:01:51.407 INFO:tasks.workunit.client.0.vm05.stdout:2/786: dread - da/d29/f76 zero size 2026-03-09T15:01:51.411 INFO:tasks.workunit.client.0.vm05.stdout:1/763: unlink d9/d2f/d83/fcf 0 2026-03-09T15:01:51.422 INFO:tasks.workunit.client.0.vm05.stdout:9/826: mknod d2/d10/d22/d47/c125 0 2026-03-09T15:01:51.429 INFO:tasks.workunit.client.0.vm05.stdout:7/758: unlink d1/d9/d23/d31/d51/f9b 0 2026-03-09T15:01:51.431 INFO:tasks.workunit.client.0.vm05.stdout:5/831: dread d1/f9 [0,4194304] 0 2026-03-09T15:01:51.436 INFO:tasks.workunit.client.0.vm05.stdout:6/718: mkdir da/d43/d7b/de0 0 2026-03-09T15:01:51.440 INFO:tasks.workunit.client.0.vm05.stdout:6/719: dwrite da/d17/d95/da2/fc5 [0,4194304] 0 2026-03-09T15:01:51.444 INFO:tasks.workunit.client.0.vm05.stdout:2/787: dread - da/d29/d6a/da0/fa1 zero size 2026-03-09T15:01:51.453 INFO:tasks.workunit.client.0.vm05.stdout:7/759: truncate d1/d9/d23/d31/d51/fd4 554040 0 2026-03-09T15:01:51.453 INFO:tasks.workunit.client.0.vm05.stdout:4/782: write d2/d4/d7/d48/fd9 [29688,68395] 0 2026-03-09T15:01:51.460 INFO:tasks.workunit.client.0.vm05.stdout:0/715: write d9/de/d12/d15/d2e/d32/d53/d61/f62 [2181179,128182] 0 2026-03-09T15:01:51.469 INFO:tasks.workunit.client.0.vm05.stdout:3/752: dwrite d3/d29/d7f/dc3/fd5 [0,4194304] 0 2026-03-09T15:01:51.471 INFO:tasks.workunit.client.0.vm05.stdout:8/791: write d0/d1/d12/fb6 [1330949,28921] 0 2026-03-09T15:01:51.479 
INFO:tasks.workunit.client.0.vm05.stdout:2/788: read - da/d16/d46/f50 zero size 2026-03-09T15:01:51.485 INFO:tasks.workunit.client.0.vm05.stdout:9/827: dwrite d2/d4e/f72 [0,4194304] 0 2026-03-09T15:01:51.486 INFO:tasks.workunit.client.0.vm05.stdout:9/828: write d2/d9e/df6/f121 [1015217,6246] 0 2026-03-09T15:01:51.492 INFO:tasks.workunit.client.0.vm05.stdout:7/760: unlink d1/d22/fd7 0 2026-03-09T15:01:51.496 INFO:tasks.workunit.client.0.vm05.stdout:4/783: read d2/d7a/fb1 [7688860,121471] 0 2026-03-09T15:01:51.505 INFO:tasks.workunit.client.0.vm05.stdout:0/716: unlink d9/de/d25/dcf/dbd/lcd 0 2026-03-09T15:01:51.522 INFO:tasks.workunit.client.0.vm05.stdout:3/753: rmdir d3/d29/d7f/dc3 39 2026-03-09T15:01:51.525 INFO:tasks.workunit.client.0.vm05.stdout:8/792: chown d0/d1/d55/ld3 532181742 1 2026-03-09T15:01:51.527 INFO:tasks.workunit.client.0.vm05.stdout:1/764: creat d9/d2f/d83/d98/d59/d49/ffd x:0 0 0 2026-03-09T15:01:51.533 INFO:tasks.workunit.client.0.vm05.stdout:4/784: fsync d2/d4/d7/d79/fd4 0 2026-03-09T15:01:51.534 INFO:tasks.workunit.client.0.vm05.stdout:5/832: rename d1/d4/f55 to d1/d4/d27/f118 0 2026-03-09T15:01:51.540 INFO:tasks.workunit.client.0.vm05.stdout:6/720: rmdir da/d43/d7b/d89/da8/dad 0 2026-03-09T15:01:51.543 INFO:tasks.workunit.client.0.vm05.stdout:8/793: chown d0/d1/d12/d1b/d66/dcc/dd4/f102 4926 1 2026-03-09T15:01:51.543 INFO:tasks.workunit.client.0.vm05.stdout:8/794: dread - d0/dc/f15 zero size 2026-03-09T15:01:51.544 INFO:tasks.workunit.client.0.vm05.stdout:8/795: truncate d0/d1/d12/d1b/d66/dcc/fe6 444020 0 2026-03-09T15:01:51.554 INFO:tasks.workunit.client.0.vm05.stdout:4/785: creat d2/d4/d1e/d71/f103 x:0 0 0 2026-03-09T15:01:51.557 INFO:tasks.workunit.client.0.vm05.stdout:3/754: sync 2026-03-09T15:01:51.558 INFO:tasks.workunit.client.0.vm05.stdout:4/786: dwrite d2/d4/d1e/da2/dec/ff9 [0,4194304] 0 2026-03-09T15:01:51.559 INFO:tasks.workunit.client.0.vm05.stdout:4/787: dread - d2/d4/d50/fc0 zero size 2026-03-09T15:01:51.562 
INFO:tasks.workunit.client.0.vm05.stdout:3/755: dread d3/df/d10/d19/f26 [0,4194304] 0 2026-03-09T15:01:51.567 INFO:tasks.workunit.client.0.vm05.stdout:5/833: rename d1/d4/d34/d35/d4e/d6f/d7e/l85 to d1/d4/d34/d35/d3d/dde/l119 0 2026-03-09T15:01:51.577 INFO:tasks.workunit.client.0.vm05.stdout:2/789: dwrite da/dd/ff [0,4194304] 0 2026-03-09T15:01:51.593 INFO:tasks.workunit.client.0.vm05.stdout:7/761: dwrite d1/f62 [0,4194304] 0 2026-03-09T15:01:51.602 INFO:tasks.workunit.client.0.vm05.stdout:8/796: creat d0/d1/d12/d1b/d95/d78/f105 x:0 0 0 2026-03-09T15:01:51.603 INFO:tasks.workunit.client.0.vm05.stdout:8/797: chown d0/d1/d12/d1b/d95/d78/ld9 2 1 2026-03-09T15:01:51.603 INFO:tasks.workunit.client.0.vm05.stdout:8/798: write d0/d1/d12/d1b/d6e/d93/d9f/f103 [113734,765] 0 2026-03-09T15:01:51.606 INFO:tasks.workunit.client.0.vm05.stdout:9/829: link d2/d10/d22/c32 d2/d4e/d56/d53/d64/c126 0 2026-03-09T15:01:51.607 INFO:tasks.workunit.client.0.vm05.stdout:9/830: write d2/d9e/df6/f121 [1696721,96916] 0 2026-03-09T15:01:51.614 INFO:tasks.workunit.client.0.vm05.stdout:1/765: mknod d9/d97/cfe 0 2026-03-09T15:01:51.623 INFO:tasks.workunit.client.0.vm05.stdout:6/721: write da/d17/f8d [650562,24356] 0 2026-03-09T15:01:51.624 INFO:tasks.workunit.client.0.vm05.stdout:6/722: truncate da/d19/fdb 273344 0 2026-03-09T15:01:51.627 INFO:tasks.workunit.client.0.vm05.stdout:5/834: mknod d1/d4/d34/d35/dd0/c11a 0 2026-03-09T15:01:51.630 INFO:tasks.workunit.client.0.vm05.stdout:2/790: symlink da/d29/d6a/da0/d91/dab/d2f/db3/lf2 0 2026-03-09T15:01:51.632 INFO:tasks.workunit.client.0.vm05.stdout:8/799: fsync d0/d1/d12/d1b/f27 0 2026-03-09T15:01:51.636 INFO:tasks.workunit.client.0.vm05.stdout:9/831: unlink d2/d4e/d56/d53/d64/ded/d9c/f102 0 2026-03-09T15:01:51.687 INFO:tasks.workunit.client.0.vm05.stdout:1/766: truncate d9/d2f/d83/d98/d59/fb5 330816 0 2026-03-09T15:01:51.691 INFO:tasks.workunit.client.0.vm05.stdout:4/788: symlink d2/d1d/df5/df8/l104 0 2026-03-09T15:01:51.696 
INFO:tasks.workunit.client.0.vm05.stdout:6/723: dwrite da/d43/d7b/d89/fa4 [0,4194304] 0 2026-03-09T15:01:51.698 INFO:tasks.workunit.client.0.vm05.stdout:6/724: dread - da/d17/d95/da2/dae/fc0 zero size 2026-03-09T15:01:51.707 INFO:tasks.workunit.client.0.vm05.stdout:3/756: creat d3/df/d10/d19/d44/da2/df8/ffd x:0 0 0 2026-03-09T15:01:51.713 INFO:tasks.workunit.client.0.vm05.stdout:5/835: dwrite d1/d4/d27/ff6 [0,4194304] 0 2026-03-09T15:01:51.717 INFO:tasks.workunit.client.0.vm05.stdout:5/836: read d1/d4/d27/f57 [1279651,62598] 0 2026-03-09T15:01:51.724 INFO:tasks.workunit.client.0.vm05.stdout:0/717: getdents d9/de/d12/d15 0 2026-03-09T15:01:51.724 INFO:tasks.workunit.client.0.vm05.stdout:0/718: stat d9/de/d25/d38/d78/fcb 0 2026-03-09T15:01:51.734 INFO:tasks.workunit.client.0.vm05.stdout:8/800: rename d0/d1/d12/d1b/d95/f48 to d0/d1/d12/d3c/d8b/f106 0 2026-03-09T15:01:51.746 INFO:tasks.workunit.client.0.vm05.stdout:9/832: dread d2/d10/fe6 [0,4194304] 0 2026-03-09T15:01:51.747 INFO:tasks.workunit.client.0.vm05.stdout:9/833: readlink d2/l34 0 2026-03-09T15:01:51.753 INFO:tasks.workunit.client.0.vm05.stdout:4/789: dread d2/d4/d7/d48/d6b/f9f [0,4194304] 0 2026-03-09T15:01:51.755 INFO:tasks.workunit.client.0.vm05.stdout:6/725: unlink da/d43/d7b/l84 0 2026-03-09T15:01:51.768 INFO:tasks.workunit.client.0.vm05.stdout:3/757: dwrite d3/df/d10/d19/f26 [0,4194304] 0 2026-03-09T15:01:51.776 INFO:tasks.workunit.client.0.vm05.stdout:5/837: mkdir d1/d4/d34/d35/d3d/d38/d69/d11b 0 2026-03-09T15:01:51.780 INFO:tasks.workunit.client.0.vm05.stdout:7/762: link d1/d9/d23/d54/f60 d1/d9/d23/d31/d51/df9/f101 0 2026-03-09T15:01:51.784 INFO:tasks.workunit.client.0.vm05.stdout:8/801: creat d0/d1/d12/d1b/d95/d42/d60/d73/f107 x:0 0 0 2026-03-09T15:01:51.785 INFO:tasks.workunit.client.0.vm05.stdout:8/802: read d0/d7/f14 [3954984,21328] 0 2026-03-09T15:01:51.789 INFO:tasks.workunit.client.0.vm05.stdout:4/790: chown d2/d49/d69/cda 3 1 2026-03-09T15:01:51.789 
INFO:tasks.workunit.client.0.vm05.stdout:4/791: chown d2/d4/d50/d8a 0 1 2026-03-09T15:01:51.793 INFO:tasks.workunit.client.0.vm05.stdout:6/726: read da/f10 [3480094,121906] 0 2026-03-09T15:01:51.793 INFO:tasks.workunit.client.0.vm05.stdout:4/792: dwrite d2/d4/d1e/d71/f103 [0,4194304] 0 2026-03-09T15:01:51.798 INFO:tasks.workunit.client.0.vm05.stdout:3/758: creat d3/df/d59/ffe x:0 0 0 2026-03-09T15:01:51.799 INFO:tasks.workunit.client.0.vm05.stdout:3/759: write d3/df/f14 [4105395,124532] 0 2026-03-09T15:01:51.807 INFO:tasks.workunit.client.0.vm05.stdout:5/838: creat d1/d5d/f11c x:0 0 0 2026-03-09T15:01:51.818 INFO:tasks.workunit.client.0.vm05.stdout:2/791: creat da/d29/d6a/da0/d91/dab/d2f/d35/ff3 x:0 0 0 2026-03-09T15:01:51.820 INFO:tasks.workunit.client.0.vm05.stdout:7/763: creat d1/d22/f102 x:0 0 0 2026-03-09T15:01:51.820 INFO:tasks.workunit.client.0.vm05.stdout:7/764: write d1/fb0 [1146592,102726] 0 2026-03-09T15:01:51.829 INFO:tasks.workunit.client.0.vm05.stdout:8/803: symlink d0/d1/d12/d1b/d6e/l108 0 2026-03-09T15:01:51.829 INFO:tasks.workunit.client.0.vm05.stdout:8/804: dread - d0/d1/d12/d3c/f84 zero size 2026-03-09T15:01:51.830 INFO:tasks.workunit.client.0.vm05.stdout:8/805: write d0/d1/d12/d1b/d95/f3e [3425188,68527] 0 2026-03-09T15:01:51.839 INFO:tasks.workunit.client.0.vm05.stdout:6/727: rename da/d43/d66/c7d to da/d9e/ce1 0 2026-03-09T15:01:51.847 INFO:tasks.workunit.client.0.vm05.stdout:0/719: link d9/de/f3e d9/de/d25/dcf/dbd/fe2 0 2026-03-09T15:01:51.847 INFO:tasks.workunit.client.0.vm05.stdout:0/720: chown d9/de/d12/d15/d2e/d32/d53/d61/f62 377749722 1 2026-03-09T15:01:51.851 INFO:tasks.workunit.client.0.vm05.stdout:1/767: dwrite d9/d17/f22 [0,4194304] 0 2026-03-09T15:01:51.853 INFO:tasks.workunit.client.0.vm05.stdout:1/768: write d9/d2f/d83/d98/d59/fd4 [1852478,87943] 0 2026-03-09T15:01:51.862 INFO:tasks.workunit.client.0.vm05.stdout:7/765: fdatasync d1/d12/f20 0 2026-03-09T15:01:51.863 INFO:tasks.workunit.client.0.vm05.stdout:9/834: write d2/d4e/f51 
[351230,90889] 0 2026-03-09T15:01:51.867 INFO:tasks.workunit.client.0.vm05.stdout:9/835: dread d2/d4e/d56/d53/f60 [0,4194304] 0 2026-03-09T15:01:51.869 INFO:tasks.workunit.client.0.vm05.stdout:9/836: dread d2/f46 [0,4194304] 0 2026-03-09T15:01:51.874 INFO:tasks.workunit.client.0.vm05.stdout:0/721: unlink d9/de/d25/dcf/c99 0 2026-03-09T15:01:51.874 INFO:tasks.workunit.client.0.vm05.stdout:0/722: chown d9/de/d25/d38/c45 26 1 2026-03-09T15:01:51.878 INFO:tasks.workunit.client.0.vm05.stdout:2/792: creat da/d29/d6a/da0/d91/dab/dd6/ff4 x:0 0 0 2026-03-09T15:01:51.878 INFO:tasks.workunit.client.0.vm05.stdout:2/793: fsync da/d29/d6a/da0/d7c/f80 0 2026-03-09T15:01:51.881 INFO:tasks.workunit.client.0.vm05.stdout:1/769: unlink d9/d2f/d83/d98/f4e 0 2026-03-09T15:01:51.882 INFO:tasks.workunit.client.0.vm05.stdout:1/770: readlink d9/d2f/d83/d98/d59/d49/d77/lf2 0 2026-03-09T15:01:51.887 INFO:tasks.workunit.client.0.vm05.stdout:7/766: creat d1/d9/d23/d31/d32/d78/ddd/f103 x:0 0 0 2026-03-09T15:01:51.887 INFO:tasks.workunit.client.0.vm05.stdout:7/767: chown d1/d9/l48 6668483 1 2026-03-09T15:01:51.893 INFO:tasks.workunit.client.0.vm05.stdout:5/839: dwrite d1/d4/d27/d75/d9c/fc4 [0,4194304] 0 2026-03-09T15:01:51.895 INFO:tasks.workunit.client.0.vm05.stdout:8/806: write d0/f47 [4287494,16067] 0 2026-03-09T15:01:51.895 INFO:tasks.workunit.client.0.vm05.stdout:0/723: mknod d9/de/d6a/ce3 0 2026-03-09T15:01:51.896 INFO:tasks.workunit.client.0.vm05.stdout:0/724: dread - d9/de/f7f zero size 2026-03-09T15:01:51.898 INFO:tasks.workunit.client.0.vm05.stdout:0/725: fsync d9/de/d12/d15/d2e/f76 0 2026-03-09T15:01:51.899 INFO:tasks.workunit.client.0.vm05.stdout:6/728: dwrite da/d17/d95/da2/fb6 [0,4194304] 0 2026-03-09T15:01:51.899 INFO:tasks.workunit.client.0.vm05.stdout:0/726: fsync d9/de/d12/d15/d2e/d32/f7d 0 2026-03-09T15:01:51.907 INFO:tasks.workunit.client.0.vm05.stdout:6/729: write da/d17/d95/da2/fc5 [2684310,19283] 0 2026-03-09T15:01:51.910 INFO:tasks.workunit.client.0.vm05.stdout:0/727: 
chown d9/de/d12/d15/d2e/d6b/l6d 11921 1 2026-03-09T15:01:51.954 INFO:tasks.workunit.client.0.vm05.stdout:4/793: link d2/f1b d2/d4/d7/f105 0 2026-03-09T15:01:51.956 INFO:tasks.workunit.client.0.vm05.stdout:9/837: dwrite d2/d10/d22/fb6 [0,4194304] 0 2026-03-09T15:01:51.957 INFO:tasks.workunit.client.0.vm05.stdout:9/838: write d2/d4e/f10f [403292,68358] 0 2026-03-09T15:01:51.964 INFO:tasks.workunit.client.0.vm05.stdout:7/768: unlink d1/d9/d23/d31/d32/d78/dbb/lcf 0 2026-03-09T15:01:51.976 INFO:tasks.workunit.client.0.vm05.stdout:3/760: rename d3/df/d1e/d2c/d74/d9b/cda to d3/df/d10/d19/cff 0 2026-03-09T15:01:51.977 INFO:tasks.workunit.client.0.vm05.stdout:3/761: chown d3/df/d10/d19/dce/dc8/de2/d8c/dbd/db3/lb6 7142 1 2026-03-09T15:01:51.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:51 vm05.local ceph-mon[50611]: Upgrade: Updating mgr.vm05.lhsexd 2026-03-09T15:01:51.995 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:51 vm05.local ceph-mon[50611]: Deploying daemon mgr.vm05.lhsexd on vm05 2026-03-09T15:01:51.997 INFO:tasks.workunit.client.0.vm05.stdout:5/840: mknod d1/d4/d34/c11d 0 2026-03-09T15:01:52.016 INFO:tasks.workunit.client.0.vm05.stdout:6/730: read da/d43/f68 [112473,48935] 0 2026-03-09T15:01:52.018 INFO:tasks.workunit.client.0.vm05.stdout:0/728: creat d9/de/d6a/fe4 x:0 0 0 2026-03-09T15:01:52.022 INFO:tasks.workunit.client.0.vm05.stdout:2/794: creat da/d29/d6a/da0/d91/dab/d9c/dbf/dc8/ff5 x:0 0 0 2026-03-09T15:01:52.037 INFO:tasks.workunit.client.0.vm05.stdout:7/769: symlink d1/d9/d23/d31/d8f/d93/dbd/l104 0 2026-03-09T15:01:52.041 INFO:tasks.workunit.client.0.vm05.stdout:8/807: creat d0/d1/d12/d1b/d95/d78/dea/f109 x:0 0 0 2026-03-09T15:01:52.042 INFO:tasks.workunit.client.0.vm05.stdout:7/770: chown d1/d22/d3c/c9a 8993335 1 2026-03-09T15:01:52.042 INFO:tasks.workunit.client.0.vm05.stdout:1/771: dread d9/d2f/d37/d5f/f80 [0,4194304] 0 2026-03-09T15:01:52.049 INFO:tasks.workunit.client.0.vm05.stdout:3/762: dwrite d3/d29/d2d/f31 [0,4194304] 0 
2026-03-09T15:01:52.057 INFO:tasks.workunit.client.0.vm05.stdout:4/794: truncate d2/d1d/fd0 2505659 0 2026-03-09T15:01:52.058 INFO:tasks.workunit.client.0.vm05.stdout:4/795: fdatasync d2/d4/d7/f2d 0 2026-03-09T15:01:52.058 INFO:tasks.workunit.client.0.vm05.stdout:3/763: chown d3/df/d10/d19/dce/dc8/de2/d8c/dbd/db3/lb6 3488781 1 2026-03-09T15:01:52.059 INFO:tasks.workunit.client.0.vm05.stdout:4/796: stat d2/d4/l82 0 2026-03-09T15:01:52.060 INFO:tasks.workunit.client.0.vm05.stdout:3/764: chown d3/df/d1e/d2f/d52/f61 7203474 1 2026-03-09T15:01:52.061 INFO:tasks.workunit.client.0.vm05.stdout:3/765: stat d3/df/c1d 0 2026-03-09T15:01:52.066 INFO:tasks.workunit.client.0.vm05.stdout:4/797: dread d2/d4/d1e/da2/dec/ff9 [0,4194304] 0 2026-03-09T15:01:52.067 INFO:tasks.workunit.client.0.vm05.stdout:6/731: dread da/d17/f1d [4194304,4194304] 0 2026-03-09T15:01:52.070 INFO:tasks.workunit.client.0.vm05.stdout:5/841: dread - d1/d4/d34/d35/ff7 zero size 2026-03-09T15:01:52.070 INFO:tasks.workunit.client.0.vm05.stdout:6/732: write da/d17/d7c/dc6/fd0 [107134,46025] 0 2026-03-09T15:01:52.074 INFO:tasks.workunit.client.0.vm05.stdout:0/729: mkdir d9/de/d12/d15/d2e/d32/d9f/da0/db7/de5 0 2026-03-09T15:01:52.076 INFO:tasks.workunit.client.0.vm05.stdout:7/771: mkdir d1/d22/d3c/d105 0 2026-03-09T15:01:52.077 INFO:tasks.workunit.client.0.vm05.stdout:0/730: write d9/de/d6a/db5/fd8 [3698082,89709] 0 2026-03-09T15:01:52.078 INFO:tasks.workunit.client.0.vm05.stdout:0/731: chown d9/de/d12/da3 745286 1 2026-03-09T15:01:52.080 INFO:tasks.workunit.client.0.vm05.stdout:0/732: truncate d9/de/d12/da3/fb2 1056951 0 2026-03-09T15:01:52.082 INFO:tasks.workunit.client.0.vm05.stdout:0/733: dread - d9/de/d12/d15/f9e zero size 2026-03-09T15:01:52.093 INFO:tasks.workunit.client.0.vm05.stdout:2/795: symlink da/d29/d6a/da0/lf6 0 2026-03-09T15:01:52.094 INFO:tasks.workunit.client.0.vm05.stdout:6/733: dread da/f16 [0,4194304] 0 2026-03-09T15:01:52.095 INFO:tasks.workunit.client.0.vm05.stdout:3/766: mknod 
d3/d29/d2d/d77/c100 0 2026-03-09T15:01:52.106 INFO:tasks.workunit.client.0.vm05.stdout:5/842: fdatasync d1/f5e 0 2026-03-09T15:01:52.110 INFO:tasks.workunit.client.0.vm05.stdout:1/772: dwrite d9/d2f/d37/d5a/da9/dc9/dcd/f6f [0,4194304] 0 2026-03-09T15:01:52.119 INFO:tasks.workunit.client.0.vm05.stdout:7/772: creat d1/d9/d23/d31/d51/f106 x:0 0 0 2026-03-09T15:01:52.121 INFO:tasks.workunit.client.0.vm05.stdout:6/734: symlink da/d17/le2 0 2026-03-09T15:01:52.126 INFO:tasks.workunit.client.0.vm05.stdout:9/839: getdents d2/da9 0 2026-03-09T15:01:52.130 INFO:tasks.workunit.client.0.vm05.stdout:7/773: mkdir d1/d9/d23/d31/d8f/d93/dbd/d107 0 2026-03-09T15:01:52.131 INFO:tasks.workunit.client.0.vm05.stdout:6/735: rmdir da/d17/d95/da2 39 2026-03-09T15:01:52.134 INFO:tasks.workunit.client.0.vm05.stdout:6/736: dread da/d17/f2d [0,4194304] 0 2026-03-09T15:01:52.134 INFO:tasks.workunit.client.0.vm05.stdout:6/737: chown da/d43 1 1 2026-03-09T15:01:52.134 INFO:tasks.workunit.client.0.vm05.stdout:6/738: readlink l2 0 2026-03-09T15:01:52.136 INFO:tasks.workunit.client.0.vm05.stdout:0/734: sync 2026-03-09T15:01:52.136 INFO:tasks.workunit.client.0.vm05.stdout:5/843: sync 2026-03-09T15:01:52.139 INFO:tasks.workunit.client.0.vm05.stdout:1/773: symlink d9/d2f/d37/d5a/da9/dc9/lff 0 2026-03-09T15:01:52.144 INFO:tasks.workunit.client.0.vm05.stdout:4/798: write d2/d4/d7/f7b [4390622,54874] 0 2026-03-09T15:01:52.145 INFO:tasks.workunit.client.0.vm05.stdout:7/774: dread d1/d22/d3c/fba [0,4194304] 0 2026-03-09T15:01:52.147 INFO:tasks.workunit.client.0.vm05.stdout:8/808: write d0/d1/d12/d1b/d6e/d93/d9f/fbf [996499,110864] 0 2026-03-09T15:01:52.150 INFO:tasks.workunit.client.0.vm05.stdout:4/799: sync 2026-03-09T15:01:52.152 INFO:tasks.workunit.client.0.vm05.stdout:3/767: creat d3/df/d10/d19/d44/f101 x:0 0 0 2026-03-09T15:01:52.156 INFO:tasks.workunit.client.0.vm05.stdout:5/844: creat d1/d4/d19/d93/f11e x:0 0 0 2026-03-09T15:01:52.158 INFO:tasks.workunit.client.0.vm05.stdout:0/735: rename 
d9/de/d6a/db5 to d9/de/d25/dae/de6 0 2026-03-09T15:01:52.163 INFO:tasks.workunit.client.0.vm05.stdout:2/796: getdents da/d29/d6a/db1 0 2026-03-09T15:01:52.169 INFO:tasks.workunit.client.0.vm05.stdout:9/840: dwrite d2/d10/d22/d47/dc4/f111 [0,4194304] 0 2026-03-09T15:01:52.189 INFO:tasks.workunit.client.0.vm05.stdout:5/845: read - d1/da/fb9 zero size 2026-03-09T15:01:52.189 INFO:tasks.workunit.client.0.vm05.stdout:0/736: creat d9/de/d25/dae/de6/fe7 x:0 0 0 2026-03-09T15:01:52.190 INFO:tasks.workunit.client.0.vm05.stdout:0/737: readlink d9/de/d25/dae/de6/lad 0 2026-03-09T15:01:52.215 INFO:tasks.workunit.client.0.vm05.stdout:6/739: link da/d17/d3b/l3e da/d19/dd7/le3 0 2026-03-09T15:01:52.226 INFO:tasks.workunit.client.0.vm05.stdout:6/740: dread da/d17/d3b/fb2 [0,4194304] 0 2026-03-09T15:01:52.232 INFO:tasks.workunit.client.0.vm05.stdout:5/846: creat d1/d4/d34/d35/d3d/f11f x:0 0 0 2026-03-09T15:01:52.235 INFO:tasks.workunit.client.0.vm05.stdout:0/738: rmdir d9/de/d12/d8a/dc3 39 2026-03-09T15:01:52.241 INFO:tasks.workunit.client.0.vm05.stdout:7/775: write d1/d9/d23/d31/d8f/d93/fae [4855391,118592] 0 2026-03-09T15:01:52.244 INFO:tasks.workunit.client.0.vm05.stdout:1/774: read - d9/d2f/d83/d98/d59/fe6 zero size 2026-03-09T15:01:52.248 INFO:tasks.workunit.client.0.vm05.stdout:8/809: dwrite d0/d1/d12/d1b/d95/d42/da1/db9/fcd [0,4194304] 0 2026-03-09T15:01:52.252 INFO:tasks.workunit.client.0.vm05.stdout:3/768: write d3/df/d10/d19/d44/f60 [296713,96366] 0 2026-03-09T15:01:52.256 INFO:tasks.workunit.client.0.vm05.stdout:2/797: mkdir da/d29/d6a/da0/d91/dd4/df7 0 2026-03-09T15:01:52.258 INFO:tasks.workunit.client.0.vm05.stdout:9/841: mkdir d2/d127 0 2026-03-09T15:01:52.258 INFO:tasks.workunit.client.0.vm05.stdout:9/842: stat d2/d8b/de3/c10a 0 2026-03-09T15:01:52.259 INFO:tasks.workunit.client.0.vm05.stdout:9/843: stat d2/f17 0 2026-03-09T15:01:52.266 INFO:tasks.workunit.client.0.vm05.stdout:5/847: rename d1/d4/d34/d35/d4e/d6f/fa3 to d1/d4/d34/d56/d68/f120 0 2026-03-09T15:01:52.267 
INFO:tasks.workunit.client.0.vm05.stdout:5/848: chown d1/d5d/la7 908 1 2026-03-09T15:01:52.276 INFO:tasks.workunit.client.0.vm05.stdout:0/739: dread d9/de/f3e [0,4194304] 0 2026-03-09T15:01:52.278 INFO:tasks.workunit.client.0.vm05.stdout:0/740: chown d9/de/d12/d15/d2e/d32/d53/d61/f62 44336216 1 2026-03-09T15:01:52.278 INFO:tasks.workunit.client.0.vm05.stdout:0/741: stat d9 0 2026-03-09T15:01:52.278 INFO:tasks.workunit.client.0.vm05.stdout:0/742: stat d9/de/d25/dcf 0 2026-03-09T15:01:52.279 INFO:tasks.workunit.client.0.vm05.stdout:0/743: chown d9/f22 305563910 1 2026-03-09T15:01:52.283 INFO:tasks.workunit.client.0.vm05.stdout:8/810: creat d0/d1/d12/d1b/d95/d42/d60/da7/f10a x:0 0 0 2026-03-09T15:01:52.286 INFO:tasks.workunit.client.0.vm05.stdout:3/769: mkdir d3/df/d10/d19/db2/d102 0 2026-03-09T15:01:52.294 INFO:tasks.workunit.client.0.vm05.stdout:4/800: getdents d2/d4/d1e/da2/dc5 0 2026-03-09T15:01:52.295 INFO:tasks.workunit.client.0.vm05.stdout:2/798: write da/dd/f9e [3488728,55052] 0 2026-03-09T15:01:52.297 INFO:tasks.workunit.client.0.vm05.stdout:2/799: dread - da/d29/d6a/da0/d7c/fe1 zero size 2026-03-09T15:01:52.309 INFO:tasks.workunit.client.0.vm05.stdout:9/844: unlink d2/d10/d22/d2c/c4d 0 2026-03-09T15:01:52.335 INFO:tasks.workunit.client.0.vm05.stdout:9/845: truncate d2/d4e/d56/d53/d64/ded/d9c/d8e/fe4 826699 0 2026-03-09T15:01:52.339 INFO:tasks.workunit.client.0.vm05.stdout:9/846: dwrite d2/d4e/d56/d53/f66 [0,4194304] 0 2026-03-09T15:01:52.351 INFO:tasks.workunit.client.0.vm05.stdout:7/776: rename d1/d9/d23/d31/d51/df9/f101 to d1/d9/d23/d31/d8f/d93/f108 0 2026-03-09T15:01:52.358 INFO:tasks.workunit.client.0.vm05.stdout:2/800: dwrite da/fa2 [0,4194304] 0 2026-03-09T15:01:52.363 INFO:tasks.workunit.client.0.vm05.stdout:5/849: mkdir d1/d4/d34/d35/d121 0 2026-03-09T15:01:52.363 INFO:tasks.workunit.client.0.vm05.stdout:5/850: stat d1/d4/d27/l2c 0 2026-03-09T15:01:52.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:51 vm09.local ceph-mon[59673]: Upgrade: 
Updating mgr.vm05.lhsexd 2026-03-09T15:01:52.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:51 vm09.local ceph-mon[59673]: Deploying daemon mgr.vm05.lhsexd on vm05 2026-03-09T15:01:52.372 INFO:tasks.workunit.client.0.vm05.stdout:5/851: dread d1/d4/d34/fe2 [0,4194304] 0 2026-03-09T15:01:52.373 INFO:tasks.workunit.client.0.vm05.stdout:0/744: mkdir d9/de/d12/d15/d2e/d32/d74/de8 0 2026-03-09T15:01:52.382 INFO:tasks.workunit.client.0.vm05.stdout:4/801: mkdir d2/d4/d7/d48/df0/d106 0 2026-03-09T15:01:52.383 INFO:tasks.workunit.client.0.vm05.stdout:4/802: chown d2/d1d/dfa/dfb/l100 27377441 1 2026-03-09T15:01:52.385 INFO:tasks.workunit.client.0.vm05.stdout:6/741: getdents da/d17/d7c 0 2026-03-09T15:01:52.388 INFO:tasks.workunit.client.0.vm05.stdout:9/847: creat d2/d4e/d56/d53/d64/ded/d9c/d8e/f128 x:0 0 0 2026-03-09T15:01:52.396 INFO:tasks.workunit.client.0.vm05.stdout:8/811: write d0/f4 [914382,54861] 0 2026-03-09T15:01:52.397 INFO:tasks.workunit.client.0.vm05.stdout:3/770: write d3/df/d1e/d2f/d52/f93 [662785,2117] 0 2026-03-09T15:01:52.403 INFO:tasks.workunit.client.0.vm05.stdout:2/801: creat da/d29/d6a/da0/d91/dab/d9c/dd3/ff8 x:0 0 0 2026-03-09T15:01:52.404 INFO:tasks.workunit.client.0.vm05.stdout:1/775: getdents d9/d2f/d83/d98/d59/d49/d78/dbd 0 2026-03-09T15:01:52.406 INFO:tasks.workunit.client.0.vm05.stdout:5/852: mknod d1/d4/d34/d35/d4e/d6f/d7e/c122 0 2026-03-09T15:01:52.407 INFO:tasks.workunit.client.0.vm05.stdout:5/853: readlink d1/d4/d34/d35/d3d/d38/d63/lfd 0 2026-03-09T15:01:52.409 INFO:tasks.workunit.client.0.vm05.stdout:0/745: mkdir d9/de/d12/d15/d2e/d32/d74/de9 0 2026-03-09T15:01:52.414 INFO:tasks.workunit.client.0.vm05.stdout:4/803: truncate d2/d4/d7/dc/f8e 1267095 0 2026-03-09T15:01:52.414 INFO:tasks.workunit.client.0.vm05.stdout:4/804: chown d2/d1d/df5/d97 4821330 1 2026-03-09T15:01:52.415 INFO:tasks.workunit.client.0.vm05.stdout:4/805: write d2/d43/f51 [1886650,81444] 0 2026-03-09T15:01:52.416 INFO:tasks.workunit.client.0.vm05.stdout:9/848: creat 
d2/d4e/d56/d53/d64/ded/d9c/ddd/f129 x:0 0 0 2026-03-09T15:01:52.422 INFO:tasks.workunit.client.0.vm05.stdout:7/777: rename d1/d9/d23/d31/d51/df9 to d1/d9/d23/d31/d32/d78/d7e/d81/dcd/d109 0 2026-03-09T15:01:52.428 INFO:tasks.workunit.client.0.vm05.stdout:3/771: stat d3/df/d10/d19/cff 0 2026-03-09T15:01:52.431 INFO:tasks.workunit.client.0.vm05.stdout:7/778: dread d1/d22/d3c/fa2 [0,4194304] 0 2026-03-09T15:01:52.432 INFO:tasks.workunit.client.0.vm05.stdout:2/802: creat da/d29/d6a/db1/db7/ff9 x:0 0 0 2026-03-09T15:01:52.433 INFO:tasks.workunit.client.0.vm05.stdout:2/803: write da/dd/ff [558644,48277] 0 2026-03-09T15:01:52.436 INFO:tasks.workunit.client.0.vm05.stdout:4/806: sync 2026-03-09T15:01:52.440 INFO:tasks.workunit.client.0.vm05.stdout:4/807: dwrite d2/d4/d7/f90 [0,4194304] 0 2026-03-09T15:01:52.442 INFO:tasks.workunit.client.0.vm05.stdout:4/808: fdatasync d2/d4/d8/d4a/d94/ff3 0 2026-03-09T15:01:52.447 INFO:tasks.workunit.client.0.vm05.stdout:8/812: write d0/d1/f7f [4184121,130582] 0 2026-03-09T15:01:52.453 INFO:tasks.workunit.client.0.vm05.stdout:1/776: dwrite d9/d2f/d37/d5a/da9/dc9/dcd/f25 [0,4194304] 0 2026-03-09T15:01:52.459 INFO:tasks.workunit.client.0.vm05.stdout:5/854: dwrite d1/d4/d34/d35/d4e/dc8/fec [0,4194304] 0 2026-03-09T15:01:52.462 INFO:tasks.workunit.client.0.vm05.stdout:5/855: chown d1/d4/d34/d35/d4e/c95 330080714 1 2026-03-09T15:01:52.469 INFO:tasks.workunit.client.0.vm05.stdout:4/809: dread d2/d1d/f7d [0,4194304] 0 2026-03-09T15:01:52.492 INFO:tasks.workunit.client.0.vm05.stdout:7/779: dread - d1/de4/ffa zero size 2026-03-09T15:01:52.506 INFO:tasks.workunit.client.0.vm05.stdout:1/777: rename d9/d2f/d83/d98/d59/d49/d4b/c74 to d9/d2f/d83/d98/d59/d49/d77/c100 0 2026-03-09T15:01:52.510 INFO:tasks.workunit.client.0.vm05.stdout:1/778: dwrite d9/d2f/d37/d5a/da9/dc9/dcd/f96 [0,4194304] 0 2026-03-09T15:01:52.516 INFO:tasks.workunit.client.0.vm05.stdout:0/746: write d9/de/d12/da3/dbc/fc4 [615672,22584] 0 2026-03-09T15:01:52.521 
INFO:tasks.workunit.client.0.vm05.stdout:9/849: dwrite d2/d10/d22/d47/d73/f90 [0,4194304] 0 2026-03-09T15:01:52.526 INFO:tasks.workunit.client.0.vm05.stdout:4/810: creat d2/d49/f107 x:0 0 0 2026-03-09T15:01:52.531 INFO:tasks.workunit.client.0.vm05.stdout:2/804: mkdir da/d29/d6a/da0/d91/dab/d2f/db3/deb/dfa 0 2026-03-09T15:01:52.531 INFO:tasks.workunit.client.0.vm05.stdout:2/805: write da/d29/d64/da6/fb4 [524953,82948] 0 2026-03-09T15:01:52.534 INFO:tasks.workunit.client.0.vm05.stdout:2/806: dwrite da/d29/d6a/da0/d91/dab/d2f/f93 [0,4194304] 0 2026-03-09T15:01:52.544 INFO:tasks.workunit.client.0.vm05.stdout:5/856: rename d1/l1e to d1/d4/d27/d75/d9c/l123 0 2026-03-09T15:01:52.552 INFO:tasks.workunit.client.0.vm05.stdout:0/747: unlink d9/de/d6a/ce3 0 2026-03-09T15:01:52.552 INFO:tasks.workunit.client.0.vm05.stdout:6/742: getdents da/d43/d7b 0 2026-03-09T15:01:52.561 INFO:tasks.workunit.client.0.vm05.stdout:8/813: link d0/d1/d12/d1b/l38 d0/d1/d12/d1b/d66/dcc/l10b 0 2026-03-09T15:01:52.561 INFO:tasks.workunit.client.0.vm05.stdout:7/780: rename d1/d12/l2d to d1/d9/d23/d31/d8f/d93/dbd/dd1/l10a 0 2026-03-09T15:01:52.569 INFO:tasks.workunit.client.0.vm05.stdout:6/743: creat da/d17/fe4 x:0 0 0 2026-03-09T15:01:52.569 INFO:tasks.workunit.client.0.vm05.stdout:3/772: getdents d3/d29/d2d/d77/d4d 0 2026-03-09T15:01:52.570 INFO:tasks.workunit.client.0.vm05.stdout:6/744: stat da/d43/d7b/f9f 0 2026-03-09T15:01:52.572 INFO:tasks.workunit.client.0.vm05.stdout:1/779: read d9/d2f/d37/d5f/f73 [284284,23930] 0 2026-03-09T15:01:52.577 INFO:tasks.workunit.client.0.vm05.stdout:5/857: write d1/d4/d34/dc0/fd8 [645475,117081] 0 2026-03-09T15:01:52.583 INFO:tasks.workunit.client.0.vm05.stdout:0/748: dread d9/f2b [4194304,4194304] 0 2026-03-09T15:01:52.586 INFO:tasks.workunit.client.0.vm05.stdout:8/814: write d0/d1/d12/d1b/d95/f41 [2297144,130471] 0 2026-03-09T15:01:52.587 INFO:tasks.workunit.client.0.vm05.stdout:8/815: symlink d0/d1/d12/d1b/d95/d42/d60/d73/l10c 2 2026-03-09T15:01:52.590 
INFO:tasks.workunit.client.0.vm05.stdout:7/781: rmdir d1/d9/d23/d31/d32/d78/ddd/def 39 2026-03-09T15:01:52.590 INFO:tasks.workunit.client.0.vm05.stdout:7/782: dread - d1/de4/fe7 zero size 2026-03-09T15:01:52.591 INFO:tasks.workunit.client.0.vm05.stdout:7/783: readlink d1/d9/d23/d31/d8f/ld5 0 2026-03-09T15:01:52.598 INFO:tasks.workunit.client.0.vm05.stdout:9/850: creat d2/d10/d22/d2c/d69/d5a/f12a x:0 0 0 2026-03-09T15:01:52.599 INFO:tasks.workunit.client.0.vm05.stdout:4/811: link d2/d49/f56 d2/d4/d7/d48/d6b/dd3/f108 0 2026-03-09T15:01:52.605 INFO:tasks.workunit.client.0.vm05.stdout:3/773: fsync d3/df/d10/d19/dce/dc8/de2/f48 0 2026-03-09T15:01:52.607 INFO:tasks.workunit.client.0.vm05.stdout:2/807: rename da/d29/d6a/da0/f84 to da/d29/d3f/ffb 0 2026-03-09T15:01:52.609 INFO:tasks.workunit.client.0.vm05.stdout:0/749: rmdir d9/de/d12/d15/d2e/d32/d53 39 2026-03-09T15:01:52.615 INFO:tasks.workunit.client.0.vm05.stdout:8/816: creat d0/d1/d12/d1b/d95/d54/f10d x:0 0 0 2026-03-09T15:01:52.622 INFO:tasks.workunit.client.0.vm05.stdout:4/812: creat d2/d4/d1e/da2/f109 x:0 0 0 2026-03-09T15:01:52.625 INFO:tasks.workunit.client.0.vm05.stdout:7/784: dwrite d1/d9/fc [0,4194304] 0 2026-03-09T15:01:52.627 INFO:tasks.workunit.client.0.vm05.stdout:6/745: mknod da/d17/d95/da2/ce5 0 2026-03-09T15:01:52.630 INFO:tasks.workunit.client.0.vm05.stdout:0/750: sync 2026-03-09T15:01:52.645 INFO:tasks.workunit.client.0.vm05.stdout:3/774: fdatasync d3/df/d10/fae 0 2026-03-09T15:01:52.645 INFO:tasks.workunit.client.0.vm05.stdout:0/751: dread d9/de/d12/da3/dbc/fc4 [0,4194304] 0 2026-03-09T15:01:52.645 INFO:tasks.workunit.client.0.vm05.stdout:1/780: rename d9/d2f/d83/d98/d59/d49/d78/dcc to d9/d2f/d37/d101 0 2026-03-09T15:01:52.646 INFO:tasks.workunit.client.0.vm05.stdout:3/775: write d3/df/d1e/d2c/d74/ff9 [896042,94473] 0 2026-03-09T15:01:52.646 INFO:tasks.workunit.client.0.vm05.stdout:3/776: readlink d3/df/d1e/daf/le7 0 2026-03-09T15:01:52.647 INFO:tasks.workunit.client.0.vm05.stdout:3/777: stat 
d3/df/d10/c81 0 2026-03-09T15:01:52.690 INFO:tasks.workunit.client.0.vm05.stdout:4/813: creat d2/d4/d1e/da2/dc5/f10a x:0 0 0 2026-03-09T15:01:52.690 INFO:tasks.workunit.client.0.vm05.stdout:8/817: write d0/d1/d12/d1b/f27 [5044749,84468] 0 2026-03-09T15:01:52.690 INFO:tasks.workunit.client.0.vm05.stdout:7/785: fsync d1/d9/f52 0 2026-03-09T15:01:52.696 INFO:tasks.workunit.client.0.vm05.stdout:6/746: mknod da/d43/d7b/d89/ce6 0 2026-03-09T15:01:52.696 INFO:tasks.workunit.client.0.vm05.stdout:9/851: dwrite d2/d10/d22/d47/d73/f81 [0,4194304] 0 2026-03-09T15:01:52.697 INFO:tasks.workunit.client.0.vm05.stdout:9/852: readlink d2/d10/d22/d47/la6 0 2026-03-09T15:01:52.703 INFO:tasks.workunit.client.0.vm05.stdout:9/853: chown d2/d10/d22/d52/lf4 50778 1 2026-03-09T15:01:52.723 INFO:tasks.workunit.client.0.vm05.stdout:2/808: mknod da/cfc 0 2026-03-09T15:01:52.735 INFO:tasks.workunit.client.0.vm05.stdout:8/818: creat d0/dc/f10e x:0 0 0 2026-03-09T15:01:52.738 INFO:tasks.workunit.client.0.vm05.stdout:7/786: unlink d1/d12/cf 0 2026-03-09T15:01:52.740 INFO:tasks.workunit.client.0.vm05.stdout:3/778: dread d3/d29/d7f/f83 [0,4194304] 0 2026-03-09T15:01:52.741 INFO:tasks.workunit.client.0.vm05.stdout:4/814: creat d2/d4/d50/d8a/f10b x:0 0 0 2026-03-09T15:01:52.742 INFO:tasks.workunit.client.0.vm05.stdout:6/747: creat da/d43/d7b/da9/fe7 x:0 0 0 2026-03-09T15:01:52.750 INFO:tasks.workunit.client.0.vm05.stdout:8/819: dread d0/d1/d12/d3c/d8b/f106 [0,4194304] 0 2026-03-09T15:01:52.757 INFO:tasks.workunit.client.0.vm05.stdout:9/854: write d2/f13 [741425,86159] 0 2026-03-09T15:01:52.766 INFO:tasks.workunit.client.0.vm05.stdout:1/781: fsync d9/f12 0 2026-03-09T15:01:52.766 INFO:tasks.workunit.client.0.vm05.stdout:5/858: rename d1/d4/d34/d35/d3d/d38/f8a to d1/d4/d34/d56/da6/f124 0 2026-03-09T15:01:52.773 INFO:tasks.workunit.client.0.vm05.stdout:7/787: fsync d1/d9/d23/d31/d8f/d93/fb8 0 2026-03-09T15:01:52.782 INFO:tasks.workunit.client.0.vm05.stdout:4/815: write d2/d4/d8/f13 [2358751,74547] 0 
2026-03-09T15:01:52.783 INFO:tasks.workunit.client.0.vm05.stdout:4/816: stat d2/d4/d1e/da2/dec/f61 0 2026-03-09T15:01:52.784 INFO:tasks.workunit.client.0.vm05.stdout:3/779: dwrite d3/df/f11 [0,4194304] 0 2026-03-09T15:01:52.788 INFO:tasks.workunit.client.0.vm05.stdout:7/788: dwrite d1/d22/da4/ff3 [0,4194304] 0 2026-03-09T15:01:52.800 INFO:tasks.workunit.client.0.vm05.stdout:4/817: dwrite d2/d49/f107 [0,4194304] 0 2026-03-09T15:01:52.804 INFO:tasks.workunit.client.0.vm05.stdout:8/820: rmdir d0/d1 39 2026-03-09T15:01:52.816 INFO:tasks.workunit.client.0.vm05.stdout:0/752: rename d9/de/d12/d8a/dc3/fd0 to d9/de/d12/d15/ddf/fea 0 2026-03-09T15:01:52.821 INFO:tasks.workunit.client.0.vm05.stdout:6/748: truncate da/f1a 2063800 0 2026-03-09T15:01:52.825 INFO:tasks.workunit.client.0.vm05.stdout:5/859: creat d1/d4/d34/d35/d3d/dde/f125 x:0 0 0 2026-03-09T15:01:52.826 INFO:tasks.workunit.client.0.vm05.stdout:6/749: write da/d17/d3b/f85 [2795129,125050] 0 2026-03-09T15:01:52.826 INFO:tasks.workunit.client.0.vm05.stdout:4/818: rmdir d2/d4/d7/d48/d6b/ddb 39 2026-03-09T15:01:52.826 INFO:tasks.workunit.client.0.vm05.stdout:8/821: chown d0/d1/de2/c101 68640 1 2026-03-09T15:01:52.835 INFO:tasks.workunit.client.0.vm05.stdout:9/855: creat d2/d10/d22/d9f/d119/f12b x:0 0 0 2026-03-09T15:01:52.838 INFO:tasks.workunit.client.0.vm05.stdout:2/809: mkdir da/d29/d6a/da0/dd9/dfd 0 2026-03-09T15:01:52.847 INFO:tasks.workunit.client.0.vm05.stdout:1/782: dwrite d9/d2f/f43 [0,4194304] 0 2026-03-09T15:01:52.859 INFO:tasks.workunit.client.0.vm05.stdout:0/753: write d9/de/d12/da3/fa4 [377382,128199] 0 2026-03-09T15:01:52.860 INFO:tasks.workunit.client.0.vm05.stdout:1/783: dwrite d9/d17/f81 [0,4194304] 0 2026-03-09T15:01:52.883 INFO:tasks.workunit.client.0.vm05.stdout:5/860: mknod d1/d4/d34/d35/d3d/dde/c126 0 2026-03-09T15:01:52.892 INFO:tasks.workunit.client.0.vm05.stdout:9/856: unlink d2/d10/d22/d47/d95/f9d 0 2026-03-09T15:01:52.897 INFO:tasks.workunit.client.0.vm05.stdout:7/789: write d1/d12/fe9 
[1303405,87973] 0 2026-03-09T15:01:52.900 INFO:tasks.workunit.client.0.vm05.stdout:2/810: truncate da/d29/d6a/da0/d91/dab/fbd 4803583 0 2026-03-09T15:01:52.906 INFO:tasks.workunit.client.0.vm05.stdout:8/822: dwrite d0/d1/d12/d1b/d95/d42/da1/fcb [0,4194304] 0 2026-03-09T15:01:52.911 INFO:tasks.workunit.client.0.vm05.stdout:8/823: readlink d0/d1/d12/d1b/d95/dd7/la4 0 2026-03-09T15:01:52.912 INFO:tasks.workunit.client.0.vm05.stdout:8/824: dread - d0/d1/d12/d1b/d95/d42/d60/d73/f107 zero size 2026-03-09T15:01:52.940 INFO:tasks.workunit.client.0.vm05.stdout:0/754: dread d9/de/d25/dcf/f8c [0,4194304] 0 2026-03-09T15:01:52.960 INFO:tasks.workunit.client.0.vm05.stdout:6/750: dwrite da/d19/f7e [0,4194304] 0 2026-03-09T15:01:52.965 INFO:tasks.workunit.client.0.vm05.stdout:6/751: dread - da/d17/d3b/fde zero size 2026-03-09T15:01:52.968 INFO:tasks.workunit.client.0.vm05.stdout:6/752: write da/d43/d7b/da9/db7/fd2 [295851,37190] 0 2026-03-09T15:01:52.989 INFO:tasks.workunit.client.0.vm05.stdout:9/857: fdatasync d2/d4e/d56/d53/d64/ded/f36 0 2026-03-09T15:01:53.000 INFO:tasks.workunit.client.0.vm05.stdout:7/790: dread - d1/d9/d72/fc4 zero size 2026-03-09T15:01:53.001 INFO:tasks.workunit.client.0.vm05.stdout:3/780: rename d3/lc to d3/df/d10/d19/l103 0 2026-03-09T15:01:53.004 INFO:tasks.workunit.client.0.vm05.stdout:3/781: chown d3/df/d1e/d2f/fbf 6 1 2026-03-09T15:01:53.004 INFO:tasks.workunit.client.0.vm05.stdout:8/825: read d0/d1/d12/d1b/d21/f65 [780008,122118] 0 2026-03-09T15:01:53.005 INFO:tasks.workunit.client.0.vm05.stdout:0/755: rmdir d9/de/d25/dae 39 2026-03-09T15:01:53.007 INFO:tasks.workunit.client.0.vm05.stdout:6/753: creat da/d43/d66/fe8 x:0 0 0 2026-03-09T15:01:53.008 INFO:tasks.workunit.client.0.vm05.stdout:7/791: read d1/fb0 [2434866,128615] 0 2026-03-09T15:01:53.014 INFO:tasks.workunit.client.0.vm05.stdout:8/826: dread d0/dc/f4a [0,4194304] 0 2026-03-09T15:01:53.023 INFO:tasks.workunit.client.0.vm05.stdout:8/827: dread d0/d1/d12/d1b/d95/f3e [0,4194304] 0 
2026-03-09T15:01:53.028 INFO:tasks.workunit.client.0.vm05.stdout:8/828: dwrite d0/d1/d12/d3c/d8b/ff9 [0,4194304] 0 2026-03-09T15:01:53.040 INFO:tasks.workunit.client.0.vm05.stdout:5/861: write d1/d4/d19/fab [784887,35137] 0 2026-03-09T15:01:53.044 INFO:tasks.workunit.client.0.vm05.stdout:0/756: mkdir d9/de/d25/d38/d41/deb 0 2026-03-09T15:01:53.051 INFO:tasks.workunit.client.0.vm05.stdout:3/782: truncate d3/df/d10/d19/dce/dc8/de2/d8c/dbd/fa4 410120 0 2026-03-09T15:01:53.051 INFO:tasks.workunit.client.0.vm05.stdout:2/811: dwrite da/d16/d46/f99 [0,4194304] 0 2026-03-09T15:01:53.060 INFO:tasks.workunit.client.0.vm05.stdout:8/829: symlink d0/d1/d12/d1b/d95/dd7/dd2/l10f 0 2026-03-09T15:01:53.061 INFO:tasks.workunit.client.0.vm05.stdout:8/830: truncate d0/d1/f7f 4591388 0 2026-03-09T15:01:53.063 INFO:tasks.workunit.client.0.vm05.stdout:4/819: rename d2/d4/d8/l73 to d2/d1d/d88/d92/l10c 0 2026-03-09T15:01:53.068 INFO:tasks.workunit.client.0.vm05.stdout:7/792: creat d1/d9/d23/d31/d8f/d93/dbd/d107/f10b x:0 0 0 2026-03-09T15:01:53.078 INFO:tasks.workunit.client.0.vm05.stdout:0/757: readlink d9/de/d12/d15/d2e/d32/d53/d61/l63 0 2026-03-09T15:01:53.089 INFO:tasks.workunit.client.0.vm05.stdout:3/783: dwrite d3/df/d10/d19/d44/f56 [0,4194304] 0 2026-03-09T15:01:53.094 INFO:tasks.workunit.client.0.vm05.stdout:6/754: symlink da/d43/d7b/de0/le9 0 2026-03-09T15:01:53.102 INFO:tasks.workunit.client.0.vm05.stdout:2/812: dread da/dd/f25 [4194304,4194304] 0 2026-03-09T15:01:53.102 INFO:tasks.workunit.client.0.vm05.stdout:2/813: readlink da/d29/d3f/lac 0 2026-03-09T15:01:53.106 INFO:tasks.workunit.client.0.vm05.stdout:8/831: mknod d0/d1/d12/d1b/d95/d42/da1/db9/c110 0 2026-03-09T15:01:53.115 INFO:tasks.workunit.client.0.vm05.stdout:1/784: rename d9/d2f/d37/l47 to d9/d2f/d83/d98/d59/d49/d92/d75/de4/l102 0 2026-03-09T15:01:53.116 INFO:tasks.workunit.client.0.vm05.stdout:4/820: symlink d2/d1d/df5/d97/l10d 0 2026-03-09T15:01:53.118 INFO:tasks.workunit.client.0.vm05.stdout:4/821: read 
d2/d4/d1e/da2/dec/d3d/f65 [2886853,129094] 0 2026-03-09T15:01:53.119 INFO:tasks.workunit.client.0.vm05.stdout:4/822: chown d2/d49/d69/lad 104516 1 2026-03-09T15:01:53.122 INFO:tasks.workunit.client.0.vm05.stdout:5/862: symlink d1/d4/d34/d35/d121/l127 0 2026-03-09T15:01:53.127 INFO:tasks.workunit.client.0.vm05.stdout:7/793: mkdir d1/d9/d72/d10c 0 2026-03-09T15:01:53.127 INFO:tasks.workunit.client.0.vm05.stdout:7/794: chown d1/de4 104092506 1 2026-03-09T15:01:53.130 INFO:tasks.workunit.client.0.vm05.stdout:0/758: mknod d9/d59/cec 0 2026-03-09T15:01:53.133 INFO:tasks.workunit.client.0.vm05.stdout:0/759: dwrite d9/de/d6a/fe4 [0,4194304] 0 2026-03-09T15:01:53.145 INFO:tasks.workunit.client.0.vm05.stdout:3/784: mknod d3/c104 0 2026-03-09T15:01:53.154 INFO:tasks.workunit.client.0.vm05.stdout:9/858: getdents d2/d4e/d56/d53/d64/ded/d9c/df0 0 2026-03-09T15:01:53.159 INFO:tasks.workunit.client.0.vm05.stdout:8/832: mknod d0/d1/d12/d1b/d66/c111 0 2026-03-09T15:01:53.160 INFO:tasks.workunit.client.0.vm05.stdout:8/833: chown d0/d7/c8d 4413 1 2026-03-09T15:01:53.204 INFO:tasks.workunit.client.0.vm05.stdout:1/785: dwrite d9/d2f/d55/fb0 [0,4194304] 0 2026-03-09T15:01:53.210 INFO:tasks.workunit.client.0.vm05.stdout:5/863: mkdir d1/d4/d34/d35/d3d/dde/d128 0 2026-03-09T15:01:53.224 INFO:tasks.workunit.client.0.vm05.stdout:7/795: creat d1/d9/d23/d31/d32/d78/d7e/d81/f10d x:0 0 0 2026-03-09T15:01:53.243 INFO:tasks.workunit.client.0.vm05.stdout:0/760: dwrite d9/de/d25/d38/f55 [0,4194304] 0 2026-03-09T15:01:53.250 INFO:tasks.workunit.client.0.vm05.stdout:3/785: write d3/df/d1e/f2b [2083437,33189] 0 2026-03-09T15:01:53.260 INFO:tasks.workunit.client.0.vm05.stdout:8/834: mkdir d0/d24/d112 0 2026-03-09T15:01:53.260 INFO:tasks.workunit.client.0.vm05.stdout:4/823: mknod d2/d4/c10e 0 2026-03-09T15:01:53.268 INFO:tasks.workunit.client.0.vm05.stdout:7/796: rmdir d1/d9/d23/d31/d32/d78 39 2026-03-09T15:01:53.269 INFO:tasks.workunit.client.0.vm05.stdout:2/814: getdents da/d29/d6a/db1 0 
2026-03-09T15:01:53.271 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:52 vm05.local ceph-mon[50611]: pgmap v9: 65 pgs: 65 active+clean; 2.5 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 28 MiB/s rd, 62 MiB/s wr, 171 op/s 2026-03-09T15:01:53.271 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:52 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:53.271 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:52 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:53.271 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:52 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:01:53.274 INFO:tasks.workunit.client.0.vm05.stdout:4/824: mknod d2/d4/d1e/d71/c10f 0 2026-03-09T15:01:53.292 INFO:tasks.workunit.client.0.vm05.stdout:4/825: write d2/d4/d7/f2d [4871089,850] 0 2026-03-09T15:01:53.292 INFO:tasks.workunit.client.0.vm05.stdout:6/755: getdents da/d17/d95/da2/dae 0 2026-03-09T15:01:53.292 INFO:tasks.workunit.client.0.vm05.stdout:4/826: fsync d2/d4/d50/fa7 0 2026-03-09T15:01:53.292 INFO:tasks.workunit.client.0.vm05.stdout:1/786: creat d9/f103 x:0 0 0 2026-03-09T15:01:53.292 INFO:tasks.workunit.client.0.vm05.stdout:6/756: creat da/d17/d7c/fea x:0 0 0 2026-03-09T15:01:53.292 INFO:tasks.workunit.client.0.vm05.stdout:7/797: symlink d1/d49/d4a/d77/l10e 0 2026-03-09T15:01:53.295 INFO:tasks.workunit.client.0.vm05.stdout:9/859: getdents d2/d10/d22/da0 0 2026-03-09T15:01:53.299 INFO:tasks.workunit.client.0.vm05.stdout:0/761: sync 2026-03-09T15:01:53.301 INFO:tasks.workunit.client.0.vm05.stdout:1/787: sync 2026-03-09T15:01:53.321 INFO:tasks.workunit.client.0.vm05.stdout:5/864: write d1/d5d/fbc [1895265,108243] 0 2026-03-09T15:01:53.321 INFO:tasks.workunit.client.0.vm05.stdout:2/815: write da/dd/f6f [4812784,729] 0 2026-03-09T15:01:53.323 
INFO:tasks.workunit.client.0.vm05.stdout:4/827: truncate d2/d4/d1e/da2/dec/dcb/fe3 3294422 0 2026-03-09T15:01:53.323 INFO:tasks.workunit.client.0.vm05.stdout:8/835: write d0/d1/d12/d1b/d95/d54/f85 [1864593,14231] 0 2026-03-09T15:01:53.324 INFO:tasks.workunit.client.0.vm05.stdout:6/757: symlink da/d43/d7b/d89/leb 0 2026-03-09T15:01:53.326 INFO:tasks.workunit.client.0.vm05.stdout:3/786: dwrite d3/d29/d2d/d77/f9e [0,4194304] 0 2026-03-09T15:01:53.332 INFO:tasks.workunit.client.0.vm05.stdout:7/798: mknod d1/d9/d23/d54/c10f 0 2026-03-09T15:01:53.347 INFO:tasks.workunit.client.0.vm05.stdout:2/816: dread da/d29/d6a/f81 [0,4194304] 0 2026-03-09T15:01:53.349 INFO:tasks.workunit.client.0.vm05.stdout:9/860: write d2/d4e/d56/d53/d64/ded/d9c/db2/ff9 [224039,25730] 0 2026-03-09T15:01:53.349 INFO:tasks.workunit.client.0.vm05.stdout:0/762: creat d9/d59/fed x:0 0 0 2026-03-09T15:01:53.352 INFO:tasks.workunit.client.0.vm05.stdout:2/817: sync 2026-03-09T15:01:53.354 INFO:tasks.workunit.client.0.vm05.stdout:0/763: dread d9/de/d12/da3/fa4 [0,4194304] 0 2026-03-09T15:01:53.356 INFO:tasks.workunit.client.0.vm05.stdout:6/758: dread da/d19/f5b [0,4194304] 0 2026-03-09T15:01:53.356 INFO:tasks.workunit.client.0.vm05.stdout:6/759: chown da/d17/d95 0 1 2026-03-09T15:01:53.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:52 vm09.local ceph-mon[59673]: pgmap v9: 65 pgs: 65 active+clean; 2.5 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 28 MiB/s rd, 62 MiB/s wr, 171 op/s 2026-03-09T15:01:53.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:52 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:53.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:52 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:53.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:52 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T15:01:53.369 INFO:tasks.workunit.client.0.vm05.stdout:5/865: creat d1/db5/f129 x:0 0 0 2026-03-09T15:01:53.370 INFO:tasks.workunit.client.0.vm05.stdout:4/828: unlink d2/d4/d1e/da2/dec/d3d/l8c 0 2026-03-09T15:01:53.372 INFO:tasks.workunit.client.0.vm05.stdout:7/799: mknod d1/d9/d23/d31/d32/ddc/c110 0 2026-03-09T15:01:53.373 INFO:tasks.workunit.client.0.vm05.stdout:4/829: write d2/d4/d7/f2d [4560272,130275] 0 2026-03-09T15:01:53.380 INFO:tasks.workunit.client.0.vm05.stdout:9/861: rmdir d2/d4e 39 2026-03-09T15:01:53.386 INFO:tasks.workunit.client.0.vm05.stdout:1/788: dwrite d9/d2f/d37/d5a/f5b [0,4194304] 0 2026-03-09T15:01:53.389 INFO:tasks.workunit.client.0.vm05.stdout:9/862: dread d2/d10/d22/d47/d73/f81 [0,4194304] 0 2026-03-09T15:01:53.392 INFO:tasks.workunit.client.0.vm05.stdout:1/789: chown d9/d2f/d37/d5a/da9/dc9/dcd/f96 7 1 2026-03-09T15:01:53.393 INFO:tasks.workunit.client.0.vm05.stdout:5/866: dread d1/d4/d34/d56/d68/f120 [0,4194304] 0 2026-03-09T15:01:53.403 INFO:tasks.workunit.client.0.vm05.stdout:2/818: write da/d29/d6a/fda [152131,4923] 0 2026-03-09T15:01:53.405 INFO:tasks.workunit.client.0.vm05.stdout:8/836: unlink d0/d1/d12/d1b/d6e/ldf 0 2026-03-09T15:01:53.405 INFO:tasks.workunit.client.0.vm05.stdout:2/819: chown da/d29/d64/fc5 3994 1 2026-03-09T15:01:53.405 INFO:tasks.workunit.client.0.vm05.stdout:3/787: creat d3/df/d10/d19/db2/d102/f105 x:0 0 0 2026-03-09T15:01:53.411 INFO:tasks.workunit.client.0.vm05.stdout:6/760: write da/d43/f72 [1162832,42921] 0 2026-03-09T15:01:53.411 INFO:tasks.workunit.client.0.vm05.stdout:7/800: mknod d1/d22/da4/c111 0 2026-03-09T15:01:53.412 INFO:tasks.workunit.client.0.vm05.stdout:6/761: fdatasync da/d19/f7e 0 2026-03-09T15:01:53.414 INFO:tasks.workunit.client.0.vm05.stdout:4/830: creat d2/d4/d1e/da2/dec/f110 x:0 0 0 2026-03-09T15:01:53.438 INFO:tasks.workunit.client.0.vm05.stdout:5/867: rmdir d1/d4/d34/d56/da6/d10d/d91 39 2026-03-09T15:01:53.440 
INFO:tasks.workunit.client.0.vm05.stdout:1/790: unlink d9/d2f/d83/d98/d59/d49/d77/la6 0 2026-03-09T15:01:53.443 INFO:tasks.workunit.client.0.vm05.stdout:1/791: fdatasync d9/d2f/d55/fce 0 2026-03-09T15:01:53.446 INFO:tasks.workunit.client.0.vm05.stdout:2/820: readlink da/d16/l97 0 2026-03-09T15:01:53.447 INFO:tasks.workunit.client.0.vm05.stdout:8/837: creat d0/d1/d12/d1b/d95/d42/d60/d73/f113 x:0 0 0 2026-03-09T15:01:53.447 INFO:tasks.workunit.client.0.vm05.stdout:3/788: mknod d3/df/d10/d19/dce/dc8/de2/d8c/d90/c106 0 2026-03-09T15:01:53.456 INFO:tasks.workunit.client.0.vm05.stdout:6/762: symlink da/d19/lec 0 2026-03-09T15:01:53.481 INFO:tasks.workunit.client.0.vm05.stdout:0/764: creat d9/de/d12/d15/d2e/fee x:0 0 0 2026-03-09T15:01:53.481 INFO:tasks.workunit.client.0.vm05.stdout:1/792: creat d9/d2f/d37/d101/f104 x:0 0 0 2026-03-09T15:01:53.481 INFO:tasks.workunit.client.0.vm05.stdout:9/863: dwrite d2/da9/f10d [0,4194304] 0 2026-03-09T15:01:53.483 INFO:tasks.workunit.client.0.vm05.stdout:0/765: dread - d9/de/f7f zero size 2026-03-09T15:01:53.486 INFO:tasks.workunit.client.0.vm05.stdout:0/766: chown d9/de/f6c 1928 1 2026-03-09T15:01:53.498 INFO:tasks.workunit.client.0.vm05.stdout:3/789: unlink d3/df/d10/d19/dce/le8 0 2026-03-09T15:01:53.498 INFO:tasks.workunit.client.0.vm05.stdout:8/838: fsync d0/d1/d12/d1b/d95/d42/f4e 0 2026-03-09T15:01:53.503 INFO:tasks.workunit.client.0.vm05.stdout:6/763: unlink da/c6c 0 2026-03-09T15:01:53.505 INFO:tasks.workunit.client.0.vm05.stdout:7/801: rename d1/d9/d23/d31/d32/l6c to d1/d9/d23/d31/d8f/d93/dbd/d107/l112 0 2026-03-09T15:01:53.508 INFO:tasks.workunit.client.0.vm05.stdout:7/802: read - d1/d9/d23/d31/d8f/d93/dbd/fea zero size 2026-03-09T15:01:53.512 INFO:tasks.workunit.client.0.vm05.stdout:5/868: creat d1/d4/d34/d56/da6/d10d/d91/f12a x:0 0 0 2026-03-09T15:01:53.520 INFO:tasks.workunit.client.0.vm05.stdout:9/864: dread - d2/d10/fc5 zero size 2026-03-09T15:01:53.520 INFO:tasks.workunit.client.0.vm05.stdout:7/803: dread 
d1/d9/d23/d31/d32/f3a [0,4194304] 0 2026-03-09T15:01:53.525 INFO:tasks.workunit.client.0.vm05.stdout:2/821: write da/d16/f1f [7579433,2177] 0 2026-03-09T15:01:53.526 INFO:tasks.workunit.client.0.vm05.stdout:7/804: dread - d1/d9/d23/d31/d8f/d93/dbd/fea zero size 2026-03-09T15:01:53.533 INFO:tasks.workunit.client.0.vm05.stdout:0/767: rmdir d9/de/d12/d15/d2e/d32 39 2026-03-09T15:01:53.534 INFO:tasks.workunit.client.0.vm05.stdout:2/822: dwrite da/f10 [4194304,4194304] 0 2026-03-09T15:01:53.535 INFO:tasks.workunit.client.0.vm05.stdout:3/790: mkdir d3/df/d10/d19/dce/d107 0 2026-03-09T15:01:53.536 INFO:tasks.workunit.client.0.vm05.stdout:3/791: chown d3/df/d10/c15 86349 1 2026-03-09T15:01:53.546 INFO:tasks.workunit.client.0.vm05.stdout:8/839: rename d0/d7 to d0/d1/d12/d1b/d95/dd7/dd2/dd8/d114 0 2026-03-09T15:01:53.547 INFO:tasks.workunit.client.0.vm05.stdout:4/831: link d2/d4/d1e/da2/dec/f68 d2/d4/d1e/da2/f111 0 2026-03-09T15:01:53.551 INFO:tasks.workunit.client.0.vm05.stdout:5/869: fdatasync d1/d4/d34/d35/f47 0 2026-03-09T15:01:53.551 INFO:tasks.workunit.client.0.vm05.stdout:5/870: dread - d1/d5d/f11c zero size 2026-03-09T15:01:53.560 INFO:tasks.workunit.client.0.vm05.stdout:4/832: dread d2/d4/d7/f105 [0,4194304] 0 2026-03-09T15:01:53.568 INFO:tasks.workunit.client.0.vm05.stdout:1/793: fdatasync d9/d2f/d83/d98/f39 0 2026-03-09T15:01:53.568 INFO:tasks.workunit.client.0.vm05.stdout:1/794: write d9/d2f/d37/d5a/da9/dc9/dcd/f25 [3363167,84259] 0 2026-03-09T15:01:53.578 INFO:tasks.workunit.client.0.vm05.stdout:1/795: dread d9/d2f/f58 [0,4194304] 0 2026-03-09T15:01:53.582 INFO:tasks.workunit.client.0.vm05.stdout:1/796: dread d9/d2f/d37/d5a/da9/dc9/dcd/fee [0,4194304] 0 2026-03-09T15:01:53.586 INFO:tasks.workunit.client.0.vm05.stdout:4/833: dread d2/f67 [0,4194304] 0 2026-03-09T15:01:53.590 INFO:tasks.workunit.client.0.vm05.stdout:9/865: rename d2/f12 to d2/d10/d22/da0/f12c 0 2026-03-09T15:01:53.593 INFO:tasks.workunit.client.0.vm05.stdout:5/871: symlink 
d1/d4/d34/d35/d3d/d38/d63/l12b 0 2026-03-09T15:01:53.599 INFO:tasks.workunit.client.0.vm05.stdout:7/805: fdatasync d1/d9/d23/d31/d32/d78/ddd/def/ffb 0 2026-03-09T15:01:53.599 INFO:tasks.workunit.client.0.vm05.stdout:7/806: write d1/de4/fe7 [748556,38234] 0 2026-03-09T15:01:53.621 INFO:tasks.workunit.client.0.vm05.stdout:0/768: dread d9/de/d12/f7a [0,4194304] 0 2026-03-09T15:01:53.630 INFO:tasks.workunit.client.0.vm05.stdout:3/792: readlink d3/df/d10/d19/d44/dd2/ld7 0 2026-03-09T15:01:53.643 INFO:tasks.workunit.client.0.vm05.stdout:9/866: dread d2/fc [0,4194304] 0 2026-03-09T15:01:53.646 INFO:tasks.workunit.client.0.vm05.stdout:5/872: rename d1/d4/d27 to d1/d4/d34/d35/d3d/d38/d69/d11b/d12c 0 2026-03-09T15:01:53.653 INFO:tasks.workunit.client.0.vm05.stdout:4/834: write d2/d49/f4d [4743920,95607] 0 2026-03-09T15:01:53.716 INFO:tasks.workunit.client.0.vm05.stdout:3/793: fdatasync d3/df/d59/f98 0 2026-03-09T15:01:53.716 INFO:tasks.workunit.client.0.vm05.stdout:0/769: dwrite d9/de/d12/d15/d2e/d32/d53/d61/f62 [0,4194304] 0 2026-03-09T15:01:53.717 INFO:tasks.workunit.client.0.vm05.stdout:1/797: mknod d9/d2f/d83/d98/d59/c105 0 2026-03-09T15:01:53.717 INFO:tasks.workunit.client.0.vm05.stdout:6/764: getdents da/d43/d7b/d89/da8 0 2026-03-09T15:01:53.719 INFO:tasks.workunit.client.0.vm05.stdout:5/873: fsync d1/d4/d34/ff3 0 2026-03-09T15:01:53.730 INFO:tasks.workunit.client.0.vm05.stdout:3/794: dwrite d3/df/d10/d19/d44/f56 [0,4194304] 0 2026-03-09T15:01:53.740 INFO:tasks.workunit.client.0.vm05.stdout:3/795: dwrite d3/d29/d2d/f31 [4194304,4194304] 0 2026-03-09T15:01:53.744 INFO:tasks.workunit.client.0.vm05.stdout:7/807: mknod d1/d49/c113 0 2026-03-09T15:01:53.751 INFO:tasks.workunit.client.0.vm05.stdout:7/808: truncate d1/d9/d23/d31/d32/d78/dbb/fff 1607838 0 2026-03-09T15:01:53.752 INFO:tasks.workunit.client.0.vm05.stdout:3/796: dwrite d3/df/f23 [0,4194304] 0 2026-03-09T15:01:53.772 INFO:tasks.workunit.client.0.vm05.stdout:2/823: getdents da/dd 0 2026-03-09T15:01:53.775 
INFO:tasks.workunit.client.0.vm05.stdout:4/835: dread d2/d4/d7/d48/fc8 [0,4194304] 0 2026-03-09T15:01:53.775 INFO:tasks.workunit.client.0.vm05.stdout:8/840: getdents d0/d1/d12/d1b/d95 0 2026-03-09T15:01:53.776 INFO:tasks.workunit.client.0.vm05.stdout:1/798: creat d9/db9/f106 x:0 0 0 2026-03-09T15:01:53.778 INFO:tasks.workunit.client.0.vm05.stdout:1/799: dread - d9/db9/f106 zero size 2026-03-09T15:01:53.782 INFO:tasks.workunit.client.0.vm05.stdout:6/765: rename da/d17/f9c to da/d43/d7b/db3/fed 0 2026-03-09T15:01:53.783 INFO:tasks.workunit.client.0.vm05.stdout:9/867: mkdir d2/d4e/d56/d53/d64/dd9/def/d12d 0 2026-03-09T15:01:53.783 INFO:tasks.workunit.client.0.vm05.stdout:9/868: rename d2/d10/d22 to d2/d10/d22/d9f/d119/d12e 22 2026-03-09T15:01:53.785 INFO:tasks.workunit.client.0.vm05.stdout:5/874: truncate d1/d4/d19/d93/fd2 491850 0 2026-03-09T15:01:53.811 INFO:tasks.workunit.client.0.vm05.stdout:7/809: truncate d1/f15 4406285 0 2026-03-09T15:01:53.812 INFO:tasks.workunit.client.0.vm05.stdout:7/810: chown d1/d22/f102 3915 1 2026-03-09T15:01:53.812 INFO:tasks.workunit.client.0.vm05.stdout:7/811: stat d1/d12 0 2026-03-09T15:01:53.813 INFO:tasks.workunit.client.0.vm05.stdout:7/812: chown d1/d49/cd2 454546 1 2026-03-09T15:01:53.817 INFO:tasks.workunit.client.0.vm05.stdout:2/824: mknod da/d29/d6a/da0/d91/dab/d9c/cfe 0 2026-03-09T15:01:53.817 INFO:tasks.workunit.client.0.vm05.stdout:8/841: creat d0/d1/d12/d1b/d95/d4b/f115 x:0 0 0 2026-03-09T15:01:53.829 INFO:tasks.workunit.client.0.vm05.stdout:0/770: write d9/de/d12/d15/d2e/f40 [190518,48008] 0 2026-03-09T15:01:53.838 INFO:tasks.workunit.client.0.vm05.stdout:3/797: creat d3/df/d1e/d2c/d74/de0/f108 x:0 0 0 2026-03-09T15:01:53.843 INFO:tasks.workunit.client.0.vm05.stdout:5/875: dwrite d1/d4/d34/d56/f59 [0,4194304] 0 2026-03-09T15:01:53.847 INFO:tasks.workunit.client.0.vm05.stdout:4/836: mknod d2/d4/d50/d8a/d101/c112 0 2026-03-09T15:01:53.854 INFO:tasks.workunit.client.0.vm05.stdout:6/766: mkdir da/d17/d3b/dbd/dee 0 
2026-03-09T15:01:53.859 INFO:tasks.workunit.client.0.vm05.stdout:7/813: dread d1/d9/d23/d31/d8f/d93/f108 [0,4194304] 0 2026-03-09T15:01:53.879 INFO:tasks.workunit.client.0.vm05.stdout:2/825: write da/d16/d46/fa3 [392392,129803] 0 2026-03-09T15:01:53.889 INFO:tasks.workunit.client.0.vm05.stdout:9/869: dwrite d2/d10/d22/d2c/d69/f86 [0,4194304] 0 2026-03-09T15:01:53.894 INFO:tasks.workunit.client.0.vm05.stdout:5/876: symlink d1/d4/d34/d35/d4e/d6f/l12d 0 2026-03-09T15:01:53.896 INFO:tasks.workunit.client.0.vm05.stdout:9/870: dread d2/d4e/d56/d53/d64/ded/d9c/d8e/f5f [0,4194304] 0 2026-03-09T15:01:53.901 INFO:tasks.workunit.client.0.vm05.stdout:6/767: dread - da/d17/d95/da2/dae/fbb zero size 2026-03-09T15:01:53.903 INFO:tasks.workunit.client.0.vm05.stdout:8/842: truncate d0/d1/d12/d1b/d95/d42/d60/f9c 1964888 0 2026-03-09T15:01:53.906 INFO:tasks.workunit.client.0.vm05.stdout:7/814: readlink d1/d9/le6 0 2026-03-09T15:01:53.915 INFO:tasks.workunit.client.0.vm05.stdout:0/771: fsync d9/de/f3d 0 2026-03-09T15:01:53.919 INFO:tasks.workunit.client.0.vm05.stdout:3/798: mkdir d3/df/d10/d109 0 2026-03-09T15:01:53.930 INFO:tasks.workunit.client.0.vm05.stdout:5/877: mkdir d1/da/d12e 0 2026-03-09T15:01:53.938 INFO:tasks.workunit.client.0.vm05.stdout:9/871: rename d2/d4e/d56/d53/d64/ded/d9c/ddd/d11e to d2/d4e/d56/d53/d64/dd9/def/d12d/d12f 0 2026-03-09T15:01:53.938 INFO:tasks.workunit.client.0.vm05.stdout:9/872: readlink d2/d10/d22/l50 0 2026-03-09T15:01:53.939 INFO:tasks.workunit.client.0.vm05.stdout:9/873: chown d2/d4e/d56/d53/d64/ded/d9c/d8e/f128 14 1 2026-03-09T15:01:53.943 INFO:tasks.workunit.client.0.vm05.stdout:9/874: dwrite d2/d4e/d56/d53/d64/ded/d9c/ddd/f129 [0,4194304] 0 2026-03-09T15:01:53.947 INFO:tasks.workunit.client.0.vm05.stdout:6/768: creat da/d17/d95/da2/dae/fef x:0 0 0 2026-03-09T15:01:53.951 INFO:tasks.workunit.client.0.vm05.stdout:6/769: dread da/d43/d7b/f9f [0,4194304] 0 2026-03-09T15:01:53.960 INFO:tasks.workunit.client.0.vm05.stdout:6/770: dread da/d17/d3b/f4a 
[0,4194304] 0 2026-03-09T15:01:53.961 INFO:tasks.workunit.client.0.vm05.stdout:6/771: dread - da/d17/d95/da2/dae/fef zero size 2026-03-09T15:01:53.966 INFO:tasks.workunit.client.0.vm05.stdout:1/800: link d9/d17/ldd d9/l107 0 2026-03-09T15:01:53.973 INFO:tasks.workunit.client.0.vm05.stdout:2/826: mknod da/d16/cff 0 2026-03-09T15:01:53.975 INFO:tasks.workunit.client.0.vm05.stdout:2/827: dread da/d16/fdf [0,4194304] 0 2026-03-09T15:01:53.980 INFO:tasks.workunit.client.0.vm05.stdout:0/772: read - d9/de/d25/dae/de6/fdc zero size 2026-03-09T15:01:53.985 INFO:tasks.workunit.client.0.vm05.stdout:2/828: dread da/d16/d46/fc6 [0,4194304] 0 2026-03-09T15:01:53.990 INFO:tasks.workunit.client.0.vm05.stdout:5/878: unlink d1/d4/d34/f65 0 2026-03-09T15:01:53.990 INFO:tasks.workunit.client.0.vm05.stdout:5/879: stat d1/f5e 0 2026-03-09T15:01:53.993 INFO:tasks.workunit.client.0.vm05.stdout:5/880: dwrite d1/d4/d34/d56/f59 [4194304,4194304] 0 2026-03-09T15:01:53.998 INFO:tasks.workunit.client.0.vm05.stdout:4/837: rename d2/d49/d69/lad to d2/d43/dd6/l113 0 2026-03-09T15:01:53.999 INFO:tasks.workunit.client.0.vm05.stdout:4/838: write d2/d4/d8/d4a/d6e/ff4 [85858,102249] 0 2026-03-09T15:01:54.000 INFO:tasks.workunit.client.0.vm05.stdout:4/839: fsync d2/d43/f51 0 2026-03-09T15:01:54.013 INFO:tasks.workunit.client.0.vm05.stdout:8/843: dwrite d0/d1/d12/f4f [0,4194304] 0 2026-03-09T15:01:54.034 INFO:tasks.workunit.client.0.vm05.stdout:7/815: creat d1/d9/d72/d10c/f114 x:0 0 0 2026-03-09T15:01:54.036 INFO:tasks.workunit.client.0.vm05.stdout:6/772: unlink da/d17/d95/da2/dae/fc3 0 2026-03-09T15:01:54.040 INFO:tasks.workunit.client.0.vm05.stdout:9/875: write d2/d10/d22/d47/fdc [1435371,25236] 0 2026-03-09T15:01:54.047 INFO:tasks.workunit.client.0.vm05.stdout:0/773: symlink d9/de/d12/d15/d2e/d32/d53/lef 0 2026-03-09T15:01:54.056 INFO:tasks.workunit.client.0.vm05.stdout:3/799: truncate d3/df/d1e/d2c/d74/ff9 514914 0 2026-03-09T15:01:54.066 INFO:tasks.workunit.client.0.vm05.stdout:5/881: dwrite 
d1/d4/d34/d56/da6/fd1 [0,4194304] 0 2026-03-09T15:01:54.068 INFO:tasks.workunit.client.0.vm05.stdout:8/844: mkdir d0/d1/d12/d116 0 2026-03-09T15:01:54.073 INFO:tasks.workunit.client.0.vm05.stdout:7/816: creat d1/d49/d4a/f115 x:0 0 0 2026-03-09T15:01:54.080 INFO:tasks.workunit.client.0.vm05.stdout:6/773: rmdir da/d43/d7b/da9 39 2026-03-09T15:01:54.081 INFO:tasks.workunit.client.0.vm05.stdout:4/840: dread d2/d43/fa0 [0,4194304] 0 2026-03-09T15:01:54.082 INFO:tasks.workunit.client.0.vm05.stdout:4/841: chown d2/d4/d7/dc 667 1 2026-03-09T15:01:54.087 INFO:tasks.workunit.client.0.vm05.stdout:1/801: truncate f5 1946421 0 2026-03-09T15:01:54.091 INFO:tasks.workunit.client.0.vm05.stdout:0/774: truncate d9/de/d25/dcf/f96 5650791 0 2026-03-09T15:01:54.092 INFO:tasks.workunit.client.0.vm05.stdout:0/775: stat d9/f22 0 2026-03-09T15:01:54.105 INFO:tasks.workunit.client.0.vm05.stdout:3/800: mkdir d3/df/d10/d19/dce/dc8/de2/d8c/d90/d10a 0 2026-03-09T15:01:54.106 INFO:tasks.workunit.client.0.vm05.stdout:3/801: write d3/df/d59/d79/ff4 [566057,108123] 0 2026-03-09T15:01:54.112 INFO:tasks.workunit.client.0.vm05.stdout:2/829: truncate da/d16/fdf 277456 0 2026-03-09T15:01:54.118 INFO:tasks.workunit.client.0.vm05.stdout:3/802: dread d3/d29/d2d/d7b/fa3 [0,4194304] 0 2026-03-09T15:01:54.143 INFO:tasks.workunit.client.0.vm05.stdout:5/882: mknod d1/db5/c12f 0 2026-03-09T15:01:54.153 INFO:tasks.workunit.client.0.vm05.stdout:5/883: dwrite d1/d4/d34/d56/da6/f113 [0,4194304] 0 2026-03-09T15:01:54.157 INFO:tasks.workunit.client.0.vm05.stdout:7/817: creat d1/d9/d23/d31/d8f/d93/dbd/f116 x:0 0 0 2026-03-09T15:01:54.205 INFO:tasks.workunit.client.0.vm05.stdout:4/842: dread d2/f98 [0,4194304] 0 2026-03-09T15:01:54.208 INFO:tasks.workunit.client.0.vm05.stdout:9/876: truncate d2/d10/d22/d2c/d69/f67 1459079 0 2026-03-09T15:01:54.209 INFO:tasks.workunit.client.0.vm05.stdout:0/776: mkdir d9/de/d12/d15/df0 0 2026-03-09T15:01:54.209 INFO:tasks.workunit.client.0.vm05.stdout:2/830: creat 
da/d29/d6a/da0/d91/dab/d2f/db3/f100 x:0 0 0 2026-03-09T15:01:54.219 INFO:tasks.workunit.client.0.vm05.stdout:6/774: mkdir da/d43/d7b/da9/df0 0 2026-03-09T15:01:54.220 INFO:tasks.workunit.client.0.vm05.stdout:8/845: fsync d0/d1/d12/d1b/d95/d4b/fb0 0 2026-03-09T15:01:54.222 INFO:tasks.workunit.client.0.vm05.stdout:7/818: sync 2026-03-09T15:01:54.223 INFO:tasks.workunit.client.0.vm05.stdout:5/884: truncate d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/f57 3990224 0 2026-03-09T15:01:54.224 INFO:tasks.workunit.client.0.vm05.stdout:8/846: read - d0/d1/d12/d1b/d6e/fc2 zero size 2026-03-09T15:01:54.226 INFO:tasks.workunit.client.0.vm05.stdout:7/819: read d1/d9/d23/d31/d32/f63 [2000991,33986] 0 2026-03-09T15:01:54.256 INFO:tasks.workunit.client.0.vm05.stdout:1/802: fdatasync d9/fb6 0 2026-03-09T15:01:54.259 INFO:tasks.workunit.client.0.vm05.stdout:1/803: read d9/d2f/d55/fb0 [1662506,77197] 0 2026-03-09T15:01:54.271 INFO:tasks.workunit.client.0.vm05.stdout:4/843: dread d2/d43/f51 [0,4194304] 0 2026-03-09T15:01:54.277 INFO:tasks.workunit.client.0.vm05.stdout:3/803: dwrite d3/df/d59/f98 [0,4194304] 0 2026-03-09T15:01:54.279 INFO:tasks.workunit.client.0.vm05.stdout:4/844: dwrite d2/f33 [4194304,4194304] 0 2026-03-09T15:01:54.279 INFO:tasks.workunit.client.0.vm05.stdout:4/845: readlink d2/d4/d1e/l9a 0 2026-03-09T15:01:54.333 INFO:tasks.workunit.client.0.vm05.stdout:2/831: write da/d16/d46/f73 [257069,45668] 0 2026-03-09T15:01:54.336 INFO:tasks.workunit.client.0.vm05.stdout:8/847: mkdir d0/d1/d12/d1b/d95/dd7/dd2/d117 0 2026-03-09T15:01:54.340 INFO:tasks.workunit.client.0.vm05.stdout:5/885: unlink d1/d4/d34/d35/d3d/d38/f6e 0 2026-03-09T15:01:54.342 INFO:tasks.workunit.client.0.vm05.stdout:6/775: write da/d17/d3b/f5a [204896,72670] 0 2026-03-09T15:01:54.347 INFO:tasks.workunit.client.0.vm05.stdout:7/820: creat d1/d49/dec/f117 x:0 0 0 2026-03-09T15:01:54.378 INFO:tasks.workunit.client.0.vm05.stdout:9/877: symlink d2/d10/l130 0 2026-03-09T15:01:54.385 
INFO:tasks.workunit.client.0.vm05.stdout:4/846: dwrite d2/d1d/d88/faa [0,4194304] 0 2026-03-09T15:01:54.386 INFO:tasks.workunit.client.0.vm05.stdout:3/804: write d3/df/d1e/d2c/d74/d78/fdd [1022080,94540] 0 2026-03-09T15:01:54.390 INFO:tasks.workunit.client.0.vm05.stdout:3/805: chown d3/df/d1e/d2c/d74/d78/ff1 4520 1 2026-03-09T15:01:54.391 INFO:tasks.workunit.client.0.vm05.stdout:3/806: chown d3/df/d10/d19/dce/dc8/de2/f4c 31 1 2026-03-09T15:01:54.391 INFO:tasks.workunit.client.0.vm05.stdout:3/807: truncate d3/df/f23 4757258 0 2026-03-09T15:01:54.400 INFO:tasks.workunit.client.0.vm05.stdout:8/848: creat d0/d24/d96/f118 x:0 0 0 2026-03-09T15:01:54.404 INFO:tasks.workunit.client.0.vm05.stdout:0/777: truncate d9/de/d25/d38/f55 1328335 0 2026-03-09T15:01:54.404 INFO:tasks.workunit.client.0.vm05.stdout:0/778: read - d9/d59/fed zero size 2026-03-09T15:01:54.406 INFO:tasks.workunit.client.0.vm05.stdout:5/886: write d1/d4/d34/d35/dd0/fd7 [574910,57348] 0 2026-03-09T15:01:54.407 INFO:tasks.workunit.client.0.vm05.stdout:6/776: creat da/d43/d7b/db3/ff1 x:0 0 0 2026-03-09T15:01:54.408 INFO:tasks.workunit.client.0.vm05.stdout:3/808: dread d3/df/d1e/d2f/d52/f95 [0,4194304] 0 2026-03-09T15:01:54.424 INFO:tasks.workunit.client.0.vm05.stdout:7/821: dread - d1/d49/d4a/d77/fc5 zero size 2026-03-09T15:01:54.425 INFO:tasks.workunit.client.0.vm05.stdout:1/804: truncate d9/d2f/d83/d98/f50 5302735 0 2026-03-09T15:01:54.427 INFO:tasks.workunit.client.0.vm05.stdout:7/822: chown d1/d9/d23/d54 44372110 1 2026-03-09T15:01:54.431 INFO:tasks.workunit.client.0.vm05.stdout:2/832: mknod da/d29/d64/dc1/c101 0 2026-03-09T15:01:54.432 INFO:tasks.workunit.client.0.vm05.stdout:5/887: dread d1/d4/d34/f6a [0,4194304] 0 2026-03-09T15:01:54.449 INFO:tasks.workunit.client.0.vm05.stdout:9/878: write d2/d10/d22/d52/fd7 [225369,86022] 0 2026-03-09T15:01:54.451 INFO:tasks.workunit.client.0.vm05.stdout:9/879: write d2/d4e/d56/d53/d64/ded/d9c/f6e [1323619,114344] 0 2026-03-09T15:01:54.456 
INFO:tasks.workunit.client.0.vm05.stdout:3/809: chown d3/d29/d7f/dc3/fd5 5715 1 2026-03-09T15:01:54.456 INFO:tasks.workunit.client.0.vm05.stdout:4/847: write d2/d4/d1e/da2/fe2 [156329,51925] 0 2026-03-09T15:01:54.472 INFO:tasks.workunit.client.0.vm05.stdout:2/833: read da/d16/f6e [3645014,108616] 0 2026-03-09T15:01:54.473 INFO:tasks.workunit.client.0.vm05.stdout:7/823: rename d1/d22/d3c/fe2 to d1/d12/f118 0 2026-03-09T15:01:54.486 INFO:tasks.workunit.client.0.vm05.stdout:5/888: mkdir d1/d4/d34/d56/da6/dea/d130 0 2026-03-09T15:01:54.650 INFO:tasks.workunit.client.0.vm05.stdout:0/779: dwrite d9/de/d12/d15/d2e/f40 [0,4194304] 0 2026-03-09T15:01:54.675 INFO:tasks.workunit.client.0.vm05.stdout:1/805: fsync d9/d17/f26 0 2026-03-09T15:01:54.692 INFO:tasks.workunit.client.0.vm05.stdout:7/824: read d1/d49/d4a/fcc [1346967,101721] 0 2026-03-09T15:01:54.698 INFO:tasks.workunit.client.0.vm05.stdout:8/849: link d0/d1/d12/d1b/d95/dd7/la4 d0/d1/d12/d1b/d95/dd7/dd2/dd8/l119 0 2026-03-09T15:01:54.699 INFO:tasks.workunit.client.0.vm05.stdout:8/850: read - d0/d1/d12/d1b/d6e/fc2 zero size 2026-03-09T15:01:54.699 INFO:tasks.workunit.client.0.vm05.stdout:8/851: read d0/d1/d12/d1b/d21/f92 [113927,97630] 0 2026-03-09T15:01:54.704 INFO:tasks.workunit.client.0.vm05.stdout:5/889: creat d1/d4/d34/d35/d3d/d38/d69/f131 x:0 0 0 2026-03-09T15:01:54.712 INFO:tasks.workunit.client.0.vm05.stdout:5/890: chown d1/d5d/la7 269 1 2026-03-09T15:01:54.714 INFO:tasks.workunit.client.0.vm05.stdout:2/834: dread da/d29/d6a/da0/d91/dab/f4b [0,4194304] 0 2026-03-09T15:01:54.853 INFO:tasks.workunit.client.0.vm05.stdout:6/777: write da/d17/d95/da2/dae/fdd [4687836,32397] 0 2026-03-09T15:01:54.914 INFO:tasks.workunit.client.0.vm05.stdout:3/810: link d3/df/d10/d19/dce/dc8/de2/f9d d3/d29/f10b 0 2026-03-09T15:01:54.920 INFO:tasks.workunit.client.0.vm05.stdout:0/780: chown d9/l34 6738 1 2026-03-09T15:01:54.921 INFO:tasks.workunit.client.0.vm05.stdout:0/781: chown d9/de/d25/c5b 1 1 2026-03-09T15:01:54.950 
INFO:tasks.workunit.client.0.vm05.stdout:9/880: link d2/d10/d22/da0/cf3 d2/d4e/d56/d53/d64/ded/d99/c131 0 2026-03-09T15:01:54.957 INFO:tasks.workunit.client.0.vm05.stdout:2/835: rmdir da/d29/d6a/da0/d91/dab/d9c 39 2026-03-09T15:01:54.957 INFO:tasks.workunit.client.0.vm05.stdout:2/836: dread - da/d29/d6a/da0/d91/dab/d2f/d35/ff3 zero size 2026-03-09T15:01:54.961 INFO:tasks.workunit.client.0.vm05.stdout:8/852: dwrite d0/d1/d12/fb6 [0,4194304] 0 2026-03-09T15:01:54.974 INFO:tasks.workunit.client.0.vm05.stdout:6/778: rmdir da/d19/dd7 39 2026-03-09T15:01:54.982 INFO:tasks.workunit.client.0.vm05.stdout:0/782: rename d9/de/d25/d38/d41 to d9/de/df1 0 2026-03-09T15:01:54.984 INFO:tasks.workunit.client.0.vm05.stdout:4/848: rmdir d2/d4/d7/d48/df0/d106 0 2026-03-09T15:01:55.009 INFO:tasks.workunit.client.0.vm05.stdout:2/837: creat da/d29/d6a/d7f/f102 x:0 0 0 2026-03-09T15:01:55.009 INFO:tasks.workunit.client.0.vm05.stdout:2/838: stat da/d29/d6a/da0/d91/dab/d2f/c49 0 2026-03-09T15:01:55.010 INFO:tasks.workunit.client.0.vm05.stdout:2/839: chown da/d29/d6a/da0/lf6 577 1 2026-03-09T15:01:55.013 INFO:tasks.workunit.client.0.vm05.stdout:9/881: dread d2/d10/d22/d2c/d69/f67 [0,4194304] 0 2026-03-09T15:01:55.016 INFO:tasks.workunit.client.0.vm05.stdout:7/825: dwrite d1/d9/f52 [0,4194304] 0 2026-03-09T15:01:55.026 INFO:tasks.workunit.client.0.vm05.stdout:5/891: fsync d1/d4/d34/d56/da6/f124 0 2026-03-09T15:01:55.027 INFO:tasks.workunit.client.0.vm05.stdout:5/892: readlink d1/d4/d34/d35/d3d/d38/l70 0 2026-03-09T15:01:55.030 INFO:tasks.workunit.client.0.vm05.stdout:5/893: dread d1/d4/d34/f6a [0,4194304] 0 2026-03-09T15:01:55.031 INFO:tasks.workunit.client.0.vm05.stdout:1/806: write f5 [2067881,130541] 0 2026-03-09T15:01:55.036 INFO:tasks.workunit.client.0.vm05.stdout:1/807: dwrite d9/d2f/d37/d5a/da9/dc9/dcd/f96 [0,4194304] 0 2026-03-09T15:01:55.075 INFO:tasks.workunit.client.0.vm05.stdout:3/811: truncate d3/df/d10/d19/dce/dc8/de2/f9d 4012094 0 2026-03-09T15:01:55.077 
INFO:tasks.workunit.client.0.vm05.stdout:0/783: dread d9/de/df1/f71 [0,4194304] 0 2026-03-09T15:01:55.089 INFO:tasks.workunit.client.0.vm05.stdout:9/882: mknod d2/d10/d22/dc1/dc3/c132 0 2026-03-09T15:01:55.090 INFO:tasks.workunit.client.0.vm05.stdout:9/883: write d2/d10/d22/dc2/db1/fb8 [2703511,27139] 0 2026-03-09T15:01:55.094 INFO:tasks.workunit.client.0.vm05.stdout:7/826: chown d1/d9/l35 0 1 2026-03-09T15:01:55.094 INFO:tasks.workunit.client.0.vm05.stdout:7/827: chown d1/d9/le6 3 1 2026-03-09T15:01:55.104 INFO:tasks.workunit.client.0.vm05.stdout:1/808: truncate d9/d2f/d83/d98/d59/fe6 988771 0 2026-03-09T15:01:55.127 INFO:tasks.workunit.client.0.vm05.stdout:4/849: mknod d2/d4/d7/d48/c114 0 2026-03-09T15:01:55.136 INFO:tasks.workunit.client.0.vm05.stdout:2/840: symlink da/d29/d6a/da0/d91/dab/d2f/de7/l103 0 2026-03-09T15:01:55.150 INFO:tasks.workunit.client.0.vm05.stdout:7/828: symlink d1/d9/d23/d31/d8f/d93/l119 0 2026-03-09T15:01:55.160 INFO:tasks.workunit.client.0.vm05.stdout:5/894: mkdir d1/da/d10f/d132 0 2026-03-09T15:01:55.177 INFO:tasks.workunit.client.0.vm05.stdout:8/853: link d0/d1/d12/d1b/d95/l58 d0/d1/d12/d1b/d95/d78/db5/l11a 0 2026-03-09T15:01:55.180 INFO:tasks.workunit.client.0.vm05.stdout:8/854: dwrite d0/d1/d12/d1b/d95/d54/f5b [0,4194304] 0 2026-03-09T15:01:55.182 INFO:tasks.workunit.client.0.vm05.stdout:6/779: creat da/d43/ff2 x:0 0 0 2026-03-09T15:01:55.221 INFO:tasks.workunit.client.0.vm05.stdout:4/850: creat d2/d43/dd6/f115 x:0 0 0 2026-03-09T15:01:55.240 INFO:tasks.workunit.client.0.vm05.stdout:9/884: mkdir d2/d10/d22/d2c/d69/d133 0 2026-03-09T15:01:55.241 INFO:tasks.workunit.client.0.vm05.stdout:2/841: dread da/d29/f39 [0,4194304] 0 2026-03-09T15:01:55.247 INFO:tasks.workunit.client.0.vm05.stdout:5/895: truncate d1/d4/d34/d56/d68/fe9 2298570 0 2026-03-09T15:01:55.255 INFO:tasks.workunit.client.0.vm05.stdout:7/829: dwrite d1/d9/d72/fc4 [0,4194304] 0 2026-03-09T15:01:55.260 INFO:tasks.workunit.client.0.vm05.stdout:7/830: dwrite 
d1/d9/d23/d31/d51/f106 [0,4194304] 0 2026-03-09T15:01:55.272 INFO:tasks.workunit.client.0.vm05.stdout:1/809: creat d9/d2f/d37/ded/f108 x:0 0 0 2026-03-09T15:01:55.278 INFO:tasks.workunit.client.0.vm05.stdout:6/780: symlink da/d17/d3b/dbd/lf3 0 2026-03-09T15:01:55.290 INFO:tasks.workunit.client.0.vm05.stdout:8/855: dread d0/d1/d12/d1b/d95/dd7/fc9 [0,4194304] 0 2026-03-09T15:01:55.298 INFO:tasks.workunit.client.0.vm05.stdout:0/784: creat d9/de/d25/ff2 x:0 0 0 2026-03-09T15:01:55.303 INFO:tasks.workunit.client.0.vm05.stdout:9/885: creat d2/d4e/d56/d53/d64/ded/d99/f134 x:0 0 0 2026-03-09T15:01:55.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:55 vm05.local ceph-mon[50611]: pgmap v10: 65 pgs: 65 active+clean; 2.5 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 25 MiB/s rd, 56 MiB/s wr, 156 op/s 2026-03-09T15:01:55.331 INFO:tasks.workunit.client.0.vm05.stdout:3/812: creat d3/d29/f10c x:0 0 0 2026-03-09T15:01:55.347 INFO:tasks.workunit.client.0.vm05.stdout:5/896: sync 2026-03-09T15:01:55.347 INFO:tasks.workunit.client.0.vm05.stdout:1/810: sync 2026-03-09T15:01:55.348 INFO:tasks.workunit.client.0.vm05.stdout:5/897: read - d1/d4/d34/d56/d68/f8f zero size 2026-03-09T15:01:55.348 INFO:tasks.workunit.client.0.vm05.stdout:5/898: stat d1/d5d/le3 0 2026-03-09T15:01:55.349 INFO:tasks.workunit.client.0.vm05.stdout:5/899: chown d1/d4/d34/d56/da6/d10d/d91/fb1 3403749 1 2026-03-09T15:01:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:55 vm09.local ceph-mon[59673]: pgmap v10: 65 pgs: 65 active+clean; 2.5 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 25 MiB/s rd, 56 MiB/s wr, 156 op/s 2026-03-09T15:01:55.379 INFO:tasks.workunit.client.0.vm05.stdout:8/856: write d0/d1/d12/d1b/d21/fb8 [886589,21842] 0 2026-03-09T15:01:55.380 INFO:tasks.workunit.client.0.vm05.stdout:7/831: dwrite d1/de4/fe1 [0,4194304] 0 2026-03-09T15:01:55.381 INFO:tasks.workunit.client.0.vm05.stdout:9/886: mkdir d2/d4e/d56/d84/d120/d135 0 2026-03-09T15:01:55.381 
INFO:tasks.workunit.client.0.vm05.stdout:0/785: creat d9/de/d25/d38/d78/dc9/ff3 x:0 0 0 2026-03-09T15:01:55.383 INFO:tasks.workunit.client.0.vm05.stdout:8/857: write d0/d1/d12/d1b/d95/d4b/fb0 [1385073,115741] 0 2026-03-09T15:01:55.385 INFO:tasks.workunit.client.0.vm05.stdout:2/842: dwrite da/d29/d6a/da0/d91/dab/d9c/dd3/fe4 [0,4194304] 0 2026-03-09T15:01:55.391 INFO:tasks.workunit.client.0.vm05.stdout:6/781: rename da/d43/d7b/d89/da8/cb1 to da/d43/cf4 0 2026-03-09T15:01:55.391 INFO:tasks.workunit.client.0.vm05.stdout:2/843: readlink da/d29/d3f/l7a 0 2026-03-09T15:01:55.391 INFO:tasks.workunit.client.0.vm05.stdout:5/900: fsync d1/da/f105 0 2026-03-09T15:01:55.394 INFO:tasks.workunit.client.0.vm05.stdout:4/851: getdents d2/d1d/dfa 0 2026-03-09T15:01:55.401 INFO:tasks.workunit.client.0.vm05.stdout:9/887: dread - d2/d10/d22/d2c/d69/f106 zero size 2026-03-09T15:01:55.404 INFO:tasks.workunit.client.0.vm05.stdout:1/811: symlink d9/d2f/d83/d98/l109 0 2026-03-09T15:01:55.425 INFO:tasks.workunit.client.0.vm05.stdout:4/852: mknod d2/d1d/df5/c116 0 2026-03-09T15:01:55.426 INFO:tasks.workunit.client.0.vm05.stdout:7/832: mkdir d1/d9/d23/d11a 0 2026-03-09T15:01:55.427 INFO:tasks.workunit.client.0.vm05.stdout:4/853: chown d2/d4/d8/d4a/d94/ca1 1 1 2026-03-09T15:01:55.430 INFO:tasks.workunit.client.0.vm05.stdout:9/888: symlink d2/d4e/d56/d53/d64/dd9/def/d12d/d12f/l136 0 2026-03-09T15:01:55.433 INFO:tasks.workunit.client.0.vm05.stdout:3/813: write d3/df/f1b [5200690,80060] 0 2026-03-09T15:01:55.433 INFO:tasks.workunit.client.0.vm05.stdout:9/889: fdatasync d2/d10/d22/d47/dc4/f113 0 2026-03-09T15:01:55.447 INFO:tasks.workunit.client.0.vm05.stdout:0/786: dwrite d9/de/f3d [0,4194304] 0 2026-03-09T15:01:55.447 INFO:tasks.workunit.client.0.vm05.stdout:2/844: dwrite da/fce [0,4194304] 0 2026-03-09T15:01:55.453 INFO:tasks.workunit.client.0.vm05.stdout:6/782: symlink da/d17/d95/da2/dae/dd9/lf5 0 2026-03-09T15:01:55.456 INFO:tasks.workunit.client.0.vm05.stdout:8/858: dwrite d0/d1/d12/d3c/f4c 
[0,4194304] 0 2026-03-09T15:01:55.458 INFO:tasks.workunit.client.0.vm05.stdout:0/787: stat d9/de/d12/d15/d2e/d6b/dbf 0 2026-03-09T15:01:55.462 INFO:tasks.workunit.client.0.vm05.stdout:2/845: sync 2026-03-09T15:01:55.467 INFO:tasks.workunit.client.0.vm05.stdout:8/859: fsync d0/d1/d12/d1b/d21/fb8 0 2026-03-09T15:01:55.478 INFO:tasks.workunit.client.0.vm05.stdout:6/783: dwrite da/d17/fe4 [0,4194304] 0 2026-03-09T15:01:55.484 INFO:tasks.workunit.client.0.vm05.stdout:8/860: dread d0/d1/d12/d1b/d66/dcc/fe6 [0,4194304] 0 2026-03-09T15:01:55.494 INFO:tasks.workunit.client.0.vm05.stdout:4/854: rename d2/de0 to d2/d4/d1e/da2/dec/d117 0 2026-03-09T15:01:55.518 INFO:tasks.workunit.client.0.vm05.stdout:1/812: mkdir d9/d2f/d83/d98/d59/d49/d10a 0 2026-03-09T15:01:55.523 INFO:tasks.workunit.client.0.vm05.stdout:7/833: dwrite d1/d9/d23/d31/d8f/d93/dbd/ff1 [0,4194304] 0 2026-03-09T15:01:55.541 INFO:tasks.workunit.client.0.vm05.stdout:9/890: rmdir d2/d4e 39 2026-03-09T15:01:55.564 INFO:tasks.workunit.client.0.vm05.stdout:2/846: write da/dd/f25 [3877080,54282] 0 2026-03-09T15:01:55.597 INFO:tasks.workunit.client.0.vm05.stdout:1/813: write d9/d2f/d83/d98/d59/d49/d78/d94/fd6 [4843806,57191] 0 2026-03-09T15:01:55.607 INFO:tasks.workunit.client.0.vm05.stdout:5/901: getdents d1/d4/d34/d6c/d104 0 2026-03-09T15:01:55.609 INFO:tasks.workunit.client.0.vm05.stdout:3/814: dread d3/d29/d7f/fa1 [0,4194304] 0 2026-03-09T15:01:55.616 INFO:tasks.workunit.client.0.vm05.stdout:0/788: mkdir d9/de/d12/d15/d2e/d32/d74/de9/df4 0 2026-03-09T15:01:55.618 INFO:tasks.workunit.client.0.vm05.stdout:0/789: dread - d9/de/d12/d15/d2e/f88 zero size 2026-03-09T15:01:55.629 INFO:tasks.workunit.client.0.vm05.stdout:2/847: dread da/d29/d6a/f71 [0,4194304] 0 2026-03-09T15:01:55.656 INFO:tasks.workunit.client.0.vm05.stdout:8/861: truncate d0/d1/d12/d1b/f89 2128817 0 2026-03-09T15:01:55.656 INFO:tasks.workunit.client.0.vm05.stdout:1/814: write d9/d2f/d83/d98/d59/d49/d92/d75/f76 [178213,119360] 0 2026-03-09T15:01:55.660 
INFO:tasks.workunit.client.0.vm05.stdout:6/784: dwrite da/d43/d7b/da9/fc9 [0,4194304] 0 2026-03-09T15:01:55.662 INFO:tasks.workunit.client.0.vm05.stdout:6/785: chown da/fab 3783 1 2026-03-09T15:01:55.669 INFO:tasks.workunit.client.0.vm05.stdout:7/834: link d1/d9/d72/d10c/f114 d1/d9/d23/d31/d8f/d93/dbd/f11b 0 2026-03-09T15:01:55.680 INFO:tasks.workunit.client.0.vm05.stdout:5/902: readlink d1/d4/d19/l45 0 2026-03-09T15:01:55.681 INFO:tasks.workunit.client.0.vm05.stdout:5/903: fsync d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d103/f108 0 2026-03-09T15:01:55.683 INFO:tasks.workunit.client.0.vm05.stdout:5/904: chown d1/d4/d34/lac 0 1 2026-03-09T15:01:55.684 INFO:tasks.workunit.client.0.vm05.stdout:5/905: fdatasync d1/d4/d34/d56/da6/f113 0 2026-03-09T15:01:55.685 INFO:tasks.workunit.client.0.vm05.stdout:3/815: unlink d3/df/d10/f28 0 2026-03-09T15:01:55.692 INFO:tasks.workunit.client.0.vm05.stdout:9/891: dwrite d2/d4e/d56/d53/d64/ded/d9c/d8e/dcb/fcc [0,4194304] 0 2026-03-09T15:01:55.702 INFO:tasks.workunit.client.0.vm05.stdout:0/790: creat d9/de/d12/d15/d2e/d6b/dbf/ff5 x:0 0 0 2026-03-09T15:01:55.742 INFO:tasks.workunit.client.0.vm05.stdout:2/848: creat da/d29/d6a/da0/d91/dab/dd6/f104 x:0 0 0 2026-03-09T15:01:55.745 INFO:tasks.workunit.client.0.vm05.stdout:4/855: symlink d2/d4/d7/dc/da8/l118 0 2026-03-09T15:01:55.751 INFO:tasks.workunit.client.0.vm05.stdout:8/862: dread - d0/d1/d12/d1b/d95/d42/d60/da7/fcf zero size 2026-03-09T15:01:55.764 INFO:tasks.workunit.client.0.vm05.stdout:7/835: creat d1/d9/d23/d31/d8f/d93/f11c x:0 0 0 2026-03-09T15:01:55.765 INFO:tasks.workunit.client.0.vm05.stdout:4/856: dwrite d2/d4/d1e/da2/dec/d3d/f65 [4194304,4194304] 0 2026-03-09T15:01:55.769 INFO:tasks.workunit.client.0.vm05.stdout:8/863: sync 2026-03-09T15:01:55.804 INFO:tasks.workunit.client.0.vm05.stdout:0/791: mkdir d9/de/d6a/df6 0 2026-03-09T15:01:55.808 INFO:tasks.workunit.client.0.vm05.stdout:2/849: rmdir da/d29/d64/da6 39 2026-03-09T15:01:55.810 
INFO:tasks.workunit.client.0.vm05.stdout:2/850: write da/d29/d6a/db1/db7/ff9 [301262,54582] 0 2026-03-09T15:01:55.826 INFO:tasks.workunit.client.0.vm05.stdout:6/786: mkdir da/d17/d3b/dbd/dee/df6 0 2026-03-09T15:01:55.834 INFO:tasks.workunit.client.0.vm05.stdout:7/836: rmdir d1/d9/d23/d31/d8f/d93/dbd/d107 39 2026-03-09T15:01:55.839 INFO:tasks.workunit.client.0.vm05.stdout:7/837: readlink d1/d9/d23/d31/d8f/d93/dbd/l104 0 2026-03-09T15:01:55.839 INFO:tasks.workunit.client.0.vm05.stdout:9/892: dwrite d2/d4e/f3e [0,4194304] 0 2026-03-09T15:01:55.847 INFO:tasks.workunit.client.0.vm05.stdout:7/838: dread d1/d9/d23/d31/d51/f106 [0,4194304] 0 2026-03-09T15:01:55.848 INFO:tasks.workunit.client.0.vm05.stdout:8/864: chown d0/d1/d12/d1b/d95/d42/d60/f9c 11723311 1 2026-03-09T15:01:55.850 INFO:tasks.workunit.client.0.vm05.stdout:5/906: mknod d1/d4/d19/c133 0 2026-03-09T15:01:55.904 INFO:tasks.workunit.client.0.vm05.stdout:1/815: creat d9/d2f/d37/d5f/f10b x:0 0 0 2026-03-09T15:01:55.907 INFO:tasks.workunit.client.0.vm05.stdout:6/787: mknod da/d43/d7b/de0/cf7 0 2026-03-09T15:01:55.908 INFO:tasks.workunit.client.0.vm05.stdout:6/788: readlink da/d19/l23 0 2026-03-09T15:01:55.915 INFO:tasks.workunit.client.0.vm05.stdout:4/857: mknod d2/d1d/d88/c119 0 2026-03-09T15:01:55.916 INFO:tasks.workunit.client.0.vm05.stdout:3/816: write d3/d29/d2d/f33 [668193,60899] 0 2026-03-09T15:01:55.925 INFO:tasks.workunit.client.0.vm05.stdout:1/816: dread d9/d2f/d55/fce [0,4194304] 0 2026-03-09T15:01:55.929 INFO:tasks.workunit.client.0.vm05.stdout:8/865: mkdir d0/d1/d12/d1b/d66/d11b 0 2026-03-09T15:01:55.936 INFO:tasks.workunit.client.0.vm05.stdout:8/866: sync 2026-03-09T15:01:55.936 INFO:tasks.workunit.client.0.vm05.stdout:1/817: sync 2026-03-09T15:01:55.940 INFO:tasks.workunit.client.0.vm05.stdout:6/789: dread da/fab [0,4194304] 0 2026-03-09T15:01:55.942 INFO:tasks.workunit.client.0.vm05.stdout:8/867: dread d0/d1/d12/d1b/d66/dcc/fe6 [0,4194304] 0 2026-03-09T15:01:55.945 
INFO:tasks.workunit.client.0.vm05.stdout:7/839: dread d1/d9/d23/d54/d7b/f7f [0,4194304] 0 2026-03-09T15:01:55.949 INFO:tasks.workunit.client.0.vm05.stdout:9/893: mknod d2/d10/d22/dc1/dc3/dc6/c137 0 2026-03-09T15:01:55.949 INFO:tasks.workunit.client.0.vm05.stdout:5/907: write d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/f3a [5223621,41794] 0 2026-03-09T15:01:55.952 INFO:tasks.workunit.client.0.vm05.stdout:6/790: dread da/d43/f54 [0,4194304] 0 2026-03-09T15:01:55.953 INFO:tasks.workunit.client.0.vm05.stdout:0/792: mknod d9/cf7 0 2026-03-09T15:01:55.957 INFO:tasks.workunit.client.0.vm05.stdout:9/894: dwrite d2/d10/d22/d47/fdc [0,4194304] 0 2026-03-09T15:01:55.963 INFO:tasks.workunit.client.0.vm05.stdout:2/851: mkdir da/d29/d6a/da0/d105 0 2026-03-09T15:01:55.963 INFO:tasks.workunit.client.0.vm05.stdout:3/817: mknod d3/df/d10/d19/d44/da2/c10d 0 2026-03-09T15:01:55.964 INFO:tasks.workunit.client.0.vm05.stdout:0/793: rmdir d9/de/d12/d8a 39 2026-03-09T15:01:55.966 INFO:tasks.workunit.client.0.vm05.stdout:5/908: creat d1/d4/d34/d6c/f134 x:0 0 0 2026-03-09T15:01:55.966 INFO:tasks.workunit.client.0.vm05.stdout:3/818: chown d3/d29/d2d/d77/d4d/fe9 3 1 2026-03-09T15:01:55.971 INFO:tasks.workunit.client.0.vm05.stdout:3/819: stat d3/d29/f10c 0 2026-03-09T15:01:55.982 INFO:tasks.workunit.client.0.vm05.stdout:1/818: dread d9/d2f/d83/d98/d59/d49/d92/fd2 [0,4194304] 0 2026-03-09T15:01:56.001 INFO:tasks.workunit.client.0.vm05.stdout:3/820: symlink d3/df/d10/d19/dce/dc8/de2/d8c/l10e 0 2026-03-09T15:01:56.008 INFO:tasks.workunit.client.0.vm05.stdout:1/819: dread d9/d2f/d83/d98/d59/d49/d92/d75/f76 [0,4194304] 0 2026-03-09T15:01:56.009 INFO:tasks.workunit.client.0.vm05.stdout:1/820: write d9/d2f/d37/ded/f108 [629922,13742] 0 2026-03-09T15:01:56.011 INFO:tasks.workunit.client.0.vm05.stdout:7/840: creat d1/d9/d23/f11d x:0 0 0 2026-03-09T15:01:56.021 INFO:tasks.workunit.client.0.vm05.stdout:6/791: write da/d19/f35 [1106760,47752] 0 2026-03-09T15:01:56.023 
INFO:tasks.workunit.client.0.vm05.stdout:8/868: truncate d0/d1/d12/d1b/d66/fe8 65096 0 2026-03-09T15:01:56.024 INFO:tasks.workunit.client.0.vm05.stdout:6/792: chown da/d17/f2c 40465 1 2026-03-09T15:01:56.034 INFO:tasks.workunit.client.0.vm05.stdout:9/895: dwrite d2/d10/d22/d47/fdc [4194304,4194304] 0 2026-03-09T15:01:56.037 INFO:tasks.workunit.client.0.vm05.stdout:0/794: symlink d9/de/d12/d15/ddf/lf8 0 2026-03-09T15:01:56.037 INFO:tasks.workunit.client.0.vm05.stdout:2/852: mknod da/d29/d6a/db1/db7/dea/c106 0 2026-03-09T15:01:56.045 INFO:tasks.workunit.client.0.vm05.stdout:6/793: dwrite da/d43/d7b/da9/fe7 [0,4194304] 0 2026-03-09T15:01:56.060 INFO:tasks.workunit.client.0.vm05.stdout:6/794: dwrite da/f1a [0,4194304] 0 2026-03-09T15:01:56.072 INFO:tasks.workunit.client.0.vm05.stdout:1/821: fsync d9/d2f/d83/f9e 0 2026-03-09T15:01:56.072 INFO:tasks.workunit.client.0.vm05.stdout:4/858: link d2/d49/d69/cda d2/d1d/dfa/dfb/c11a 0 2026-03-09T15:01:56.072 INFO:tasks.workunit.client.0.vm05.stdout:5/909: rename d1/d4/d34/d35/d4e/l101 to d1/l135 0 2026-03-09T15:01:56.073 INFO:tasks.workunit.client.0.vm05.stdout:5/910: readlink d1/d5d/le3 0 2026-03-09T15:01:56.074 INFO:tasks.workunit.client.0.vm05.stdout:1/822: chown d9/d2f/d83/d98/d59/d49/ffd 238 1 2026-03-09T15:01:56.074 INFO:tasks.workunit.client.0.vm05.stdout:1/823: readlink d9/d2f/d83/lca 0 2026-03-09T15:01:56.077 INFO:tasks.workunit.client.0.vm05.stdout:9/896: mkdir d2/d10/d22/d47/d95/d138 0 2026-03-09T15:01:56.078 INFO:tasks.workunit.client.0.vm05.stdout:0/795: chown d9/de/d12/d15/l8b 10 1 2026-03-09T15:01:56.083 INFO:tasks.workunit.client.0.vm05.stdout:8/869: creat d0/d100/f11c x:0 0 0 2026-03-09T15:01:56.085 INFO:tasks.workunit.client.0.vm05.stdout:8/870: truncate d0/d1/d12/d1b/d95/d54/f10d 565605 0 2026-03-09T15:01:56.094 INFO:tasks.workunit.client.0.vm05.stdout:9/897: chown d2/d10/d22/dc2/l7e 16076 1 2026-03-09T15:01:56.115 INFO:tasks.workunit.client.0.vm05.stdout:3/821: write d3/df/d1e/d2c/d74/d78/fab [333098,34189] 0 
2026-03-09T15:01:56.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:56 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:01:56.117 INFO:tasks.workunit.client.0.vm05.stdout:6/795: truncate da/d17/d95/da2/fc5 2157263 0 2026-03-09T15:01:56.117 INFO:tasks.workunit.client.0.vm05.stdout:7/841: write d1/fb0 [2373373,35864] 0 2026-03-09T15:01:56.120 INFO:tasks.workunit.client.0.vm05.stdout:3/822: chown d3/df/d10/d19/dce/d107 15943 1 2026-03-09T15:01:56.124 INFO:tasks.workunit.client.0.vm05.stdout:5/911: creat d1/d4/d34/d56/da6/dea/d130/f136 x:0 0 0 2026-03-09T15:01:56.124 INFO:tasks.workunit.client.0.vm05.stdout:4/859: truncate d2/f3e 1248979 0 2026-03-09T15:01:56.124 INFO:tasks.workunit.client.0.vm05.stdout:8/871: mknod d0/d1/d12/d1b/d95/d78/dea/c11d 0 2026-03-09T15:01:56.125 INFO:tasks.workunit.client.0.vm05.stdout:1/824: dwrite d9/d2f/d55/f68 [0,4194304] 0 2026-03-09T15:01:56.141 INFO:tasks.workunit.client.0.vm05.stdout:9/898: truncate d2/d4e/d56/d53/d64/ded/d9c/d8e/f68 5206810 0 2026-03-09T15:01:56.149 INFO:tasks.workunit.client.0.vm05.stdout:7/842: fsync d1/d9/d23/d54/d7b/f7f 0 2026-03-09T15:01:56.175 INFO:tasks.workunit.client.0.vm05.stdout:6/796: dwrite da/d17/d3b/fb2 [0,4194304] 0 2026-03-09T15:01:56.201 INFO:tasks.workunit.client.0.vm05.stdout:2/853: getdents da/d29/d6a/da0/d91/dab/d2f/db3/deb 0 2026-03-09T15:01:56.202 INFO:tasks.workunit.client.0.vm05.stdout:2/854: write da/d29/d3f/dc3/f89 [10853,28375] 0 2026-03-09T15:01:56.205 INFO:tasks.workunit.client.0.vm05.stdout:2/855: fsync da/d16/d46/f73 0 2026-03-09T15:01:56.208 INFO:tasks.workunit.client.0.vm05.stdout:3/823: read d3/df/d10/d19/dce/dc8/de2/d8c/dbd/fa4 [358231,98545] 0 2026-03-09T15:01:56.236 INFO:tasks.workunit.client.0.vm05.stdout:8/872: rename d0/f4 to d0/d1/d55/f11e 0 2026-03-09T15:01:56.259 INFO:tasks.workunit.client.0.vm05.stdout:4/860: symlink 
d2/d4/d50/d8a/l11b 0 2026-03-09T15:01:56.272 INFO:tasks.workunit.client.0.vm05.stdout:1/825: symlink d9/d2f/d37/d101/l10c 0 2026-03-09T15:01:56.288 INFO:tasks.workunit.client.0.vm05.stdout:6/797: creat da/d17/ff8 x:0 0 0 2026-03-09T15:01:56.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:56 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:01:56.306 INFO:tasks.workunit.client.0.vm05.stdout:3/824: truncate d3/df/d10/f2a 3350647 0 2026-03-09T15:01:56.308 INFO:tasks.workunit.client.0.vm05.stdout:3/825: truncate d3/df/d1e/d2c/d74/d78/fab 1011158 0 2026-03-09T15:01:56.319 INFO:tasks.workunit.client.0.vm05.stdout:8/873: mkdir d0/d1/d12/d1b/d6e/d93/d9f/dad/d11f 0 2026-03-09T15:01:56.352 INFO:tasks.workunit.client.0.vm05.stdout:5/912: dwrite d1/d4/d19/d93/f99 [0,4194304] 0 2026-03-09T15:01:56.356 INFO:tasks.workunit.client.0.vm05.stdout:2/856: symlink da/d29/d6a/da0/l107 0 2026-03-09T15:01:56.360 INFO:tasks.workunit.client.0.vm05.stdout:7/843: dwrite d1/d9/d72/f7c [0,4194304] 0 2026-03-09T15:01:56.363 INFO:tasks.workunit.client.0.vm05.stdout:7/844: write d1/d22/da4/ff3 [380585,12564] 0 2026-03-09T15:01:56.363 INFO:tasks.workunit.client.0.vm05.stdout:5/913: chown d1/d4/d19/cd3 3675 1 2026-03-09T15:01:56.366 INFO:tasks.workunit.client.0.vm05.stdout:2/857: dwrite da/dd/ff [8388608,4194304] 0 2026-03-09T15:01:56.370 INFO:tasks.workunit.client.0.vm05.stdout:3/826: symlink d3/df/d10/d19/dce/dc8/de2/d8c/dbd/db3/l10f 0 2026-03-09T15:01:56.373 INFO:tasks.workunit.client.0.vm05.stdout:8/874: stat d0/d1/d12/d1b/d95/dd7/dd2/dd8/cdd 0 2026-03-09T15:01:56.375 INFO:tasks.workunit.client.0.vm05.stdout:1/826: write d9/d2f/d83/fa3 [479920,127639] 0 2026-03-09T15:01:56.375 INFO:tasks.workunit.client.0.vm05.stdout:4/861: write d2/d4/d7/dc/da8/fab [2171782,27534] 0 2026-03-09T15:01:56.390 INFO:tasks.workunit.client.0.vm05.stdout:2/858: creat 
da/d29/d6a/da0/d7c/f108 x:0 0 0 2026-03-09T15:01:56.413 INFO:tasks.workunit.client.0.vm05.stdout:0/796: link d9/de/d12/d15/d49/lc7 d9/de/d12/d15/d2e/d32/d53/d61/lf9 0 2026-03-09T15:01:56.426 INFO:tasks.workunit.client.0.vm05.stdout:9/899: getdents d2/d4e/d56/d53/d64 0 2026-03-09T15:01:56.441 INFO:tasks.workunit.client.0.vm05.stdout:1/827: rename d9/d2f/d83/d98/d59/d49/d78/cbe to d9/db9/c10d 0 2026-03-09T15:01:56.450 INFO:tasks.workunit.client.0.vm05.stdout:2/859: creat da/d29/d6a/da0/dd9/f109 x:0 0 0 2026-03-09T15:01:56.462 INFO:tasks.workunit.client.0.vm05.stdout:8/875: dread d0/d1/d12/d1b/d95/dd7/dd2/dd8/d114/f8 [0,4194304] 0 2026-03-09T15:01:56.473 INFO:tasks.workunit.client.0.vm05.stdout:0/797: dwrite d9/de/d12/d15/d2e/fc2 [0,4194304] 0 2026-03-09T15:01:56.477 INFO:tasks.workunit.client.0.vm05.stdout:9/900: readlink d2/d8b/dae/l11f 0 2026-03-09T15:01:56.483 INFO:tasks.workunit.client.0.vm05.stdout:0/798: read d9/f42 [751864,23140] 0 2026-03-09T15:01:56.483 INFO:tasks.workunit.client.0.vm05.stdout:6/798: link da/cd da/d19/dd7/cf9 0 2026-03-09T15:01:56.486 INFO:tasks.workunit.client.0.vm05.stdout:7/845: dwrite d1/de4/ffa [0,4194304] 0 2026-03-09T15:01:56.490 INFO:tasks.workunit.client.0.vm05.stdout:2/860: truncate da/d29/d6a/da0/d91/dab/f4b 4063132 0 2026-03-09T15:01:56.496 INFO:tasks.workunit.client.0.vm05.stdout:5/914: dwrite d1/d4/d34/d35/d3d/f32 [0,4194304] 0 2026-03-09T15:01:56.497 INFO:tasks.workunit.client.0.vm05.stdout:8/876: rename d0/d1/d12/d1b/d95/d4b/faa to d0/d1/d12/d1b/d95/d54/f120 0 2026-03-09T15:01:56.509 INFO:tasks.workunit.client.0.vm05.stdout:9/901: chown d2/f1f 3 1 2026-03-09T15:01:56.510 INFO:tasks.workunit.client.0.vm05.stdout:4/862: write d2/d4/d1e/d71/fe9 [694476,50500] 0 2026-03-09T15:01:56.512 INFO:tasks.workunit.client.0.vm05.stdout:1/828: mkdir d9/d10e 0 2026-03-09T15:01:56.516 INFO:tasks.workunit.client.0.vm05.stdout:6/799: creat da/d17/d3b/ffa x:0 0 0 2026-03-09T15:01:56.526 INFO:tasks.workunit.client.0.vm05.stdout:3/827: write 
d3/d29/f10b [3337853,3838] 0 2026-03-09T15:01:56.526 INFO:tasks.workunit.client.0.vm05.stdout:0/799: read - d9/de/d12/d15/fa5 zero size 2026-03-09T15:01:56.536 INFO:tasks.workunit.client.0.vm05.stdout:7/846: rmdir d1/d9/d23/d31/d8f/d93/dbd 39 2026-03-09T15:01:56.536 INFO:tasks.workunit.client.0.vm05.stdout:7/847: chown d1/d9/d23/d31 125397 1 2026-03-09T15:01:56.569 INFO:tasks.workunit.client.0.vm05.stdout:4/863: creat d2/d49/d69/f11c x:0 0 0 2026-03-09T15:01:56.578 INFO:tasks.workunit.client.0.vm05.stdout:6/800: symlink da/d17/d95/da2/dae/dd9/lfb 0 2026-03-09T15:01:56.578 INFO:tasks.workunit.client.0.vm05.stdout:2/861: write da/d29/f76 [747521,80231] 0 2026-03-09T15:01:56.579 INFO:tasks.workunit.client.0.vm05.stdout:3/828: fsync d3/d29/f92 0 2026-03-09T15:01:56.597 INFO:tasks.workunit.client.0.vm05.stdout:6/801: creat da/d19/dd7/ffc x:0 0 0 2026-03-09T15:01:56.601 INFO:tasks.workunit.client.0.vm05.stdout:2/862: mknod da/d29/d6a/da0/d91/dab/d9c/c10a 0 2026-03-09T15:01:56.608 INFO:tasks.workunit.client.0.vm05.stdout:7/848: creat d1/d49/d68/f11e x:0 0 0 2026-03-09T15:01:56.625 INFO:tasks.workunit.client.0.vm05.stdout:5/915: rename d1/d4/d34/d56/da6/d10d/d91 to d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d75/d137 0 2026-03-09T15:01:56.626 INFO:tasks.workunit.client.0.vm05.stdout:2/863: dread da/d16/f6b [0,4194304] 0 2026-03-09T15:01:56.626 INFO:tasks.workunit.client.0.vm05.stdout:8/877: link d0/d1/d12/d1b/d95/d4b/cfb d0/dc/c121 0 2026-03-09T15:01:56.627 INFO:tasks.workunit.client.0.vm05.stdout:3/829: mkdir d3/d29/d7f/d110 0 2026-03-09T15:01:56.627 INFO:tasks.workunit.client.0.vm05.stdout:1/829: creat d9/d2f/d83/f10f x:0 0 0 2026-03-09T15:01:56.631 INFO:tasks.workunit.client.0.vm05.stdout:5/916: dwrite d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d103/f108 [0,4194304] 0 2026-03-09T15:01:56.638 INFO:tasks.workunit.client.0.vm05.stdout:2/864: dwrite da/d29/d6a/da0/d91/dab/d9c/dd3/ff8 [0,4194304] 0 2026-03-09T15:01:56.639 INFO:tasks.workunit.client.0.vm05.stdout:0/800: rmdir d9/de/d6a/df6 
0 2026-03-09T15:01:56.639 INFO:tasks.workunit.client.0.vm05.stdout:2/865: chown da/d29/d6a/db1/db7/dea 99548 1 2026-03-09T15:01:56.644 INFO:tasks.workunit.client.0.vm05.stdout:6/802: mknod da/d43/d7b/db3/cfd 0 2026-03-09T15:01:56.655 INFO:tasks.workunit.client.0.vm05.stdout:8/878: dwrite d0/d1/d12/d1b/d95/d42/da1/fcb [4194304,4194304] 0 2026-03-09T15:01:56.656 INFO:tasks.workunit.client.0.vm05.stdout:6/803: dread da/d17/f8d [0,4194304] 0 2026-03-09T15:01:56.668 INFO:tasks.workunit.client.0.vm05.stdout:6/804: write da/d17/d7c/dc6/fd0 [210162,10382] 0 2026-03-09T15:01:56.694 INFO:tasks.workunit.client.0.vm05.stdout:4/864: creat d2/d4/d8/f11d x:0 0 0 2026-03-09T15:01:56.701 INFO:tasks.workunit.client.0.vm05.stdout:0/801: symlink d9/de/d12/d15/d2e/d32/d9f/lfa 0 2026-03-09T15:01:56.703 INFO:tasks.workunit.client.0.vm05.stdout:0/802: chown d9/de/f3d 99977 1 2026-03-09T15:01:56.735 INFO:tasks.workunit.client.0.vm05.stdout:6/805: fdatasync da/d19/f52 0 2026-03-09T15:01:56.739 INFO:tasks.workunit.client.0.vm05.stdout:5/917: write d1/f2a [2938501,113041] 0 2026-03-09T15:01:56.739 INFO:tasks.workunit.client.0.vm05.stdout:7/849: write d1/f15 [1002622,128493] 0 2026-03-09T15:01:56.743 INFO:tasks.workunit.client.0.vm05.stdout:8/879: write d0/d1/d12/d1b/d21/f92 [436223,125085] 0 2026-03-09T15:01:56.749 INFO:tasks.workunit.client.0.vm05.stdout:1/830: mknod d9/d10e/c110 0 2026-03-09T15:01:56.753 INFO:tasks.workunit.client.0.vm05.stdout:3/830: creat d3/df/d10/d19/dce/dc8/de2/d8c/d90/d10a/f111 x:0 0 0 2026-03-09T15:01:56.753 INFO:tasks.workunit.client.0.vm05.stdout:0/803: creat d9/de/d25/dae/ffb x:0 0 0 2026-03-09T15:01:56.756 INFO:tasks.workunit.client.0.vm05.stdout:0/804: read d9/de/d25/dae/de6/fd8 [4160050,14298] 0 2026-03-09T15:01:56.764 INFO:tasks.workunit.client.0.vm05.stdout:5/918: creat d1/d4/d34/d35/d3d/d38/d69/d11b/f138 x:0 0 0 2026-03-09T15:01:56.764 INFO:tasks.workunit.client.0.vm05.stdout:7/850: write d1/d12/fa8 [3929593,51978] 0 2026-03-09T15:01:56.765 
INFO:tasks.workunit.client.0.vm05.stdout:8/880: fsync d0/d1/d12/d3c/d8b/f106 0 2026-03-09T15:01:56.773 INFO:tasks.workunit.client.0.vm05.stdout:3/831: symlink d3/df/d10/d19/db5/l112 0 2026-03-09T15:01:56.775 INFO:tasks.workunit.client.0.vm05.stdout:9/902: rename d2/d4e/d56/d53/d64/ded/d9c/d8e/f68 to d2/d4e/d56/f139 0 2026-03-09T15:01:56.782 INFO:tasks.workunit.client.0.vm05.stdout:7/851: symlink d1/d9/d23/d31/d32/d78/d7e/d81/dcd/l11f 0 2026-03-09T15:01:56.782 INFO:tasks.workunit.client.0.vm05.stdout:3/832: stat d3/df/d10/d19/dce/dc8/de2/d8c/dbd/db3 0 2026-03-09T15:01:56.788 INFO:tasks.workunit.client.0.vm05.stdout:5/919: truncate d1/d4/d34/d35/ff7 14753 0 2026-03-09T15:01:56.800 INFO:tasks.workunit.client.0.vm05.stdout:8/881: dwrite d0/d1/d12/d1b/d95/d42/d60/d73/f113 [0,4194304] 0 2026-03-09T15:01:56.800 INFO:tasks.workunit.client.0.vm05.stdout:9/903: symlink d2/d4e/d56/d84/l13a 0 2026-03-09T15:01:56.800 INFO:tasks.workunit.client.0.vm05.stdout:5/920: fsync d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/ff6 0 2026-03-09T15:01:56.800 INFO:tasks.workunit.client.0.vm05.stdout:3/833: read d3/df/d10/d19/dce/dc8/de2/d8c/f85 [291522,30862] 0 2026-03-09T15:01:56.807 INFO:tasks.workunit.client.0.vm05.stdout:0/805: link d9/de/d25/lce d9/de/d25/d38/d78/dc9/lfc 0 2026-03-09T15:01:56.807 INFO:tasks.workunit.client.0.vm05.stdout:1/831: sync 2026-03-09T15:01:56.815 INFO:tasks.workunit.client.0.vm05.stdout:3/834: fdatasync d3/df/d1e/f8f 0 2026-03-09T15:01:56.816 INFO:tasks.workunit.client.0.vm05.stdout:7/852: dwrite d1/d9/d23/d31/d32/d78/dbb/fff [0,4194304] 0 2026-03-09T15:01:56.818 INFO:tasks.workunit.client.0.vm05.stdout:7/853: write d1/d9/d23/d31/d32/d78/dbb/fff [2332197,15095] 0 2026-03-09T15:01:56.820 INFO:tasks.workunit.client.0.vm05.stdout:7/854: dread - d1/d9/d23/f11d zero size 2026-03-09T15:01:56.826 INFO:tasks.workunit.client.0.vm05.stdout:2/866: rename da/d29/d6a/da0/d91/dab/d9c to da/d29/d6a/da0/d91/dab/d2f/d35/d10b 0 2026-03-09T15:01:56.828 
INFO:tasks.workunit.client.0.vm05.stdout:2/867: read da/d29/d6a/fda [70558,6344] 0 2026-03-09T15:01:56.832 INFO:tasks.workunit.client.0.vm05.stdout:4/865: dwrite d2/d43/f4f [0,4194304] 0 2026-03-09T15:01:56.845 INFO:tasks.workunit.client.0.vm05.stdout:2/868: sync 2026-03-09T15:01:56.845 INFO:tasks.workunit.client.0.vm05.stdout:4/866: dread d2/d1d/d88/faa [0,4194304] 0 2026-03-09T15:01:56.862 INFO:tasks.workunit.client.0.vm05.stdout:0/806: mkdir d9/d59/dfd 0 2026-03-09T15:01:56.865 INFO:tasks.workunit.client.0.vm05.stdout:7/855: fdatasync d1/d22/d3c/f70 0 2026-03-09T15:01:56.869 INFO:tasks.workunit.client.0.vm05.stdout:8/882: truncate d0/d1/d12/d1b/d95/dd7/dd2/dd8/d114/da8/ff0 2105938 0 2026-03-09T15:01:56.874 INFO:tasks.workunit.client.0.vm05.stdout:9/904: write d2/d10/d22/d2c/f44 [3872338,81232] 0 2026-03-09T15:01:56.879 INFO:tasks.workunit.client.0.vm05.stdout:6/806: rename da/d43/d7b/d89 to da/d19/dd7/dfe 0 2026-03-09T15:01:56.882 INFO:tasks.workunit.client.0.vm05.stdout:6/807: write da/d17/fe4 [2097264,70953] 0 2026-03-09T15:01:56.884 INFO:tasks.workunit.client.0.vm05.stdout:8/883: sync 2026-03-09T15:01:56.885 INFO:tasks.workunit.client.0.vm05.stdout:9/905: dread d2/d10/f71 [0,4194304] 0 2026-03-09T15:01:56.885 INFO:tasks.workunit.client.0.vm05.stdout:6/808: sync 2026-03-09T15:01:56.906 INFO:tasks.workunit.client.0.vm05.stdout:5/921: rmdir d1/d4/d34/d6c/d102 0 2026-03-09T15:01:56.910 INFO:tasks.workunit.client.0.vm05.stdout:3/835: symlink d3/df/d10/d19/dce/d107/l113 0 2026-03-09T15:01:56.915 INFO:tasks.workunit.client.0.vm05.stdout:0/807: fdatasync d9/de/d12/d15/d2e/d32/d9f/fa9 0 2026-03-09T15:01:56.921 INFO:tasks.workunit.client.0.vm05.stdout:7/856: creat d1/d9/d23/d31/d51/f120 x:0 0 0 2026-03-09T15:01:56.932 INFO:tasks.workunit.client.0.vm05.stdout:8/884: mknod d0/dc/c122 0 2026-03-09T15:01:56.944 INFO:tasks.workunit.client.0.vm05.stdout:7/857: creat d1/d9/d23/d54/d7b/f121 x:0 0 0 2026-03-09T15:01:56.944 INFO:tasks.workunit.client.0.vm05.stdout:7/858: dread - 
d1/d49/d4a/f115 zero size 2026-03-09T15:01:56.958 INFO:tasks.workunit.client.0.vm05.stdout:7/859: rmdir d1/d9/d23/d31/d51 39 2026-03-09T15:01:56.959 INFO:tasks.workunit.client.0.vm05.stdout:1/832: rename f7 to d9/d2f/d37/d5f/f111 0 2026-03-09T15:01:56.964 INFO:tasks.workunit.client.0.vm05.stdout:2/869: link da/d29/c3d da/d29/d6a/c10c 0 2026-03-09T15:01:56.969 INFO:tasks.workunit.client.0.vm05.stdout:9/906: dread d2/d4e/d56/d53/d64/ded/f36 [0,4194304] 0 2026-03-09T15:01:56.977 INFO:tasks.workunit.client.0.vm05.stdout:7/860: dread d1/d9/d23/d54/f6f [0,4194304] 0 2026-03-09T15:01:56.993 INFO:tasks.workunit.client.0.vm05.stdout:1/833: creat d9/d2f/d37/d5a/da9/dc9/dcd/f112 x:0 0 0 2026-03-09T15:01:57.002 INFO:tasks.workunit.client.0.vm05.stdout:2/870: creat da/d29/d6a/db1/db7/dea/f10d x:0 0 0 2026-03-09T15:01:57.013 INFO:tasks.workunit.client.0.vm05.stdout:4/867: rename d2/d49/d69/l87 to d2/d1d/d88/l11e 0 2026-03-09T15:01:57.013 INFO:tasks.workunit.client.0.vm05.stdout:8/885: getdents d0 0 2026-03-09T15:01:57.013 INFO:tasks.workunit.client.0.vm05.stdout:7/861: truncate d1/d22/d3c/fa2 45466 0 2026-03-09T15:01:57.013 INFO:tasks.workunit.client.0.vm05.stdout:5/922: getdents d1/d4/d34/d35/d3d/d38 0 2026-03-09T15:01:57.014 INFO:tasks.workunit.client.0.vm05.stdout:9/907: symlink d2/d10/d22/d47/d95/d138/l13b 0 2026-03-09T15:01:57.022 INFO:tasks.workunit.client.0.vm05.stdout:5/923: dread - d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/f10e zero size 2026-03-09T15:01:57.023 INFO:tasks.workunit.client.0.vm05.stdout:7/862: fdatasync d1/d9/d23/d31/d32/fc7 0 2026-03-09T15:01:57.024 INFO:tasks.workunit.client.0.vm05.stdout:2/871: mknod da/d29/d64/da6/c10e 0 2026-03-09T15:01:57.024 INFO:tasks.workunit.client.0.vm05.stdout:4/868: sync 2026-03-09T15:01:57.029 INFO:tasks.workunit.client.0.vm05.stdout:6/809: rename da/d17/d95/da2/fc5 to da/d43/d7b/db3/fff 0 2026-03-09T15:01:57.040 INFO:tasks.workunit.client.0.vm05.stdout:1/834: dwrite d9/d2f/d83/d98/d59/d49/d92/fd2 [0,4194304] 0 
2026-03-09T15:01:57.041 INFO:tasks.workunit.client.0.vm05.stdout:5/924: creat d1/d4/d34/d35/dd0/f139 x:0 0 0 2026-03-09T15:01:57.042 INFO:tasks.workunit.client.0.vm05.stdout:5/925: truncate d1/d4/d34/d56/d68/f120 4598798 0 2026-03-09T15:01:57.048 INFO:tasks.workunit.client.0.vm05.stdout:8/886: creat d0/d24/d112/f123 x:0 0 0 2026-03-09T15:01:57.059 INFO:tasks.workunit.client.0.vm05.stdout:4/869: read d2/d1d/fd0 [247913,130513] 0 2026-03-09T15:01:57.059 INFO:tasks.workunit.client.0.vm05.stdout:7/863: dread d1/d9/f8b [0,4194304] 0 2026-03-09T15:01:57.064 INFO:tasks.workunit.client.0.vm05.stdout:3/836: rename d3/df/d10/d19/cff to d3/df/d1e/d2c/d74/d9b/c114 0 2026-03-09T15:01:57.067 INFO:tasks.workunit.client.0.vm05.stdout:6/810: dwrite da/fab [4194304,4194304] 0 2026-03-09T15:01:57.072 INFO:tasks.workunit.client.0.vm05.stdout:9/908: link d2/d4e/d56/fce d2/d10/d22/d2c/d3c/d11b/f13c 0 2026-03-09T15:01:57.079 INFO:tasks.workunit.client.0.vm05.stdout:5/926: mknod d1/d4/d34/dc0/c13a 0 2026-03-09T15:01:57.080 INFO:tasks.workunit.client.0.vm05.stdout:5/927: write d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d103/f108 [4542139,111967] 0 2026-03-09T15:01:57.084 INFO:tasks.workunit.client.0.vm05.stdout:8/887: symlink d0/d24/l124 0 2026-03-09T15:01:57.091 INFO:tasks.workunit.client.0.vm05.stdout:7/864: rmdir d1/d22/d3c 39 2026-03-09T15:01:57.098 INFO:tasks.workunit.client.0.vm05.stdout:0/808: rename d9/f2b to d9/de/df1/deb/ffe 0 2026-03-09T15:01:57.107 INFO:tasks.workunit.client.0.vm05.stdout:9/909: mkdir d2/d10/d22/dc1/dc3/d13d 0 2026-03-09T15:01:57.116 INFO:tasks.workunit.client.0.vm05.stdout:8/888: creat d0/d1/d12/d1b/d95/d42/d60/d73/f125 x:0 0 0 2026-03-09T15:01:57.117 INFO:tasks.workunit.client.0.vm05.stdout:8/889: write d0/d1/d12/d1b/d95/d42/f4e [3909749,128063] 0 2026-03-09T15:01:57.121 INFO:tasks.workunit.client.0.vm05.stdout:1/835: write d9/d2f/d83/d98/d87/ff5 [840516,34653] 0 2026-03-09T15:01:57.132 INFO:tasks.workunit.client.0.vm05.stdout:7/865: fsync d1/d9/f75 0 
2026-03-09T15:01:57.137 INFO:tasks.workunit.client.0.vm05.stdout:6/811: rename da/d43/ldf to da/d43/d7b/da9/l100 0 2026-03-09T15:01:57.137 INFO:tasks.workunit.client.0.vm05.stdout:6/812: chown da/d19/dd7/dfe/db8 3354 1 2026-03-09T15:01:57.138 INFO:tasks.workunit.client.0.vm05.stdout:0/809: creat d9/de/d25/d38/d78/fff x:0 0 0 2026-03-09T15:01:57.138 INFO:tasks.workunit.client.0.vm05.stdout:3/837: truncate d3/df/d10/d19/f25 180647 0 2026-03-09T15:01:57.142 INFO:tasks.workunit.client.0.vm05.stdout:3/838: truncate d3/d29/d2d/f33 1863263 0 2026-03-09T15:01:57.146 INFO:tasks.workunit.client.0.vm05.stdout:7/866: sync 2026-03-09T15:01:57.146 INFO:tasks.workunit.client.0.vm05.stdout:6/813: mknod da/d17/d3b/dbd/c101 0 2026-03-09T15:01:57.147 INFO:tasks.workunit.client.0.vm05.stdout:7/867: read - d1/d9/d23/d31/d8f/d93/f11c zero size 2026-03-09T15:01:57.198 INFO:tasks.workunit.client.0.vm05.stdout:2/872: link l8 da/d29/d64/l10f 0 2026-03-09T15:01:57.198 INFO:tasks.workunit.client.0.vm05.stdout:4/870: link d2/d7a/fbf d2/d4/d1e/f11f 0 2026-03-09T15:01:57.204 INFO:tasks.workunit.client.0.vm05.stdout:5/928: rename d1/d4/d34/d35/lff to d1/d4/d34/d6c/l13b 0 2026-03-09T15:01:57.207 INFO:tasks.workunit.client.0.vm05.stdout:3/839: mknod d3/df/d10/d19/db2/d102/c115 0 2026-03-09T15:01:57.211 INFO:tasks.workunit.client.0.vm05.stdout:6/814: rmdir da/d17/d7c 39 2026-03-09T15:01:57.212 INFO:tasks.workunit.client.0.vm05.stdout:7/868: symlink d1/d9/d23/d31/d32/d78/dbb/l122 0 2026-03-09T15:01:57.221 INFO:tasks.workunit.client.0.vm05.stdout:1/836: link d9/d2f/l65 d9/d2f/d83/d98/d59/d49/d10a/l113 0 2026-03-09T15:01:57.236 INFO:tasks.workunit.client.0.vm05.stdout:4/871: symlink d2/d1d/d88/l120 0 2026-03-09T15:01:57.241 INFO:tasks.workunit.client.0.vm05.stdout:5/929: fdatasync d1/d4/d34/dc0/f106 0 2026-03-09T15:01:57.249 INFO:tasks.workunit.client.0.vm05.stdout:0/810: write d9/de/d25/f47 [331262,96067] 0 2026-03-09T15:01:57.262 INFO:tasks.workunit.client.0.vm05.stdout:3/840: write 
d3/df/d10/d19/dce/dc8/de2/d8c/fdc [93238,79045] 0 2026-03-09T15:01:57.263 INFO:tasks.workunit.client.0.vm05.stdout:7/869: stat d1/d9/d23/d31/d51/f29 0 2026-03-09T15:01:57.264 INFO:tasks.workunit.client.0.vm05.stdout:9/910: link d2/d10/fe6 d2/d10/d22/d2c/d69/d5a/f13e 0 2026-03-09T15:01:57.267 INFO:tasks.workunit.client.0.vm05.stdout:8/890: getdents d0/d1 0 2026-03-09T15:01:57.272 INFO:tasks.workunit.client.0.vm05.stdout:2/873: truncate da/d16/f1e 5058770 0 2026-03-09T15:01:57.272 INFO:tasks.workunit.client.0.vm05.stdout:1/837: fsync d9/d2f/d55/dd0/fdb 0 2026-03-09T15:01:57.276 INFO:tasks.workunit.client.0.vm05.stdout:7/870: sync 2026-03-09T15:01:57.279 INFO:tasks.workunit.client.0.vm05.stdout:7/871: read d1/d49/d4a/fcc [118146,46081] 0 2026-03-09T15:01:57.286 INFO:tasks.workunit.client.0.vm05.stdout:4/872: dread d2/d1d/f36 [0,4194304] 0 2026-03-09T15:01:57.294 INFO:tasks.workunit.client.0.vm05.stdout:6/815: unlink da/d19/dd7/cf9 0 2026-03-09T15:01:57.297 INFO:tasks.workunit.client.0.vm05.stdout:9/911: chown d2/d10/d22/d2c/d69/lca 11000480 1 2026-03-09T15:01:57.297 INFO:tasks.workunit.client.0.vm05.stdout:6/816: dread - da/d43/d66/fe8 zero size 2026-03-09T15:01:57.302 INFO:tasks.workunit.client.0.vm05.stdout:4/873: fsync d2/d4/d7/f53 0 2026-03-09T15:01:57.302 INFO:tasks.workunit.client.0.vm05.stdout:9/912: sync 2026-03-09T15:01:57.307 INFO:tasks.workunit.client.0.vm05.stdout:6/817: dwrite da/d17/d95/da2/dae/fef [0,4194304] 0 2026-03-09T15:01:57.321 INFO:tasks.workunit.client.0.vm05.stdout:8/891: unlink d0/dc/c121 0 2026-03-09T15:01:57.324 INFO:tasks.workunit.client.0.vm05.stdout:2/874: mkdir da/d29/d6a/da0/dd9/dfd/d110 0 2026-03-09T15:01:57.325 INFO:tasks.workunit.client.0.vm05.stdout:2/875: chown da/d16/d46/fa3 255106 1 2026-03-09T15:01:57.325 INFO:tasks.workunit.client.0.vm05.stdout:2/876: chown da/d29/d64/dc1/c101 235160 1 2026-03-09T15:01:57.331 INFO:tasks.workunit.client.0.vm05.stdout:2/877: sync 2026-03-09T15:01:57.335 
INFO:tasks.workunit.client.0.vm05.stdout:3/841: truncate d3/df/d1e/d2c/d74/ff9 182817 0 2026-03-09T15:01:57.349 INFO:tasks.workunit.client.0.vm05.stdout:9/913: write d2/d10/d22/dc2/f7c [2440566,113747] 0 2026-03-09T15:01:57.350 INFO:tasks.workunit.client.0.vm05.stdout:9/914: stat d2/d10/d22/d2c/d3c/c49 0 2026-03-09T15:01:57.351 INFO:tasks.workunit.client.0.vm05.stdout:9/915: chown d2/d10/d22/c29 492724 1 2026-03-09T15:01:57.351 INFO:tasks.workunit.client.0.vm05.stdout:9/916: fsync d2/d10/d22/fb6 0 2026-03-09T15:01:57.357 INFO:tasks.workunit.client.0.vm05.stdout:6/818: fdatasync da/d17/d3b/f5f 0 2026-03-09T15:01:57.361 INFO:tasks.workunit.client.0.vm05.stdout:5/930: link d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d75/d9c/l123 d1/d4/d34/d56/d68/l13c 0 2026-03-09T15:01:57.364 INFO:tasks.workunit.client.0.vm05.stdout:0/811: link d9/de/d12/d15/d2e/c46 d9/de/d12/da3/c100 0 2026-03-09T15:01:57.367 INFO:tasks.workunit.client.0.vm05.stdout:7/872: rename d1/d9/d23/l4b to d1/d9/d23/d31/d8f/d93/l123 0 2026-03-09T15:01:57.373 INFO:tasks.workunit.client.0.vm05.stdout:2/878: dread da/d29/d3f/f9b [0,4194304] 0 2026-03-09T15:01:57.380 INFO:tasks.workunit.client.0.vm05.stdout:2/879: dwrite da/d29/d6a/da0/d91/dab/d2f/d35/d10b/dd3/ff8 [4194304,4194304] 0 2026-03-09T15:01:57.395 INFO:tasks.workunit.client.0.vm05.stdout:8/892: truncate d0/d1/d12/d1b/d95/d54/f85 3134631 0 2026-03-09T15:01:57.396 INFO:tasks.workunit.client.0.vm05.stdout:8/893: chown d0/d1/d12/d1b/d21/f65 19226 1 2026-03-09T15:01:57.399 INFO:tasks.workunit.client.0.vm05.stdout:0/812: mkdir d9/de/df1/deb/d101 0 2026-03-09T15:01:57.401 INFO:tasks.workunit.client.0.vm05.stdout:4/874: rename d2/d1d/dfa to d2/d1d/d88/d92/d121 0 2026-03-09T15:01:57.408 INFO:tasks.workunit.client.0.vm05.stdout:6/819: write da/d17/d3b/f4a [2002263,43772] 0 2026-03-09T15:01:57.415 INFO:tasks.workunit.client.0.vm05.stdout:1/838: link d9/d2f/d83/d98/l6c d9/d2f/d37/l114 0 2026-03-09T15:01:57.416 INFO:tasks.workunit.client.0.vm05.stdout:7/873: dwrite 
d1/d9/d23/d54/d7b/f7f [0,4194304] 0 2026-03-09T15:01:57.421 INFO:tasks.workunit.client.0.vm05.stdout:3/842: mknod d3/df/c116 0 2026-03-09T15:01:57.444 INFO:tasks.workunit.client.0.vm05.stdout:3/843: dwrite d3/df/d10/d19/db2/d102/f105 [0,4194304] 0 2026-03-09T15:01:57.454 INFO:tasks.workunit.client.0.vm05.stdout:9/917: write d2/d10/d22/d2c/d3c/d11b/f13c [166283,123121] 0 2026-03-09T15:01:57.459 INFO:tasks.workunit.client.0.vm05.stdout:8/894: symlink d0/d1/d12/d1b/d95/d78/dea/l126 0 2026-03-09T15:01:57.460 INFO:tasks.workunit.client.0.vm05.stdout:8/895: chown d0/d1/d12/d3c/d8b/f106 60413 1 2026-03-09T15:01:57.464 INFO:tasks.workunit.client.0.vm05.stdout:4/875: mknod d2/d4/d7/d48/d6b/dd3/c122 0 2026-03-09T15:01:57.470 INFO:tasks.workunit.client.0.vm05.stdout:6/820: rmdir da/d19/dd7/dfe/da8 39 2026-03-09T15:01:57.472 INFO:tasks.workunit.client.0.vm05.stdout:7/874: chown d1/d22/d3c/d105 456 1 2026-03-09T15:01:57.477 INFO:tasks.workunit.client.0.vm05.stdout:1/839: rename d9/d10e/c110 to d9/d2f/d83/d98/d59/d49/d4b/c115 0 2026-03-09T15:01:57.488 INFO:tasks.workunit.client.0.vm05.stdout:2/880: creat da/d29/d6a/da0/d91/dab/d2f/d35/d8a/f111 x:0 0 0 2026-03-09T15:01:57.491 INFO:tasks.workunit.client.0.vm05.stdout:5/931: creat d1/d4/d34/d56/da6/d10d/f13d x:0 0 0 2026-03-09T15:01:57.495 INFO:tasks.workunit.client.0.vm05.stdout:9/918: symlink d2/d10/d22/dc1/dc3/l13f 0 2026-03-09T15:01:57.497 INFO:tasks.workunit.client.0.vm05.stdout:0/813: creat d9/de/d12/d15/d2e/d32/d9f/da0/db7/de5/f102 x:0 0 0 2026-03-09T15:01:57.502 INFO:tasks.workunit.client.0.vm05.stdout:0/814: stat d9/de/d12/d15/d2e/d6b 0 2026-03-09T15:01:57.513 INFO:tasks.workunit.client.0.vm05.stdout:4/876: symlink d2/d4/d8/d4a/l123 0 2026-03-09T15:01:57.513 INFO:tasks.workunit.client.0.vm05.stdout:0/815: stat d9/de/df1/c6f 0 2026-03-09T15:01:57.513 INFO:tasks.workunit.client.0.vm05.stdout:7/875: creat d1/d9/d23/d31/d8f/d93/d95/f124 x:0 0 0 2026-03-09T15:01:57.514 INFO:tasks.workunit.client.0.vm05.stdout:6/821: write 
da/d43/f86 [2237561,109640] 0 2026-03-09T15:01:57.517 INFO:tasks.workunit.client.0.vm05.stdout:1/840: chown d9/db9/c10d 106814678 1 2026-03-09T15:01:57.517 INFO:tasks.workunit.client.0.vm05.stdout:0/816: fdatasync d9/de/d12/d15/d2e/d32/d9f/da0/db7/de5/f102 0 2026-03-09T15:01:57.521 INFO:tasks.workunit.client.0.vm05.stdout:2/881: mknod da/d29/d64/da6/c112 0 2026-03-09T15:01:57.521 INFO:tasks.workunit.client.0.vm05.stdout:2/882: readlink da/dd/l61 0 2026-03-09T15:01:57.532 INFO:tasks.workunit.client.0.vm05.stdout:3/844: write d3/d29/f97 [723663,87606] 0 2026-03-09T15:01:57.549 INFO:tasks.workunit.client.0.vm05.stdout:9/919: dwrite d2/d9e/f104 [0,4194304] 0 2026-03-09T15:01:57.569 INFO:tasks.workunit.client.0.vm05.stdout:7/876: fdatasync d1/d22/d3c/fce 0 2026-03-09T15:01:57.580 INFO:tasks.workunit.client.0.vm05.stdout:4/877: creat d2/d4/d1e/da2/dec/f124 x:0 0 0 2026-03-09T15:01:57.585 INFO:tasks.workunit.client.0.vm05.stdout:1/841: mkdir d9/d2f/d83/d98/d59/d49/d4b/d116 0 2026-03-09T15:01:57.591 INFO:tasks.workunit.client.0.vm05.stdout:0/817: write d9/de/d12/da3/fa4 [1384104,11802] 0 2026-03-09T15:01:57.600 INFO:tasks.workunit.client.0.vm05.stdout:0/818: fdatasync d9/de/d12/d15/d2e/fee 0 2026-03-09T15:01:57.601 INFO:tasks.workunit.client.0.vm05.stdout:2/883: write da/d29/d6a/d7f/fa8 [4421053,78426] 0 2026-03-09T15:01:57.610 INFO:tasks.workunit.client.0.vm05.stdout:8/896: link d0/f10 d0/d1/d12/d1b/d95/d42/d60/da7/db3/f127 0 2026-03-09T15:01:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:57 vm09.local ceph-mon[59673]: pgmap v11: 65 pgs: 65 active+clean; 2.5 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 25 MiB/s rd, 56 MiB/s wr, 156 op/s 2026-03-09T15:01:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:57 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:57 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 
2026-03-09T15:01:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:57 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:57 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:57.620 INFO:tasks.workunit.client.0.vm05.stdout:0/819: dread d9/de/d12/d15/d2e/f76 [0,4194304] 0 2026-03-09T15:01:57.629 INFO:tasks.workunit.client.0.vm05.stdout:4/878: creat d2/d4/d8/d4a/d8f/f125 x:0 0 0 2026-03-09T15:01:57.631 INFO:tasks.workunit.client.0.vm05.stdout:4/879: stat d2/d4/d1e/d71/f103 0 2026-03-09T15:01:57.634 INFO:tasks.workunit.client.0.vm05.stdout:2/884: symlink da/d16/d46/l113 0 2026-03-09T15:01:57.642 INFO:tasks.workunit.client.0.vm05.stdout:3/845: getdents d3/d29/d2d/d7b/dc5 0 2026-03-09T15:01:57.645 INFO:tasks.workunit.client.0.vm05.stdout:8/897: rmdir d0/d1/de2 39 2026-03-09T15:01:57.661 INFO:tasks.workunit.client.0.vm05.stdout:4/880: sync 2026-03-09T15:01:57.707 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:57 vm05.local ceph-mon[50611]: pgmap v11: 65 pgs: 65 active+clean; 2.5 GiB data, 8.6 GiB used, 111 GiB / 120 GiB avail; 25 MiB/s rd, 56 MiB/s wr, 156 op/s 2026-03-09T15:01:57.707 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:57 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:57.707 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:57 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:57.707 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:57 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:57.707 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:57 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:01:57.708 INFO:tasks.workunit.client.0.vm05.stdout:1/842: creat d9/d2f/d83/d98/d59/d49/d4b/d116/f117 x:0 0 0 2026-03-09T15:01:57.710 
INFO:tasks.workunit.client.0.vm05.stdout:5/932: getdents d1/d4/d34/d35/d3d/d38/d69 0 2026-03-09T15:01:57.718 INFO:tasks.workunit.client.0.vm05.stdout:5/933: sync 2026-03-09T15:01:57.719 INFO:tasks.workunit.client.0.vm05.stdout:2/885: fdatasync da/d16/fdd 0 2026-03-09T15:01:57.720 INFO:tasks.workunit.client.0.vm05.stdout:2/886: readlink da/d29/d6a/da0/d91/dab/l32 0 2026-03-09T15:01:57.723 INFO:tasks.workunit.client.0.vm05.stdout:3/846: dread d3/df/d10/d19/d44/f60 [0,4194304] 0 2026-03-09T15:01:57.744 INFO:tasks.workunit.client.0.vm05.stdout:0/820: write d9/de/df1/f71 [1153720,56208] 0 2026-03-09T15:01:57.744 INFO:tasks.workunit.client.0.vm05.stdout:0/821: readlink d9/de/d12/ldd 0 2026-03-09T15:01:57.744 INFO:tasks.workunit.client.0.vm05.stdout:0/822: stat d9 0 2026-03-09T15:01:57.755 INFO:tasks.workunit.client.0.vm05.stdout:7/877: creat d1/f125 x:0 0 0 2026-03-09T15:01:57.756 INFO:tasks.workunit.client.0.vm05.stdout:8/898: dwrite d0/d1/d12/d3c/f8c [0,4194304] 0 2026-03-09T15:01:57.759 INFO:tasks.workunit.client.0.vm05.stdout:8/899: chown d0/d1/d12/d1b/d21/c23 17 1 2026-03-09T15:01:57.779 INFO:tasks.workunit.client.0.vm05.stdout:6/822: getdents da/d19 0 2026-03-09T15:01:57.780 INFO:tasks.workunit.client.0.vm05.stdout:4/881: truncate d2/d4/d8/d4a/d6e/f93 1405976 0 2026-03-09T15:01:57.782 INFO:tasks.workunit.client.0.vm05.stdout:5/934: symlink d1/d4/d34/d35/d3d/d38/d63/l13e 0 2026-03-09T15:01:57.789 INFO:tasks.workunit.client.0.vm05.stdout:9/920: getdents d2/d4e/d56/d53 0 2026-03-09T15:01:57.798 INFO:tasks.workunit.client.0.vm05.stdout:7/878: mknod d1/de4/c126 0 2026-03-09T15:01:57.798 INFO:tasks.workunit.client.0.vm05.stdout:1/843: dwrite d9/d2f/d83/d98/d59/d49/ffd [0,4194304] 0 2026-03-09T15:01:57.801 INFO:tasks.workunit.client.0.vm05.stdout:7/879: fdatasync d1/d9/d23/d31/d51/f120 0 2026-03-09T15:01:57.813 INFO:tasks.workunit.client.0.vm05.stdout:3/847: write d3/df/d59/ffe [573603,59706] 0 2026-03-09T15:01:57.814 INFO:tasks.workunit.client.0.vm05.stdout:3/848: chown 
d3/df/d10/d19/db2/d102 5 1 2026-03-09T15:01:57.826 INFO:tasks.workunit.client.0.vm05.stdout:3/849: dwrite d3/d29/d2d/f33 [0,4194304] 0 2026-03-09T15:01:57.831 INFO:tasks.workunit.client.0.vm05.stdout:3/850: chown d3/df/d10/d19/d44/da2/df8 4786 1 2026-03-09T15:01:57.854 INFO:tasks.workunit.client.0.vm05.stdout:9/921: creat d2/d10/d22/d2c/d3c/d101/f140 x:0 0 0 2026-03-09T15:01:57.856 INFO:tasks.workunit.client.0.vm05.stdout:9/922: chown d2/d4e/d56/d53/d64/ded/d9c/d8e/f5f 14348 1 2026-03-09T15:01:57.865 INFO:tasks.workunit.client.0.vm05.stdout:0/823: rename d9/de/d12/d15/f9e to d9/de/d12/d15/d49/f103 0 2026-03-09T15:01:57.876 INFO:tasks.workunit.client.0.vm05.stdout:0/824: sync 2026-03-09T15:01:57.877 INFO:tasks.workunit.client.0.vm05.stdout:0/825: write d9/de/d25/dae/ffb [829743,14673] 0 2026-03-09T15:01:57.898 INFO:tasks.workunit.client.0.vm05.stdout:9/923: mkdir d2/d10/d22/dc1/dc3/d141 0 2026-03-09T15:01:57.901 INFO:tasks.workunit.client.0.vm05.stdout:5/935: write d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d75/d137/fb0 [1951779,105864] 0 2026-03-09T15:01:57.901 INFO:tasks.workunit.client.0.vm05.stdout:6/823: write da/d43/f54 [25884,124351] 0 2026-03-09T15:01:57.901 INFO:tasks.workunit.client.0.vm05.stdout:4/882: write d2/d4/d7/dc/f64 [1715441,86511] 0 2026-03-09T15:01:57.904 INFO:tasks.workunit.client.0.vm05.stdout:2/887: dwrite da/f2c [0,4194304] 0 2026-03-09T15:01:57.906 INFO:tasks.workunit.client.0.vm05.stdout:1/844: write d9/d2f/d83/d98/d59/df8/ffa [493672,21247] 0 2026-03-09T15:01:57.908 INFO:tasks.workunit.client.0.vm05.stdout:4/883: write d2/d43/f4f [3223622,108935] 0 2026-03-09T15:01:57.911 INFO:tasks.workunit.client.0.vm05.stdout:5/936: write d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d103/f108 [763245,104886] 0 2026-03-09T15:01:57.917 INFO:tasks.workunit.client.0.vm05.stdout:7/880: dwrite d1/d9/d23/d31/d32/f63 [0,4194304] 0 2026-03-09T15:01:57.932 INFO:tasks.workunit.client.0.vm05.stdout:5/937: dwrite d1/db5/f129 [0,4194304] 0 2026-03-09T15:01:57.933 
INFO:tasks.workunit.client.0.vm05.stdout:5/938: readlink d1/d4/d34/d35/d4e/d6f/l12d 0 2026-03-09T15:01:57.950 INFO:tasks.workunit.client.0.vm05.stdout:2/888: rename da/d16 to da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114 0 2026-03-09T15:01:57.965 INFO:tasks.workunit.client.0.vm05.stdout:8/900: truncate d0/d1/d12/d1b/d95/d54/f5b 3362705 0 2026-03-09T15:01:57.967 INFO:tasks.workunit.client.0.vm05.stdout:7/881: creat d1/d49/dec/f127 x:0 0 0 2026-03-09T15:01:57.973 INFO:tasks.workunit.client.0.vm05.stdout:4/884: dread d2/f98 [0,4194304] 0 2026-03-09T15:01:57.997 INFO:tasks.workunit.client.0.vm05.stdout:3/851: write d3/d29/d2d/d77/d4d/fe9 [907816,85336] 0 2026-03-09T15:01:58.003 INFO:tasks.workunit.client.0.vm05.stdout:2/889: rmdir da/dd 39 2026-03-09T15:01:58.005 INFO:tasks.workunit.client.0.vm05.stdout:2/890: read da/d29/d6a/db1/db7/ff9 [78477,37980] 0 2026-03-09T15:01:58.008 INFO:tasks.workunit.client.0.vm05.stdout:1/845: dread d9/d2f/d37/d5f/f73 [0,4194304] 0 2026-03-09T15:01:58.023 INFO:tasks.workunit.client.0.vm05.stdout:8/901: symlink d0/d1/d12/d1b/d95/d42/d60/d73/l128 0 2026-03-09T15:01:58.030 INFO:tasks.workunit.client.0.vm05.stdout:0/826: dwrite d9/f42 [0,4194304] 0 2026-03-09T15:01:58.037 INFO:tasks.workunit.client.0.vm05.stdout:0/827: dread d9/de/d12/d15/d2e/fc2 [0,4194304] 0 2026-03-09T15:01:58.050 INFO:tasks.workunit.client.0.vm05.stdout:9/924: write d2/d10/fe6 [3894989,7705] 0 2026-03-09T15:01:58.053 INFO:tasks.workunit.client.0.vm05.stdout:4/885: mknod d2/d1d/d88/d92/d121/c126 0 2026-03-09T15:01:58.060 INFO:tasks.workunit.client.0.vm05.stdout:6/824: write da/d17/f64 [96151,118435] 0 2026-03-09T15:01:58.109 INFO:tasks.workunit.client.0.vm05.stdout:5/939: write d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d75/d137/fb1 [1365630,89262] 0 2026-03-09T15:01:58.117 INFO:tasks.workunit.client.0.vm05.stdout:1/846: creat d9/d2f/d83/d98/d87/f118 x:0 0 0 2026-03-09T15:01:58.118 INFO:tasks.workunit.client.0.vm05.stdout:7/882: write d1/d22/f102 [88136,49061] 0 2026-03-09T15:01:58.119 
INFO:tasks.workunit.client.0.vm05.stdout:3/852: write d3/df/d10/d19/d44/fb0 [58625,99863] 0 2026-03-09T15:01:58.131 INFO:tasks.workunit.client.0.vm05.stdout:1/847: sync 2026-03-09T15:01:58.173 INFO:tasks.workunit.client.0.vm05.stdout:0/828: truncate d9/de/d6a/f7c 3166925 0 2026-03-09T15:01:58.176 INFO:tasks.workunit.client.0.vm05.stdout:9/925: creat d2/d4e/d56/d53/d64/dd9/def/d12d/d12f/f142 x:0 0 0 2026-03-09T15:01:58.176 INFO:tasks.workunit.client.0.vm05.stdout:9/926: chown d2/d10/d22/l11c 478 1 2026-03-09T15:01:58.198 INFO:tasks.workunit.client.0.vm05.stdout:4/886: creat d2/d1d/df5/df8/f127 x:0 0 0 2026-03-09T15:01:58.200 INFO:tasks.workunit.client.0.vm05.stdout:6/825: chown da/d17/f3c 2867 1 2026-03-09T15:01:58.200 INFO:tasks.workunit.client.0.vm05.stdout:6/826: fdatasync da/d19/f22 0 2026-03-09T15:01:58.212 INFO:tasks.workunit.client.0.vm05.stdout:5/940: truncate d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/f4f 727074 0 2026-03-09T15:01:58.229 INFO:tasks.workunit.client.0.vm05.stdout:1/848: mknod d9/d2f/d83/d98/d59/d49/d92/c119 0 2026-03-09T15:01:58.231 INFO:tasks.workunit.client.0.vm05.stdout:8/902: getdents d0/d1/d12/d1b/d66/d11b 0 2026-03-09T15:01:58.241 INFO:tasks.workunit.client.0.vm05.stdout:0/829: rename d9/de/d12/d15/d2e/d32/d9f to d9/de/d12/d15/d2e/d32/d53/d61/d104 0 2026-03-09T15:01:58.243 INFO:tasks.workunit.client.0.vm05.stdout:4/887: mknod d2/d4/d1e/da2/c128 0 2026-03-09T15:01:58.244 INFO:tasks.workunit.client.0.vm05.stdout:6/827: truncate da/d43/d66/f6e 5501874 0 2026-03-09T15:01:58.249 INFO:tasks.workunit.client.0.vm05.stdout:5/941: dwrite d1/d4/d34/d56/d68/f120 [0,4194304] 0 2026-03-09T15:01:58.266 INFO:tasks.workunit.client.0.vm05.stdout:7/883: mknod d1/d9/d23/c128 0 2026-03-09T15:01:58.266 INFO:tasks.workunit.client.0.vm05.stdout:3/853: creat d3/df/dbe/f117 x:0 0 0 2026-03-09T15:01:58.266 INFO:tasks.workunit.client.0.vm05.stdout:3/854: chown d3/df/f4a 185 1 2026-03-09T15:01:58.267 INFO:tasks.workunit.client.0.vm05.stdout:1/849: truncate d9/f12 5544518 
0 2026-03-09T15:01:58.276 INFO:tasks.workunit.client.0.vm05.stdout:0/830: dread d9/d59/f83 [0,4194304] 0 2026-03-09T15:01:58.278 INFO:tasks.workunit.client.0.vm05.stdout:2/891: getdents da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46 0 2026-03-09T15:01:58.284 INFO:tasks.workunit.client.0.vm05.stdout:5/942: rename d1/d4/d34/d35/d4e/c95 to d1/d4/d34/d35/d3d/d38/d69/c13f 0 2026-03-09T15:01:58.284 INFO:tasks.workunit.client.0.vm05.stdout:5/943: dread - d1/d4/d34/d35/d3d/d38/fe4 zero size 2026-03-09T15:01:58.288 INFO:tasks.workunit.client.0.vm05.stdout:3/855: fsync d3/d29/d2d/d7b/ffc 0 2026-03-09T15:01:58.290 INFO:tasks.workunit.client.0.vm05.stdout:3/856: dread d3/d29/d7f/fa1 [0,4194304] 0 2026-03-09T15:01:58.294 INFO:tasks.workunit.client.0.vm05.stdout:3/857: dwrite d3/df/d10/d19/d44/f56 [0,4194304] 0 2026-03-09T15:01:58.307 INFO:tasks.workunit.client.0.vm05.stdout:6/828: symlink da/d17/l102 0 2026-03-09T15:01:58.338 INFO:tasks.workunit.client.0.vm05.stdout:2/892: dread da/f9d [0,4194304] 0 2026-03-09T15:01:58.347 INFO:tasks.workunit.client.0.vm05.stdout:6/829: symlink da/d17/d95/da2/l103 0 2026-03-09T15:01:58.349 INFO:tasks.workunit.client.0.vm05.stdout:7/884: rename d1/d9/d23/d31/d51/fd4 to d1/d49/d4a/f129 0 2026-03-09T15:01:58.349 INFO:tasks.workunit.client.0.vm05.stdout:7/885: fdatasync d1/d9/fc 0 2026-03-09T15:01:58.353 INFO:tasks.workunit.client.0.vm05.stdout:7/886: dwrite d1/d9/d23/d54/d7b/f121 [0,4194304] 0 2026-03-09T15:01:58.354 INFO:tasks.workunit.client.0.vm05.stdout:2/893: sync 2026-03-09T15:01:58.367 INFO:tasks.workunit.client.0.vm05.stdout:5/944: mknod d1/da/d10f/d132/c140 0 2026-03-09T15:01:58.386 INFO:tasks.workunit.client.0.vm05.stdout:5/945: truncate d1/d4/f20 528313 0 2026-03-09T15:01:58.391 INFO:tasks.workunit.client.0.vm05.stdout:5/946: sync 2026-03-09T15:01:58.403 INFO:tasks.workunit.client.0.vm05.stdout:3/858: link d3/df/d10/d19/dce/dc8/de2/d8c/cef d3/df/d10/d19/dce/dc8/de2/d8c/d90/d10a/c118 0 2026-03-09T15:01:58.426 
INFO:tasks.workunit.client.0.vm05.stdout:9/927: dwrite d2/f8 [4194304,4194304] 0 2026-03-09T15:01:58.433 INFO:tasks.workunit.client.0.vm05.stdout:8/903: write d0/f10 [327119,75227] 0 2026-03-09T15:01:58.433 INFO:tasks.workunit.client.0.vm05.stdout:0/831: write d9/de/f1e [3138979,31753] 0 2026-03-09T15:01:58.438 INFO:tasks.workunit.client.0.vm05.stdout:1/850: dwrite d9/d2f/d83/d98/d59/d49/dc2/fd5 [0,4194304] 0 2026-03-09T15:01:58.469 INFO:tasks.workunit.client.0.vm05.stdout:6/830: creat da/d19/dd7/dfe/da8/f104 x:0 0 0 2026-03-09T15:01:58.473 INFO:tasks.workunit.client.0.vm05.stdout:2/894: chown da/dd/lad 254127516 1 2026-03-09T15:01:58.475 INFO:tasks.workunit.client.0.vm05.stdout:6/831: sync 2026-03-09T15:01:58.478 INFO:tasks.workunit.client.0.vm05.stdout:7/887: mkdir d1/d12a 0 2026-03-09T15:01:58.494 INFO:tasks.workunit.client.0.vm05.stdout:0/832: creat d9/de/d25/d38/d78/dc9/f105 x:0 0 0 2026-03-09T15:01:58.502 INFO:tasks.workunit.client.0.vm05.stdout:1/851: unlink d9/d2f/d37/d5f/f73 0 2026-03-09T15:01:58.502 INFO:tasks.workunit.client.0.vm05.stdout:4/888: rename d2/d4/d7/d48/d6b/ddb/lfd to d2/d4/d7/dc/l129 0 2026-03-09T15:01:58.504 INFO:tasks.workunit.client.0.vm05.stdout:1/852: dread - d9/d2f/d37/d5a/da9/dc9/dcd/f112 zero size 2026-03-09T15:01:58.508 INFO:tasks.workunit.client.0.vm05.stdout:9/928: dwrite d2/d10/d22/d2c/d3c/fbc [0,4194304] 0 2026-03-09T15:01:58.509 INFO:tasks.workunit.client.0.vm05.stdout:2/895: creat da/d29/d6a/da0/d7c/f115 x:0 0 0 2026-03-09T15:01:58.513 INFO:tasks.workunit.client.0.vm05.stdout:9/929: dread d2/d9e/f104 [0,4194304] 0 2026-03-09T15:01:58.520 INFO:tasks.workunit.client.0.vm05.stdout:9/930: dwrite d2/d10/d22/dc2/db1/fb8 [0,4194304] 0 2026-03-09T15:01:58.566 INFO:tasks.workunit.client.0.vm05.stdout:0/833: rmdir d9/de/d12/d15/d2e/d32/d53/d61/d104/da0 39 2026-03-09T15:01:58.566 INFO:tasks.workunit.client.0.vm05.stdout:0/834: chown d9/de/d12/d15/d2e/d6b 814295785 1 2026-03-09T15:01:58.567 INFO:tasks.workunit.client.0.vm05.stdout:7/888: 
write d1/d49/d4a/f6b [3732643,15786] 0 2026-03-09T15:01:58.571 INFO:tasks.workunit.client.0.vm05.stdout:8/904: rename d0/d1/d12/d1b/d6e/d93/d9f/dad to d0/d1/d12/d3c/d8b/d129 0 2026-03-09T15:01:58.573 INFO:tasks.workunit.client.0.vm05.stdout:1/853: dread - d9/d2f/d37/d5a/fcb zero size 2026-03-09T15:01:58.579 INFO:tasks.workunit.client.0.vm05.stdout:2/896: symlink da/d29/d64/dc1/l116 0 2026-03-09T15:01:58.603 INFO:tasks.workunit.client.0.vm05.stdout:4/889: dwrite d2/d4/fbe [0,4194304] 0 2026-03-09T15:01:58.613 INFO:tasks.workunit.client.0.vm05.stdout:9/931: mknod d2/d10/d22/dc1/dc3/c143 0 2026-03-09T15:01:58.633 INFO:tasks.workunit.client.0.vm05.stdout:0/835: truncate d9/d59/f83 1393825 0 2026-03-09T15:01:58.636 INFO:tasks.workunit.client.0.vm05.stdout:7/889: readlink d1/d9/d23/d31/d51/ld8 0 2026-03-09T15:01:58.640 INFO:tasks.workunit.client.0.vm05.stdout:8/905: creat d0/d1/d12/d3c/d8b/f12a x:0 0 0 2026-03-09T15:01:58.669 INFO:tasks.workunit.client.0.vm05.stdout:2/897: dwrite da/d29/d6a/fda [0,4194304] 0 2026-03-09T15:01:58.682 INFO:tasks.workunit.client.0.vm05.stdout:9/932: write d2/d9e/f104 [1198564,42064] 0 2026-03-09T15:01:58.683 INFO:tasks.workunit.client.0.vm05.stdout:9/933: fsync d2/d10/d22/d47/fdc 0 2026-03-09T15:01:58.701 INFO:tasks.workunit.client.0.vm05.stdout:5/947: getdents d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d75/d137 0 2026-03-09T15:01:58.703 INFO:tasks.workunit.client.0.vm05.stdout:3/859: link d3/df/c46 d3/d29/d7f/c119 0 2026-03-09T15:01:58.855 INFO:tasks.workunit.client.0.vm05.stdout:2/898: dread da/d29/d6a/db1/db7/ff9 [0,4194304] 0 2026-03-09T15:01:58.857 INFO:tasks.workunit.client.0.vm05.stdout:9/934: fsync d2/d10/d22/d47/fe5 0 2026-03-09T15:01:58.857 INFO:tasks.workunit.client.0.vm05.stdout:9/935: write d2/d4e/d56/fce [409500,91774] 0 2026-03-09T15:01:58.861 INFO:tasks.workunit.client.0.vm05.stdout:6/832: getdents da/d43/d7b/db3 0 2026-03-09T15:01:58.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:58 vm09.local ceph-mon[59673]: pgmap 
v12: 65 pgs: 65 active+clean; 2.6 GiB data, 8.8 GiB used, 111 GiB / 120 GiB avail; 42 MiB/s rd, 86 MiB/s wr, 244 op/s 2026-03-09T15:01:58.868 INFO:tasks.workunit.client.0.vm05.stdout:3/860: symlink d3/df/d10/d19/db2/l11a 0 2026-03-09T15:01:58.873 INFO:tasks.workunit.client.0.vm05.stdout:4/890: rename d2/d1d to d2/d43/d12a 0 2026-03-09T15:01:58.882 INFO:tasks.workunit.client.0.vm05.stdout:4/891: dread d2/d4/d7/d48/d6b/dd3/f108 [0,4194304] 0 2026-03-09T15:01:58.886 INFO:tasks.workunit.client.0.vm05.stdout:2/899: mknod da/d29/d3f/dc3/c117 0 2026-03-09T15:01:58.888 INFO:tasks.workunit.client.0.vm05.stdout:2/900: readlink da/d29/d3f/dc3/l98 0 2026-03-09T15:01:58.900 INFO:tasks.workunit.client.0.vm05.stdout:0/836: creat d9/de/d25/f106 x:0 0 0 2026-03-09T15:01:58.905 INFO:tasks.workunit.client.0.vm05.stdout:1/854: rename d9/d2f/d83/d98/d59/d49/dc2 to d9/d2f/d37/d5f/da2/d11a 0 2026-03-09T15:01:58.909 INFO:tasks.workunit.client.0.vm05.stdout:4/892: mknod d2/d4/d1e/c12b 0 2026-03-09T15:01:58.912 INFO:tasks.workunit.client.0.vm05.stdout:1/855: dread f5 [0,4194304] 0 2026-03-09T15:01:58.914 INFO:tasks.workunit.client.0.vm05.stdout:3/861: truncate d3/d29/f41 3656818 0 2026-03-09T15:01:58.914 INFO:tasks.workunit.client.0.vm05.stdout:3/862: stat d3/df/d1e/d2c/d74/d78 0 2026-03-09T15:01:58.915 INFO:tasks.workunit.client.0.vm05.stdout:3/863: write d3/df/f1b [4437512,87862] 0 2026-03-09T15:01:58.917 INFO:tasks.workunit.client.0.vm05.stdout:8/906: write d0/dc/f4a [1308911,121555] 0 2026-03-09T15:01:58.921 INFO:tasks.workunit.client.0.vm05.stdout:8/907: dread d0/d1/d55/f6a [0,4194304] 0 2026-03-09T15:01:58.921 INFO:tasks.workunit.client.0.vm05.stdout:7/890: link d1/d49/d4a/fcc d1/d9/d23/d31/d32/f12b 0 2026-03-09T15:01:58.929 INFO:tasks.workunit.client.0.vm05.stdout:5/948: write d1/d4/d19/fab [1289311,98801] 0 2026-03-09T15:01:58.933 INFO:tasks.workunit.client.0.vm05.stdout:6/833: rename da/d43/d7b/f97 to da/d17/d3b/dbd/dee/f105 0 2026-03-09T15:01:58.935 
INFO:tasks.workunit.client.0.vm05.stdout:9/936: write d2/d4e/f51 [1436759,117388] 0 2026-03-09T15:01:58.939 INFO:tasks.workunit.client.0.vm05.stdout:4/893: creat d2/d4/d50/d8a/f12c x:0 0 0 2026-03-09T15:01:58.946 INFO:tasks.workunit.client.0.vm05.stdout:2/901: creat da/d29/d6a/da0/d91/dd4/df7/f118 x:0 0 0 2026-03-09T15:01:58.947 INFO:tasks.workunit.client.0.vm05.stdout:2/902: chown da/d29/d6a/d7f/fa8 1167961482 1 2026-03-09T15:01:58.949 INFO:tasks.workunit.client.0.vm05.stdout:1/856: fsync d9/d97/fbf 0 2026-03-09T15:01:58.950 INFO:tasks.workunit.client.0.vm05.stdout:2/903: dread da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/f73 [0,4194304] 0 2026-03-09T15:01:58.951 INFO:tasks.workunit.client.0.vm05.stdout:2/904: dread da/d29/d3f/f9b [0,4194304] 0 2026-03-09T15:01:58.958 INFO:tasks.workunit.client.0.vm05.stdout:3/864: symlink d3/d29/d7f/dc3/l11b 0 2026-03-09T15:01:58.960 INFO:tasks.workunit.client.0.vm05.stdout:0/837: symlink d9/de/d25/l107 0 2026-03-09T15:01:58.962 INFO:tasks.workunit.client.0.vm05.stdout:8/908: mknod d0/d1/d12/d1b/d95/d78/dea/c12b 0 2026-03-09T15:01:58.963 INFO:tasks.workunit.client.0.vm05.stdout:8/909: chown d0/d1/d12/d1b/d95/d78/dea/c12b 221206 1 2026-03-09T15:01:58.967 INFO:tasks.workunit.client.0.vm05.stdout:8/910: dwrite d0/d1/d12/d3c/f8c [0,4194304] 0 2026-03-09T15:01:58.972 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:58 vm05.local ceph-mon[50611]: pgmap v12: 65 pgs: 65 active+clean; 2.6 GiB data, 8.8 GiB used, 111 GiB / 120 GiB avail; 42 MiB/s rd, 86 MiB/s wr, 244 op/s 2026-03-09T15:01:59.004 INFO:tasks.workunit.client.0.vm05.stdout:5/949: fdatasync d1/f14 0 2026-03-09T15:01:59.008 INFO:tasks.workunit.client.0.vm05.stdout:6/834: mkdir da/d19/d106 0 2026-03-09T15:01:59.009 INFO:tasks.workunit.client.0.vm05.stdout:6/835: write da/d17/d3b/ffa [780692,20955] 0 2026-03-09T15:01:59.019 INFO:tasks.workunit.client.0.vm05.stdout:9/937: creat d2/d10/d22/dc2/db1/f144 x:0 0 0 2026-03-09T15:01:59.021 
INFO:tasks.workunit.client.0.vm05.stdout:4/894: unlink d2/d4/d8/d4a/fae 0 2026-03-09T15:01:59.026 INFO:tasks.workunit.client.0.vm05.stdout:3/865: mkdir d3/df/d10/d19/dce/dc8/de2/d8c/d90/d10a/d11c 0 2026-03-09T15:01:59.026 INFO:tasks.workunit.client.0.vm05.stdout:3/866: chown d3/df/d1e/d2c/d74/d78/fab 1101 1 2026-03-09T15:01:59.030 INFO:tasks.workunit.client.0.vm05.stdout:3/867: dwrite d3/d29/d2d/f33 [0,4194304] 0 2026-03-09T15:01:59.033 INFO:tasks.workunit.client.0.vm05.stdout:7/891: mknod d1/d49/d4a/d77/c12c 0 2026-03-09T15:01:59.046 INFO:tasks.workunit.client.0.vm05.stdout:5/950: read d1/d4/d34/d35/ff7 [2114,2348] 0 2026-03-09T15:01:59.059 INFO:tasks.workunit.client.0.vm05.stdout:1/857: rename d9/d2f/d83/d98/l3e to d9/d2f/d37/l11b 0 2026-03-09T15:01:59.065 INFO:tasks.workunit.client.0.vm05.stdout:3/868: sync 2026-03-09T15:01:59.065 INFO:tasks.workunit.client.0.vm05.stdout:9/938: sync 2026-03-09T15:01:59.066 INFO:tasks.workunit.client.0.vm05.stdout:9/939: dread - d2/d10/d8c/fa8 zero size 2026-03-09T15:01:59.066 INFO:tasks.workunit.client.0.vm05.stdout:9/940: chown d2/d10/d22/d47/dc4/l100 23 1 2026-03-09T15:01:59.073 INFO:tasks.workunit.client.0.vm05.stdout:4/895: write d2/d4/d50/fc0 [84916,112670] 0 2026-03-09T15:01:59.085 INFO:tasks.workunit.client.0.vm05.stdout:5/951: fsync d1/d4/d34/d35/fc5 0 2026-03-09T15:01:59.085 INFO:tasks.workunit.client.0.vm05.stdout:5/952: stat d1/d4/d34/d35/dd0/cef 0 2026-03-09T15:01:59.093 INFO:tasks.workunit.client.0.vm05.stdout:2/905: rename da/d29/d6a/da0/d91/dab/d2f/db3/deb/dfa to da/d29/d6a/da0/d91/dab/d2f/d35/d10b/d119 0 2026-03-09T15:01:59.095 INFO:tasks.workunit.client.0.vm05.stdout:1/858: rmdir d9/d2f/d37/d5a/da9/dc9 39 2026-03-09T15:01:59.096 INFO:tasks.workunit.client.0.vm05.stdout:3/869: truncate d3/df/d1e/daf/fc6 1040425 0 2026-03-09T15:01:59.105 INFO:tasks.workunit.client.0.vm05.stdout:0/838: link d9/d59/d93/fd1 d9/de/d25/d38/d78/dc9/f108 0 2026-03-09T15:01:59.107 INFO:tasks.workunit.client.0.vm05.stdout:9/941: dwrite 
d2/f61 [4194304,4194304] 0 2026-03-09T15:01:59.110 INFO:tasks.workunit.client.0.vm05.stdout:7/892: symlink d1/d12a/l12d 0 2026-03-09T15:01:59.123 INFO:tasks.workunit.client.0.vm05.stdout:3/870: creat d3/df/d10/d19/db5/f11d x:0 0 0 2026-03-09T15:01:59.130 INFO:tasks.workunit.client.0.vm05.stdout:4/896: mkdir d2/d4/d12d 0 2026-03-09T15:01:59.134 INFO:tasks.workunit.client.0.vm05.stdout:7/893: mknod d1/d9/d23/d31/d32/d78/d7e/d81/c12e 0 2026-03-09T15:01:59.135 INFO:tasks.workunit.client.0.vm05.stdout:7/894: dread - d1/d9/d72/d10c/f114 zero size 2026-03-09T15:01:59.138 INFO:tasks.workunit.client.0.vm05.stdout:7/895: dwrite d1/d49/d4a/f6b [4194304,4194304] 0 2026-03-09T15:01:59.142 INFO:tasks.workunit.client.0.vm05.stdout:8/911: link d0/d1/d12/d1b/d95/d42/d60/da7/ced d0/d1/de2/c12c 0 2026-03-09T15:01:59.154 INFO:tasks.workunit.client.0.vm05.stdout:9/942: dwrite d2/d4e/d56/d53/d64/ded/d9c/f18 [4194304,4194304] 0 2026-03-09T15:01:59.172 INFO:tasks.workunit.client.0.vm05.stdout:2/906: unlink da/d29/d6a/da0/d91/dab/d2f/d35/caa 0 2026-03-09T15:01:59.174 INFO:tasks.workunit.client.0.vm05.stdout:2/907: dread da/d29/d6a/fda [0,4194304] 0 2026-03-09T15:01:59.176 INFO:tasks.workunit.client.0.vm05.stdout:3/871: rmdir d3/df/d10/d19/d44 39 2026-03-09T15:01:59.183 INFO:tasks.workunit.client.0.vm05.stdout:4/897: mkdir d2/d4/d7/d48/df0/d12e 0 2026-03-09T15:01:59.188 INFO:tasks.workunit.client.0.vm05.stdout:0/839: dwrite d9/de/d12/d15/d2e/d32/d53/f91 [4194304,4194304] 0 2026-03-09T15:01:59.195 INFO:tasks.workunit.client.0.vm05.stdout:4/898: sync 2026-03-09T15:01:59.209 INFO:tasks.workunit.client.0.vm05.stdout:7/896: rmdir d1/d9/d23/d31/d32/d78/ddd 39 2026-03-09T15:01:59.216 INFO:tasks.workunit.client.0.vm05.stdout:9/943: chown d2/d10/d22/d2c/d69/l6c 21 1 2026-03-09T15:01:59.244 INFO:tasks.workunit.client.0.vm05.stdout:6/836: rename da/d43/f5c to da/d17/f107 0 2026-03-09T15:01:59.244 INFO:tasks.workunit.client.0.vm05.stdout:6/837: chown da/d17/d3b/f5f 7 1 2026-03-09T15:01:59.265 
INFO:tasks.workunit.client.0.vm05.stdout:2/908: write da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/f6b [3908059,73027] 0 2026-03-09T15:01:59.272 INFO:tasks.workunit.client.0.vm05.stdout:2/909: sync 2026-03-09T15:01:59.273 INFO:tasks.workunit.client.0.vm05.stdout:0/840: mknod d9/de/d25/dae/de6/c109 0 2026-03-09T15:01:59.273 INFO:tasks.workunit.client.0.vm05.stdout:2/910: fsync da/d29/d6a/da0/d7c/f108 0 2026-03-09T15:01:59.273 INFO:tasks.workunit.client.0.vm05.stdout:0/841: write d9/de/f3d [1293091,60599] 0 2026-03-09T15:01:59.278 INFO:tasks.workunit.client.0.vm05.stdout:7/897: chown d1/d9/d23/c43 556 1 2026-03-09T15:01:59.280 INFO:tasks.workunit.client.0.vm05.stdout:8/912: mknod d0/d1/d12/d1b/d66/d11b/c12d 0 2026-03-09T15:01:59.289 INFO:tasks.workunit.client.0.vm05.stdout:5/953: link d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/lc6 d1/d4/l141 0 2026-03-09T15:01:59.290 INFO:tasks.workunit.client.0.vm05.stdout:5/954: fdatasync d1/d4/d34/d35/d3d/d38/d63/ff0 0 2026-03-09T15:01:59.291 INFO:tasks.workunit.client.0.vm05.stdout:9/944: write d2/d10/d22/d2c/d69/f106 [67826,127048] 0 2026-03-09T15:01:59.294 INFO:tasks.workunit.client.0.vm05.stdout:5/955: dwrite d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d75/d137/f12a [0,4194304] 0 2026-03-09T15:01:59.314 INFO:tasks.workunit.client.0.vm05.stdout:4/899: mknod d2/d43/c12f 0 2026-03-09T15:01:59.318 INFO:tasks.workunit.client.0.vm05.stdout:6/838: dread da/d43/d66/f6e [0,4194304] 0 2026-03-09T15:01:59.320 INFO:tasks.workunit.client.0.vm05.stdout:8/913: mkdir d0/d1/d12/d1b/d95/d54/d12e 0 2026-03-09T15:01:59.324 INFO:tasks.workunit.client.0.vm05.stdout:4/900: sync 2026-03-09T15:01:59.327 INFO:tasks.workunit.client.0.vm05.stdout:0/842: dwrite d9/de/d12/d15/d2e/d6b/fb8 [0,4194304] 0 2026-03-09T15:01:59.336 INFO:tasks.workunit.client.0.vm05.stdout:9/945: creat d2/d4e/d56/d53/d64/f145 x:0 0 0 2026-03-09T15:01:59.337 INFO:tasks.workunit.client.0.vm05.stdout:9/946: fsync d2/d10/d22/fb6 0 2026-03-09T15:01:59.339 INFO:tasks.workunit.client.0.vm05.stdout:1/859: 
rename d9/l11 to d9/d2f/d37/d101/dd3/df1/l11c 0 2026-03-09T15:01:59.340 INFO:tasks.workunit.client.0.vm05.stdout:1/860: write d9/d2f/d37/d5f/f10b [145114,125027] 0 2026-03-09T15:01:59.347 INFO:tasks.workunit.client.0.vm05.stdout:8/914: mkdir d0/d1/d12/d1b/d6e/d93/d12f 0 2026-03-09T15:01:59.356 INFO:tasks.workunit.client.0.vm05.stdout:7/898: dwrite d1/d9/d23/d31/d8f/d93/fa3 [0,4194304] 0 2026-03-09T15:01:59.359 INFO:tasks.workunit.client.0.vm05.stdout:7/899: readlink d1/d9/d23/d31/d8f/d93/l119 0 2026-03-09T15:01:59.362 INFO:tasks.workunit.client.0.vm05.stdout:6/839: dwrite da/d17/f20 [0,4194304] 0 2026-03-09T15:01:59.362 INFO:tasks.workunit.client.0.vm05.stdout:7/900: write d1/de4/fe1 [2667808,63714] 0 2026-03-09T15:01:59.364 INFO:tasks.workunit.client.0.vm05.stdout:6/840: write da/d43/d7b/da9/fc9 [2587011,93140] 0 2026-03-09T15:01:59.374 INFO:tasks.workunit.client.0.vm05.stdout:9/947: write d2/d10/d22/d2c/d69/d5a/f12a [177259,58960] 0 2026-03-09T15:01:59.376 INFO:tasks.workunit.client.0.vm05.stdout:9/948: chown d2/d4e/d56/d53/d64/ded/d9c/d8e/dcb/ffb 478271288 1 2026-03-09T15:01:59.379 INFO:tasks.workunit.client.0.vm05.stdout:9/949: dwrite d2/d9e/f104 [0,4194304] 0 2026-03-09T15:01:59.392 INFO:tasks.workunit.client.0.vm05.stdout:3/872: rename d3/df/d1e/d2f/f9a to d3/d29/d2d/d77/f11e 0 2026-03-09T15:01:59.392 INFO:tasks.workunit.client.0.vm05.stdout:8/915: creat d0/d1/d12/d1b/d66/db7/dbe/f130 x:0 0 0 2026-03-09T15:01:59.393 INFO:tasks.workunit.client.0.vm05.stdout:7/901: mknod d1/d9/d23/d31/d32/ddc/c12f 0 2026-03-09T15:01:59.398 INFO:tasks.workunit.client.0.vm05.stdout:9/950: creat d2/d9e/df6/f146 x:0 0 0 2026-03-09T15:01:59.400 INFO:tasks.workunit.client.0.vm05.stdout:5/956: creat d1/d4/d19/f142 x:0 0 0 2026-03-09T15:01:59.400 INFO:tasks.workunit.client.0.vm05.stdout:6/841: read da/f80 [219925,4895] 0 2026-03-09T15:01:59.401 INFO:tasks.workunit.client.0.vm05.stdout:3/873: creat d3/f11f x:0 0 0 2026-03-09T15:01:59.405 INFO:tasks.workunit.client.0.vm05.stdout:2/911: 
link da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/f73 da/d29/d6a/da0/dd9/dfd/d110/f11a 0 2026-03-09T15:01:59.406 INFO:tasks.workunit.client.0.vm05.stdout:4/901: link d2/d43/d12a/d88/c119 d2/d4/d1e/da2/dec/c130 0 2026-03-09T15:01:59.407 INFO:tasks.workunit.client.0.vm05.stdout:7/902: truncate d1/d22/fbc 2495796 0 2026-03-09T15:01:59.408 INFO:tasks.workunit.client.0.vm05.stdout:5/957: mknod d1/db5/c143 0 2026-03-09T15:01:59.410 INFO:tasks.workunit.client.0.vm05.stdout:9/951: dread d2/f46 [0,4194304] 0 2026-03-09T15:01:59.415 INFO:tasks.workunit.client.0.vm05.stdout:1/861: write d9/d2f/d83/d98/f67 [5170266,41763] 0 2026-03-09T15:01:59.420 INFO:tasks.workunit.client.0.vm05.stdout:3/874: truncate d3/df/d10/d19/dce/dc8/de2/f48 1092009 0 2026-03-09T15:01:59.422 INFO:tasks.workunit.client.0.vm05.stdout:2/912: fdatasync da/f9d 0 2026-03-09T15:01:59.423 INFO:tasks.workunit.client.0.vm05.stdout:8/916: creat d0/d1/d12/d116/f131 x:0 0 0 2026-03-09T15:01:59.430 INFO:tasks.workunit.client.0.vm05.stdout:7/903: truncate d1/f45 3235535 0 2026-03-09T15:01:59.433 INFO:tasks.workunit.client.0.vm05.stdout:7/904: dwrite d1/d9/d23/d31/d8f/d93/d95/f124 [0,4194304] 0 2026-03-09T15:01:59.444 INFO:tasks.workunit.client.0.vm05.stdout:6/842: mknod da/d43/c108 0 2026-03-09T15:01:59.446 INFO:tasks.workunit.client.0.vm05.stdout:2/913: mknod da/d29/d6a/da0/d91/dab/d2f/d35/d10b/dbf/dc8/c11b 0 2026-03-09T15:01:59.447 INFO:tasks.workunit.client.0.vm05.stdout:8/917: dread d0/d1/d12/d1b/d66/fe8 [0,4194304] 0 2026-03-09T15:01:59.448 INFO:tasks.workunit.client.0.vm05.stdout:4/902: fdatasync d2/f7e 0 2026-03-09T15:01:59.449 INFO:tasks.workunit.client.0.vm05.stdout:0/843: link d9/de/d12/d15/f5e d9/de/d12/d15/d2e/d32/d53/d61/d104/da0/db7/de5/f10a 0 2026-03-09T15:01:59.451 INFO:tasks.workunit.client.0.vm05.stdout:7/905: symlink d1/de4/l130 0 2026-03-09T15:01:59.451 INFO:tasks.workunit.client.0.vm05.stdout:7/906: chown d1/d49/d4a/d77/c12c 64109939 1 2026-03-09T15:01:59.453 
INFO:tasks.workunit.client.0.vm05.stdout:5/958: fsync d1/d4/d34/d6c/d104/f10b 0 2026-03-09T15:01:59.456 INFO:tasks.workunit.client.0.vm05.stdout:8/918: mkdir d0/d1/d12/d1b/d95/d42/d60/da7/d132 0 2026-03-09T15:01:59.461 INFO:tasks.workunit.client.0.vm05.stdout:8/919: dwrite d0/d24/d96/f118 [0,4194304] 0 2026-03-09T15:01:59.463 INFO:tasks.workunit.client.0.vm05.stdout:2/914: sync 2026-03-09T15:01:59.463 INFO:tasks.workunit.client.0.vm05.stdout:2/915: write da/f2c [3806309,2728] 0 2026-03-09T15:01:59.469 INFO:tasks.workunit.client.0.vm05.stdout:0/844: mknod d9/de/d12/d15/d2e/d32/d53/d61/c10b 0 2026-03-09T15:01:59.471 INFO:tasks.workunit.client.0.vm05.stdout:7/907: symlink d1/d12/l131 0 2026-03-09T15:01:59.475 INFO:tasks.workunit.client.0.vm05.stdout:9/952: getdents d2/d4e/d56/d53 0 2026-03-09T15:01:59.475 INFO:tasks.workunit.client.0.vm05.stdout:1/862: rename d9/d2f/d37/d5a/f5b to d9/d2f/f11d 0 2026-03-09T15:01:59.475 INFO:tasks.workunit.client.0.vm05.stdout:6/843: readlink da/d19/ldc 0 2026-03-09T15:01:59.478 INFO:tasks.workunit.client.0.vm05.stdout:4/903: mkdir d2/d4/d7/d131 0 2026-03-09T15:01:59.481 INFO:tasks.workunit.client.0.vm05.stdout:9/953: creat d2/d10/d22/d9f/d119/f147 x:0 0 0 2026-03-09T15:01:59.482 INFO:tasks.workunit.client.0.vm05.stdout:3/875: rename d3/df/d1e/d2c/d74/d9b/l9f to d3/df/d1e/d2f/d52/l120 0 2026-03-09T15:01:59.483 INFO:tasks.workunit.client.0.vm05.stdout:8/920: read d0/d2a/f2e [255038,75395] 0 2026-03-09T15:01:59.490 INFO:tasks.workunit.client.0.vm05.stdout:8/921: sync 2026-03-09T15:01:59.496 INFO:tasks.workunit.client.0.vm05.stdout:0/845: dwrite d9/de/df1/f98 [0,4194304] 0 2026-03-09T15:01:59.498 INFO:tasks.workunit.client.0.vm05.stdout:0/846: chown d9/de/d25/d38/f87 214953 1 2026-03-09T15:01:59.532 INFO:tasks.workunit.client.0.vm05.stdout:1/863: dwrite d9/d2f/d37/fe3 [0,4194304] 0 2026-03-09T15:01:59.544 INFO:tasks.workunit.client.0.vm05.stdout:3/876: dwrite d3/df/d10/d19/d44/fb0 [0,4194304] 0 2026-03-09T15:01:59.564 
INFO:tasks.workunit.client.0.vm05.stdout:8/922: truncate d0/d1/d12/d1b/d95/d42/d60/fc0 856334 0 2026-03-09T15:01:59.572 INFO:tasks.workunit.client.0.vm05.stdout:8/923: dread d0/f3b [0,4194304] 0 2026-03-09T15:01:59.575 INFO:tasks.workunit.client.0.vm05.stdout:4/904: creat d2/d4/d7/d131/f132 x:0 0 0 2026-03-09T15:01:59.576 INFO:tasks.workunit.client.0.vm05.stdout:4/905: truncate d2/d4/d1e/da2/fe2 882637 0 2026-03-09T15:01:59.578 INFO:tasks.workunit.client.0.vm05.stdout:1/864: truncate d9/d2f/d83/d98/d59/d49/f82 1732611 0 2026-03-09T15:01:59.578 INFO:tasks.workunit.client.0.vm05.stdout:1/865: chown d9/d2f/d83/d98/d59/d49/d77 48589 1 2026-03-09T15:01:59.579 INFO:tasks.workunit.client.0.vm05.stdout:1/866: truncate d9/d2f/d37/ded/f108 1199100 0 2026-03-09T15:01:59.581 INFO:tasks.workunit.client.0.vm05.stdout:9/954: creat d2/d10/d22/dc1/dc3/d141/f148 x:0 0 0 2026-03-09T15:01:59.582 INFO:tasks.workunit.client.0.vm05.stdout:5/959: rename d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/f57 to d1/d5d/f144 0 2026-03-09T15:01:59.584 INFO:tasks.workunit.client.0.vm05.stdout:8/924: truncate d0/d1/d12/d3c/d8b/ff9 3123140 0 2026-03-09T15:01:59.586 INFO:tasks.workunit.client.0.vm05.stdout:4/906: rmdir d2/d4/d1e 39 2026-03-09T15:01:59.587 INFO:tasks.workunit.client.0.vm05.stdout:4/907: chown d2/d49/c78 102 1 2026-03-09T15:01:59.589 INFO:tasks.workunit.client.0.vm05.stdout:1/867: mkdir d9/d2f/d37/d11e 0 2026-03-09T15:01:59.589 INFO:tasks.workunit.client.0.vm05.stdout:1/868: chown d9/d2f/d83 920837 1 2026-03-09T15:01:59.590 INFO:tasks.workunit.client.0.vm05.stdout:9/955: unlink d2/f61 0 2026-03-09T15:01:59.592 INFO:tasks.workunit.client.0.vm05.stdout:5/960: fsync d1/d4/d34/d35/f52 0 2026-03-09T15:01:59.596 INFO:tasks.workunit.client.0.vm05.stdout:1/869: truncate d9/d2f/f3a 5733441 0 2026-03-09T15:01:59.597 INFO:tasks.workunit.client.0.vm05.stdout:9/956: truncate d2/d9e/df6/f121 70928 0 2026-03-09T15:01:59.598 INFO:tasks.workunit.client.0.vm05.stdout:8/925: sync 2026-03-09T15:01:59.600 
INFO:tasks.workunit.client.0.vm05.stdout:7/908: symlink d1/d9/d23/d31/d8f/d93/dbd/l132 0 2026-03-09T15:01:59.601 INFO:tasks.workunit.client.0.vm05.stdout:7/909: truncate d1/d49/d68/f11e 438279 0 2026-03-09T15:01:59.604 INFO:tasks.workunit.client.0.vm05.stdout:4/908: dread - d2/d4/d1e/da2/dc5/f10a zero size 2026-03-09T15:01:59.606 INFO:tasks.workunit.client.0.vm05.stdout:1/870: creat d9/d2f/d83/d98/d59/d49/d92/d75/f11f x:0 0 0 2026-03-09T15:01:59.608 INFO:tasks.workunit.client.0.vm05.stdout:8/926: mkdir d0/d1/d12/d1b/d95/d42/d60/da7/db3/dec/d133 0 2026-03-09T15:01:59.611 INFO:tasks.workunit.client.0.vm05.stdout:8/927: dwrite d0/d1/d12/d1b/d95/d42/da1/fcb [0,4194304] 0 2026-03-09T15:01:59.619 INFO:tasks.workunit.client.0.vm05.stdout:7/910: creat d1/d9/d23/d31/d32/ddc/f133 x:0 0 0 2026-03-09T15:01:59.620 INFO:tasks.workunit.client.0.vm05.stdout:7/911: chown d1/de4/c126 109715964 1 2026-03-09T15:01:59.623 INFO:tasks.workunit.client.0.vm05.stdout:7/912: dwrite d1/f15 [0,4194304] 0 2026-03-09T15:01:59.633 INFO:tasks.workunit.client.0.vm05.stdout:7/913: chown d1/d9/c2c 440 1 2026-03-09T15:01:59.642 INFO:tasks.workunit.client.0.vm05.stdout:6/844: dwrite da/d17/d3b/f3f [0,4194304] 0 2026-03-09T15:01:59.671 INFO:tasks.workunit.client.0.vm05.stdout:7/914: fdatasync d1/d9/d23/d31/d8f/d93/dbd/f116 0 2026-03-09T15:01:59.672 INFO:tasks.workunit.client.0.vm05.stdout:0/847: write d9/de/df1/fb4 [799163,69842] 0 2026-03-09T15:01:59.675 INFO:tasks.workunit.client.0.vm05.stdout:9/957: link d2/d10/d22/d2c/d69/d5a/cfa d2/d8b/dae/c149 0 2026-03-09T15:01:59.677 INFO:tasks.workunit.client.0.vm05.stdout:8/928: link d0/d1/d12/d1b/d95/d78/dca/ff1 d0/d1/d12/d1b/d95/d78/dea/f134 0 2026-03-09T15:01:59.678 INFO:tasks.workunit.client.0.vm05.stdout:7/915: read - d1/d9/d23/d31/d8f/d93/fb8 zero size 2026-03-09T15:01:59.680 INFO:tasks.workunit.client.0.vm05.stdout:0/848: stat d9/de/d12/d15/fbb 0 2026-03-09T15:01:59.683 INFO:tasks.workunit.client.0.vm05.stdout:0/849: dwrite d9/de/d12/d15/d2e/d32/d53/f91 
[0,4194304] 0 2026-03-09T15:01:59.685 INFO:tasks.workunit.client.0.vm05.stdout:9/958: mkdir d2/d4e/d56/d53/d64/ded/d9c/d8e/d14a 0 2026-03-09T15:01:59.685 INFO:tasks.workunit.client.0.vm05.stdout:0/850: write d9/de/f1e [3183606,82006] 0 2026-03-09T15:01:59.686 INFO:tasks.workunit.client.0.vm05.stdout:0/851: read - d9/de/d25/d38/d78/dc9/ff3 zero size 2026-03-09T15:01:59.691 INFO:tasks.workunit.client.0.vm05.stdout:9/959: dread d2/d4e/d56/d53/d64/ded/d9c/d94/fee [0,4194304] 0 2026-03-09T15:01:59.691 INFO:tasks.workunit.client.0.vm05.stdout:9/960: chown d2/d10/d22/da0/cf3 1866 1 2026-03-09T15:01:59.704 INFO:tasks.workunit.client.0.vm05.stdout:3/877: write d3/d29/d7f/f83 [1304385,11510] 0 2026-03-09T15:01:59.710 INFO:tasks.workunit.client.0.vm05.stdout:3/878: dread d3/df/d10/d19/dce/dc8/de2/d8c/f85 [0,4194304] 0 2026-03-09T15:01:59.711 INFO:tasks.workunit.client.0.vm05.stdout:3/879: fsync d3/df/d59/d79/fa8 0 2026-03-09T15:01:59.716 INFO:tasks.workunit.client.0.vm05.stdout:0/852: dread - d9/de/d25/dae/de6/fdc zero size 2026-03-09T15:01:59.716 INFO:tasks.workunit.client.0.vm05.stdout:0/853: chown d9/l94 25726 1 2026-03-09T15:01:59.722 INFO:tasks.workunit.client.0.vm05.stdout:9/961: mknod d2/d4e/d56/d84/d120/c14b 0 2026-03-09T15:01:59.729 INFO:tasks.workunit.client.0.vm05.stdout:9/962: dread d2/f1f [4194304,4194304] 0 2026-03-09T15:01:59.730 INFO:tasks.workunit.client.0.vm05.stdout:3/880: truncate d3/df/d1e/d2f/fb9 529215 0 2026-03-09T15:01:59.733 INFO:tasks.workunit.client.0.vm05.stdout:3/881: dread d3/df/d1e/d2c/d74/d78/fab [0,4194304] 0 2026-03-09T15:01:59.735 INFO:tasks.workunit.client.0.vm05.stdout:0/854: fdatasync d9/f22 0 2026-03-09T15:01:59.737 INFO:tasks.workunit.client.0.vm05.stdout:8/929: link d0/d1/d12/d1b/d95/d42/d60/da7/f10a d0/d1/d12/d1b/d6e/d93/d9f/f135 0 2026-03-09T15:01:59.744 INFO:tasks.workunit.client.0.vm05.stdout:8/930: mknod d0/d1/d12/d3c/d8b/d129/c136 0 2026-03-09T15:01:59.748 INFO:tasks.workunit.client.0.vm05.stdout:9/963: rmdir d2/d9e/df6 39 
2026-03-09T15:01:59.754 INFO:tasks.workunit.client.0.vm05.stdout:6/845: rmdir da/d43 39 2026-03-09T15:01:59.755 INFO:tasks.workunit.client.0.vm05.stdout:4/909: write d2/d4/d8/d4a/d94/ff3 [125445,16825] 0 2026-03-09T15:01:59.758 INFO:tasks.workunit.client.0.vm05.stdout:4/910: creat d2/d43/d12a/d88/d92/f133 x:0 0 0 2026-03-09T15:01:59.759 INFO:tasks.workunit.client.0.vm05.stdout:4/911: stat d2/d4/d7/dc/f27 0 2026-03-09T15:01:59.759 INFO:tasks.workunit.client.0.vm05.stdout:4/912: dread - d2/d4/d8/f11d zero size 2026-03-09T15:01:59.761 INFO:tasks.workunit.client.0.vm05.stdout:4/913: truncate d2/f3e 892961 0 2026-03-09T15:01:59.762 INFO:tasks.workunit.client.0.vm05.stdout:4/914: chown d2/d43/dd6/l113 3 1 2026-03-09T15:01:59.765 INFO:tasks.workunit.client.0.vm05.stdout:5/961: symlink d1/d4/d34/d56/da6/l145 0 2026-03-09T15:01:59.766 INFO:tasks.workunit.client.0.vm05.stdout:5/962: dread d1/d4/d34/d35/ff7 [0,4194304] 0 2026-03-09T15:01:59.767 INFO:tasks.workunit.client.0.vm05.stdout:5/963: mknod d1/d4/d34/dc0/c146 0 2026-03-09T15:01:59.768 INFO:tasks.workunit.client.0.vm05.stdout:5/964: mkdir d1/da/d10f/d132/d147 0 2026-03-09T15:01:59.770 INFO:tasks.workunit.client.0.vm05.stdout:5/965: mkdir d1/d4/d34/d56/da6/dea/d130/d148 0 2026-03-09T15:01:59.772 INFO:tasks.workunit.client.0.vm05.stdout:5/966: getdents d1/da/d12e 0 2026-03-09T15:01:59.774 INFO:tasks.workunit.client.0.vm05.stdout:5/967: read d1/d4/d34/d35/f44 [722952,7722] 0 2026-03-09T15:01:59.775 INFO:tasks.workunit.client.0.vm05.stdout:5/968: readlink d1/d4/d34/d35/d4e/dc8/lfb 0 2026-03-09T15:01:59.790 INFO:tasks.workunit.client.0.vm05.stdout:7/916: write d1/d9/d72/d10c/f114 [508598,51466] 0 2026-03-09T15:01:59.791 INFO:tasks.workunit.client.0.vm05.stdout:7/917: chown d1/d12/f56 438 1 2026-03-09T15:01:59.791 INFO:tasks.workunit.client.0.vm05.stdout:7/918: chown d1/d9/d23/d31 9361 1 2026-03-09T15:01:59.794 INFO:tasks.workunit.client.0.vm05.stdout:7/919: fsync d1/d9/d23/d31/d51/ff2 0 2026-03-09T15:01:59.795 
INFO:tasks.workunit.client.0.vm05.stdout:7/920: mkdir d1/d22/da4/d134 0 2026-03-09T15:01:59.797 INFO:tasks.workunit.client.0.vm05.stdout:5/969: sync 2026-03-09T15:01:59.797 INFO:tasks.workunit.client.0.vm05.stdout:5/970: fdatasync d1/d4/d34/d35/dd0/f139 0 2026-03-09T15:01:59.799 INFO:tasks.workunit.client.0.vm05.stdout:5/971: creat d1/db5/f149 x:0 0 0 2026-03-09T15:01:59.800 INFO:tasks.workunit.client.0.vm05.stdout:5/972: unlink d1/d4/f5f 0 2026-03-09T15:01:59.802 INFO:tasks.workunit.client.0.vm05.stdout:2/916: rename da/d29/d6a/da0/d91/dab/d2f/db3/dee to da/d29/d6a/da0/d105/d11c 0 2026-03-09T15:01:59.805 INFO:tasks.workunit.client.0.vm05.stdout:1/871: rename d9/d97/cfe to d9/d2f/d83/d98/d59/d49/d92/c120 0 2026-03-09T15:01:59.807 INFO:tasks.workunit.client.0.vm05.stdout:2/917: unlink da/d29/d6a/da0/d91/dab/d2f/d35/db0/dc9/lde 0 2026-03-09T15:01:59.808 INFO:tasks.workunit.client.0.vm05.stdout:5/973: getdents d1/d4/d34/d56/da6/dea/d130/d148 0 2026-03-09T15:01:59.822 INFO:tasks.workunit.client.0.vm05.stdout:3/882: rename d3/df/d10/d19 to d3/df/d1e/d2c/d74/d78/d121 0 2026-03-09T15:01:59.823 INFO:tasks.workunit.client.0.vm05.stdout:3/883: chown d3/df/d1e/d2f/d52/f61 79099293 1 2026-03-09T15:01:59.825 INFO:tasks.workunit.client.0.vm05.stdout:8/931: write d0/d1/d12/d1b/fbd [872734,96581] 0 2026-03-09T15:01:59.830 INFO:tasks.workunit.client.0.vm05.stdout:9/964: dwrite d2/d10/d8c/fa8 [0,4194304] 0 2026-03-09T15:01:59.832 INFO:tasks.workunit.client.0.vm05.stdout:9/965: stat d2/d10/d22/l11c 0 2026-03-09T15:01:59.836 INFO:tasks.workunit.client.0.vm05.stdout:6/846: dwrite da/d17/d3b/fd4 [0,4194304] 0 2026-03-09T15:01:59.845 INFO:tasks.workunit.client.0.vm05.stdout:1/872: mknod d9/d17/c121 0 2026-03-09T15:01:59.845 INFO:tasks.workunit.client.0.vm05.stdout:4/915: write d2/d4/d1e/da2/dec/d3d/f7f [379338,116509] 0 2026-03-09T15:01:59.846 INFO:tasks.workunit.client.0.vm05.stdout:2/918: truncate da/f79 8797804 0 2026-03-09T15:01:59.846 INFO:tasks.workunit.client.0.vm05.stdout:1/873: 
truncate d9/d2f/d83/d98/d87/ff5 1721107 0 2026-03-09T15:01:59.846 INFO:tasks.workunit.client.0.vm05.stdout:2/919: dread - da/d29/d6a/da0/fa1 zero size 2026-03-09T15:01:59.847 INFO:tasks.workunit.client.0.vm05.stdout:5/974: symlink d1/d4/d34/d35/d4e/d6f/l14a 0 2026-03-09T15:01:59.849 INFO:tasks.workunit.client.0.vm05.stdout:0/855: rename d9/de/d12/da3/fb9 to d9/de/d12/d15/d2e/d32/d53/d61/f10c 0 2026-03-09T15:01:59.850 INFO:tasks.workunit.client.0.vm05.stdout:0/856: chown d9/cf7 1628 1 2026-03-09T15:01:59.851 INFO:tasks.workunit.client.0.vm05.stdout:3/884: mknod d3/d29/d2d/d77/d4d/c122 0 2026-03-09T15:01:59.858 INFO:tasks.workunit.client.0.vm05.stdout:8/932: creat d0/d1/d12/d1b/d66/d6f/f137 x:0 0 0 2026-03-09T15:01:59.860 INFO:tasks.workunit.client.0.vm05.stdout:8/933: dread d0/d24/d96/f118 [0,4194304] 0 2026-03-09T15:01:59.869 INFO:tasks.workunit.client.0.vm05.stdout:8/934: dread d0/d1/f49 [0,4194304] 0 2026-03-09T15:01:59.875 INFO:tasks.workunit.client.0.vm05.stdout:4/916: readlink d2/d4/d8/d4a/lb0 0 2026-03-09T15:01:59.879 INFO:tasks.workunit.client.0.vm05.stdout:2/920: creat da/d29/d6a/da0/d91/dab/d2f/db3/df1/f11d x:0 0 0 2026-03-09T15:01:59.881 INFO:tasks.workunit.client.0.vm05.stdout:0/857: unlink d9/de/d12/d15/d2e/d32/d53/d61/c65 0 2026-03-09T15:01:59.881 INFO:tasks.workunit.client.0.vm05.stdout:0/858: readlink d9/de/l72 0 2026-03-09T15:01:59.902 INFO:tasks.workunit.client.0.vm05.stdout:0/859: fdatasync d9/de/d12/da3/dbc/fc4 0 2026-03-09T15:01:59.908 INFO:tasks.workunit.client.0.vm05.stdout:2/921: dread da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/fc6 [0,4194304] 0 2026-03-09T15:01:59.909 INFO:tasks.workunit.client.0.vm05.stdout:5/975: dwrite d1/da/fe6 [0,4194304] 0 2026-03-09T15:01:59.919 INFO:tasks.workunit.client.0.vm05.stdout:8/935: mknod d0/d1/d12/c138 0 2026-03-09T15:01:59.924 INFO:tasks.workunit.client.0.vm05.stdout:6/847: truncate da/d43/f46 2091670 0 2026-03-09T15:01:59.934 INFO:tasks.workunit.client.0.vm05.stdout:7/921: rename 
d1/d9/d23/d31/d32/d78/dbb/l122 to d1/d49/l135 0 2026-03-09T15:01:59.934 INFO:tasks.workunit.client.0.vm05.stdout:0/860: mknod d9/de/d12/d15/d2e/d32/d74/c10d 0 2026-03-09T15:01:59.938 INFO:tasks.workunit.client.0.vm05.stdout:9/966: write d2/d9e/df6/f121 [660847,88182] 0 2026-03-09T15:01:59.940 INFO:tasks.workunit.client.0.vm05.stdout:4/917: dwrite d2/f33 [4194304,4194304] 0 2026-03-09T15:01:59.950 INFO:tasks.workunit.client.0.vm05.stdout:5/976: symlink d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d75/d137/l14b 0 2026-03-09T15:01:59.956 INFO:tasks.workunit.client.0.vm05.stdout:0/861: chown d9/de/d12/d15/c24 21 1 2026-03-09T15:01:59.958 INFO:tasks.workunit.client.0.vm05.stdout:2/922: mkdir da/d29/d64/d11e 0 2026-03-09T15:01:59.960 INFO:tasks.workunit.client.0.vm05.stdout:4/918: symlink d2/d43/d12a/da5/l134 0 2026-03-09T15:01:59.961 INFO:tasks.workunit.client.0.vm05.stdout:5/977: fdatasync d1/f9 0 2026-03-09T15:01:59.963 INFO:tasks.workunit.client.0.vm05.stdout:8/936: link d0/d1/d12/d1b/d95/d42/da1/fcb d0/d1/d12/d1b/d66/d11b/f139 0 2026-03-09T15:01:59.964 INFO:tasks.workunit.client.0.vm05.stdout:8/937: write d0/d1/de2/fb1 [598824,96726] 0 2026-03-09T15:01:59.969 INFO:tasks.workunit.client.0.vm05.stdout:7/922: write d1/d9/d23/f5a [3944735,58439] 0 2026-03-09T15:01:59.971 INFO:tasks.workunit.client.0.vm05.stdout:0/862: dread d9/de/d12/f7a [0,4194304] 0 2026-03-09T15:01:59.976 INFO:tasks.workunit.client.0.vm05.stdout:1/874: rename d9/d2f/d83/d98/f50 to d9/d2f/d37/f122 0 2026-03-09T15:01:59.977 INFO:tasks.workunit.client.0.vm05.stdout:2/923: mknod da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/c11f 0 2026-03-09T15:01:59.980 INFO:tasks.workunit.client.0.vm05.stdout:6/848: truncate da/d17/d3b/ffa 552304 0 2026-03-09T15:01:59.983 INFO:tasks.workunit.client.0.vm05.stdout:8/938: mknod d0/d24/d112/c13a 0 2026-03-09T15:01:59.984 INFO:tasks.workunit.client.0.vm05.stdout:8/939: chown d0/d100/f11c 8 1 2026-03-09T15:01:59.984 INFO:tasks.workunit.client.0.vm05.stdout:8/940: chown d0/dc/l61 211628 
1 2026-03-09T15:01:59.990 INFO:tasks.workunit.client.0.vm05.stdout:4/919: dread d2/d4/d7/dc/f8e [0,4194304] 0 2026-03-09T15:01:59.991 INFO:tasks.workunit.client.0.vm05.stdout:3/885: rename d3/df/d1e/d2c/d74/d78/d121/d44/da2/c10d to d3/df/d1e/d2c/d74/d78/d121/dce/d107/c123 0 2026-03-09T15:01:59.993 INFO:tasks.workunit.client.0.vm05.stdout:2/924: stat da/d29/d6a/da0/d91/dab/d2f/db3/fcb 0 2026-03-09T15:01:59.995 INFO:tasks.workunit.client.0.vm05.stdout:1/875: sync 2026-03-09T15:01:59.995 INFO:tasks.workunit.client.0.vm05.stdout:6/849: creat da/d17/d95/da2/dae/dd9/f109 x:0 0 0 2026-03-09T15:02:00.005 INFO:tasks.workunit.client.0.vm05.stdout:5/978: mknod d1/d4/d19/c14c 0 2026-03-09T15:02:00.006 INFO:tasks.workunit.client.0.vm05.stdout:6/850: dread da/d43/f59 [0,4194304] 0 2026-03-09T15:02:00.007 INFO:tasks.workunit.client.0.vm05.stdout:0/863: dwrite d9/de/d12/f84 [0,4194304] 0 2026-03-09T15:02:00.010 INFO:tasks.workunit.client.0.vm05.stdout:7/923: symlink d1/d49/l136 0 2026-03-09T15:02:00.014 INFO:tasks.workunit.client.0.vm05.stdout:4/920: mkdir d2/d4/d7/dc/da8/d135 0 2026-03-09T15:02:00.018 INFO:tasks.workunit.client.0.vm05.stdout:9/967: rename d2/d10/d22/da0/fad to d2/d10/d22/dc1/dc3/d13d/f14c 0 2026-03-09T15:02:00.020 INFO:tasks.workunit.client.0.vm05.stdout:2/925: truncate da/d29/d6a/da0/d91/dab/f8c 1515445 0 2026-03-09T15:02:00.030 INFO:tasks.workunit.client.0.vm05.stdout:5/979: dread d1/d4/d34/d35/f47 [0,4194304] 0 2026-03-09T15:02:00.033 INFO:tasks.workunit.client.0.vm05.stdout:7/924: mkdir d1/d12/d137 0 2026-03-09T15:02:00.039 INFO:tasks.workunit.client.0.vm05.stdout:6/851: dwrite da/f10 [0,4194304] 0 2026-03-09T15:02:00.044 INFO:tasks.workunit.client.0.vm05.stdout:8/941: rename d0/d1/d12/d1b/d95/d78/db5/cee to d0/d1/d12/d1b/d95/d42/d60/da7/c13b 0 2026-03-09T15:02:00.045 INFO:tasks.workunit.client.0.vm05.stdout:9/968: creat d2/d10/d22/d47/d95/f14d x:0 0 0 2026-03-09T15:02:00.048 INFO:tasks.workunit.client.0.vm05.stdout:5/980: fdatasync d1/d5d/f109 0 
2026-03-09T15:02:00.050 INFO:tasks.workunit.client.0.vm05.stdout:7/925: creat d1/d12/f138 x:0 0 0 2026-03-09T15:02:00.050 INFO:tasks.workunit.client.0.vm05.stdout:7/926: chown d1/d9/fd0 1306 1 2026-03-09T15:02:00.053 INFO:tasks.workunit.client.0.vm05.stdout:9/969: mkdir d2/d10/d8c/d14e 0 2026-03-09T15:02:00.055 INFO:tasks.workunit.client.0.vm05.stdout:2/926: creat da/d29/d6a/da0/dd9/dfd/d110/f120 x:0 0 0 2026-03-09T15:02:00.059 INFO:tasks.workunit.client.0.vm05.stdout:6/852: unlink da/d17/d7c/dc6/fd0 0 2026-03-09T15:02:00.062 INFO:tasks.workunit.client.0.vm05.stdout:4/921: write d2/d4/d1e/da2/dec/f34 [3455314,23341] 0 2026-03-09T15:02:00.066 INFO:tasks.workunit.client.0.vm05.stdout:3/886: write d3/df/d1e/d2f/fb9 [211115,1930] 0 2026-03-09T15:02:00.070 INFO:tasks.workunit.client.0.vm05.stdout:8/942: mknod d0/d1/d12/d1b/d95/dd7/dd2/d117/c13c 0 2026-03-09T15:02:00.071 INFO:tasks.workunit.client.0.vm05.stdout:9/970: truncate d2/d4e/d56/d53/d64/ded/d9c/d94/fee 1455317 0 2026-03-09T15:02:00.072 INFO:tasks.workunit.client.0.vm05.stdout:1/876: getdents d9/d2f/d83/d98/d59/df8 0 2026-03-09T15:02:00.074 INFO:tasks.workunit.client.0.vm05.stdout:0/864: creat d9/de/d12/d8a/f10e x:0 0 0 2026-03-09T15:02:00.074 INFO:tasks.workunit.client.0.vm05.stdout:0/865: dread - d9/de/d25/ff2 zero size 2026-03-09T15:02:00.075 INFO:tasks.workunit.client.0.vm05.stdout:0/866: chown d9/f22 79 1 2026-03-09T15:02:00.081 INFO:tasks.workunit.client.0.vm05.stdout:9/971: mknod d2/d4e/d56/d53/d64/ded/d99/c14f 0 2026-03-09T15:02:00.083 INFO:tasks.workunit.client.0.vm05.stdout:1/877: unlink d9/d2f/d37/d5f/da2/d11a/fd5 0 2026-03-09T15:02:00.089 INFO:tasks.workunit.client.0.vm05.stdout:4/922: dwrite d2/d4/d7/dc/f18 [0,4194304] 0 2026-03-09T15:02:00.093 INFO:tasks.workunit.client.0.vm05.stdout:3/887: dwrite d3/d29/d7f/dc3/fd5 [0,4194304] 0 2026-03-09T15:02:00.103 INFO:tasks.workunit.client.0.vm05.stdout:2/927: fsync da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/fdf 0 2026-03-09T15:02:00.105 
INFO:tasks.workunit.client.0.vm05.stdout:5/981: link d1/d4/d34/fe2 d1/d4/d34/d6c/dfa/f14d 0 2026-03-09T15:02:00.106 INFO:tasks.workunit.client.0.vm05.stdout:5/982: read - d1/d4/d19/f142 zero size 2026-03-09T15:02:00.107 INFO:tasks.workunit.client.0.vm05.stdout:5/983: chown d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d75/d137 0 1 2026-03-09T15:02:00.107 INFO:tasks.workunit.client.0.vm05.stdout:5/984: write d1/da/fe6 [2208244,7275] 0 2026-03-09T15:02:00.119 INFO:tasks.workunit.client.0.vm05.stdout:9/972: unlink d2/fc 0 2026-03-09T15:02:00.125 INFO:tasks.workunit.client.0.vm05.stdout:2/928: creat da/d29/d6a/da0/d91/dab/d2f/d35/d10b/dbf/dc8/f121 x:0 0 0 2026-03-09T15:02:00.128 INFO:tasks.workunit.client.0.vm05.stdout:2/929: dwrite da/f2c [4194304,4194304] 0 2026-03-09T15:02:00.132 INFO:tasks.workunit.client.0.vm05.stdout:7/927: getdents d1/d9/d23/d54/d7b 0 2026-03-09T15:02:00.135 INFO:tasks.workunit.client.0.vm05.stdout:6/853: rename da/d43/d7b/c7f to da/d17/d95/c10a 0 2026-03-09T15:02:00.139 INFO:tasks.workunit.client.0.vm05.stdout:4/923: mknod d2/c136 0 2026-03-09T15:02:00.140 INFO:tasks.workunit.client.0.vm05.stdout:4/924: chown d2/d4/fbe 0 1 2026-03-09T15:02:00.142 INFO:tasks.workunit.client.0.vm05.stdout:2/930: readlink da/d29/d6a/da0/l65 0 2026-03-09T15:02:00.143 INFO:tasks.workunit.client.0.vm05.stdout:2/931: chown da/d29/d6a/da0/dd9/dfd/d110/f120 15405 1 2026-03-09T15:02:00.144 INFO:tasks.workunit.client.0.vm05.stdout:0/867: link d9/de/d25/c44 d9/de/d12/d15/d2e/d32/d53/d61/d104/da0/db7/de5/c10f 0 2026-03-09T15:02:00.148 INFO:tasks.workunit.client.0.vm05.stdout:5/985: link d1/d5d/f82 d1/d4/d34/d35/d121/f14e 0 2026-03-09T15:02:00.149 INFO:tasks.workunit.client.0.vm05.stdout:7/928: read - d1/d9/d23/d31/d32/d78/ddd/f103 zero size 2026-03-09T15:02:00.151 INFO:tasks.workunit.client.0.vm05.stdout:5/986: dread d1/d4/d34/d6c/d104/f10b [0,4194304] 0 2026-03-09T15:02:00.151 INFO:tasks.workunit.client.0.vm05.stdout:5/987: chown d1/d4/d19/f142 3 1 2026-03-09T15:02:00.160 
INFO:tasks.workunit.client.0.vm05.stdout:8/943: write d0/d1/d12/d1b/d95/d42/d60/da7/db3/fba [694526,106544] 0 2026-03-09T15:02:00.165 INFO:tasks.workunit.client.0.vm05.stdout:3/888: dwrite d3/df/d59/fcc [0,4194304] 0 2026-03-09T15:02:00.188 INFO:tasks.workunit.client.0.vm05.stdout:9/973: write d2/d10/d22/d2c/fbd [5014458,40913] 0 2026-03-09T15:02:00.189 INFO:tasks.workunit.client.0.vm05.stdout:9/974: fsync d2/d4e/d56/d53/f66 0 2026-03-09T15:02:00.203 INFO:tasks.workunit.client.0.vm05.stdout:2/932: rename da/d29/d6a/db1/db7/fef to da/d29/d6a/da0/d7c/f122 0 2026-03-09T15:02:00.219 INFO:tasks.workunit.client.0.vm05.stdout:4/925: dwrite d2/d4/d7/f53 [0,4194304] 0 2026-03-09T15:02:00.220 INFO:tasks.workunit.client.0.vm05.stdout:4/926: chown d2/c136 415659551 1 2026-03-09T15:02:00.220 INFO:tasks.workunit.client.0.vm05.stdout:4/927: chown d2/d4/d1e/da2/dec/l6f 0 1 2026-03-09T15:02:00.222 INFO:tasks.workunit.client.0.vm05.stdout:1/878: getdents d9/d2f/d37/ded 0 2026-03-09T15:02:00.226 INFO:tasks.workunit.client.0.vm05.stdout:7/929: fdatasync d1/d9/d23/d31/d8f/d93/f82 0 2026-03-09T15:02:00.229 INFO:tasks.workunit.client.0.vm05.stdout:5/988: mknod d1/d4/d34/d35/d4e/c14f 0 2026-03-09T15:02:00.231 INFO:tasks.workunit.client.0.vm05.stdout:5/989: truncate d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d75/d137/fb1 4562407 0 2026-03-09T15:02:00.232 INFO:tasks.workunit.client.0.vm05.stdout:8/944: read - d0/d1/d12/d3c/d8b/d129/ff4 zero size 2026-03-09T15:02:00.238 INFO:tasks.workunit.client.0.vm05.stdout:2/933: creat da/d29/d6a/db1/f123 x:0 0 0 2026-03-09T15:02:00.240 INFO:tasks.workunit.client.0.vm05.stdout:0/868: truncate d9/de/d12/d15/d49/f103 894790 0 2026-03-09T15:02:00.240 INFO:tasks.workunit.client.0.vm05.stdout:0/869: chown d9/de/d12/d15/d2e/d32/f7d 566 1 2026-03-09T15:02:00.243 INFO:tasks.workunit.client.0.vm05.stdout:4/928: rename d2/d43/d12a/d88/d92/f133 to d2/d49/f137 0 2026-03-09T15:02:00.245 INFO:tasks.workunit.client.0.vm05.stdout:7/930: creat 
d1/d9/d23/d31/d32/d78/d7e/d81/dcd/d109/f139 x:0 0 0 2026-03-09T15:02:00.245 INFO:tasks.workunit.client.0.vm05.stdout:7/931: read - d1/f125 zero size 2026-03-09T15:02:00.247 INFO:tasks.workunit.client.0.vm05.stdout:8/945: fsync d0/dc/f7e 0 2026-03-09T15:02:00.248 INFO:tasks.workunit.client.0.vm05.stdout:8/946: readlink d0/d1/d12/d1b/d95/dd7/lf2 0 2026-03-09T15:02:00.251 INFO:tasks.workunit.client.0.vm05.stdout:6/854: truncate da/d17/f33 2694719 0 2026-03-09T15:02:00.255 INFO:tasks.workunit.client.0.vm05.stdout:3/889: dwrite d3/df/d1e/d2f/fbf [0,4194304] 0 2026-03-09T15:02:00.258 INFO:tasks.workunit.client.0.vm05.stdout:9/975: dwrite d2/d4e/d56/d53/d64/ded/d9c/db2/fd2 [0,4194304] 0 2026-03-09T15:02:00.273 INFO:tasks.workunit.client.0.vm05.stdout:2/934: symlink da/d29/d64/da6/l124 0 2026-03-09T15:02:00.275 INFO:tasks.workunit.client.0.vm05.stdout:0/870: truncate d9/de/d25/dcf/dbd/fe2 2639003 0 2026-03-09T15:02:00.278 INFO:tasks.workunit.client.0.vm05.stdout:4/929: rmdir d2/d43/d12a 39 2026-03-09T15:02:00.286 INFO:tasks.workunit.client.0.vm05.stdout:1/879: unlink d9/d2f/d83/d98/fa4 0 2026-03-09T15:02:00.286 INFO:tasks.workunit.client.0.vm05.stdout:1/880: write d9/d2f/d37/ded/f108 [18424,10297] 0 2026-03-09T15:02:00.287 INFO:tasks.workunit.client.0.vm05.stdout:3/890: read d3/df/d1e/d2c/d74/d78/fab [119757,12556] 0 2026-03-09T15:02:00.287 INFO:tasks.workunit.client.0.vm05.stdout:3/891: chown d3/d29/d2d 4040045 1 2026-03-09T15:02:00.287 INFO:tasks.workunit.client.0.vm05.stdout:3/892: chown d3/df/f14 57352349 1 2026-03-09T15:02:00.288 INFO:tasks.workunit.client.0.vm05.stdout:3/893: chown d3/d29/d2d/d77/d4d/c122 3 1 2026-03-09T15:02:00.295 INFO:tasks.workunit.client.0.vm05.stdout:2/935: dread da/d29/d6a/da0/d91/dab/f8c [0,4194304] 0 2026-03-09T15:02:00.300 INFO:tasks.workunit.client.0.vm05.stdout:6/855: creat da/d19/dd7/f10b x:0 0 0 2026-03-09T15:02:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:59 vm05.local ceph-mon[50611]: from='mgr.24413 ' 
entity='mgr.vm09.cfuwdz' 2026-03-09T15:02:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:59 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:02:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:59 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:02:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:59 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:02:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:59 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:02:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:59 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:59 vm05.local ceph-mon[50611]: Upgrade: Need to upgrade myself (mgr.vm09.cfuwdz) 2026-03-09T15:02:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:59 vm05.local ceph-mon[50611]: Upgrade: Need to upgrade myself (mgr.vm09.cfuwdz) 2026-03-09T15:02:00.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:01:59 vm05.local ceph-mon[50611]: Failed to find standby mgr for failover. 
Retrying in 2 seconds 2026-03-09T15:02:00.310 INFO:tasks.workunit.client.0.vm05.stdout:1/881: dread d9/d2f/d55/f5e [0,4194304] 0 2026-03-09T15:02:00.319 INFO:tasks.workunit.client.0.vm05.stdout:8/947: write d0/d1/d12/d1b/d95/d78/db5/fbb [4229597,101655] 0 2026-03-09T15:02:00.324 INFO:tasks.workunit.client.0.vm05.stdout:9/976: write d2/f13 [4201446,98533] 0 2026-03-09T15:02:00.328 INFO:tasks.workunit.client.0.vm05.stdout:9/977: dwrite d2/d10/d22/dc2/db1/f144 [0,4194304] 0 2026-03-09T15:02:00.334 INFO:tasks.workunit.client.0.vm05.stdout:5/990: link d1/d4/d34/d35/d3d/dde/lf5 d1/d4/d34/d35/d3d/d38/d63/l150 0 2026-03-09T15:02:00.337 INFO:tasks.workunit.client.0.vm05.stdout:4/930: dwrite d2/d4/d7/d48/fd9 [0,4194304] 0 2026-03-09T15:02:00.342 INFO:tasks.workunit.client.0.vm05.stdout:0/871: link d9/de/d12/d15/d2e/d32/d53/d61/d104/da0/db7/de5/f102 d9/d59/d70/dda/f110 0 2026-03-09T15:02:00.346 INFO:tasks.workunit.client.0.vm05.stdout:3/894: dwrite d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/f8e [0,4194304] 0 2026-03-09T15:02:00.354 INFO:tasks.workunit.client.0.vm05.stdout:3/895: dwrite d3/f11f [0,4194304] 0 2026-03-09T15:02:00.363 INFO:tasks.workunit.client.0.vm05.stdout:7/932: link d1/d22/d3c/c9a d1/d9/d23/d31/d32/d78/ddd/def/c13a 0 2026-03-09T15:02:00.364 INFO:tasks.workunit.client.0.vm05.stdout:6/856: creat da/d17/d7c/dc6/f10c x:0 0 0 2026-03-09T15:02:00.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:59 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:02:00.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:59 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:02:00.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:59 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:02:00.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:59 vm09.local 
ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:02:00.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:59 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' 2026-03-09T15:02:00.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:59 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:00.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:59 vm09.local ceph-mon[59673]: Upgrade: Need to upgrade myself (mgr.vm09.cfuwdz) 2026-03-09T15:02:00.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:59 vm09.local ceph-mon[59673]: Upgrade: Need to upgrade myself (mgr.vm09.cfuwdz) 2026-03-09T15:02:00.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:01:59 vm09.local ceph-mon[59673]: Failed to find standby mgr for failover. 
Retrying in 2 seconds 2026-03-09T15:02:00.367 INFO:tasks.workunit.client.0.vm05.stdout:7/933: dwrite d1/d9/d72/f7c [0,4194304] 0 2026-03-09T15:02:00.369 INFO:tasks.workunit.client.0.vm05.stdout:8/948: dread - d0/d1/d12/d1b/d95/d78/dea/f109 zero size 2026-03-09T15:02:00.394 INFO:tasks.workunit.client.0.vm05.stdout:4/931: dread d2/d49/d69/f9b [0,4194304] 0 2026-03-09T15:02:00.395 INFO:tasks.workunit.client.0.vm05.stdout:4/932: readlink d2/d4/d1e/da2/dec/d117/le1 0 2026-03-09T15:02:00.401 INFO:tasks.workunit.client.0.vm05.stdout:6/857: truncate da/f80 1065992 0 2026-03-09T15:02:00.406 INFO:tasks.workunit.client.0.vm05.stdout:8/949: dread d0/d24/d96/f118 [0,4194304] 0 2026-03-09T15:02:00.413 INFO:tasks.workunit.client.0.vm05.stdout:4/933: dread d2/d4/d7/dc/f27 [0,4194304] 0 2026-03-09T15:02:00.415 INFO:tasks.workunit.client.0.vm05.stdout:2/936: write da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/fdf [1163050,87593] 0 2026-03-09T15:02:00.416 INFO:tasks.workunit.client.0.vm05.stdout:2/937: truncate da/d29/d6a/da0/d91/dab/dd6/f104 761677 0 2026-03-09T15:02:00.418 INFO:tasks.workunit.client.0.vm05.stdout:5/991: write d1/d4/d34/d6c/faf [109659,73365] 0 2026-03-09T15:02:00.418 INFO:tasks.workunit.client.0.vm05.stdout:9/978: write d2/d10/d22/d9f/d119/f12b [484541,50284] 0 2026-03-09T15:02:00.421 INFO:tasks.workunit.client.0.vm05.stdout:1/882: dwrite d9/f21 [4194304,4194304] 0 2026-03-09T15:02:00.424 INFO:tasks.workunit.client.0.vm05.stdout:0/872: mkdir d9/d111 0 2026-03-09T15:02:00.429 INFO:tasks.workunit.client.0.vm05.stdout:6/858: dread da/d43/f96 [0,4194304] 0 2026-03-09T15:02:00.430 INFO:tasks.workunit.client.0.vm05.stdout:6/859: chown da/d17/d95/da2/dae/dd9/lf5 122 1 2026-03-09T15:02:00.440 INFO:tasks.workunit.client.0.vm05.stdout:5/992: symlink d1/d4/d34/d35/d4e/dc8/l151 0 2026-03-09T15:02:00.442 INFO:tasks.workunit.client.0.vm05.stdout:9/979: truncate d2/d10/d22/d2c/f3a 3617235 0 2026-03-09T15:02:00.445 INFO:tasks.workunit.client.0.vm05.stdout:1/883: rmdir d9/d2f/d37 39 
2026-03-09T15:02:00.449 INFO:tasks.workunit.client.0.vm05.stdout:4/934: mknod d2/d43/d12a/c138 0 2026-03-09T15:02:00.450 INFO:tasks.workunit.client.0.vm05.stdout:4/935: chown d2/d43/d12a/d88/d92/d121/dfb 9 1 2026-03-09T15:02:00.452 INFO:tasks.workunit.client.0.vm05.stdout:5/993: symlink d1/d4/d34/dc0/l152 0 2026-03-09T15:02:00.454 INFO:tasks.workunit.client.0.vm05.stdout:5/994: dwrite d1/da/fe6 [0,4194304] 0 2026-03-09T15:02:00.470 INFO:tasks.workunit.client.0.vm05.stdout:9/980: symlink d2/d9e/df6/l150 0 2026-03-09T15:02:00.470 INFO:tasks.workunit.client.0.vm05.stdout:0/873: mknod d9/de/d25/c112 0 2026-03-09T15:02:00.470 INFO:tasks.workunit.client.0.vm05.stdout:3/896: getdents d3/df/d1e/d2c 0 2026-03-09T15:02:00.470 INFO:tasks.workunit.client.0.vm05.stdout:3/897: fdatasync d3/df/d59/ffe 0 2026-03-09T15:02:00.475 INFO:tasks.workunit.client.0.vm05.stdout:4/936: unlink d2/d4/d7/dc/f18 0 2026-03-09T15:02:00.480 INFO:tasks.workunit.client.0.vm05.stdout:2/938: creat da/d29/d6a/da0/d91/f125 x:0 0 0 2026-03-09T15:02:00.487 INFO:tasks.workunit.client.0.vm05.stdout:0/874: symlink d9/de/d12/d15/d2e/d6b/l113 0 2026-03-09T15:02:00.487 INFO:tasks.workunit.client.0.vm05.stdout:0/875: chown d9/de/d12/l7b 94 1 2026-03-09T15:02:00.495 INFO:tasks.workunit.client.0.vm05.stdout:2/939: rmdir da/d29/d6a/d7f 39 2026-03-09T15:02:00.499 INFO:tasks.workunit.client.0.vm05.stdout:5/995: mknod d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d75/c153 0 2026-03-09T15:02:00.500 INFO:tasks.workunit.client.0.vm05.stdout:6/860: sync 2026-03-09T15:02:00.503 INFO:tasks.workunit.client.0.vm05.stdout:7/934: write d1/d9/d23/d31/d32/d78/d7e/ff5 [224345,94616] 0 2026-03-09T15:02:00.506 INFO:tasks.workunit.client.0.vm05.stdout:8/950: dwrite d0/d1/d12/d1b/d6e/fc2 [0,4194304] 0 2026-03-09T15:02:00.516 INFO:tasks.workunit.client.0.vm05.stdout:3/898: dread d3/fea [0,4194304] 0 2026-03-09T15:02:00.518 INFO:tasks.workunit.client.0.vm05.stdout:3/899: write d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/f8e [4334211,73138] 0 
2026-03-09T15:02:00.523 INFO:tasks.workunit.client.0.vm05.stdout:9/981: dwrite d2/d10/d22/d52/fb0 [0,4194304] 0 2026-03-09T15:02:00.528 INFO:tasks.workunit.client.0.vm05.stdout:4/937: mknod d2/d4/d8/c139 0 2026-03-09T15:02:00.537 INFO:tasks.workunit.client.0.vm05.stdout:9/982: dread d2/d10/d22/d2c/fbd [0,4194304] 0 2026-03-09T15:02:00.541 INFO:tasks.workunit.client.0.vm05.stdout:5/996: rmdir d1/d4/d34/dc0 39 2026-03-09T15:02:00.541 INFO:tasks.workunit.client.0.vm05.stdout:5/997: readlink d1/d5d/l77 0 2026-03-09T15:02:00.552 INFO:tasks.workunit.client.0.vm05.stdout:7/935: unlink d1/d22/da4/ff3 0 2026-03-09T15:02:00.556 INFO:tasks.workunit.client.0.vm05.stdout:6/861: dwrite da/d43/d7b/fd6 [0,4194304] 0 2026-03-09T15:02:00.566 INFO:tasks.workunit.client.0.vm05.stdout:1/884: mkdir d9/d2f/d37/d5a/da9/dc9/d123 0 2026-03-09T15:02:00.570 INFO:tasks.workunit.client.0.vm05.stdout:1/885: dwrite d9/d2f/d83/d98/d59/d49/d4b/d116/f117 [0,4194304] 0 2026-03-09T15:02:00.575 INFO:tasks.workunit.client.0.vm05.stdout:0/876: symlink d9/de/d12/d15/l114 0 2026-03-09T15:02:00.586 INFO:tasks.workunit.client.0.vm05.stdout:1/886: dread d9/d2f/d37/f66 [0,4194304] 0 2026-03-09T15:02:00.590 INFO:tasks.workunit.client.0.vm05.stdout:9/983: mkdir d2/d4e/d56/d53/d64/dd9/def/d12d/d12f/d151 0 2026-03-09T15:02:00.595 INFO:tasks.workunit.client.0.vm05.stdout:5/998: fsync d1/d4/d34/d35/d3d/d38/d69/d11b/d12c/d75/ff8 0 2026-03-09T15:02:00.596 INFO:tasks.workunit.client.0.vm05.stdout:5/999: fsync d1/d4/d34/f6a 0 2026-03-09T15:02:00.598 INFO:tasks.workunit.client.0.vm05.stdout:7/936: creat d1/d9/d23/d31/d32/ddc/f13b x:0 0 0 2026-03-09T15:02:00.604 INFO:tasks.workunit.client.0.vm05.stdout:1/887: sync 2026-03-09T15:02:00.606 INFO:tasks.workunit.client.0.vm05.stdout:3/900: symlink d3/d29/l124 0 2026-03-09T15:02:00.608 INFO:tasks.workunit.client.0.vm05.stdout:3/901: sync 2026-03-09T15:02:00.608 INFO:tasks.workunit.client.0.vm05.stdout:3/902: chown d3/df/dbe 2087755 1 2026-03-09T15:02:00.611 
INFO:tasks.workunit.client.0.vm05.stdout:6/862: dread da/f41 [0,4194304] 0 2026-03-09T15:02:00.612 INFO:tasks.workunit.client.0.vm05.stdout:6/863: dread - da/d19/dd7/ffc zero size 2026-03-09T15:02:00.613 INFO:tasks.workunit.client.0.vm05.stdout:8/951: dwrite d0/d1/d12/d3c/f51 [0,4194304] 0 2026-03-09T15:02:00.622 INFO:tasks.workunit.client.0.vm05.stdout:2/940: truncate da/dd/ff 9963806 0 2026-03-09T15:02:00.624 INFO:tasks.workunit.client.0.vm05.stdout:4/938: dwrite d2/d49/d69/f9b [0,4194304] 0 2026-03-09T15:02:00.632 INFO:tasks.workunit.client.0.vm05.stdout:4/939: sync 2026-03-09T15:02:00.636 INFO:tasks.workunit.client.0.vm05.stdout:0/877: dread d9/de/d12/da3/dbc/fc4 [0,4194304] 0 2026-03-09T15:02:00.639 INFO:tasks.workunit.client.0.vm05.stdout:1/888: stat d9/d2f/f3a 0 2026-03-09T15:02:00.649 INFO:tasks.workunit.client.0.vm05.stdout:3/903: dread d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/f6d [0,4194304] 0 2026-03-09T15:02:00.651 INFO:tasks.workunit.client.0.vm05.stdout:8/952: creat d0/d1/d12/d1b/d6e/d93/d9f/f13d x:0 0 0 2026-03-09T15:02:00.657 INFO:tasks.workunit.client.0.vm05.stdout:6/864: write da/f62 [807022,118587] 0 2026-03-09T15:02:00.657 INFO:tasks.workunit.client.0.vm05.stdout:6/865: readlink da/d17/l9b 0 2026-03-09T15:02:00.678 INFO:tasks.workunit.client.0.vm05.stdout:4/940: fsync d2/d4/d7/f9 0 2026-03-09T15:02:00.682 INFO:tasks.workunit.client.0.vm05.stdout:0/878: creat d9/de/d12/d15/d2e/d6b/dbf/f115 x:0 0 0 2026-03-09T15:02:00.684 INFO:tasks.workunit.client.0.vm05.stdout:1/889: truncate d9/db9/f106 1001309 0 2026-03-09T15:02:00.685 INFO:tasks.workunit.client.0.vm05.stdout:1/890: chown d9/d2f/d83/d98/d59/c105 0 1 2026-03-09T15:02:00.691 INFO:tasks.workunit.client.0.vm05.stdout:8/953: unlink d0/d1/d12/d1b/d21/cf7 0 2026-03-09T15:02:00.692 INFO:tasks.workunit.client.0.vm05.stdout:2/941: mkdir da/d29/d6a/da0/d91/dab/d126 0 2026-03-09T15:02:00.693 INFO:tasks.workunit.client.0.vm05.stdout:2/942: chown da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/f92 2890243 1 
2026-03-09T15:02:00.696 INFO:tasks.workunit.client.0.vm05.stdout:2/943: dwrite da/d29/d6a/da0/d91/dab/dd6/f104 [0,4194304] 0 2026-03-09T15:02:00.703 INFO:tasks.workunit.client.0.vm05.stdout:9/984: creat d2/d10/d22/f152 x:0 0 0 2026-03-09T15:02:00.713 INFO:tasks.workunit.client.0.vm05.stdout:7/937: link d1/d12a/l12d d1/d9/d23/d31/d32/d78/d7e/d81/l13c 0 2026-03-09T15:02:00.727 INFO:tasks.workunit.client.0.vm05.stdout:6/866: write da/d43/ff2 [608644,58145] 0 2026-03-09T15:02:00.731 INFO:tasks.workunit.client.0.vm05.stdout:1/891: dwrite d9/d2f/d37/d5a/fcb [0,4194304] 0 2026-03-09T15:02:00.736 INFO:tasks.workunit.client.0.vm05.stdout:2/944: dwrite da/d29/d6a/da0/d91/dab/d2f/db3/f100 [0,4194304] 0 2026-03-09T15:02:00.747 INFO:tasks.workunit.client.0.vm05.stdout:9/985: symlink d2/d4e/d56/d53/d64/ded/d9c/ddd/l153 0 2026-03-09T15:02:00.750 INFO:tasks.workunit.client.0.vm05.stdout:9/986: dwrite d2/d10/d8c/fa8 [0,4194304] 0 2026-03-09T15:02:00.766 INFO:tasks.workunit.client.0.vm05.stdout:0/879: mkdir d9/de/df1/deb/d101/d116 0 2026-03-09T15:02:00.768 INFO:tasks.workunit.client.0.vm05.stdout:0/880: read d9/de/d12/d15/d2e/f9a [136345,61242] 0 2026-03-09T15:02:00.777 INFO:tasks.workunit.client.0.vm05.stdout:0/881: dread d9/de/d25/dcf/f96 [0,4194304] 0 2026-03-09T15:02:00.783 INFO:tasks.workunit.client.0.vm05.stdout:8/954: getdents d0/d1/d12/d1b/d66/dcc/dd4/ddc 0 2026-03-09T15:02:00.788 INFO:tasks.workunit.client.0.vm05.stdout:6/867: fsync da/d43/d7b/fc8 0 2026-03-09T15:02:00.792 INFO:tasks.workunit.client.0.vm05.stdout:1/892: creat d9/d2f/d83/d98/d59/d49/d92/d75/f124 x:0 0 0 2026-03-09T15:02:00.796 INFO:tasks.workunit.client.0.vm05.stdout:4/941: dwrite d2/d4/d1e/f11f [0,4194304] 0 2026-03-09T15:02:00.800 INFO:tasks.workunit.client.0.vm05.stdout:4/942: dread d2/d4/d1e/da2/dec/d3d/f7f [0,4194304] 0 2026-03-09T15:02:00.802 INFO:tasks.workunit.client.0.vm05.stdout:3/904: dwrite d3/d29/d2d/d77/f35 [0,4194304] 0 2026-03-09T15:02:00.819 INFO:tasks.workunit.client.0.vm05.stdout:0/882: 
dread d9/d59/f83 [0,4194304] 0 2026-03-09T15:02:00.832 INFO:tasks.workunit.client.0.vm05.stdout:8/955: dread d0/d1/f7f [4194304,4194304] 0 2026-03-09T15:02:00.837 INFO:tasks.workunit.client.0.vm05.stdout:3/905: truncate d3/df/d1e/d2f/d52/f87 2834786 0 2026-03-09T15:02:00.844 INFO:tasks.workunit.client.0.vm05.stdout:0/883: creat d9/de/d12/d15/d2e/d32/d74/f117 x:0 0 0 2026-03-09T15:02:00.844 INFO:tasks.workunit.client.0.vm05.stdout:0/884: dread - d9/de/d12/d8a/f10e zero size 2026-03-09T15:02:00.845 INFO:tasks.workunit.client.0.vm05.stdout:0/885: chown d9/de/d12/d15/d2e/d32/d53/d61/d104/da0/db7/de5 6735289 1 2026-03-09T15:02:00.848 INFO:tasks.workunit.client.0.vm05.stdout:9/987: write d2/d4e/d56/d53/d64/ded/d9c/db2/ff9 [1118475,54692] 0 2026-03-09T15:02:00.851 INFO:tasks.workunit.client.0.vm05.stdout:2/945: dwrite da/dd/f5d [0,4194304] 0 2026-03-09T15:02:00.853 INFO:tasks.workunit.client.0.vm05.stdout:6/868: dwrite da/d17/f2d [0,4194304] 0 2026-03-09T15:02:00.870 INFO:tasks.workunit.client.0.vm05.stdout:1/893: fsync d9/d17/fb1 0 2026-03-09T15:02:00.872 INFO:tasks.workunit.client.0.vm05.stdout:1/894: dread d9/f21 [4194304,4194304] 0 2026-03-09T15:02:00.881 INFO:tasks.workunit.client.0.vm05.stdout:8/956: creat d0/d1/d12/d1b/d95/d42/da1/f13e x:0 0 0 2026-03-09T15:02:00.887 INFO:tasks.workunit.client.0.vm05.stdout:4/943: symlink d2/d4/d8/l13a 0 2026-03-09T15:02:00.889 INFO:tasks.workunit.client.0.vm05.stdout:4/944: chown d2/d4/d1e/d71/c10f 11806 1 2026-03-09T15:02:00.899 INFO:tasks.workunit.client.0.vm05.stdout:7/938: getdents d1/d9/d23/d31/d32 0 2026-03-09T15:02:00.900 INFO:tasks.workunit.client.0.vm05.stdout:0/886: dread - d9/de/d25/dcf/fc1 zero size 2026-03-09T15:02:00.911 INFO:tasks.workunit.client.0.vm05.stdout:2/946: truncate da/d29/d6a/da0/d91/dab/d2f/db3/fcb 2686204 0 2026-03-09T15:02:00.917 INFO:tasks.workunit.client.0.vm05.stdout:1/895: creat d9/d2f/d83/d98/d59/d49/d78/dbd/f125 x:0 0 0 2026-03-09T15:02:00.924 INFO:tasks.workunit.client.0.vm05.stdout:4/945: creat 
d2/d4/d1e/da2/dc5/f13b x:0 0 0 2026-03-09T15:02:00.924 INFO:tasks.workunit.client.0.vm05.stdout:4/946: chown d2/d49/d69/f9b 928190115 1 2026-03-09T15:02:00.925 INFO:tasks.workunit.client.0.vm05.stdout:4/947: chown d2/d4/c2c 0 1 2026-03-09T15:02:00.925 INFO:tasks.workunit.client.0.vm05.stdout:4/948: stat d2/d4/d1e/d71/c85 0 2026-03-09T15:02:00.928 INFO:tasks.workunit.client.0.vm05.stdout:3/906: creat d3/d29/d2d/d7b/dc5/f125 x:0 0 0 2026-03-09T15:02:00.935 INFO:tasks.workunit.client.0.vm05.stdout:8/957: dwrite d0/d1/d12/d1b/d95/d4b/f115 [0,4194304] 0 2026-03-09T15:02:00.943 INFO:tasks.workunit.client.0.vm05.stdout:9/988: mknod d2/ddb/c154 0 2026-03-09T15:02:00.943 INFO:tasks.workunit.client.0.vm05.stdout:6/869: symlink da/l10d 0 2026-03-09T15:02:00.944 INFO:tasks.workunit.client.0.vm05.stdout:9/989: stat d2/d4e/d56/d84/d120/c14b 0 2026-03-09T15:02:00.963 INFO:tasks.workunit.client.0.vm05.stdout:4/949: creat d2/d4/d1e/da2/dc5/f13c x:0 0 0 2026-03-09T15:02:00.964 INFO:tasks.workunit.client.0.vm05.stdout:1/896: dread d9/d2f/d83/d98/d59/d49/d78/d94/fd6 [0,4194304] 0 2026-03-09T15:02:00.967 INFO:tasks.workunit.client.0.vm05.stdout:3/907: mkdir d3/df/dbe/d126 0 2026-03-09T15:02:00.969 INFO:tasks.workunit.client.0.vm05.stdout:0/887: write d9/de/fd6 [1021315,101953] 0 2026-03-09T15:02:00.972 INFO:tasks.workunit.client.0.vm05.stdout:7/939: rename d1/d22/l74 to d1/d9/d23/d31/d32/d78/ddd/def/l13d 0 2026-03-09T15:02:00.973 INFO:tasks.workunit.client.0.vm05.stdout:7/940: chown d1/d49/d4a 261779 1 2026-03-09T15:02:00.984 INFO:tasks.workunit.client.0.vm05.stdout:6/870: rmdir da/d43/d66 39 2026-03-09T15:02:00.985 INFO:tasks.workunit.client.0.vm05.stdout:2/947: write da/d29/d3f/ffb [1823448,35991] 0 2026-03-09T15:02:00.988 INFO:tasks.workunit.client.0.vm05.stdout:9/990: dwrite d2/f11 [0,4194304] 0 2026-03-09T15:02:00.991 INFO:tasks.workunit.client.0.vm05.stdout:9/991: write d2/d4e/d56/d53/d64/ded/d9c/f18 [8702361,24208] 0 2026-03-09T15:02:00.999 
INFO:tasks.workunit.client.0.vm05.stdout:3/908: readlink d3/d29/d2d/d77/l62 0 2026-03-09T15:02:01.005 INFO:tasks.workunit.client.0.vm05.stdout:0/888: fdatasync d9/de/d12/d15/d2e/d32/f7d 0 2026-03-09T15:02:01.012 INFO:tasks.workunit.client.0.vm05.stdout:7/941: mknod d1/d9/d72/d10c/c13e 0 2026-03-09T15:02:01.013 INFO:tasks.workunit.client.0.vm05.stdout:6/871: chown da/d43/d66/f6e 0 1 2026-03-09T15:02:01.013 INFO:tasks.workunit.client.0.vm05.stdout:6/872: readlink da/d17/l93 0 2026-03-09T15:02:01.016 INFO:tasks.workunit.client.0.vm05.stdout:2/948: fdatasync da/d29/d6a/da0/d91/dab/d2f/d35/d10b/dd3/fe4 0 2026-03-09T15:02:01.019 INFO:tasks.workunit.client.0.vm05.stdout:9/992: readlink d2/d10/d22/d2c/d69/l2b 0 2026-03-09T15:02:01.021 INFO:tasks.workunit.client.0.vm05.stdout:4/950: fdatasync d2/d4/d1e/da2/dec/dcb/fe3 0 2026-03-09T15:02:01.021 INFO:tasks.workunit.client.0.vm05.stdout:4/951: chown d2/d4/d7 260804 1 2026-03-09T15:02:01.036 INFO:tasks.workunit.client.0.vm05.stdout:8/958: rename d0/d1/d12/d1b/d95/d54/f120 to d0/d1/d12/d1b/d6e/d93/d9f/f13f 0 2026-03-09T15:02:01.042 INFO:tasks.workunit.client.0.vm05.stdout:6/873: read da/f80 [204832,33326] 0 2026-03-09T15:02:01.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:00 vm05.local ceph-mon[50611]: pgmap v13: 65 pgs: 65 active+clean; 2.6 GiB data, 8.8 GiB used, 111 GiB / 120 GiB avail; 28 MiB/s rd, 56 MiB/s wr, 164 op/s 2026-03-09T15:02:01.058 INFO:tasks.workunit.client.0.vm05.stdout:1/897: rename d9/d2f/d83/d98/d59/d49/d78/d7e/c9b to d9/d2f/d37/ded/c126 0 2026-03-09T15:02:01.059 INFO:tasks.workunit.client.0.vm05.stdout:8/959: rmdir d0/d1/d12/d1b/d95 39 2026-03-09T15:02:01.060 INFO:tasks.workunit.client.0.vm05.stdout:8/960: readlink d0/d1/d12/d1b/d21/l2f 0 2026-03-09T15:02:01.070 INFO:tasks.workunit.client.0.vm05.stdout:2/949: mknod da/c127 0 2026-03-09T15:02:01.076 INFO:tasks.workunit.client.0.vm05.stdout:4/952: fsync d2/d4/d7/dc/fb9 0 2026-03-09T15:02:01.080 INFO:tasks.workunit.client.0.vm05.stdout:0/889: 
creat d9/de/d12/d15/f118 x:0 0 0
2026-03-09T15:02:01.084 INFO:tasks.workunit.client.0.vm05.stdout:1/898: write f5 [2411240,48757] 0
2026-03-09T15:02:01.087 INFO:tasks.workunit.client.0.vm05.stdout:8/961: mknod d0/d1/d97/c140 0
2026-03-09T15:02:01.088 INFO:tasks.workunit.client.0.vm05.stdout:7/942: link d1/d22/d3c/fba d1/d9/d23/d31/d32/d78/d7e/d81/dcd/f13f 0
2026-03-09T15:02:01.092 INFO:tasks.workunit.client.0.vm05.stdout:9/993: creat d2/d4e/d56/f155 x:0 0 0
2026-03-09T15:02:01.095 INFO:tasks.workunit.client.0.vm05.stdout:2/950: mkdir da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/d128 0
2026-03-09T15:02:01.100 INFO:tasks.workunit.client.0.vm05.stdout:3/909: getdents d3/df/d59 0
2026-03-09T15:02:01.100 INFO:tasks.workunit.client.0.vm05.stdout:3/910: chown d3/df/d10/c81 82314 1
2026-03-09T15:02:01.103 INFO:tasks.workunit.client.0.vm05.stdout:3/911: dwrite d3/d29/f10b [0,4194304] 0
2026-03-09T15:02:01.107 INFO:tasks.workunit.client.0.vm05.stdout:4/953: write d2/d4/fb4 [2635428,71152] 0
2026-03-09T15:02:01.115 INFO:tasks.workunit.client.0.vm05.stdout:0/890: rename d9/de/d12/d15/l8b to d9/de/d12/d15/d2e/d32/d53/d61/d104/da0/db7/l119 0
2026-03-09T15:02:01.122 INFO:tasks.workunit.client.0.vm05.stdout:8/962: creat d0/d2a/f141 x:0 0 0
2026-03-09T15:02:01.127 INFO:tasks.workunit.client.0.vm05.stdout:6/874: creat da/d19/dd7/dfe/f10e x:0 0 0
2026-03-09T15:02:01.130 INFO:tasks.workunit.client.0.vm05.stdout:2/951: creat da/d29/d6a/da0/d91/dab/d2f/db3/f129 x:0 0 0
2026-03-09T15:02:01.137 INFO:tasks.workunit.client.0.vm05.stdout:3/912: mkdir d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/dbd/dc2/d127 0
2026-03-09T15:02:01.146 INFO:tasks.workunit.client.0.vm05.stdout:8/963: rmdir d0/d24/d96 39
2026-03-09T15:02:01.148 INFO:tasks.workunit.client.0.vm05.stdout:7/943: mkdir d1/d49/d4a/d94/ddb/d140 0
2026-03-09T15:02:01.150 INFO:tasks.workunit.client.0.vm05.stdout:6/875: truncate da/d17/d7c/f8f 572848 0
2026-03-09T15:02:01.153 INFO:tasks.workunit.client.0.vm05.stdout:2/952: symlink da/d29/d6a/da0/d91/dab/d2f/d35/d10b/dbf/l12a 0
2026-03-09T15:02:01.157 INFO:tasks.workunit.client.0.vm05.stdout:4/954: sync
2026-03-09T15:02:01.158 INFO:tasks.workunit.client.0.vm05.stdout:7/944: sync
2026-03-09T15:02:01.158 INFO:tasks.workunit.client.0.vm05.stdout:3/913: symlink d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/d90/d10a/l128 0
2026-03-09T15:02:01.158 INFO:tasks.workunit.client.0.vm05.stdout:4/955: chown d2/d4/d7/d48/d6b/lbc 453552378 1
2026-03-09T15:02:01.160 INFO:tasks.workunit.client.0.vm05.stdout:4/956: sync
2026-03-09T15:02:01.160 INFO:tasks.workunit.client.0.vm05.stdout:7/945: sync
2026-03-09T15:02:01.160 INFO:tasks.workunit.client.0.vm05.stdout:7/946: chown d1/d9/d23/f5a 12 1
2026-03-09T15:02:01.166 INFO:tasks.workunit.client.0.vm05.stdout:1/899: dwrite d9/d2f/d37/d101/fde [4194304,4194304] 0
2026-03-09T15:02:01.179 INFO:tasks.workunit.client.0.vm05.stdout:0/891: truncate d9/de/d12/d15/d2e/d32/d53/d61/f10c 1456803 0
2026-03-09T15:02:01.195 INFO:tasks.workunit.client.0.vm05.stdout:9/994: creat d2/d10/f156 x:0 0 0
2026-03-09T15:02:01.198 INFO:tasks.workunit.client.0.vm05.stdout:9/995: dwrite d2/d10/d22/d2c/d3c/d101/f140 [0,4194304] 0
2026-03-09T15:02:01.200 INFO:tasks.workunit.client.0.vm05.stdout:9/996: chown d2/d10/d22/d52/c77 2 1
2026-03-09T15:02:01.200 INFO:tasks.workunit.client.0.vm05.stdout:9/997: stat d2/d9e/f104 0
2026-03-09T15:02:01.220 INFO:tasks.workunit.client.0.vm05.stdout:6/876: write da/d19/dd7/dfe/fa4 [2053258,61947] 0
2026-03-09T15:02:01.224 INFO:tasks.workunit.client.0.vm05.stdout:2/953: creat da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/f12b x:0 0 0
2026-03-09T15:02:01.225 INFO:tasks.workunit.client.0.vm05.stdout:2/954: dread - da/d29/d6a/db1/f123 zero size
2026-03-09T15:02:01.226 INFO:tasks.workunit.client.0.vm05.stdout:2/955: chown da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/fdd 12678 1
2026-03-09T15:02:01.238 INFO:tasks.workunit.client.0.vm05.stdout:4/957: write d2/d4/d8/f13 [3264075,5289] 0
2026-03-09T15:02:01.241 INFO:tasks.workunit.client.0.vm05.stdout:1/900: mkdir d9/d97/d127 0
2026-03-09T15:02:01.247 INFO:tasks.workunit.client.0.vm05.stdout:0/892: write d9/de/d12/d15/d2e/d32/d53/d61/d104/fa9 [345182,92349] 0
2026-03-09T15:02:01.251 INFO:tasks.workunit.client.0.vm05.stdout:9/998: mkdir d2/d10/d22/d47/d73/d157 0
2026-03-09T15:02:01.257 INFO:tasks.workunit.client.0.vm05.stdout:8/964: dread d0/d1/d12/d1b/d95/dd7/dd2/dd8/d114/f20 [0,4194304] 0
2026-03-09T15:02:01.260 INFO:tasks.workunit.client.0.vm05.stdout:3/914: symlink d3/d29/d2d/l129 0
2026-03-09T15:02:01.261 INFO:tasks.workunit.client.0.vm05.stdout:3/915: write d3/df/d1e/d2f/fbf [1770965,68397] 0
2026-03-09T15:02:01.266 INFO:tasks.workunit.client.0.vm05.stdout:7/947: symlink d1/d12/d137/l141 0
2026-03-09T15:02:01.269 INFO:tasks.workunit.client.0.vm05.stdout:4/958: mknod d2/d4/d7/d131/c13d 0
2026-03-09T15:02:01.272 INFO:tasks.workunit.client.0.vm05.stdout:1/901: creat d9/d2f/d37/d5a/da9/dc9/dcd/f128 x:0 0 0
2026-03-09T15:02:01.275 INFO:tasks.workunit.client.0.vm05.stdout:0/893: creat d9/d59/d70/f11a x:0 0 0
2026-03-09T15:02:01.276 INFO:tasks.workunit.client.0.vm05.stdout:7/948: sync
2026-03-09T15:02:01.284 INFO:tasks.workunit.client.0.vm05.stdout:3/916: stat d3/d29/f97 0
2026-03-09T15:02:01.290 INFO:tasks.workunit.client.0.vm05.stdout:9/999: symlink d2/d4e/l158 0
2026-03-09T15:02:01.292 INFO:tasks.workunit.client.0.vm05.stdout:7/949: read d1/d12/f11 [126494,50743] 0
2026-03-09T15:02:01.302 INFO:tasks.workunit.client.0.vm05.stdout:4/959: creat d2/d43/ded/f13e x:0 0 0
2026-03-09T15:02:01.306 INFO:tasks.workunit.client.0.vm05.stdout:0/894: symlink d9/d59/dfd/l11b 0
2026-03-09T15:02:01.309 INFO:tasks.workunit.client.0.vm05.stdout:6/877: getdents da/d9e 0
2026-03-09T15:02:01.312 INFO:tasks.workunit.client.0.vm05.stdout:2/956: link da/d29/d6a/da0/dd9/dfd/d110/f11a da/d29/d6a/f12c 0
2026-03-09T15:02:01.314 INFO:tasks.workunit.client.0.vm05.stdout:8/965: link d0/d1/d12/d1b/d95/dd7/dd2/dd8/d114/f20 d0/d1/d12/d1b/d66/f142 0
2026-03-09T15:02:01.321 INFO:tasks.workunit.client.0.vm05.stdout:4/960: rename d2/d4/fb4 to d2/d4/d8/d4a/f13f 0
2026-03-09T15:02:01.326 INFO:tasks.workunit.client.0.vm05.stdout:3/917: dwrite d3/df/d10/f2a [0,4194304] 0
2026-03-09T15:02:01.328 INFO:tasks.workunit.client.0.vm05.stdout:1/902: creat d9/d2f/d37/d5a/da9/dc9/f129 x:0 0 0
2026-03-09T15:02:01.337 INFO:tasks.workunit.client.0.vm05.stdout:6/878: fsync da/d17/f8d 0
2026-03-09T15:02:01.339 INFO:tasks.workunit.client.0.vm05.stdout:2/957: chown da/d29/c3d 1145091503 1
2026-03-09T15:02:01.341 INFO:tasks.workunit.client.0.vm05.stdout:8/966: write d0/d1/d12/d1b/d66/dcc/fe6 [1123047,63887] 0
2026-03-09T15:02:01.348 INFO:tasks.workunit.client.0.vm05.stdout:1/903: mknod d9/d2f/d37/d101/c12a 0
2026-03-09T15:02:01.352 INFO:tasks.workunit.client.0.vm05.stdout:3/918: mkdir d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/dbd/dc2/d12a 0
2026-03-09T15:02:01.355 INFO:tasks.workunit.client.0.vm05.stdout:7/950: write d1/d9/d23/d31/d32/f12b [1135217,105616] 0
2026-03-09T15:02:01.357 INFO:tasks.workunit.client.0.vm05.stdout:6/879: readlink da/d17/d3b/l3e 0
2026-03-09T15:02:01.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:00 vm09.local ceph-mon[59673]: pgmap v13: 65 pgs: 65 active+clean; 2.6 GiB data, 8.8 GiB used, 111 GiB / 120 GiB avail; 28 MiB/s rd, 56 MiB/s wr, 164 op/s
2026-03-09T15:02:01.367 INFO:tasks.workunit.client.0.vm05.stdout:2/958: dwrite da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/fd1 [4194304,4194304] 0
2026-03-09T15:02:01.368 INFO:tasks.workunit.client.0.vm05.stdout:2/959: chown da/d29/d64/d11e 553550821 1
2026-03-09T15:02:01.368 INFO:tasks.workunit.client.0.vm05.stdout:2/960: readlink da/d29/d6a/da0/l107 0
2026-03-09T15:02:01.378 INFO:tasks.workunit.client.0.vm05.stdout:1/904: dread d9/d2f/d83/d98/d59/d49/d78/d7e/fb3 [0,4194304] 0
2026-03-09T15:02:01.382 INFO:tasks.workunit.client.0.vm05.stdout:0/895: link d9/d59/fed d9/d111/f11c 0
2026-03-09T15:02:01.389 INFO:tasks.workunit.client.0.vm05.stdout:6/880: symlink da/d17/d95/da2/l10f 0
2026-03-09T15:02:01.390 INFO:tasks.workunit.client.0.vm05.stdout:3/919: write d3/df/d10/fae [807479,85823] 0
2026-03-09T15:02:01.402 INFO:tasks.workunit.client.0.vm05.stdout:4/961: rename d2/d4/d8/d4a/l123 to d2/d4/l140 0
2026-03-09T15:02:01.406 INFO:tasks.workunit.client.0.vm05.stdout:8/967: dwrite d0/d1/d12/d1b/d95/d78/dca/ff1 [0,4194304] 0
2026-03-09T15:02:01.429 INFO:tasks.workunit.client.0.vm05.stdout:2/961: dread da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/f69 [0,4194304] 0
2026-03-09T15:02:01.434 INFO:tasks.workunit.client.0.vm05.stdout:6/881: mkdir da/d17/d3b/dbd/d110 0
2026-03-09T15:02:01.445 INFO:tasks.workunit.client.0.vm05.stdout:3/920: write d3/df/d59/f98 [2193779,76759] 0
2026-03-09T15:02:01.448 INFO:tasks.workunit.client.0.vm05.stdout:8/968: creat d0/d1/d12/d1b/d95/d78/f143 x:0 0 0
2026-03-09T15:02:01.452 INFO:tasks.workunit.client.0.vm05.stdout:8/969: dwrite d0/d1/d12/d1b/d95/d42/d60/da7/db3/f127 [0,4194304] 0
2026-03-09T15:02:01.462 INFO:tasks.workunit.client.0.vm05.stdout:7/951: creat d1/d49/f142 x:0 0 0
2026-03-09T15:02:01.463 INFO:tasks.workunit.client.0.vm05.stdout:7/952: stat d1/d9/d23/d31/d32/ddc/f13b 0
2026-03-09T15:02:01.465 INFO:tasks.workunit.client.0.vm05.stdout:3/921: unlink d3/df/d1e/d2c/d74/d78/d121/dce/d107/l113 0
2026-03-09T15:02:01.465 INFO:tasks.workunit.client.0.vm05.stdout:3/922: chown d3/df/d59/d79/fc7 144349955 1
2026-03-09T15:02:01.473 INFO:tasks.workunit.client.0.vm05.stdout:8/970: truncate d0/d1/d12/d1b/d95/dd7/dd2/dd8/d114/f14 5075391 0
2026-03-09T15:02:01.481 INFO:tasks.workunit.client.0.vm05.stdout:0/896: rename d9/de/d25/dcf/f8c to d9/de/d12/d8a/f11d 0
2026-03-09T15:02:01.482 INFO:tasks.workunit.client.0.vm05.stdout:0/897: dread - d9/de/d25/dcf/fc1 zero size
2026-03-09T15:02:01.483 INFO:tasks.workunit.client.0.vm05.stdout:0/898: fsync d9/d59/d70/f11a 0
2026-03-09T15:02:01.488 INFO:tasks.workunit.client.0.vm05.stdout:3/923: symlink d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/dbd/l12b 0
2026-03-09T15:02:01.490 INFO:tasks.workunit.client.0.vm05.stdout:3/924: write d3/d29/d2d/d77/d4d/fe9 [1154147,50889] 0
2026-03-09T15:02:01.492 INFO:tasks.workunit.client.0.vm05.stdout:1/905: dwrite d9/db9/f106 [0,4194304] 0
2026-03-09T15:02:01.495 INFO:tasks.workunit.client.0.vm05.stdout:8/971: fdatasync d0/d1/d12/d1b/d95/f41 0
2026-03-09T15:02:01.512 INFO:tasks.workunit.client.0.vm05.stdout:4/962: rename d2/d4/d8/d4a/d94/la6 to d2/d43/d12a/df5/d97/l141 0
2026-03-09T15:02:01.513 INFO:tasks.workunit.client.0.vm05.stdout:7/953: mkdir d1/d22/d143 0
2026-03-09T15:02:01.522 INFO:tasks.workunit.client.0.vm05.stdout:4/963: dread d2/d4/d8/f13 [0,4194304] 0
2026-03-09T15:02:01.531 INFO:tasks.workunit.client.0.vm05.stdout:1/906: write d9/d17/fb1 [603724,82895] 0
2026-03-09T15:02:01.542 INFO:tasks.workunit.client.0.vm05.stdout:0/899: symlink d9/de/d12/d15/df0/l11e 0
2026-03-09T15:02:01.544 INFO:tasks.workunit.client.0.vm05.stdout:4/964: truncate d2/d4/d7/dc/fb8 478736 0
2026-03-09T15:02:01.550 INFO:tasks.workunit.client.0.vm05.stdout:8/972: truncate d0/d1/d12/f4f 1779594 0
2026-03-09T15:02:01.551 INFO:tasks.workunit.client.0.vm05.stdout:8/973: truncate d0/d1/d12/d1b/fbd 1654567 0
2026-03-09T15:02:01.551 INFO:tasks.workunit.client.0.vm05.stdout:8/974: chown d0/d1/d12/d1b/d6e/d93/cc8 483 1
2026-03-09T15:02:01.565 INFO:tasks.workunit.client.0.vm05.stdout:1/907: dwrite d9/d2f/d37/d5a/da9/dc9/dcd/f63 [0,4194304] 0
2026-03-09T15:02:01.572 INFO:tasks.workunit.client.0.vm05.stdout:2/962: getdents da/d29/d6a/da0/d91/dd4 0
2026-03-09T15:02:01.576 INFO:tasks.workunit.client.0.vm05.stdout:7/954: unlink d1/d9/d23/d31/d8f/d93/dbd/d107/f10b 0
2026-03-09T15:02:01.581 INFO:tasks.workunit.client.0.vm05.stdout:0/900: unlink d9/de/d12/d15/d2e/fee 0
2026-03-09T15:02:01.590 INFO:tasks.workunit.client.0.vm05.stdout:6/882: rename da/d43/d7b/f9f to da/d17/d95/f111 0
2026-03-09T15:02:01.594 INFO:tasks.workunit.client.0.vm05.stdout:7/955: creat d1/d9/d23/d31/d32/f144 x:0 0 0
2026-03-09T15:02:01.598 INFO:tasks.workunit.client.0.vm05.stdout:4/965: write d2/d49/f4d [4624276,15540] 0
2026-03-09T15:02:01.600 INFO:tasks.workunit.client.0.vm05.stdout:8/975: dwrite d0/d1/d12/d3c/f9a [0,4194304] 0
2026-03-09T15:02:01.607 INFO:tasks.workunit.client.0.vm05.stdout:0/901: truncate d9/de/d12/d15/f5e 3011265 0
2026-03-09T15:02:01.611 INFO:tasks.workunit.client.0.vm05.stdout:0/902: dwrite d9/de/d25/f106 [0,4194304] 0
2026-03-09T15:02:01.623 INFO:tasks.workunit.client.0.vm05.stdout:3/925: getdents d3/df/d1e 0
2026-03-09T15:02:01.623 INFO:tasks.workunit.client.0.vm05.stdout:1/908: mkdir d9/d2f/d83/d98/d59/d49/d78/d12b 0
2026-03-09T15:02:01.624 INFO:tasks.workunit.client.0.vm05.stdout:2/963: getdents da/dd 0
2026-03-09T15:02:01.624 INFO:tasks.workunit.client.0.vm05.stdout:6/883: creat da/d43/d7b/de0/f112 x:0 0 0
2026-03-09T15:02:01.642 INFO:tasks.workunit.client.0.vm05.stdout:8/976: dread d0/d1/d12/fb6 [0,4194304] 0
2026-03-09T15:02:01.642 INFO:tasks.workunit.client.0.vm05.stdout:0/903: rmdir d9/de/d12/d15/d2e/d32/d74 39
2026-03-09T15:02:01.653 INFO:tasks.workunit.client.0.vm05.stdout:7/956: mkdir d1/d9/d23/d11a/d145 0
2026-03-09T15:02:01.662 INFO:tasks.workunit.client.0.vm05.stdout:4/966: write d2/f98 [1057355,81815] 0
2026-03-09T15:02:01.666 INFO:tasks.workunit.client.0.vm05.stdout:2/964: rename da/d29/d6a/da0/d91/dd4 to da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/d128/d12d 0
2026-03-09T15:02:01.673 INFO:tasks.workunit.client.0.vm05.stdout:3/926: dwrite d3/df/f14 [0,4194304] 0
2026-03-09T15:02:01.727 INFO:tasks.workunit.client.0.vm05.stdout:1/909: dwrite d9/d2f/d83/d98/d59/d49/f82 [0,4194304] 0
2026-03-09T15:02:01.729 INFO:tasks.workunit.client.0.vm05.stdout:1/910: write d9/d17/fb1 [1277788,65241] 0
2026-03-09T15:02:01.737 INFO:tasks.workunit.client.0.vm05.stdout:4/967: creat d2/d43/d12a/d88/d92/d121/dfb/f142 x:0 0 0
2026-03-09T15:02:01.740 INFO:tasks.workunit.client.0.vm05.stdout:3/927: mkdir d3/df/d1e/d2f/d52/d12c 0
2026-03-09T15:02:01.740 INFO:tasks.workunit.client.0.vm05.stdout:3/928: chown d3/df 262 1
2026-03-09T15:02:01.742 INFO:tasks.workunit.client.0.vm05.stdout:0/904: mkdir d9/de/d12/d11f 0
2026-03-09T15:02:01.755 INFO:tasks.workunit.client.0.vm05.stdout:6/884: link da/d17/l9b da/d17/d3b/dbd/l113 0
2026-03-09T15:02:01.761 INFO:tasks.workunit.client.0.vm05.stdout:1/911: write d9/d2f/d83/f9e [3335346,72367] 0
2026-03-09T15:02:01.764 INFO:tasks.workunit.client.0.vm05.stdout:1/912: dwrite d9/d2f/d83/f9e [0,4194304] 0
2026-03-09T15:02:01.775 INFO:tasks.workunit.client.0.vm05.stdout:2/965: fdatasync da/dd/f9e 0
2026-03-09T15:02:01.778 INFO:tasks.workunit.client.0.vm05.stdout:3/929: symlink d3/df/d1e/d2c/d74/l12d 0
2026-03-09T15:02:01.779 INFO:tasks.workunit.client.0.vm05.stdout:3/930: stat d3/df/d1e/daf/fc6 0
2026-03-09T15:02:01.779 INFO:tasks.workunit.client.0.vm05.stdout:8/977: link d0/d1/d12/d1b/d6e/d93/cc8 d0/d1/d55/c144 0
2026-03-09T15:02:01.780 INFO:tasks.workunit.client.0.vm05.stdout:7/957: link d1/d9/l66 d1/d49/d4a/d94/ddb/d140/l146 0
2026-03-09T15:02:01.788 INFO:tasks.workunit.client.0.vm05.stdout:6/885: rename da/d43/d66/f6e to da/d19/d106/f114 0
2026-03-09T15:02:01.790 INFO:tasks.workunit.client.0.vm05.stdout:1/913: rmdir d9/d2f/d37/d5f/da2/d11a 39
2026-03-09T15:02:01.796 INFO:tasks.workunit.client.0.vm05.stdout:4/968: symlink d2/d43/l143 0
2026-03-09T15:02:01.812 INFO:tasks.workunit.client.0.vm05.stdout:6/886: mkdir da/d19/dd7/dfe/d115 0
2026-03-09T15:02:01.823 INFO:tasks.workunit.client.0.vm05.stdout:4/969: mkdir d2/d4/d7/d48/df0/d12e/d144 0
2026-03-09T15:02:01.823 INFO:tasks.workunit.client.0.vm05.stdout:0/905: link d9/de/d12/d15/d2e/f40 d9/de/d12/d15/f120 0
2026-03-09T15:02:01.823 INFO:tasks.workunit.client.0.vm05.stdout:0/906: write d9/de/d12/d15/d2e/d32/d53/f91 [4296319,127009] 0
2026-03-09T15:02:01.826 INFO:tasks.workunit.client.0.vm05.stdout:2/966: sync
2026-03-09T15:02:01.826 INFO:tasks.workunit.client.0.vm05.stdout:3/931: sync
2026-03-09T15:02:01.832 INFO:tasks.workunit.client.0.vm05.stdout:8/978: dwrite d0/d1/d12/d1b/d21/fb8 [0,4194304] 0
2026-03-09T15:02:01.834 INFO:tasks.workunit.client.0.vm05.stdout:1/914: dwrite d9/d2f/d83/d98/d59/d49/d77/ffc [0,4194304] 0
2026-03-09T15:02:01.835 INFO:tasks.workunit.client.0.vm05.stdout:1/915: chown d9/d2f/f43 272136265 1
2026-03-09T15:02:01.840 INFO:tasks.workunit.client.0.vm05.stdout:4/970: symlink d2/d4/d8/d4a/d94/l145 0
2026-03-09T15:02:01.850 INFO:tasks.workunit.client.0.vm05.stdout:7/958: link d1/d12/d137/l141 d1/d9/d23/d31/l147 0
2026-03-09T15:02:01.858 INFO:tasks.workunit.client.0.vm05.stdout:6/887: fdatasync da/d19/f35 0
2026-03-09T15:02:01.861 INFO:tasks.workunit.client.0.vm05.stdout:6/888: write da/d43/d7b/fd6 [1444332,91601] 0
2026-03-09T15:02:01.868 INFO:tasks.workunit.client.0.vm05.stdout:4/971: mkdir d2/d43/d12a/d146 0
2026-03-09T15:02:01.869 INFO:tasks.workunit.client.0.vm05.stdout:4/972: stat d2/d43/d12a/d88/f8b 0
2026-03-09T15:02:01.870 INFO:tasks.workunit.client.0.vm05.stdout:4/973: read d2/d43/f4f [2857056,76954] 0
2026-03-09T15:02:01.873 INFO:tasks.workunit.client.0.vm05.stdout:7/959: dread d1/d9/d23/d54/d7b/f7f [0,4194304] 0
2026-03-09T15:02:01.881 INFO:tasks.workunit.client.0.vm05.stdout:0/907: dwrite d9/de/d12/da3/dbc/fc4 [0,4194304] 0
2026-03-09T15:02:01.883 INFO:tasks.workunit.client.0.vm05.stdout:0/908: chown d9/d59/d70 20 1
2026-03-09T15:02:01.883 INFO:tasks.workunit.client.0.vm05.stdout:2/967: dwrite da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/fc6 [0,4194304] 0
2026-03-09T15:02:01.883 INFO:tasks.workunit.client.0.vm05.stdout:1/916: write d9/d2f/d37/d5a/da9/dc9/dcd/f96 [3814541,42643] 0
2026-03-09T15:02:01.887 INFO:tasks.workunit.client.0.vm05.stdout:2/968: fsync da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/f12b 0
2026-03-09T15:02:01.894 INFO:tasks.workunit.client.0.vm05.stdout:3/932: dwrite d3/d29/d7f/fcf [0,4194304] 0
2026-03-09T15:02:01.924 INFO:tasks.workunit.client.0.vm05.stdout:4/974: symlink d2/d43/d12a/df5/d97/l147 0
2026-03-09T15:02:01.929 INFO:tasks.workunit.client.0.vm05.stdout:7/960: symlink d1/d9/d23/d31/d8f/d93/dbd/dd1/l148 0
2026-03-09T15:02:01.944 INFO:tasks.workunit.client.0.vm05.stdout:2/969: unlink da/d29/d6a/da0/d91/dab/d2f/db3/lf2 0
2026-03-09T15:02:01.951 INFO:tasks.workunit.client.0.vm05.stdout:1/917: dread d9/d2f/d83/d98/d59/d49/d92/d75/f76 [0,4194304] 0
2026-03-09T15:02:01.952 INFO:tasks.workunit.client.0.vm05.stdout:0/909: write d9/f22 [1770700,16502] 0
2026-03-09T15:02:01.962 INFO:tasks.workunit.client.0.vm05.stdout:0/910: dread d9/de/d12/d15/d2e/d6b/fb8 [0,4194304] 0
2026-03-09T15:02:01.963 INFO:tasks.workunit.client.0.vm05.stdout:0/911: dread - d9/d59/d93/fd1 zero size
2026-03-09T15:02:01.966 INFO:tasks.workunit.client.0.vm05.stdout:0/912: dwrite d9/de/df1/f98 [4194304,4194304] 0
2026-03-09T15:02:01.981 INFO:tasks.workunit.client.0.vm05.stdout:8/979: rename d0/dc/f10e to d0/d1/f145 0
2026-03-09T15:02:01.985 INFO:tasks.workunit.client.0.vm05.stdout:4/975: truncate d2/d4/d1e/d71/fdf 430478 0
2026-03-09T15:02:01.986 INFO:tasks.workunit.client.0.vm05.stdout:4/976: dread - d2/d4/d1e/da2/f111 zero size
2026-03-09T15:02:01.987 INFO:tasks.workunit.client.0.vm05.stdout:7/961: stat d1/d9/d23/d31/d51/c5d 0
2026-03-09T15:02:01.991 INFO:tasks.workunit.client.0.vm05.stdout:1/918: symlink d9/d2f/d37/d5a/da9/dc9/dcd/df7/l12c 0
2026-03-09T15:02:01.996 INFO:tasks.workunit.client.0.vm05.stdout:3/933: fsync d3/d29/f41 0
2026-03-09T15:02:02.003 INFO:tasks.workunit.client.0.vm05.stdout:0/913: creat d9/de/d25/dae/f121 x:0 0 0
2026-03-09T15:02:02.005 INFO:tasks.workunit.client.0.vm05.stdout:8/980: dread - d0/d1/d12/d3c/d8b/d129/ff8 zero size
2026-03-09T15:02:02.009 INFO:tasks.workunit.client.0.vm05.stdout:2/970: mknod da/d29/d6a/da0/d91/dab/c12e 0
2026-03-09T15:02:02.012 INFO:tasks.workunit.client.0.vm05.stdout:1/919: chown d9/d2f/d83/d98/d59/d49/d78/d94/le7 15411 1
2026-03-09T15:02:02.013 INFO:tasks.workunit.client.0.vm05.stdout:1/920: stat d9/d2f/d83/d98/d59/d49/d78/d7e 0
2026-03-09T15:02:02.018 INFO:tasks.workunit.client.0.vm05.stdout:2/971: dread da/d29/d6a/f81 [0,4194304] 0
2026-03-09T15:02:02.021 INFO:tasks.workunit.client.0.vm05.stdout:6/889: link da/d17/l73 da/d43/l116 0
2026-03-09T15:02:02.030 INFO:tasks.workunit.client.0.vm05.stdout:2/972: dread da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/f6e [0,4194304] 0
2026-03-09T15:02:02.030 INFO:tasks.workunit.client.0.vm05.stdout:3/934: dwrite d3/df/d59/f75 [4194304,4194304] 0
2026-03-09T15:02:02.044 INFO:tasks.workunit.client.0.vm05.stdout:4/977: rename d2/d4/d1e/d71/fdf to d2/d4/d7/dc/da8/f148 0
2026-03-09T15:02:02.045 INFO:tasks.workunit.client.0.vm05.stdout:7/962: getdents d1/d9/d23/d11a/d145 0
2026-03-09T15:02:02.050 INFO:tasks.workunit.client.0.vm05.stdout:1/921: creat d9/d2f/d37/d5a/da9/dc9/dcd/df7/f12d x:0 0 0
2026-03-09T15:02:02.051 INFO:tasks.workunit.client.0.vm05.stdout:1/922: write d9/d2f/d83/d98/d59/d49/f82 [4670732,101797] 0
2026-03-09T15:02:02.052 INFO:tasks.workunit.client.0.vm05.stdout:1/923: write d9/d2f/d83/d98/d59/d49/d92/d75/f11f [36158,126430] 0
2026-03-09T15:02:02.057 INFO:tasks.workunit.client.0.vm05.stdout:6/890: creat da/d17/d95/da2/dae/f117 x:0 0 0
2026-03-09T15:02:02.058 INFO:tasks.workunit.client.0.vm05.stdout:6/891: write da/f10 [4618124,43759] 0
2026-03-09T15:02:02.062 INFO:tasks.workunit.client.0.vm05.stdout:0/914: mkdir d9/de/d12/d15/d2e/d32/d74/de9/d122 0
2026-03-09T15:02:02.067 INFO:tasks.workunit.client.0.vm05.stdout:3/935: unlink d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/f4c 0
2026-03-09T15:02:02.082 INFO:tasks.workunit.client.0.vm05.stdout:7/963: dread d1/d9/d23/d31/d8f/d93/dbd/ff1 [0,4194304] 0
2026-03-09T15:02:02.085 INFO:tasks.workunit.client.0.vm05.stdout:1/924: creat d9/d17/f12e x:0 0 0
2026-03-09T15:02:02.089 INFO:tasks.workunit.client.0.vm05.stdout:6/892: creat da/d43/d7b/da9/db7/f118 x:0 0 0
2026-03-09T15:02:02.093 INFO:tasks.workunit.client.0.vm05.stdout:8/981: creat d0/d1/d12/d1b/f146 x:0 0 0
2026-03-09T15:02:02.109 INFO:tasks.workunit.client.0.vm05.stdout:1/925: dread d9/d17/f81 [0,4194304] 0
2026-03-09T15:02:02.110 INFO:tasks.workunit.client.0.vm05.stdout:7/964: write d1/d9/d23/d31/fad [1173639,24532] 0
2026-03-09T15:02:02.110 INFO:tasks.workunit.client.0.vm05.stdout:7/965: write d1/d22/f102 [55290,37615] 0
2026-03-09T15:02:02.115 INFO:tasks.workunit.client.0.vm05.stdout:6/893: creat da/d43/d7b/da9/db7/f119 x:0 0 0
2026-03-09T15:02:02.117 INFO:tasks.workunit.client.0.vm05.stdout:0/915: fdatasync d9/de/d12/d15/fe1 0
2026-03-09T15:02:02.120 INFO:tasks.workunit.client.0.vm05.stdout:8/982: read d0/d24/d96/f118 [430048,111856] 0
2026-03-09T15:02:02.124 INFO:tasks.workunit.client.0.vm05.stdout:3/936: mkdir d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/dbd/dc2/d12a/d12e 0
2026-03-09T15:02:02.126 INFO:tasks.workunit.client.0.vm05.stdout:4/978: link d2/d4/d1e/da2/c128 d2/d49/c149 0
2026-03-09T15:02:02.133 INFO:tasks.workunit.client.0.vm05.stdout:3/937: dread d3/d29/d2d/f33 [0,4194304] 0
2026-03-09T15:02:02.140 INFO:tasks.workunit.client.0.vm05.stdout:0/916: read d9/de/d12/d15/d2e/d32/d53/d61/f10c [608437,18435] 0
2026-03-09T15:02:02.140 INFO:tasks.workunit.client.0.vm05.stdout:0/917: readlink d9/de/l31 0
2026-03-09T15:02:02.142 INFO:tasks.workunit.client.0.vm05.stdout:8/983: creat d0/d1/d12/d1b/d66/d11b/f147 x:0 0 0
2026-03-09T15:02:02.143 INFO:tasks.workunit.client.0.vm05.stdout:8/984: truncate d0/d1/d12/d3c/d8b/f12a 595812 0
2026-03-09T15:02:02.144 INFO:tasks.workunit.client.0.vm05.stdout:8/985: chown d0/d24/ca5 2880973 1
2026-03-09T15:02:02.146 INFO:tasks.workunit.client.0.vm05.stdout:2/973: getdents da/d29/d6a/da0/d91/dab/d2f/d35/d10b/dbf/dc8 0
2026-03-09T15:02:02.146 INFO:tasks.workunit.client.0.vm05.stdout:2/974: chown da/f10 1459 1
2026-03-09T15:02:02.147 INFO:tasks.workunit.client.0.vm05.stdout:2/975: dread - da/d29/d6a/da0/d91/dab/d2f/db3/df1/f11d zero size
2026-03-09T15:02:02.162 INFO:tasks.workunit.client.0.vm05.stdout:4/979: write d2/d4/d8/d4a/d6e/ff4 [995524,95601] 0
2026-03-09T15:02:02.181 INFO:tasks.workunit.client.0.vm05.stdout:6/894: rename da/d17/f107 to da/d17/d3b/dbd/d110/f11a 0
2026-03-09T15:02:02.181 INFO:tasks.workunit.client.0.vm05.stdout:6/895: chown da/d17/f20 10 1
2026-03-09T15:02:02.215 INFO:tasks.workunit.client.0.vm05.stdout:8/986: dwrite d0/d1/d12/d1b/d95/d78/dea/f109 [0,4194304] 0
2026-03-09T15:02:02.218 INFO:tasks.workunit.client.0.vm05.stdout:8/987: truncate d0/d1/d12/d3c/d8b/f12a 1135055 0
2026-03-09T15:02:02.220 INFO:tasks.workunit.client.0.vm05.stdout:2/976: dwrite da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/f85 [0,4194304] 0
2026-03-09T15:02:02.222 INFO:tasks.workunit.client.0.vm05.stdout:8/988: dwrite d0/f10 [0,4194304] 0
2026-03-09T15:02:02.242 INFO:tasks.workunit.client.0.vm05.stdout:8/989: dwrite d0/d1/d12/d1b/d95/d42/d60/da7/db3/f127 [0,4194304] 0
2026-03-09T15:02:02.267 INFO:tasks.workunit.client.0.vm05.stdout:7/966: creat d1/d9/d23/d31/d32/d78/f149 x:0 0 0
2026-03-09T15:02:02.273 INFO:tasks.workunit.client.0.vm05.stdout:4/980: symlink d2/d4/d1e/l14a 0
2026-03-09T15:02:02.298 INFO:tasks.workunit.client.0.vm05.stdout:2/977: mkdir da/d29/d6a/da0/d91/dab/d2f/d35/d10b/d119/d12f 0
2026-03-09T15:02:02.302 INFO:tasks.workunit.client.0.vm05.stdout:8/990: mknod d0/d24/d112/c148 0
2026-03-09T15:02:02.303 INFO:tasks.workunit.client.0.vm05.stdout:1/926: link d9/l20 d9/d10e/l12f 0
2026-03-09T15:02:02.303 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:01 vm05.local ceph-mon[50611]: Failed to find standby mgr for failover. Retrying in 8 seconds
2026-03-09T15:02:02.307 INFO:tasks.workunit.client.0.vm05.stdout:7/967: creat d1/d9/d23/d54/f14a x:0 0 0
2026-03-09T15:02:02.314 INFO:tasks.workunit.client.0.vm05.stdout:6/896: link da/d43/d7b/da9/db7/f119 da/d17/d3b/dbd/f11b 0
2026-03-09T15:02:02.323 INFO:tasks.workunit.client.0.vm05.stdout:4/981: dwrite d2/d4/d1e/da2/f109 [0,4194304] 0
2026-03-09T15:02:02.337 INFO:tasks.workunit.client.0.vm05.stdout:7/968: read d1/d22/fbc [1764212,118834] 0
2026-03-09T15:02:02.338 INFO:tasks.workunit.client.0.vm05.stdout:6/897: truncate da/d17/f61 1433620 0
2026-03-09T15:02:02.344 INFO:tasks.workunit.client.0.vm05.stdout:8/991: fdatasync d0/d1/d12/d1b/d95/dd7/dd2/dd8/d114/da8/ff0 0
2026-03-09T15:02:02.351 INFO:tasks.workunit.client.0.vm05.stdout:4/982: dwrite d2/d43/d12a/f36 [0,4194304] 0
2026-03-09T15:02:02.353 INFO:tasks.workunit.client.0.vm05.stdout:1/927: write d9/d2f/d37/d5f/da2/d11a/ff9 [40025,47657] 0
2026-03-09T15:02:02.362 INFO:tasks.workunit.client.0.vm05.stdout:3/938: rename d3/df/d1e/d2f/d52/f87 to d3/d29/d7f/f12f 0
2026-03-09T15:02:02.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:01 vm09.local ceph-mon[59673]: Failed to find standby mgr for failover. Retrying in 8 seconds
2026-03-09T15:02:02.369 INFO:tasks.workunit.client.0.vm05.stdout:0/918: rename d9/de/d25/dae/f121 to d9/de/d12/d15/d2e/d32/d53/d61/d104/da0/f123 0
2026-03-09T15:02:02.370 INFO:tasks.workunit.client.0.vm05.stdout:4/983: dread d2/d4/d8/d4a/d6e/f93 [0,4194304] 0
2026-03-09T15:02:02.372 INFO:tasks.workunit.client.0.vm05.stdout:3/939: creat d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/d90/d10a/f130 x:0 0 0
2026-03-09T15:02:02.375 INFO:tasks.workunit.client.0.vm05.stdout:3/940: dwrite d3/df/d59/d79/fa8 [0,4194304] 0
2026-03-09T15:02:02.386 INFO:tasks.workunit.client.0.vm05.stdout:2/978: rename da/d29/d6a/d7f/fa8 to da/d29/d3f/f130 0
2026-03-09T15:02:02.388 INFO:tasks.workunit.client.0.vm05.stdout:0/919: mknod d9/de/d12/d15/d2e/d32/d53/d61/d104/da0/c124 0
2026-03-09T15:02:02.388 INFO:tasks.workunit.client.0.vm05.stdout:0/920: readlink d9/l34 0
2026-03-09T15:02:02.390 INFO:tasks.workunit.client.0.vm05.stdout:4/984: creat d2/d49/f14b x:0 0 0
2026-03-09T15:02:02.392 INFO:tasks.workunit.client.0.vm05.stdout:7/969: creat d1/d22/f14b x:0 0 0
2026-03-09T15:02:02.396 INFO:tasks.workunit.client.0.vm05.stdout:2/979: chown da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/c55 352013 1
2026-03-09T15:02:02.398 INFO:tasks.workunit.client.0.vm05.stdout:4/985: truncate d2/d4/d7/d48/fc8 738195 0
2026-03-09T15:02:02.400 INFO:tasks.workunit.client.0.vm05.stdout:7/970: dread d1/d12/ff6 [0,4194304] 0
2026-03-09T15:02:02.400 INFO:tasks.workunit.client.0.vm05.stdout:7/971: chown d1/f62 651527239 1
2026-03-09T15:02:02.401 INFO:tasks.workunit.client.0.vm05.stdout:8/992: getdents d0/d1/d12/d1b/d66/d11b 0
2026-03-09T15:02:02.403 INFO:tasks.workunit.client.0.vm05.stdout:2/980: dread - da/d29/d6a/da0/d91/dab/d2f/d35/d8a/fc0 zero size
2026-03-09T15:02:02.413 INFO:tasks.workunit.client.0.vm05.stdout:0/921: rename d9/de/d12/d15/f5e to d9/de/d12/d15/d2e/d6b/f125 0
2026-03-09T15:02:02.413 INFO:tasks.workunit.client.0.vm05.stdout:7/972: rmdir d1/d12/d137 39
2026-03-09T15:02:02.413 INFO:tasks.workunit.client.0.vm05.stdout:3/941: creat d3/d29/d2d/f131 x:0 0 0
2026-03-09T15:02:02.413 INFO:tasks.workunit.client.0.vm05.stdout:8/993: mknod d0/d1/d12/d1b/d6e/d93/d9f/c149 0
2026-03-09T15:02:02.413 INFO:tasks.workunit.client.0.vm05.stdout:7/973: creat d1/d9/d23/d54/d7b/f14c x:0 0 0
2026-03-09T15:02:02.414 INFO:tasks.workunit.client.0.vm05.stdout:1/928: sync
2026-03-09T15:02:02.416 INFO:tasks.workunit.client.0.vm05.stdout:1/929: chown d9/d2f/d37/d5a/da9/dc9/dcd/f96 13124 1
2026-03-09T15:02:02.421 INFO:tasks.workunit.client.0.vm05.stdout:4/986: dread d2/d4/d50/d8a/fc3 [0,4194304] 0
2026-03-09T15:02:02.423 INFO:tasks.workunit.client.0.vm05.stdout:2/981: dread da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/f99 [0,4194304] 0
2026-03-09T15:02:02.425 INFO:tasks.workunit.client.0.vm05.stdout:6/898: write da/f57 [1864377,72617] 0
2026-03-09T15:02:02.433 INFO:tasks.workunit.client.0.vm05.stdout:0/922: mkdir d9/de/d25/d38/d126 0
2026-03-09T15:02:02.439 INFO:tasks.workunit.client.0.vm05.stdout:3/942: dwrite d3/df/d1e/d2c/d74/d78/d121/f26 [0,4194304] 0
2026-03-09T15:02:02.445 INFO:tasks.workunit.client.0.vm05.stdout:3/943: dread d3/df/d10/fae [0,4194304] 0
2026-03-09T15:02:02.452 INFO:tasks.workunit.client.0.vm05.stdout:8/994: write d0/d1/d97/fae [1645473,60394] 0
2026-03-09T15:02:02.456 INFO:tasks.workunit.client.0.vm05.stdout:1/930: rename d9/d2f/d83/l6b to d9/d2f/d55/dd0/l130 0
2026-03-09T15:02:02.456 INFO:tasks.workunit.client.0.vm05.stdout:1/931: chown d9/d2f/d83/d98/d59/d49/d78/d94 63629 1
2026-03-09T15:02:02.457 INFO:tasks.workunit.client.0.vm05.stdout:7/974: write d1/d22/d3c/fa7 [415475,52637] 0
2026-03-09T15:02:02.462 INFO:tasks.workunit.client.0.vm05.stdout:2/982: creat da/d29/d6a/da0/d91/dab/d2f/d35/d10b/dbf/dc8/f131 x:0 0 0
2026-03-09T15:02:02.467 INFO:tasks.workunit.client.0.vm05.stdout:3/944: rmdir d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/dbd/dc2 39
2026-03-09T15:02:02.468 INFO:tasks.workunit.client.0.vm05.stdout:3/945: stat d3/df/d1e/d2f/fb9 0
2026-03-09T15:02:02.472 INFO:tasks.workunit.client.0.vm05.stdout:6/899: dwrite da/d17/f44 [0,4194304] 0
2026-03-09T15:02:02.473 INFO:tasks.workunit.client.0.vm05.stdout:6/900: read - da/d17/d95/fce zero size
2026-03-09T15:02:02.484 INFO:tasks.workunit.client.0.vm05.stdout:8/995: rename d0/d1/d12/d1b/d66/dcc to d0/d14a 0
2026-03-09T15:02:02.486 INFO:tasks.workunit.client.0.vm05.stdout:7/975: truncate d1/de4/ffa 3520340 0
2026-03-09T15:02:02.486 INFO:tasks.workunit.client.0.vm05.stdout:7/976: readlink d1/d9/d23/d31/d8f/ld5 0
2026-03-09T15:02:02.486 INFO:tasks.workunit.client.0.vm05.stdout:7/977: chown d1/leb 4499 1
2026-03-09T15:02:02.487 INFO:tasks.workunit.client.0.vm05.stdout:7/978: chown d1/d9/d23/d31/d8f/d93/fb8 2 1
2026-03-09T15:02:02.488 INFO:tasks.workunit.client.0.vm05.stdout:7/979: readlink d1/d9/d23/d31/d8f/d93/dbd/l104 0
2026-03-09T15:02:02.489 INFO:tasks.workunit.client.0.vm05.stdout:2/983: read - da/d29/d6a/d7f/f102 zero size
2026-03-09T15:02:02.490 INFO:tasks.workunit.client.0.vm05.stdout:2/984: chown da/d29/d6a/da0/d91/dab/c56 34 1
2026-03-09T15:02:02.491 INFO:tasks.workunit.client.0.vm05.stdout:0/923: symlink d9/de/d25/d38/l127 0
2026-03-09T15:02:02.493 INFO:tasks.workunit.client.0.vm05.stdout:8/996: creat d0/d1/d12/d116/f14b x:0 0 0
2026-03-09T15:02:02.495 INFO:tasks.workunit.client.0.vm05.stdout:7/980: rename d1/c42 to d1/d9/d23/d31/d32/ddc/c14d 0
2026-03-09T15:02:02.495 INFO:tasks.workunit.client.0.vm05.stdout:7/981: fdatasync d1/d9/d23/d31/d32/f63 0
2026-03-09T15:02:02.497 INFO:tasks.workunit.client.0.vm05.stdout:2/985: mknod da/d29/d6a/da0/d91/dab/d2f/de7/c132 0
2026-03-09T15:02:02.500 INFO:tasks.workunit.client.0.vm05.stdout:3/946: creat d3/d29/d7f/d110/f132 x:0 0 0
2026-03-09T15:02:02.500 INFO:tasks.workunit.client.0.vm05.stdout:6/901: symlink da/d19/dd7/dfe/l11c 0
2026-03-09T15:02:02.501 INFO:tasks.workunit.client.0.vm05.stdout:8/997: mknod d0/d1/d12/d1b/d6e/d93/d9f/c14c 0
2026-03-09T15:02:02.503 INFO:tasks.workunit.client.0.vm05.stdout:4/987: write d2/f3e [1311994,11824] 0
2026-03-09T15:02:02.506 INFO:tasks.workunit.client.0.vm05.stdout:1/932: creat d9/d2f/d37/d5a/da9/dc9/f131 x:0 0 0
2026-03-09T15:02:02.509 INFO:tasks.workunit.client.0.vm05.stdout:1/933: dwrite d9/d17/f12e [0,4194304] 0
2026-03-09T15:02:02.514 INFO:tasks.workunit.client.0.vm05.stdout:1/934: dread d9/d2f/d37/d5a/da9/dc9/dcd/f6f [0,4194304] 0
2026-03-09T15:02:02.515 INFO:tasks.workunit.client.0.vm05.stdout:1/935: chown d9/d2f/d83/d98/d59/d49/cb8 57165639 1
2026-03-09T15:02:02.519 INFO:tasks.workunit.client.0.vm05.stdout:0/924: write d9/de/d12/d15/d2e/f9a [378201,30044] 0
2026-03-09T15:02:02.522 INFO:tasks.workunit.client.0.vm05.stdout:7/982: stat d1/d49/d4a/f129 0
2026-03-09T15:02:02.523 INFO:tasks.workunit.client.0.vm05.stdout:2/986: mknod da/d29/d6a/da0/d91/dab/d2f/db3/c133 0
2026-03-09T15:02:02.527 INFO:tasks.workunit.client.0.vm05.stdout:6/902: unlink da/d17/d3b/dbd/c101 0
2026-03-09T15:02:02.535 INFO:tasks.workunit.client.0.vm05.stdout:1/936: fsync d9/d2f/d83/d98/d59/d49/f51 0
2026-03-09T15:02:02.535 INFO:tasks.workunit.client.0.vm05.stdout:2/987: rename da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/f99 to da/d29/d6a/da0/d91/dab/d2f/d35/d8a/f134 0
2026-03-09T15:02:02.536 INFO:tasks.workunit.client.0.vm05.stdout:6/903: unlink da/d17/d95/da2/fb6 0
2026-03-09T15:02:02.539 INFO:tasks.workunit.client.0.vm05.stdout:1/937: mkdir d9/d2f/d83/d98/d59/d49/d4b/d132 0
2026-03-09T15:02:02.545 INFO:tasks.workunit.client.0.vm05.stdout:2/988: creat da/d29/d6a/da0/d91/dab/d2f/d35/d10b/dbf/dc8/f135 x:0 0 0
2026-03-09T15:02:02.547 INFO:tasks.workunit.client.0.vm05.stdout:0/925: write d9/de/d12/da3/dbc/fbe [436951,43743] 0
2026-03-09T15:02:02.550 INFO:tasks.workunit.client.0.vm05.stdout:8/998: write d0/d1/d12/d1b/d6e/d93/d9f/f13f [326680,88109] 0
2026-03-09T15:02:02.553 INFO:tasks.workunit.client.0.vm05.stdout:6/904: creat da/d43/d7b/de0/f11d x:0 0 0
2026-03-09T15:02:02.558 INFO:tasks.workunit.client.0.vm05.stdout:1/938: fsync d9/d2f/d83/d98/d59/fd4 0
2026-03-09T15:02:02.559 INFO:tasks.workunit.client.0.vm05.stdout:7/983: link d1/d9/d72/cb2 d1/d9/d72/d10c/c14e 0
2026-03-09T15:02:02.561 INFO:tasks.workunit.client.0.vm05.stdout:2/989: unlink da/d29/d6a/da0/d7c/l90 0
2026-03-09T15:02:02.569 INFO:tasks.workunit.client.0.vm05.stdout:8/999: dread d0/d1/d12/d1b/f89 [0,4194304] 0
2026-03-09T15:02:02.570 INFO:tasks.workunit.client.0.vm05.stdout:3/947: link d3/d29/l7a d3/df/d1e/d2c/d74/d78/d121/d44/d50/l133 0
2026-03-09T15:02:02.572 INFO:tasks.workunit.client.0.vm05.stdout:6/905: mknod da/d43/d66/c11e 0
2026-03-09T15:02:02.572 INFO:tasks.workunit.client.0.vm05.stdout:6/906: dread - da/d17/d3b/dbd/f11b zero size
2026-03-09T15:02:02.573 INFO:tasks.workunit.client.0.vm05.stdout:4/988: getdents d2/d43/d12a/d88/d92/d121 0
2026-03-09T15:02:02.574 INFO:tasks.workunit.client.0.vm05.stdout:4/989: chown d2/d4/d50/d8a/d101 0 1
2026-03-09T15:02:02.576 INFO:tasks.workunit.client.0.vm05.stdout:1/939: symlink d9/d97/l133 0
2026-03-09T15:02:02.577 INFO:tasks.workunit.client.0.vm05.stdout:7/984: creat d1/de4/f14f x:0 0 0
2026-03-09T15:02:02.579 INFO:tasks.workunit.client.0.vm05.stdout:0/926: mknod d9/de/d12/d11f/c128 0
2026-03-09T15:02:02.580 INFO:tasks.workunit.client.0.vm05.stdout:3/948: creat d3/df/d1e/d2c/d74/d78/f134 x:0 0 0
2026-03-09T15:02:02.583 INFO:tasks.workunit.client.0.vm05.stdout:6/907: rmdir da/d17/d95/da2/dae 39
2026-03-09T15:02:02.584 INFO:tasks.workunit.client.0.vm05.stdout:4/990: truncate d2/d4/d7/dc/fb9 758215 0
2026-03-09T15:02:02.587 INFO:tasks.workunit.client.0.vm05.stdout:4/991: dwrite d2/d7a/fbf [0,4194304] 0
2026-03-09T15:02:02.608 INFO:tasks.workunit.client.0.vm05.stdout:7/985: dread - d1/d9/d23/d31/d8f/d93/dbd/fea zero size
2026-03-09T15:02:02.610 INFO:tasks.workunit.client.0.vm05.stdout:0/927: dread - d9/de/d25/dae/de6/faf zero size
2026-03-09T15:02:02.612 INFO:tasks.workunit.client.0.vm05.stdout:3/949: write d3/df/d1e/d2c/d74/d78/d121/d44/f60 [505320,100208] 0
2026-03-09T15:02:02.613 INFO:tasks.workunit.client.0.vm05.stdout:3/950: chown d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/d90/cb7 67071726 1
2026-03-09T15:02:02.614 INFO:tasks.workunit.client.0.vm05.stdout:3/951: readlink d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/dbd/l12b 0
2026-03-09T15:02:02.614 INFO:tasks.workunit.client.0.vm05.stdout:3/952: readlink d3/df/d1e/d2f/d52/l8a 0
2026-03-09T15:02:02.617 INFO:tasks.workunit.client.0.vm05.stdout:0/928: dread d9/de/df1/fb4 [0,4194304] 0
2026-03-09T15:02:02.623 INFO:tasks.workunit.client.0.vm05.stdout:7/986: rename d1/d49/d4a/f6b to d1/d49/d4a/d94/ddb/d140/f150 0
2026-03-09T15:02:02.630 INFO:tasks.workunit.client.0.vm05.stdout:1/940: write d9/d2f/d83/d98/d59/fbc [36684,85586] 0
2026-03-09T15:02:02.630 INFO:tasks.workunit.client.0.vm05.stdout:2/990: truncate da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/fc6 4096372 0
2026-03-09T15:02:02.634 INFO:tasks.workunit.client.0.vm05.stdout:6/908: read da/d17/d95/da2/dae/fdd [4402852,46108] 0
2026-03-09T15:02:02.639 INFO:tasks.workunit.client.0.vm05.stdout:4/992: dwrite d2/d4/d8/d4a/d6e/f93 [0,4194304] 0
2026-03-09T15:02:02.641 INFO:tasks.workunit.client.0.vm05.stdout:4/993: readlink d2/d4/d7/dc/da8/l118 0
2026-03-09T15:02:02.671 INFO:tasks.workunit.client.0.vm05.stdout:1/941: write d9/d17/f81 [3799275,123535] 0
2026-03-09T15:02:02.672 INFO:tasks.workunit.client.0.vm05.stdout:1/942: write d9/d2f/d83/d98/d59/d49/d77/ffc [7757939,91441] 0
2026-03-09T15:02:02.680 INFO:tasks.workunit.client.0.vm05.stdout:1/943: dread d9/d2f/d83/d98/d59/df8/ffa [0,4194304] 0
2026-03-09T15:02:02.680 INFO:tasks.workunit.client.0.vm05.stdout:1/944: chown d9/d2f/d83/lca 1981754 1
2026-03-09T15:02:02.685 INFO:tasks.workunit.client.0.vm05.stdout:6/909: symlink da/d17/d3b/dbd/d110/l11f 0
2026-03-09T15:02:02.691 INFO:tasks.workunit.client.0.vm05.stdout:0/929: mknod d9/de/d25/d38/c129 0
2026-03-09T15:02:02.695 INFO:tasks.workunit.client.0.vm05.stdout:7/987: mknod 
d1/d22/d143/c151 0 2026-03-09T15:02:02.709 INFO:tasks.workunit.client.0.vm05.stdout:2/991: write da/d29/d6a/da0/d91/dab/d2f/db3/df1/d114/d46/f73 [228152,62962] 0 2026-03-09T15:02:02.709 INFO:tasks.workunit.client.0.vm05.stdout:1/945: creat d9/d2f/d37/d101/f134 x:0 0 0 2026-03-09T15:02:02.711 INFO:tasks.workunit.client.0.vm05.stdout:1/946: read d9/d2f/d83/d98/d87/ff5 [539971,63581] 0 2026-03-09T15:02:02.719 INFO:tasks.workunit.client.0.vm05.stdout:4/994: write d2/f1b [954421,76443] 0 2026-03-09T15:02:02.720 INFO:tasks.workunit.client.0.vm05.stdout:6/910: dwrite da/d17/d3b/f85 [0,4194304] 0 2026-03-09T15:02:02.733 INFO:tasks.workunit.client.0.vm05.stdout:3/953: getdents d3/d29/d7f/d110 0 2026-03-09T15:02:02.734 INFO:tasks.workunit.client.0.vm05.stdout:3/954: fsync d3/df/d1e/d2c/d74/d78/d121/d44/f56 0 2026-03-09T15:02:02.746 INFO:tasks.workunit.client.0.vm05.stdout:0/930: write d9/de/d6a/fe4 [2190381,118285] 0 2026-03-09T15:02:02.749 INFO:tasks.workunit.client.0.vm05.stdout:6/911: truncate da/d43/d7b/fc8 313344 0 2026-03-09T15:02:02.751 INFO:tasks.workunit.client.0.vm05.stdout:7/988: truncate d1/d12/f118 59041 0 2026-03-09T15:02:02.761 INFO:tasks.workunit.client.0.vm05.stdout:2/992: link da/d29/d6a/da0/dd9/f109 da/d29/d6a/da0/d7c/f136 0 2026-03-09T15:02:02.765 INFO:tasks.workunit.client.0.vm05.stdout:2/993: dwrite da/d29/d6a/da0/d91/dab/dd6/f104 [0,4194304] 0 2026-03-09T15:02:02.766 INFO:tasks.workunit.client.0.vm05.stdout:2/994: dread - da/d29/d6a/da0/d7c/f115 zero size 2026-03-09T15:02:02.779 INFO:tasks.workunit.client.0.vm05.stdout:4/995: link d2/d4/d8/ccf d2/d4/d50/d8a/c14c 0 2026-03-09T15:02:02.785 INFO:tasks.workunit.client.0.vm05.stdout:6/912: getdents da/d9e 0 2026-03-09T15:02:02.785 INFO:tasks.workunit.client.0.vm05.stdout:4/996: dread d2/f3e [0,4194304] 0 2026-03-09T15:02:02.786 INFO:tasks.workunit.client.0.vm05.stdout:1/947: write d9/d2f/d83/d98/d59/fe6 [1342437,70229] 0 2026-03-09T15:02:02.787 INFO:tasks.workunit.client.0.vm05.stdout:0/931: write 
d9/de/d12/d15/fbb [1270141,113872] 0 2026-03-09T15:02:02.793 INFO:tasks.workunit.client.0.vm05.stdout:0/932: dread d9/de/d12/d15/d2e/d32/d53/d61/d104/fa9 [0,4194304] 0 2026-03-09T15:02:02.798 INFO:tasks.workunit.client.0.vm05.stdout:3/955: dwrite d3/f7 [4194304,4194304] 0 2026-03-09T15:02:02.798 INFO:tasks.workunit.client.0.vm05.stdout:4/997: dread d2/d49/f4d [0,4194304] 0 2026-03-09T15:02:02.814 INFO:tasks.workunit.client.0.vm05.stdout:0/933: unlink d9/d59/cec 0 2026-03-09T15:02:02.817 INFO:tasks.workunit.client.0.vm05.stdout:3/956: unlink d3/df/d1e/d2c/d74/d78/d121/d44/f56 0 2026-03-09T15:02:02.818 INFO:tasks.workunit.client.0.vm05.stdout:3/957: chown d3/d29/d2d/d77 1 1 2026-03-09T15:02:02.818 INFO:tasks.workunit.client.0.vm05.stdout:7/989: rmdir d1/d22/da4/d134 0 2026-03-09T15:02:02.822 INFO:tasks.workunit.client.0.vm05.stdout:2/995: link da/dd/f5d da/d29/d6a/da0/d91/dab/d2f/db3/f137 0 2026-03-09T15:02:02.822 INFO:tasks.workunit.client.0.vm05.stdout:2/996: chown da/d29 0 1 2026-03-09T15:02:02.824 INFO:tasks.workunit.client.0.vm05.stdout:6/913: mkdir da/d17/d95/d120 0 2026-03-09T15:02:02.831 INFO:tasks.workunit.client.0.vm05.stdout:4/998: dwrite d2/d4/f4e [0,4194304] 0 2026-03-09T15:02:02.834 INFO:tasks.workunit.client.0.vm05.stdout:7/990: symlink d1/d49/d4a/l152 0 2026-03-09T15:02:02.842 INFO:tasks.workunit.client.0.vm05.stdout:3/958: getdents d3/d29/d7f/d110 0 2026-03-09T15:02:02.852 INFO:tasks.workunit.client.0.vm05.stdout:7/991: symlink d1/d49/d4a/d94/l153 0 2026-03-09T15:02:02.852 INFO:tasks.workunit.client.0.vm05.stdout:6/914: truncate da/d17/d3b/ffa 1195322 0 2026-03-09T15:02:02.852 INFO:tasks.workunit.client.0.vm05.stdout:0/934: dwrite d9/de/df1/fb4 [0,4194304] 0 2026-03-09T15:02:02.852 INFO:tasks.workunit.client.0.vm05.stdout:0/935: stat d9/de/l2a 0 2026-03-09T15:02:02.855 INFO:tasks.workunit.client.0.vm05.stdout:3/959: rename d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/dbd to d3/df/d1e/d2c/d74/d78/d135 0 2026-03-09T15:02:02.856 
INFO:tasks.workunit.client.0.vm05.stdout:7/992: creat d1/d9/d72/d10c/f154 x:0 0 0 2026-03-09T15:02:02.859 INFO:tasks.workunit.client.0.vm05.stdout:1/948: link d9/fea d9/d2f/d37/d5f/f135 0 2026-03-09T15:02:02.863 INFO:tasks.workunit.client.0.vm05.stdout:2/997: getdents da/d29/d6a/da0/d105 0 2026-03-09T15:02:02.866 INFO:tasks.workunit.client.0.vm05.stdout:0/936: dread d9/de/d25/d38/f55 [0,4194304] 0 2026-03-09T15:02:02.872 INFO:tasks.workunit.client.0.vm05.stdout:4/999: getdents d2/d4/d7/dc 0 2026-03-09T15:02:02.872 INFO:tasks.workunit.client.0.vm05.stdout:2/998: creat da/d29/d3f/f138 x:0 0 0 2026-03-09T15:02:02.877 INFO:tasks.workunit.client.0.vm05.stdout:2/999: dread da/d29/d6a/f81 [0,4194304] 0 2026-03-09T15:02:02.879 INFO:tasks.workunit.client.0.vm05.stdout:7/993: dread d1/d12/fa8 [0,4194304] 0 2026-03-09T15:02:02.885 INFO:tasks.workunit.client.0.vm05.stdout:3/960: write d3/d29/d2d/f33 [1198621,60256] 0 2026-03-09T15:02:02.887 INFO:tasks.workunit.client.0.vm05.stdout:3/961: truncate d3/df/dbe/f117 499326 0 2026-03-09T15:02:02.887 INFO:tasks.workunit.client.0.vm05.stdout:7/994: rename d1/d9/f75 to d1/d9/d23/d31/d32/d78/dbb/f155 0 2026-03-09T15:02:02.889 INFO:tasks.workunit.client.0.vm05.stdout:6/915: dwrite da/d19/d106/f114 [0,4194304] 0 2026-03-09T15:02:02.895 INFO:tasks.workunit.client.0.vm05.stdout:6/916: dwrite da/d17/d3b/dbd/d110/f11a [0,4194304] 0 2026-03-09T15:02:02.912 INFO:tasks.workunit.client.0.vm05.stdout:7/995: mkdir d1/d9/d23/d11a/d156 0 2026-03-09T15:02:02.915 INFO:tasks.workunit.client.0.vm05.stdout:6/917: readlink da/lba 0 2026-03-09T15:02:02.916 INFO:tasks.workunit.client.0.vm05.stdout:1/949: getdents d9/d2f/d83/d98/d59/d49/d4b/d116 0 2026-03-09T15:02:02.917 INFO:tasks.workunit.client.0.vm05.stdout:1/950: write d9/d2f/d83/d98/d59/d49/d92/d75/f124 [562224,106843] 0 2026-03-09T15:02:02.919 INFO:tasks.workunit.client.0.vm05.stdout:6/918: rename da/d19/dd7/dfe/d115 to da/d19/d106/d121 0 2026-03-09T15:02:02.921 
INFO:tasks.workunit.client.0.vm05.stdout:6/919: stat da/d43/d7b/da9/l100 0 2026-03-09T15:02:02.922 INFO:tasks.workunit.client.0.vm05.stdout:1/951: mknod d9/d97/d127/c136 0 2026-03-09T15:02:02.925 INFO:tasks.workunit.client.0.vm05.stdout:1/952: creat d9/d2f/d83/d98/d59/d49/d92/d75/f137 x:0 0 0 2026-03-09T15:02:02.925 INFO:tasks.workunit.client.0.vm05.stdout:7/996: getdents d1/d9/d23/d31/d8f/d93/dbd 0 2026-03-09T15:02:02.926 INFO:tasks.workunit.client.0.vm05.stdout:7/997: dread - d1/d49/f142 zero size 2026-03-09T15:02:02.927 INFO:tasks.workunit.client.0.vm05.stdout:6/920: fsync da/d17/d95/da2/fa3 0 2026-03-09T15:02:02.928 INFO:tasks.workunit.client.0.vm05.stdout:1/953: creat d9/d2f/d37/d101/f138 x:0 0 0 2026-03-09T15:02:02.930 INFO:tasks.workunit.client.0.vm05.stdout:6/921: creat da/d43/d7b/db3/f122 x:0 0 0 2026-03-09T15:02:02.933 INFO:tasks.workunit.client.0.vm05.stdout:7/998: link d1/d9/d23/d31/d8f/d93/d95/lbf d1/d9/d23/d31/d8f/l157 0 2026-03-09T15:02:02.935 INFO:tasks.workunit.client.0.vm05.stdout:1/954: getdents d9/d2f/d37/d5f/da2 0 2026-03-09T15:02:02.941 INFO:tasks.workunit.client.0.vm05.stdout:1/955: readlink d9/lfb 0 2026-03-09T15:02:02.941 INFO:tasks.workunit.client.0.vm05.stdout:6/922: link da/d17/l93 da/l123 0 2026-03-09T15:02:02.941 INFO:tasks.workunit.client.0.vm05.stdout:1/956: creat d9/d2f/d37/d101/f139 x:0 0 0 2026-03-09T15:02:02.941 INFO:tasks.workunit.client.0.vm05.stdout:1/957: truncate d9/d2f/d37/d5a/da9/dc9/dcd/f128 338271 0 2026-03-09T15:02:02.941 INFO:tasks.workunit.client.0.vm05.stdout:1/958: mkdir d9/d10e/d13a 0 2026-03-09T15:02:02.942 INFO:tasks.workunit.client.0.vm05.stdout:1/959: creat d9/d2f/d55/f13b x:0 0 0 2026-03-09T15:02:02.944 INFO:tasks.workunit.client.0.vm05.stdout:1/960: creat d9/d2f/d37/d5a/da9/dc9/dcd/df7/f13c x:0 0 0 2026-03-09T15:02:02.947 INFO:tasks.workunit.client.0.vm05.stdout:1/961: dread d9/f21 [0,4194304] 0 2026-03-09T15:02:02.949 INFO:tasks.workunit.client.0.vm05.stdout:6/923: link da/cc da/d17/d95/da2/c124 0 
2026-03-09T15:02:02.949 INFO:tasks.workunit.client.0.vm05.stdout:6/924: chown da/f82 2919 1 2026-03-09T15:02:02.951 INFO:tasks.workunit.client.0.vm05.stdout:0/937: sync 2026-03-09T15:02:02.956 INFO:tasks.workunit.client.0.vm05.stdout:6/925: mknod da/d17/d3b/dbd/dee/df6/c125 0 2026-03-09T15:02:02.957 INFO:tasks.workunit.client.0.vm05.stdout:1/962: link d9/d17/c85 d9/d2f/d37/d101/dd3/c13d 0 2026-03-09T15:02:02.958 INFO:tasks.workunit.client.0.vm05.stdout:6/926: stat da/d17/f30 0 2026-03-09T15:02:02.964 INFO:tasks.workunit.client.0.vm05.stdout:3/962: write d3/df/d1e/d2f/d52/f95 [2403181,80495] 0 2026-03-09T15:02:02.968 INFO:tasks.workunit.client.0.vm05.stdout:6/927: readlink da/d17/l93 0 2026-03-09T15:02:02.972 INFO:tasks.workunit.client.0.vm05.stdout:7/999: chown d1/d9/d23/d31/d8f/d93/dbd/dd1/l148 56 1 2026-03-09T15:02:02.974 INFO:tasks.workunit.client.0.vm05.stdout:1/963: link d9/d2f/d37/d101/dd3/c13d d9/d2f/d83/d98/d59/d49/d92/c13e 0 2026-03-09T15:02:02.976 INFO:tasks.workunit.client.0.vm05.stdout:1/964: readlink d9/d10e/l12f 0 2026-03-09T15:02:02.978 INFO:tasks.workunit.client.0.vm05.stdout:0/938: write d9/de/d12/d15/d2e/d6b/dbf/ff5 [976846,117472] 0 2026-03-09T15:02:02.978 INFO:tasks.workunit.client.0.vm05.stdout:0/939: chown d9/l94 16404712 1 2026-03-09T15:02:02.982 INFO:tasks.workunit.client.0.vm05.stdout:1/965: rmdir d9/d2f/d83/d98/d87 39 2026-03-09T15:02:02.983 INFO:tasks.workunit.client.0.vm05.stdout:1/966: write d9/d2f/d55/f13b [661693,12847] 0 2026-03-09T15:02:02.990 INFO:tasks.workunit.client.0.vm05.stdout:6/928: write da/d17/d7c/fea [578551,80478] 0 2026-03-09T15:02:02.993 INFO:tasks.workunit.client.0.vm05.stdout:3/963: truncate d3/df/f1b 6476584 0 2026-03-09T15:02:02.996 INFO:tasks.workunit.client.0.vm05.stdout:0/940: write d9/de/d12/d15/fa5 [988808,106623] 0 2026-03-09T15:02:03.001 INFO:tasks.workunit.client.0.vm05.stdout:1/967: dwrite d9/d2f/f43 [0,4194304] 0 2026-03-09T15:02:03.003 INFO:tasks.workunit.client.0.vm05.stdout:1/968: dread - 
d9/d2f/d83/d98/d59/d49/d78/dbd/f125 zero size 2026-03-09T15:02:03.010 INFO:tasks.workunit.client.0.vm05.stdout:0/941: truncate d9/de/d6a/fb3 588479 0 2026-03-09T15:02:03.013 INFO:tasks.workunit.client.0.vm05.stdout:6/929: dwrite da/d19/f7e [0,4194304] 0 2026-03-09T15:02:03.016 INFO:tasks.workunit.client.0.vm05.stdout:3/964: link d3/df/d1e/d2f/d52/f95 d3/df/d1e/d2c/d74/d78/d121/dce/dc8/f136 0 2026-03-09T15:02:03.016 INFO:tasks.workunit.client.0.vm05.stdout:3/965: chown d3/df/c116 25 1 2026-03-09T15:02:03.025 INFO:tasks.workunit.client.0.vm05.stdout:0/942: unlink d9/de/d25/d38/d78/fd9 0 2026-03-09T15:02:03.026 INFO:tasks.workunit.client.0.vm05.stdout:0/943: chown d9/de/d12/d15/ddf 20 1 2026-03-09T15:02:03.026 INFO:tasks.workunit.client.0.vm05.stdout:0/944: readlink d9/de/l57 0 2026-03-09T15:02:03.029 INFO:tasks.workunit.client.0.vm05.stdout:0/945: dread d9/de/d12/d15/d2e/d6b/dbf/ff5 [0,4194304] 0 2026-03-09T15:02:03.031 INFO:tasks.workunit.client.0.vm05.stdout:1/969: dread d9/d2f/d83/fa3 [0,4194304] 0 2026-03-09T15:02:03.035 INFO:tasks.workunit.client.0.vm05.stdout:0/946: symlink d9/de/d12/d15/l12a 0 2026-03-09T15:02:03.037 INFO:tasks.workunit.client.0.vm05.stdout:0/947: mkdir d9/de/d12/d15/d2e/d32/d53/d61/d12b 0 2026-03-09T15:02:03.038 INFO:tasks.workunit.client.0.vm05.stdout:1/970: getdents d9/d2f/d55/dd0 0 2026-03-09T15:02:03.046 INFO:tasks.workunit.client.0.vm05.stdout:1/971: dwrite d9/d2f/d55/fb0 [0,4194304] 0 2026-03-09T15:02:03.047 INFO:tasks.workunit.client.0.vm05.stdout:1/972: chown d9/d2f/d37/l114 1251387 1 2026-03-09T15:02:03.055 INFO:tasks.workunit.client.0.vm05.stdout:1/973: chown d9/cf 16 1 2026-03-09T15:02:03.057 INFO:tasks.workunit.client.0.vm05.stdout:6/930: dwrite da/d17/d95/da2/dae/fef [0,4194304] 0 2026-03-09T15:02:03.061 INFO:tasks.workunit.client.0.vm05.stdout:3/966: dwrite d3/df/d1e/d2c/d74/d78/ff1 [0,4194304] 0 2026-03-09T15:02:03.074 INFO:tasks.workunit.client.0.vm05.stdout:0/948: dwrite d9/de/d12/f23 [4194304,4194304] 0 
2026-03-09T15:02:03.082 INFO:tasks.workunit.client.0.vm05.stdout:1/974: mkdir d9/d10e/d13f 0 2026-03-09T15:02:03.083 INFO:tasks.workunit.client.0.vm05.stdout:1/975: write d9/d2f/d37/d101/fde [552066,9402] 0 2026-03-09T15:02:03.083 INFO:tasks.workunit.client.0.vm05.stdout:6/931: creat da/d19/d106/f126 x:0 0 0 2026-03-09T15:02:03.093 INFO:tasks.workunit.client.0.vm05.stdout:0/949: creat d9/de/d12/d15/d2e/d32/f12c x:0 0 0 2026-03-09T15:02:03.093 INFO:tasks.workunit.client.0.vm05.stdout:0/950: write d9/de/d12/da3/fb2 [1937686,25741] 0 2026-03-09T15:02:03.098 INFO:tasks.workunit.client.0.vm05.stdout:0/951: creat d9/de/d12/d15/d2e/d32/f12d x:0 0 0 2026-03-09T15:02:03.098 INFO:tasks.workunit.client.0.vm05.stdout:0/952: mknod d9/de/c12e 0 2026-03-09T15:02:03.100 INFO:tasks.workunit.client.0.vm05.stdout:0/953: rename d9/de/d25/lce to d9/d59/d70/l12f 0 2026-03-09T15:02:03.120 INFO:tasks.workunit.client.0.vm05.stdout:6/932: write da/d17/d95/fd5 [1559609,11409] 0 2026-03-09T15:02:03.120 INFO:tasks.workunit.client.0.vm05.stdout:3/967: dwrite d3/df/d1e/d2c/d74/d78/d121/db2/d102/f105 [0,4194304] 0 2026-03-09T15:02:03.120 INFO:tasks.workunit.client.0.vm05.stdout:1/976: write d9/d2f/d37/d101/f104 [466517,13611] 0 2026-03-09T15:02:03.127 INFO:tasks.workunit.client.0.vm05.stdout:3/968: dread d3/df/d1e/d2c/d74/d78/ff1 [0,4194304] 0 2026-03-09T15:02:03.131 INFO:tasks.workunit.client.0.vm05.stdout:0/954: write d9/f82 [1533351,17095] 0 2026-03-09T15:02:03.132 INFO:tasks.workunit.client.0.vm05.stdout:0/955: stat d9/de/d12/d15/ddf/lf8 0 2026-03-09T15:02:03.135 INFO:tasks.workunit.client.0.vm05.stdout:3/969: dread d3/df/d10/fae [0,4194304] 0 2026-03-09T15:02:03.138 INFO:tasks.workunit.client.0.vm05.stdout:1/977: mkdir d9/d2f/d83/d98/d59/d49/d78/d12b/d140 0 2026-03-09T15:02:03.140 INFO:tasks.workunit.client.0.vm05.stdout:6/933: creat da/d19/f127 x:0 0 0 2026-03-09T15:02:03.142 INFO:tasks.workunit.client.0.vm05.stdout:3/970: truncate d3/df/d1e/d2c/d74/d78/d121/dce/dc8/fd1 1241619 0 
2026-03-09T15:02:03.142 INFO:tasks.workunit.client.0.vm05.stdout:3/971: chown d3 233170553 1 2026-03-09T15:02:03.144 INFO:tasks.workunit.client.0.vm05.stdout:6/934: dwrite da/d19/f35 [0,4194304] 0 2026-03-09T15:02:03.146 INFO:tasks.workunit.client.0.vm05.stdout:6/935: truncate da/d19/dd7/dfe/da8/f104 675020 0 2026-03-09T15:02:03.157 INFO:tasks.workunit.client.0.vm05.stdout:0/956: mkdir d9/de/d12/d15/d2e/d32/d74/de9/d122/d130 0 2026-03-09T15:02:03.157 INFO:tasks.workunit.client.0.vm05.stdout:1/978: rmdir d9/d2f/d83/d98/d59/d49 39 2026-03-09T15:02:03.157 INFO:tasks.workunit.client.0.vm05.stdout:0/957: dread - d9/de/f7f zero size 2026-03-09T15:02:03.158 INFO:tasks.workunit.client.0.vm05.stdout:3/972: mkdir d3/df/d1e/d2c/d74/d78/d137 0 2026-03-09T15:02:03.162 INFO:tasks.workunit.client.0.vm05.stdout:1/979: rmdir d9/db9 39 2026-03-09T15:02:03.163 INFO:tasks.workunit.client.0.vm05.stdout:3/973: creat d3/df/dbe/f138 x:0 0 0 2026-03-09T15:02:03.165 INFO:tasks.workunit.client.0.vm05.stdout:0/958: creat d9/de/d12/d15/d2e/d32/d53/d61/d104/dd3/f131 x:0 0 0 2026-03-09T15:02:03.167 INFO:tasks.workunit.client.0.vm05.stdout:3/974: read d3/f17 [2665290,85662] 0 2026-03-09T15:02:03.168 INFO:tasks.workunit.client.0.vm05.stdout:6/936: link da/d17/d95/da2/dae/dd9/lf5 da/d17/d3b/dbd/dee/l128 0 2026-03-09T15:02:03.169 INFO:tasks.workunit.client.0.vm05.stdout:1/980: mkdir d9/d2f/d37/d5f/d141 0 2026-03-09T15:02:03.170 INFO:tasks.workunit.client.0.vm05.stdout:0/959: creat d9/de/d25/dcf/dbd/f132 x:0 0 0 2026-03-09T15:02:03.172 INFO:tasks.workunit.client.0.vm05.stdout:6/937: mkdir da/d19/dd7/dfe/db8/d129 0 2026-03-09T15:02:03.173 INFO:tasks.workunit.client.0.vm05.stdout:6/938: readlink da/d19/dd7/dfe/la5 0 2026-03-09T15:02:03.175 INFO:tasks.workunit.client.0.vm05.stdout:3/975: dread d3/df/d1e/f2b [0,4194304] 0 2026-03-09T15:02:03.191 INFO:tasks.workunit.client.0.vm05.stdout:0/960: unlink d9/de/d12/d15/d2e/c89 0 2026-03-09T15:02:03.191 INFO:tasks.workunit.client.0.vm05.stdout:0/961: stat 
d9/de/f1e 0 2026-03-09T15:02:03.200 INFO:tasks.workunit.client.0.vm05.stdout:6/939: unlink da/d43/d7b/da9/fc9 0 2026-03-09T15:02:03.207 INFO:tasks.workunit.client.0.vm05.stdout:1/981: fdatasync d9/d2f/d83/d98/d59/d49/d4b/f8e 0 2026-03-09T15:02:03.212 INFO:tasks.workunit.client.0.vm05.stdout:0/962: creat d9/de/df1/f133 x:0 0 0 2026-03-09T15:02:03.245 INFO:tasks.workunit.client.0.vm05.stdout:6/940: dwrite da/d43/d7b/db3/ff1 [0,4194304] 0 2026-03-09T15:02:03.245 INFO:tasks.workunit.client.0.vm05.stdout:1/982: dread d9/d2f/d37/f122 [0,4194304] 0 2026-03-09T15:02:03.254 INFO:tasks.workunit.client.0.vm05.stdout:3/976: truncate d3/df/d59/fcc 1379148 0 2026-03-09T15:02:03.254 INFO:tasks.workunit.client.0.vm05.stdout:0/963: write d9/de/df1/deb/ffe [7464734,56963] 0 2026-03-09T15:02:03.259 INFO:tasks.workunit.client.0.vm05.stdout:3/977: mknod d3/df/d1e/d2c/d74/d78/d121/db5/c139 0 2026-03-09T15:02:03.273 INFO:tasks.workunit.client.0.vm05.stdout:6/941: write da/f3d [2307149,105543] 0 2026-03-09T15:02:03.276 INFO:tasks.workunit.client.0.vm05.stdout:1/983: dwrite d9/d2f/d37/d5a/da9/dc9/dcd/fee [0,4194304] 0 2026-03-09T15:02:03.283 INFO:tasks.workunit.client.0.vm05.stdout:3/978: write d3/df/d1e/d2c/d74/d78/d135/fde [159899,31695] 0 2026-03-09T15:02:03.283 INFO:tasks.workunit.client.0.vm05.stdout:0/964: write d9/de/d25/dcf/dbd/fe2 [738950,51264] 0 2026-03-09T15:02:03.291 INFO:tasks.workunit.client.0.vm05.stdout:6/942: dread da/d17/d3b/fb2 [0,4194304] 0 2026-03-09T15:02:03.292 INFO:tasks.workunit.client.0.vm05.stdout:0/965: read d9/de/d25/dcf/f96 [1088391,126724] 0 2026-03-09T15:02:03.293 INFO:tasks.workunit.client.0.vm05.stdout:0/966: chown d9/de/d12/d15/d2e/d32/d53/d61 551269 1 2026-03-09T15:02:03.295 INFO:tasks.workunit.client.0.vm05.stdout:6/943: creat da/d17/d7c/dc6/f12a x:0 0 0 2026-03-09T15:02:03.299 INFO:tasks.workunit.client.0.vm05.stdout:0/967: dwrite d9/f22 [4194304,4194304] 0 2026-03-09T15:02:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:02 vm05.local 
ceph-mon[50611]: pgmap v14: 65 pgs: 65 active+clean; 2.6 GiB data, 9.0 GiB used, 111 GiB / 120 GiB avail; 40 MiB/s rd, 87 MiB/s wr, 251 op/s 2026-03-09T15:02:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:02 vm05.local ceph-mon[50611]: Standby manager daemon vm05.lhsexd started 2026-03-09T15:02:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:02 vm05.local ceph-mon[50611]: from='mgr.? 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.lhsexd/crt"}]: dispatch 2026-03-09T15:02:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:02 vm05.local ceph-mon[50611]: from='mgr.? 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:02:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:02 vm05.local ceph-mon[50611]: from='mgr.? 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.lhsexd/key"}]: dispatch 2026-03-09T15:02:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:02 vm05.local ceph-mon[50611]: from='mgr.? 
192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:02:03.308 INFO:tasks.workunit.client.0.vm05.stdout:1/984: link d9/d2f/d37/d5a/da9/dc9/fe8 d9/d2f/d37/d5f/da2/f142 0 2026-03-09T15:02:03.308 INFO:tasks.workunit.client.0.vm05.stdout:0/968: symlink d9/de/d12/d15/d2e/d32/d53/d61/d104/dd3/l134 0 2026-03-09T15:02:03.309 INFO:tasks.workunit.client.0.vm05.stdout:1/985: write d9/d2f/d37/fe3 [122005,67937] 0 2026-03-09T15:02:03.312 INFO:tasks.workunit.client.0.vm05.stdout:3/979: dwrite d3/df/d1e/d2f/d52/f61 [4194304,4194304] 0 2026-03-09T15:02:03.314 INFO:tasks.workunit.client.0.vm05.stdout:0/969: mkdir d9/de/d25/dae/de6/d135 0 2026-03-09T15:02:03.323 INFO:tasks.workunit.client.0.vm05.stdout:1/986: unlink d9/d97/c9a 0 2026-03-09T15:02:03.330 INFO:tasks.workunit.client.0.vm05.stdout:0/970: dread d9/de/f6c [0,4194304] 0 2026-03-09T15:02:03.331 INFO:tasks.workunit.client.0.vm05.stdout:1/987: mknod d9/d2f/d83/d98/d59/d49/d77/c143 0 2026-03-09T15:02:03.332 INFO:tasks.workunit.client.0.vm05.stdout:6/944: link da/d19/lec da/d17/d3b/l12b 0 2026-03-09T15:02:03.335 INFO:tasks.workunit.client.0.vm05.stdout:1/988: mkdir d9/d2f/d37/d5a/d144 0 2026-03-09T15:02:03.336 INFO:tasks.workunit.client.0.vm05.stdout:6/945: creat da/d43/d7b/da9/f12c x:0 0 0 2026-03-09T15:02:03.339 INFO:tasks.workunit.client.0.vm05.stdout:6/946: chown da/d17/d3b/dbd/dee/f105 0 1 2026-03-09T15:02:03.339 INFO:tasks.workunit.client.0.vm05.stdout:6/947: readlink da/d43/d66/l75 0 2026-03-09T15:02:03.339 INFO:tasks.workunit.client.0.vm05.stdout:6/948: stat da/c78 0 2026-03-09T15:02:03.340 INFO:tasks.workunit.client.0.vm05.stdout:1/989: mkdir d9/d2f/d37/d5a/da9/d145 0 2026-03-09T15:02:03.341 INFO:tasks.workunit.client.0.vm05.stdout:6/949: creat da/d43/d7b/da9/db7/f12d x:0 0 0 2026-03-09T15:02:03.348 INFO:tasks.workunit.client.0.vm05.stdout:3/980: dwrite d3/df/d10/fae [0,4194304] 0 2026-03-09T15:02:03.348 
INFO:tasks.workunit.client.0.vm05.stdout:1/990: symlink d9/d2f/d37/d5a/da9/dc9/d123/l146 0 2026-03-09T15:02:03.357 INFO:tasks.workunit.client.0.vm05.stdout:0/971: dwrite d9/de/d12/d15/f120 [0,4194304] 0 2026-03-09T15:02:03.362 INFO:tasks.workunit.client.0.vm05.stdout:6/950: symlink da/d17/d95/da2/l12e 0 2026-03-09T15:02:03.365 INFO:tasks.workunit.client.0.vm05.stdout:3/981: mknod d3/df/d1e/d2c/d74/d78/d135/c13a 0 2026-03-09T15:02:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:02 vm09.local ceph-mon[59673]: pgmap v14: 65 pgs: 65 active+clean; 2.6 GiB data, 9.0 GiB used, 111 GiB / 120 GiB avail; 40 MiB/s rd, 87 MiB/s wr, 251 op/s 2026-03-09T15:02:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:02 vm09.local ceph-mon[59673]: Standby manager daemon vm05.lhsexd started 2026-03-09T15:02:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:02 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.lhsexd/crt"}]: dispatch 2026-03-09T15:02:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:02 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:02:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:02 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm05.lhsexd/key"}]: dispatch 2026-03-09T15:02:03.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:02 vm09.local ceph-mon[59673]: from='mgr.? 
192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:02:03.370 INFO:tasks.workunit.client.0.vm05.stdout:6/951: fdatasync da/f7a 0 2026-03-09T15:02:03.371 INFO:tasks.workunit.client.0.vm05.stdout:0/972: rename d9/faa to d9/de/d12/d15/d2e/d32/d53/d61/d12b/f136 0 2026-03-09T15:02:03.372 INFO:tasks.workunit.client.0.vm05.stdout:3/982: mknod d3/df/d1e/d2c/d74/d78/d121/d44/da2/df8/c13b 0 2026-03-09T15:02:03.377 INFO:tasks.workunit.client.0.vm05.stdout:0/973: symlink d9/de/d12/d15/d2e/d32/l137 0 2026-03-09T15:02:03.378 INFO:tasks.workunit.client.0.vm05.stdout:3/983: mkdir d3/df/d1e/d2c/d74/d78/d135/dc2/d13c 0 2026-03-09T15:02:03.379 INFO:tasks.workunit.client.0.vm05.stdout:6/952: dread da/d19/dd7/dfe/fa4 [0,4194304] 0 2026-03-09T15:02:03.379 INFO:tasks.workunit.client.0.vm05.stdout:3/984: chown d3/df/d1e/d2c/d74/d9b 25 1 2026-03-09T15:02:03.380 INFO:tasks.workunit.client.0.vm05.stdout:6/953: chown da/d19/f127 2661 1 2026-03-09T15:02:03.381 INFO:tasks.workunit.client.0.vm05.stdout:0/974: truncate d9/de/d12/da3/fa4 1539624 0 2026-03-09T15:02:03.384 INFO:tasks.workunit.client.0.vm05.stdout:0/975: mknod d9/de/d25/d38/d78/c138 0 2026-03-09T15:02:03.384 INFO:tasks.workunit.client.0.vm05.stdout:3/985: rename d3/df/d1e/d2c/d74/de0/f108 to d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/d90/d10a/f13d 0 2026-03-09T15:02:03.385 INFO:tasks.workunit.client.0.vm05.stdout:0/976: read - d9/de/d12/d15/f118 zero size 2026-03-09T15:02:03.386 INFO:tasks.workunit.client.0.vm05.stdout:3/986: mknod d3/df/d1e/d2f/c13e 0 2026-03-09T15:02:03.387 INFO:tasks.workunit.client.0.vm05.stdout:0/977: symlink d9/de/d12/d8a/dc3/l139 0 2026-03-09T15:02:03.387 INFO:tasks.workunit.client.0.vm05.stdout:0/978: write d9/de/d12/da3/dbc/fc4 [4770495,128753] 0 2026-03-09T15:02:03.389 INFO:tasks.workunit.client.0.vm05.stdout:3/987: dread d3/df/d1e/d2c/d74/d78/ff1 [0,4194304] 0 2026-03-09T15:02:03.390 
INFO:tasks.workunit.client.0.vm05.stdout:3/988: chown d3/d29/d2d/cf3 137160027 1 2026-03-09T15:02:03.390 INFO:tasks.workunit.client.0.vm05.stdout:0/979: chown d9/de/d12/d15/d2e/d32/d53/d61/d104/da0/db7/de5 19735 1 2026-03-09T15:02:03.392 INFO:tasks.workunit.client.0.vm05.stdout:0/980: chown d9/de/d12/d15/d2e/d32/d53/d61/d104/da0/db7/de5/f10a 15689566 1 2026-03-09T15:02:03.394 INFO:tasks.workunit.client.0.vm05.stdout:3/989: chown d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/cef 1 1 2026-03-09T15:02:03.394 INFO:tasks.workunit.client.0.vm05.stdout:0/981: fdatasync d9/de/fd6 0 2026-03-09T15:02:03.394 INFO:tasks.workunit.client.0.vm05.stdout:0/982: fdatasync d9/de/d12/d15/d2e/d32/d53/d61/f10c 0 2026-03-09T15:02:03.397 INFO:tasks.workunit.client.0.vm05.stdout:0/983: rename d9/de/df1/deb/d101 to d9/de/d25/d13a 0 2026-03-09T15:02:03.398 INFO:tasks.workunit.client.0.vm05.stdout:0/984: dread - d9/de/d25/dae/de6/faf zero size 2026-03-09T15:02:03.399 INFO:tasks.workunit.client.0.vm05.stdout:0/985: write d9/de/d25/ff2 [624220,61469] 0 2026-03-09T15:02:03.400 INFO:tasks.workunit.client.0.vm05.stdout:1/991: write d9/d2f/d83/d98/d59/d49/f2c [3998904,108968] 0 2026-03-09T15:02:03.410 INFO:tasks.workunit.client.0.vm05.stdout:3/990: write d3/d29/d2d/d7b/fe3 [516766,28107] 0 2026-03-09T15:02:03.411 INFO:tasks.workunit.client.0.vm05.stdout:1/992: mknod d9/d2f/d37/d5a/da9/dc9/dcd/db2/c147 0 2026-03-09T15:02:03.411 INFO:tasks.workunit.client.0.vm05.stdout:6/954: dwrite da/d43/d7b/db3/fff [0,4194304] 0 2026-03-09T15:02:03.420 INFO:tasks.workunit.client.0.vm05.stdout:1/993: mkdir d9/d2f/d37/d101/dd3/d148 0 2026-03-09T15:02:03.428 INFO:tasks.workunit.client.0.vm05.stdout:6/955: creat da/d43/d7b/da9/df0/f12f x:0 0 0 2026-03-09T15:02:03.429 INFO:tasks.workunit.client.0.vm05.stdout:6/956: dread - da/d17/d95/da2/dae/dd9/f109 zero size 2026-03-09T15:02:03.431 INFO:tasks.workunit.client.0.vm05.stdout:0/986: getdents d9/de/d12/d15/d2e/d32/d53 0 2026-03-09T15:02:03.434 
INFO:tasks.workunit.client.0.vm05.stdout:6/957: stat da/d17/d95/da2/cb4 0 2026-03-09T15:02:03.437 INFO:tasks.workunit.client.0.vm05.stdout:3/991: getdents d3/d29/d2d/d77 0 2026-03-09T15:02:03.438 INFO:tasks.workunit.client.0.vm05.stdout:1/994: link d9/d2f/d83/d98/d59/d49/d78/dbd/cd1 d9/d2f/d83/d98/d59/d49/d78/d12b/c149 0 2026-03-09T15:02:03.439 INFO:tasks.workunit.client.0.vm05.stdout:6/958: rename da/d43/d7b/db3/c8c to da/d19/d106/d121/c130 0 2026-03-09T15:02:03.441 INFO:tasks.workunit.client.0.vm05.stdout:6/959: read da/d17/f2c [1516940,8993] 0 2026-03-09T15:02:03.441 INFO:tasks.workunit.client.0.vm05.stdout:1/995: rename d9/d2f/d37/la8 to d9/d2f/d83/d98/d59/d49/d10a/l14a 0 2026-03-09T15:02:03.445 INFO:tasks.workunit.client.0.vm05.stdout:1/996: dwrite d9/d2f/d83/d98/d59/d49/ffd [0,4194304] 0 2026-03-09T15:02:03.451 INFO:tasks.workunit.client.0.vm05.stdout:3/992: getdents d3/df/d1e/d2c/d74/d78/d135/dc2/d12a/d12e 0 2026-03-09T15:02:03.458 INFO:tasks.workunit.client.0.vm05.stdout:0/987: write d9/d59/d93/fd1 [470631,9296] 0 2026-03-09T15:02:03.466 INFO:tasks.workunit.client.0.vm05.stdout:0/988: stat d9/de/d12/d15/d2e/d32/d53/d61/f10c 0 2026-03-09T15:02:03.466 INFO:tasks.workunit.client.0.vm05.stdout:0/989: chown d9/de/d12/d8a/f10e 111 1 2026-03-09T15:02:03.466 INFO:tasks.workunit.client.0.vm05.stdout:1/997: symlink d9/d2f/d83/d98/d59/d49/d78/d12b/d140/l14b 0 2026-03-09T15:02:03.466 INFO:tasks.workunit.client.0.vm05.stdout:1/998: truncate d9/d2f/d55/f13b 825579 0 2026-03-09T15:02:03.466 INFO:tasks.workunit.client.0.vm05.stdout:0/990: creat d9/de/d12/d15/d2e/d32/d53/d61/d104/da0/f13b x:0 0 0 2026-03-09T15:02:03.471 INFO:tasks.workunit.client.0.vm05.stdout:6/960: write da/d17/f90 [255928,87900] 0 2026-03-09T15:02:03.472 INFO:tasks.workunit.client.0.vm05.stdout:6/961: readlink da/l10d 0 2026-03-09T15:02:03.474 INFO:tasks.workunit.client.0.vm05.stdout:6/962: symlink da/d43/d7b/l131 0 2026-03-09T15:02:03.475 INFO:tasks.workunit.client.0.vm05.stdout:6/963: creat 
da/d17/d3b/dbd/f132 x:0 0 0 2026-03-09T15:02:03.477 INFO:tasks.workunit.client.0.vm05.stdout:3/993: dwrite d3/df/d1e/d2c/d74/d9b/fc9 [0,4194304] 0 2026-03-09T15:02:03.483 INFO:tasks.workunit.client.0.vm05.stdout:1/999: write d9/d2f/d83/d98/f39 [980459,72754] 0 2026-03-09T15:02:03.485 INFO:tasks.workunit.client.0.vm05.stdout:0/991: dwrite d9/de/d12/d15/d2e/d32/d53/f5f [0,4194304] 0 2026-03-09T15:02:03.488 INFO:tasks.workunit.client.0.vm05.stdout:0/992: stat d9/de/d12/d15/d2e/d6b/f125 0 2026-03-09T15:02:03.497 INFO:tasks.workunit.client.0.vm05.stdout:0/993: truncate d9/de/d12/d15/d2e/d32/d53/d61/d104/da0/f123 492306 0 2026-03-09T15:02:03.497 INFO:tasks.workunit.client.0.vm05.stdout:6/964: mknod da/d17/d95/da2/dae/dd9/c133 0 2026-03-09T15:02:03.497 INFO:tasks.workunit.client.0.vm05.stdout:3/994: rename d3/df/d1e/d2c/d74/d78/d121/db5 to d3/df/d1e/d2c/d74/d78/d121/dce/dc8/de2/d8c/d13f 0 2026-03-09T15:02:03.497 INFO:tasks.workunit.client.0.vm05.stdout:3/995: readlink d3/df/d1e/d2f/l51 0 2026-03-09T15:02:03.497 INFO:tasks.workunit.client.0.vm05.stdout:6/965: creat da/d43/d7b/da9/df0/f134 x:0 0 0 2026-03-09T15:02:03.497 INFO:tasks.workunit.client.0.vm05.stdout:3/996: creat d3/df/d1e/d2c/d74/d78/d121/d44/d50/f140 x:0 0 0 2026-03-09T15:02:03.502 INFO:tasks.workunit.client.0.vm05.stdout:3/997: fsync d3/df/d1e/f8f 0 2026-03-09T15:02:03.503 INFO:tasks.workunit.client.0.vm05.stdout:3/998: write d3/f7 [3021849,40455] 0 2026-03-09T15:02:03.507 INFO:tasks.workunit.client.0.vm05.stdout:3/999: symlink d3/df/d1e/d2c/d74/de0/l141 0 2026-03-09T15:02:03.521 INFO:tasks.workunit.client.0.vm05.stdout:0/994: truncate d9/de/d12/d15/fa5 305213 0 2026-03-09T15:02:03.521 INFO:tasks.workunit.client.0.vm05.stdout:0/995: fsync d9/de/d25/d38/d78/dc9/ff3 0 2026-03-09T15:02:03.527 INFO:tasks.workunit.client.0.vm05.stdout:0/996: write d9/de/d6a/fe4 [3523408,34170] 0 2026-03-09T15:02:03.527 INFO:tasks.workunit.client.0.vm05.stdout:6/966: write da/d17/f2c [6918915,130982] 0 2026-03-09T15:02:03.535 
INFO:tasks.workunit.client.0.vm05.stdout:6/967: mkdir da/d19/d106/d135 0 2026-03-09T15:02:03.536 INFO:tasks.workunit.client.0.vm05.stdout:0/997: creat d9/de/d12/d11f/f13c x:0 0 0 2026-03-09T15:02:03.537 INFO:tasks.workunit.client.0.vm05.stdout:0/998: symlink d9/d59/d70/l13d 0 2026-03-09T15:02:03.541 INFO:tasks.workunit.client.0.vm05.stdout:0/999: rmdir d9/de/d25/d13a/d116 0 2026-03-09T15:02:03.546 INFO:tasks.workunit.client.0.vm05.stdout:6/968: dwrite da/d43/d7b/db3/fed [0,4194304] 0 2026-03-09T15:02:03.552 INFO:tasks.workunit.client.0.vm05.stdout:6/969: rename da/d17/d3b/dbd/l113 to da/d17/d3b/l136 0 2026-03-09T15:02:03.590 INFO:tasks.workunit.client.0.vm05.stdout:6/970: dwrite da/d17/d7c/fcf [0,4194304] 0 2026-03-09T15:02:03.602 INFO:tasks.workunit.client.0.vm05.stdout:6/971: mkdir da/d19/dd7/dfe/db8/d137 0 2026-03-09T15:02:03.618 INFO:tasks.workunit.client.0.vm05.stdout:6/972: dwrite da/d43/d7b/da9/fe7 [4194304,4194304] 0 2026-03-09T15:02:03.620 INFO:tasks.workunit.client.0.vm05.stdout:6/973: fdatasync da/d43/d7b/da9/db7/f118 0 2026-03-09T15:02:03.620 INFO:tasks.workunit.client.0.vm05.stdout:6/974: chown da/d17/l40 382297144 1 2026-03-09T15:02:03.625 INFO:tasks.workunit.client.0.vm05.stdout:6/975: creat da/d17/d3b/dbd/f138 x:0 0 0 2026-03-09T15:02:03.626 INFO:tasks.workunit.client.0.vm05.stdout:6/976: readlink da/d19/dd7/dfe/leb 0 2026-03-09T15:02:03.627 INFO:tasks.workunit.client.0.vm05.stdout:6/977: read da/f65 [134999,67317] 0 2026-03-09T15:02:03.628 INFO:tasks.workunit.client.0.vm05.stdout:6/978: write da/d43/d7b/db3/f122 [361701,31112] 0 2026-03-09T15:02:03.629 INFO:tasks.workunit.client.0.vm05.stdout:6/979: creat da/d17/d3b/dbd/d110/f139 x:0 0 0 2026-03-09T15:02:03.631 INFO:tasks.workunit.client.0.vm05.stdout:6/980: mknod da/d19/d106/d135/c13a 0 2026-03-09T15:02:03.632 INFO:tasks.workunit.client.0.vm05.stdout:6/981: fsync da/d43/d7b/da9/db7/f118 0 2026-03-09T15:02:03.633 INFO:tasks.workunit.client.0.vm05.stdout:6/982: creat da/d43/d7b/db3/f13b x:0 0 0 
2026-03-09T15:02:03.633 INFO:tasks.workunit.client.0.vm05.stdout:6/983: chown da/f7a 102292 1 2026-03-09T15:02:03.634 INFO:tasks.workunit.client.0.vm05.stdout:6/984: dread - da/d43/d7b/da9/db7/f119 zero size 2026-03-09T15:02:03.653 INFO:tasks.workunit.client.0.vm05.stdout:6/985: link da/d17/d7c/c8a da/d19/d106/c13c 0 2026-03-09T15:02:03.656 INFO:tasks.workunit.client.0.vm05.stdout:6/986: mknod da/d19/dd7/dfe/db8/d137/c13d 0 2026-03-09T15:02:03.656 INFO:tasks.workunit.client.0.vm05.stdout:6/987: readlink da/d17/l93 0 2026-03-09T15:02:03.657 INFO:tasks.workunit.client.0.vm05.stdout:6/988: mkdir da/d19/dd7/dfe/db8/d13e 0 2026-03-09T15:02:03.660 INFO:tasks.workunit.client.0.vm05.stdout:6/989: link da/d17/ca0 da/d17/d95/c13f 0 2026-03-09T15:02:03.662 INFO:tasks.workunit.client.0.vm05.stdout:6/990: creat da/f140 x:0 0 0 2026-03-09T15:02:03.663 INFO:tasks.workunit.client.0.vm05.stdout:6/991: mknod da/d19/dd7/dfe/db8/d137/c141 0 2026-03-09T15:02:03.683 INFO:tasks.workunit.client.0.vm05.stdout:6/992: dwrite da/d19/f6a [0,4194304] 0 2026-03-09T15:02:03.686 INFO:tasks.workunit.client.0.vm05.stdout:6/993: dread da/d19/f35 [0,4194304] 0 2026-03-09T15:02:03.705 INFO:tasks.workunit.client.0.vm05.stdout:6/994: write da/d43/d7b/da9/db7/fd2 [1276288,20018] 0 2026-03-09T15:02:03.709 INFO:tasks.workunit.client.0.vm05.stdout:6/995: mknod da/d17/d95/da2/c142 0 2026-03-09T15:02:03.711 INFO:tasks.workunit.client.0.vm05.stdout:6/996: rename da/d43/d7b/da9/df0/f12f to da/d43/d7b/da9/df0/f143 0 2026-03-09T15:02:03.721 INFO:tasks.workunit.client.0.vm05.stdout:6/997: dwrite da/d17/d3b/f4a [4194304,4194304] 0 2026-03-09T15:02:03.728 INFO:tasks.workunit.client.0.vm05.stdout:6/998: dwrite da/d17/d3b/dbd/d110/f11a [0,4194304] 0 2026-03-09T15:02:03.735 INFO:tasks.workunit.client.0.vm05.stdout:6/999: dread da/d19/f7e [0,4194304] 0 2026-03-09T15:02:03.739 INFO:tasks.workunit.client.0.vm05.stderr:+ rm -rf -- ./tmp.LfZX0Lp45U 2026-03-09T15:02:04.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 
15:02:04 vm09.local ceph-mon[59673]: mgrmap e25: vm09.cfuwdz(active, since 23s), standbys: vm05.lhsexd 2026-03-09T15:02:04.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:04 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mgr metadata", "who": "vm05.lhsexd", "id": "vm05.lhsexd"}]: dispatch 2026-03-09T15:02:04.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:04 vm09.local ceph-mon[59673]: pgmap v15: 65 pgs: 65 active+clean; 2.6 GiB data, 9.0 GiB used, 111 GiB / 120 GiB avail; 29 MiB/s rd, 61 MiB/s wr, 175 op/s 2026-03-09T15:02:05.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:04 vm05.local ceph-mon[50611]: mgrmap e25: vm09.cfuwdz(active, since 23s), standbys: vm05.lhsexd 2026-03-09T15:02:05.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:04 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mgr metadata", "who": "vm05.lhsexd", "id": "vm05.lhsexd"}]: dispatch 2026-03-09T15:02:05.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:04 vm05.local ceph-mon[50611]: pgmap v15: 65 pgs: 65 active+clean; 2.6 GiB data, 9.0 GiB used, 111 GiB / 120 GiB avail; 29 MiB/s rd, 61 MiB/s wr, 175 op/s 2026-03-09T15:02:05.355 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-09T15:02:05.355 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.1/client.1/tmp 2026-03-09T15:02:07.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:06 vm09.local ceph-mon[59673]: pgmap v16: 65 pgs: 65 active+clean; 2.6 GiB data, 9.0 GiB used, 111 GiB / 120 GiB avail; 29 MiB/s rd, 61 MiB/s wr, 175 op/s 2026-03-09T15:02:07.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:06 vm05.local ceph-mon[50611]: pgmap v16: 65 pgs: 65 active+clean; 2.6 GiB data, 9.0 GiB used, 111 GiB / 120 GiB avail; 29 MiB/s rd, 61 MiB/s wr, 175 op/s 2026-03-09T15:02:08.866 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:08 vm09.local ceph-mon[59673]: pgmap v17: 65 pgs: 65 active+clean; 2.6 GiB data, 9.1 GiB used, 111 GiB / 120 GiB avail; 39 MiB/s rd, 85 MiB/s wr, 261 op/s 2026-03-09T15:02:09.048 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:08 vm05.local ceph-mon[50611]: pgmap v17: 65 pgs: 65 active+clean; 2.6 GiB data, 9.1 GiB used, 111 GiB / 120 GiB avail; 39 MiB/s rd, 85 MiB/s wr, 261 op/s 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mgr fail", "who": "vm09.cfuwdz"}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mgr fail", "who": "vm09.cfuwdz"}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: osdmap e40: 6 total, 6 up, 6 in 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd='[{"prefix": "mgr fail", "who": "vm09.cfuwdz"}]': finished 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: mgrmap e26: vm05.lhsexd(active, starting, since 0.0637866s) 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local 
ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.nrocqt"}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.rrcyql"}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.jrhwzz"}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm05.lhsexd", "id": "vm05.lhsexd"}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: 
from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:02:09.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:09 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.24413 192.168.123.109:0/3087545146' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mgr fail", "who": "vm09.cfuwdz"}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.24413 ' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "mgr fail", "who": "vm09.cfuwdz"}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: osdmap e40: 6 total, 6 up, 6 in 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.24413 ' 
entity='mgr.vm09.cfuwdz' cmd='[{"prefix": "mgr fail", "who": "vm09.cfuwdz"}]': finished 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: mgrmap e26: vm05.lhsexd(active, starting, since 0.0637866s) 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.nrocqt"}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.rrcyql"}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.jrhwzz"}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm05.lhsexd", "id": "vm05.lhsexd"}]: 
dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:02:10.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:09 vm05.local ceph-mon[50611]: from='mgr.14652 
192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:02:10.848 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:10 vm05.local ceph-mon[50611]: Manager daemon vm05.lhsexd is now available 2026-03-09T15:02:10.848 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:10 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:10.848 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:10 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:02:10.848 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:10 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:02:10.848 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:10 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/trash_purge_schedule"}]: dispatch 2026-03-09T15:02:10.864 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:10 vm09.local ceph-mon[59673]: Manager daemon vm05.lhsexd is now available 2026-03-09T15:02:10.864 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:10 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:10.864 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:10 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:02:10.864 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:10 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:02:10.864 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:10 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/trash_purge_schedule"}]: dispatch 2026-03-09T15:02:11.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:11 vm09.local ceph-mon[59673]: mgrmap e27: vm05.lhsexd(active, since 1.63521s) 2026-03-09T15:02:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:11 vm09.local ceph-mon[59673]: pgmap v3: 65 pgs: 65 active+clean; 1.6 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail 2026-03-09T15:02:11.971 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:11 vm05.local ceph-mon[50611]: mgrmap e27: vm05.lhsexd(active, since 1.63521s) 2026-03-09T15:02:11.971 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:11 vm05.local ceph-mon[50611]: pgmap v3: 65 pgs: 65 active+clean; 1.6 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail 2026-03-09T15:02:12.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:12 vm09.local ceph-mon[59673]: pgmap v4: 65 pgs: 65 active+clean; 1.6 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail 2026-03-09T15:02:12.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:12 vm09.local ceph-mon[59673]: mgrmap e28: vm05.lhsexd(active, since 2s) 2026-03-09T15:02:12.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:12 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:12.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:12 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 
2026-03-09T15:02:12.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:12 vm05.local ceph-mon[50611]: pgmap v4: 65 pgs: 65 active+clean; 1.6 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail 2026-03-09T15:02:12.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:12 vm05.local ceph-mon[50611]: mgrmap e28: vm05.lhsexd(active, since 2s) 2026-03-09T15:02:12.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:12 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:12.711 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:12 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:12.711 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.709+0000 7f4335b2b700 1 -- 192.168.123.105:0/4215750829 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4330071a90 msgr2=0x7f4330071ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:12.711 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.709+0000 7f4335b2b700 1 --2- 192.168.123.105:0/4215750829 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4330071a90 0x7f4330071ea0 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7f4324009a60 tx=0x7f4324009d70 comp rx=0 tx=0).stop 2026-03-09T15:02:12.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.711+0000 7f4335b2b700 1 -- 192.168.123.105:0/4215750829 shutdown_connections 2026-03-09T15:02:12.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.711+0000 7f4335b2b700 1 --2- 192.168.123.105:0/4215750829 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4330072470 0x7f433010beb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:12.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.711+0000 7f4335b2b700 1 --2- 192.168.123.105:0/4215750829 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4330071a90 0x7f4330071ea0 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:12.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.711+0000 7f4335b2b700 1 -- 192.168.123.105:0/4215750829 >> 192.168.123.105:0/4215750829 conn(0x7f433006d1a0 msgr2=0x7f433006f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:12.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.711+0000 7f4335b2b700 1 -- 192.168.123.105:0/4215750829 shutdown_connections 2026-03-09T15:02:12.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.711+0000 7f4335b2b700 1 -- 192.168.123.105:0/4215750829 wait complete. 2026-03-09T15:02:12.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.711+0000 7f4335b2b700 1 Processor -- start 2026-03-09T15:02:12.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.711+0000 7f4335b2b700 1 -- start start 2026-03-09T15:02:12.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.711+0000 7f4335b2b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4330071a90 0x7f4330116940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:12.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.711+0000 7f4335b2b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4330072470 0x7f4330116e80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:12.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.711+0000 7f4335b2b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f43301174d0 con 0x7f4330072470 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.711+0000 7f4335b2b700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f4330117640 con 0x7f4330071a90 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.712+0000 7f432effd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4330072470 0x7f4330116e80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.712+0000 7f432effd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4330072470 0x7f4330116e80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:45740/0 (socket says 192.168.123.105:45740) 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.712+0000 7f432effd700 1 -- 192.168.123.105:0/324884585 learned_addr learned my addr 192.168.123.105:0/324884585 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.712+0000 7f432f7fe700 1 --2- 192.168.123.105:0/324884585 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4330071a90 0x7f4330116940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.712+0000 7f432effd700 1 -- 192.168.123.105:0/324884585 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4330071a90 msgr2=0x7f4330116940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.712+0000 7f432effd700 1 --2- 192.168.123.105:0/324884585 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4330071a90 0x7f4330116940 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.712+0000 7f432effd700 1 -- 192.168.123.105:0/324884585 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4324009710 con 0x7f4330072470 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.713+0000 7f432effd700 1 --2- 192.168.123.105:0/324884585 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4330072470 0x7f4330116e80 secure :-1 s=READY pgs=331 cs=0 l=1 rev1=1 crypto rx=0x7f4328007ed0 tx=0x7f432800d3b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.713+0000 7f432cff9700 1 -- 192.168.123.105:0/324884585 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4328017070 con 0x7f4330072470 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.713+0000 7f4335b2b700 1 -- 192.168.123.105:0/324884585 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4330077140 con 0x7f4330072470 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.713+0000 7f4335b2b700 1 -- 192.168.123.105:0/324884585 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4330077690 con 0x7f4330072470 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.715+0000 7f4335b2b700 1 -- 192.168.123.105:0/324884585 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4330110c20 con 0x7f4330072470 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.715+0000 7f432cff9700 1 -- 192.168.123.105:0/324884585 <== mon.0 
v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f432800f040 con 0x7f4330072470 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.715+0000 7f432cff9700 1 -- 192.168.123.105:0/324884585 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4328013860 con 0x7f4330072470 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.716+0000 7f432cff9700 1 -- 192.168.123.105:0/324884585 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 28) v1 ==== 50327+0+0 (secure 0 0 0) 0x7f4328013a80 con 0x7f4330072470 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.716+0000 7f432cff9700 1 --2- 192.168.123.105:0/324884585 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f431803def0 0x7f43180403a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:12.717 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.716+0000 7f432cff9700 1 -- 192.168.123.105:0/324884585 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f4328053e50 con 0x7f4330072470 2026-03-09T15:02:12.718 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.717+0000 7f432f7fe700 1 --2- 192.168.123.105:0/324884585 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f431803def0 0x7f43180403a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:12.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.719+0000 7f432f7fe700 1 --2- 192.168.123.105:0/324884585 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f431803def0 0x7f43180403a0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f4330117860 tx=0x7f43240058e0 comp rx=0 
tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:12.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.722+0000 7f432cff9700 1 -- 192.168.123.105:0/324884585 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f432801b370 con 0x7f4330072470 2026-03-09T15:02:12.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.924+0000 7f4335b2b700 1 -- 192.168.123.105:0/324884585 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f4330061190 con 0x7f431803def0 2026-03-09T15:02:12.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.926+0000 7f432cff9700 1 -- 192.168.123.105:0/324884585 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+329 (secure 0 0 0) 0x7f4330061190 con 0x7f431803def0 2026-03-09T15:02:12.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.931+0000 7f43167fc700 1 -- 192.168.123.105:0/324884585 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f431803def0 msgr2=0x7f43180403a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:12.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.931+0000 7f43167fc700 1 --2- 192.168.123.105:0/324884585 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f431803def0 0x7f43180403a0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f4330117860 tx=0x7f43240058e0 comp rx=0 tx=0).stop 2026-03-09T15:02:12.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.931+0000 7f43167fc700 1 -- 192.168.123.105:0/324884585 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4330072470 msgr2=0x7f4330116e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T15:02:12.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.931+0000 7f43167fc700 1 --2- 192.168.123.105:0/324884585 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4330072470 0x7f4330116e80 secure :-1 s=READY pgs=331 cs=0 l=1 rev1=1 crypto rx=0x7f4328007ed0 tx=0x7f432800d3b0 comp rx=0 tx=0).stop 2026-03-09T15:02:12.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.931+0000 7f43167fc700 1 -- 192.168.123.105:0/324884585 shutdown_connections 2026-03-09T15:02:12.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.931+0000 7f43167fc700 1 --2- 192.168.123.105:0/324884585 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f431803def0 0x7f43180403a0 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:12.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.931+0000 7f43167fc700 1 --2- 192.168.123.105:0/324884585 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4330071a90 0x7f4330116940 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:12.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.931+0000 7f43167fc700 1 --2- 192.168.123.105:0/324884585 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4330072470 0x7f4330116e80 unknown :-1 s=CLOSED pgs=331 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:12.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.931+0000 7f43167fc700 1 -- 192.168.123.105:0/324884585 >> 192.168.123.105:0/324884585 conn(0x7f433006d1a0 msgr2=0x7f433010a6d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:12.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.931+0000 7f43167fc700 1 -- 192.168.123.105:0/324884585 shutdown_connections 2026-03-09T15:02:12.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:12.932+0000 7f43167fc700 1 -- 
192.168.123.105:0/324884585 wait complete. 2026-03-09T15:02:12.953 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:02:13.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.049+0000 7fa6d9b6b700 1 -- 192.168.123.105:0/3356820419 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa6d4072360 msgr2=0x7fa6d40770e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:13.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.049+0000 7fa6d9b6b700 1 --2- 192.168.123.105:0/3356820419 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa6d4072360 0x7fa6d40770e0 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7fa6cc009230 tx=0x7fa6cc009260 comp rx=0 tx=0).stop 2026-03-09T15:02:13.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.049+0000 7fa6d9b6b700 1 -- 192.168.123.105:0/3356820419 shutdown_connections 2026-03-09T15:02:13.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.049+0000 7fa6d9b6b700 1 --2- 192.168.123.105:0/3356820419 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa6d4072360 0x7fa6d40770e0 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.049+0000 7fa6d9b6b700 1 --2- 192.168.123.105:0/3356820419 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa6d4071980 0x7fa6d4071d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.049+0000 7fa6d9b6b700 1 -- 192.168.123.105:0/3356820419 >> 192.168.123.105:0/3356820419 conn(0x7fa6d406d1a0 msgr2=0x7fa6d406f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:13.050 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.050+0000 7fa6d9b6b700 1 -- 192.168.123.105:0/3356820419 shutdown_connections 2026-03-09T15:02:13.050 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.050+0000 7fa6d9b6b700 1 -- 192.168.123.105:0/3356820419 wait complete. 2026-03-09T15:02:13.051 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.050+0000 7fa6d9b6b700 1 Processor -- start 2026-03-09T15:02:13.051 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.050+0000 7fa6d9b6b700 1 -- start start 2026-03-09T15:02:13.051 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.050+0000 7fa6d9b6b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa6d4071980 0x7fa6d4082530 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:13.051 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.050+0000 7fa6d9b6b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa6d4082a70 0x7fa6d4082ee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:13.051 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.050+0000 7fa6d9b6b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa6d412dd80 con 0x7fa6d4082a70 2026-03-09T15:02:13.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.050+0000 7fa6d9b6b700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa6d412def0 con 0x7fa6d4071980 2026-03-09T15:02:13.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.051+0000 7fa6d2ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa6d4082a70 0x7fa6d4082ee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:13.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.051+0000 7fa6d2ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa6d4082a70 0x7fa6d4082ee0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:45760/0 (socket says 192.168.123.105:45760) 2026-03-09T15:02:13.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.051+0000 7fa6d2ffd700 1 -- 192.168.123.105:0/1537017416 learned_addr learned my addr 192.168.123.105:0/1537017416 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:02:13.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.051+0000 7fa6d37fe700 1 --2- 192.168.123.105:0/1537017416 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa6d4071980 0x7fa6d4082530 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:13.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.051+0000 7fa6d2ffd700 1 -- 192.168.123.105:0/1537017416 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa6d4071980 msgr2=0x7fa6d4082530 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:13.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.051+0000 7fa6d2ffd700 1 --2- 192.168.123.105:0/1537017416 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa6d4071980 0x7fa6d4082530 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.051+0000 7fa6d2ffd700 1 -- 192.168.123.105:0/1537017416 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa6cc008ee0 con 0x7fa6d4082a70 2026-03-09T15:02:13.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.051+0000 7fa6d2ffd700 1 --2- 192.168.123.105:0/1537017416 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa6d4082a70 0x7fa6d4082ee0 secure :-1 s=READY pgs=332 cs=0 l=1 rev1=1 crypto 
rx=0x7fa6cc004770 tx=0x7fa6cc004850 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:13.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.052+0000 7fa6d0ff9700 1 -- 192.168.123.105:0/1537017416 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa6cc01d070 con 0x7fa6d4082a70 2026-03-09T15:02:13.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.052+0000 7fa6d9b6b700 1 -- 192.168.123.105:0/1537017416 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa6d412e110 con 0x7fa6d4082a70 2026-03-09T15:02:13.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.052+0000 7fa6d9b6b700 1 -- 192.168.123.105:0/1537017416 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa6d412e600 con 0x7fa6d4082a70 2026-03-09T15:02:13.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.053+0000 7fa6d0ff9700 1 -- 192.168.123.105:0/1537017416 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7fa6cc004a20 con 0x7fa6d4082a70 2026-03-09T15:02:13.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.054+0000 7fa6d0ff9700 1 -- 192.168.123.105:0/1537017416 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa6cc016a00 con 0x7fa6d4082a70 2026-03-09T15:02:13.054 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.055+0000 7fa6d0ff9700 1 -- 192.168.123.105:0/1537017416 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 28) v1 ==== 50327+0+0 (secure 0 0 0) 0x7fa6cc0085f0 con 0x7fa6d4082a70 2026-03-09T15:02:13.055 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.055+0000 7fa6d0ff9700 1 --2- 192.168.123.105:0/1537017416 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa6bc03dca0 0x7fa6bc040150 unknown :-1 s=NONE 
pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:13.055 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.055+0000 7fa6d37fe700 1 --2- 192.168.123.105:0/1537017416 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa6bc03dca0 0x7fa6bc040150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:13.055 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.055+0000 7fa6d0ff9700 1 -- 192.168.123.105:0/1537017416 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7fa6cc012070 con 0x7fa6d4082a70 2026-03-09T15:02:13.055 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.056+0000 7fa6d37fe700 1 --2- 192.168.123.105:0/1537017416 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa6bc03dca0 0x7fa6bc040150 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fa6c400be10 tx=0x7fa6c400d040 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:13.056 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.056+0000 7fa6d9b6b700 1 -- 192.168.123.105:0/1537017416 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa6c0005320 con 0x7fa6d4082a70 2026-03-09T15:02:13.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.059+0000 7fa6d0ff9700 1 -- 192.168.123.105:0/1537017416 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7fa6cc026020 con 0x7fa6d4082a70 2026-03-09T15:02:13.213 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.212+0000 7fa6d9b6b700 1 -- 192.168.123.105:0/1537017416 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- 
mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fa6c0000bf0 con 0x7fa6bc03dca0 2026-03-09T15:02:13.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.214+0000 7fa6d0ff9700 1 -- 192.168.123.105:0/1537017416 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+329 (secure 0 0 0) 0x7fa6c0000bf0 con 0x7fa6bc03dca0 2026-03-09T15:02:13.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.219+0000 7fa6ba7fc700 1 -- 192.168.123.105:0/1537017416 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa6bc03dca0 msgr2=0x7fa6bc040150 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:13.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.219+0000 7fa6ba7fc700 1 --2- 192.168.123.105:0/1537017416 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa6bc03dca0 0x7fa6bc040150 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fa6c400be10 tx=0x7fa6c400d040 comp rx=0 tx=0).stop 2026-03-09T15:02:13.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.219+0000 7fa6ba7fc700 1 -- 192.168.123.105:0/1537017416 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa6d4082a70 msgr2=0x7fa6d4082ee0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:13.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.219+0000 7fa6ba7fc700 1 --2- 192.168.123.105:0/1537017416 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa6d4082a70 0x7fa6d4082ee0 secure :-1 s=READY pgs=332 cs=0 l=1 rev1=1 crypto rx=0x7fa6cc004770 tx=0x7fa6cc004850 comp rx=0 tx=0).stop 2026-03-09T15:02:13.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.219+0000 7fa6ba7fc700 1 -- 192.168.123.105:0/1537017416 shutdown_connections 2026-03-09T15:02:13.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.219+0000 7fa6ba7fc700 1 
--2- 192.168.123.105:0/1537017416 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa6bc03dca0 0x7fa6bc040150 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.219+0000 7fa6ba7fc700 1 --2- 192.168.123.105:0/1537017416 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa6d4071980 0x7fa6d4082530 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.219+0000 7fa6ba7fc700 1 --2- 192.168.123.105:0/1537017416 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa6d4082a70 0x7fa6d4082ee0 unknown :-1 s=CLOSED pgs=332 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.219+0000 7fa6ba7fc700 1 -- 192.168.123.105:0/1537017416 >> 192.168.123.105:0/1537017416 conn(0x7fa6d406d1a0 msgr2=0x7fa6d40764f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:13.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.220+0000 7fa6ba7fc700 1 -- 192.168.123.105:0/1537017416 shutdown_connections 2026-03-09T15:02:13.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.220+0000 7fa6ba7fc700 1 -- 192.168.123.105:0/1537017416 wait complete. 
2026-03-09T15:02:13.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.347+0000 7f1b5c4cc700 1 -- 192.168.123.105:0/2495122722 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b54101730 msgr2=0x7f1b54101b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:13.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.347+0000 7f1b5c4cc700 1 --2- 192.168.123.105:0/2495122722 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b54101730 0x7f1b54101b80 secure :-1 s=READY pgs=333 cs=0 l=1 rev1=1 crypto rx=0x7f1b44009b00 tx=0x7f1b44009e10 comp rx=0 tx=0).stop 2026-03-09T15:02:13.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.348+0000 7f1b5c4cc700 1 -- 192.168.123.105:0/2495122722 shutdown_connections 2026-03-09T15:02:13.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.348+0000 7f1b5c4cc700 1 --2- 192.168.123.105:0/2495122722 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b54101730 0x7f1b54101b80 unknown :-1 s=CLOSED pgs=333 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.348+0000 7f1b5c4cc700 1 --2- 192.168.123.105:0/2495122722 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1b54100530 0x7f1b54100940 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.348+0000 7f1b5c4cc700 1 -- 192.168.123.105:0/2495122722 >> 192.168.123.105:0/2495122722 conn(0x7f1b540fbaa0 msgr2=0x7f1b540fdf10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:13.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.348+0000 7f1b5c4cc700 1 -- 192.168.123.105:0/2495122722 shutdown_connections 2026-03-09T15:02:13.349 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.348+0000 7f1b5c4cc700 1 -- 192.168.123.105:0/2495122722 
wait complete. 2026-03-09T15:02:13.350 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.350+0000 7f1b5c4cc700 1 Processor -- start 2026-03-09T15:02:13.350 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.350+0000 7f1b5c4cc700 1 -- start start 2026-03-09T15:02:13.350 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.351+0000 7f1b5c4cc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1b54100530 0x7f1b54195e80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:13.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.351+0000 7f1b5c4cc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b54101730 0x7f1b541963c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:13.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.351+0000 7f1b5c4cc700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1b541969e0 con 0x7f1b54101730 2026-03-09T15:02:13.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.351+0000 7f1b5c4cc700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1b54196b20 con 0x7f1b54100530 2026-03-09T15:02:13.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.351+0000 7f1b5a268700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1b54100530 0x7f1b54195e80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:13.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.351+0000 7f1b5a268700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1b54100530 0x7f1b54195e80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 
says I am v2:192.168.123.105:54180/0 (socket says 192.168.123.105:54180) 2026-03-09T15:02:13.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.351+0000 7f1b5a268700 1 -- 192.168.123.105:0/2064375553 learned_addr learned my addr 192.168.123.105:0/2064375553 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:02:13.352 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.351+0000 7f1b59a67700 1 --2- 192.168.123.105:0/2064375553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b54101730 0x7f1b541963c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:13.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.352+0000 7f1b5a268700 1 -- 192.168.123.105:0/2064375553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b54101730 msgr2=0x7f1b541963c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:13.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.352+0000 7f1b5a268700 1 --2- 192.168.123.105:0/2064375553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b54101730 0x7f1b541963c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.352+0000 7f1b5a268700 1 -- 192.168.123.105:0/2064375553 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1b440097e0 con 0x7f1b54100530 2026-03-09T15:02:13.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.353+0000 7f1b5a268700 1 --2- 192.168.123.105:0/2064375553 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1b54100530 0x7f1b54195e80 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f1b5000b700 tx=0x7f1b5000bac0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T15:02:13.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.353+0000 7f1b4b7fe700 1 -- 192.168.123.105:0/2064375553 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1b50010820 con 0x7f1b54100530 2026-03-09T15:02:13.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.353+0000 7f1b5c4cc700 1 -- 192.168.123.105:0/2064375553 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1b54072f80 con 0x7f1b54100530 2026-03-09T15:02:13.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.353+0000 7f1b5c4cc700 1 -- 192.168.123.105:0/2064375553 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1b540734d0 con 0x7f1b54100530 2026-03-09T15:02:13.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.353+0000 7f1b4b7fe700 1 -- 192.168.123.105:0/2064375553 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f1b50010e60 con 0x7f1b54100530 2026-03-09T15:02:13.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.354+0000 7f1b4b7fe700 1 -- 192.168.123.105:0/2064375553 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1b50017570 con 0x7f1b54100530 2026-03-09T15:02:13.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.354+0000 7f1b5c4cc700 1 -- 192.168.123.105:0/2064375553 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1b38005320 con 0x7f1b54100530 2026-03-09T15:02:13.360 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.359+0000 7f1b4b7fe700 1 -- 192.168.123.105:0/2064375553 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 28) v1 ==== 50327+0+0 (secure 0 0 0) 0x7f1b50010980 con 0x7f1b54100530 2026-03-09T15:02:13.360 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.359+0000 7f1b4b7fe700 1 --2- 192.168.123.105:0/2064375553 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f1b40046490 0x7f1b40048940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:13.360 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.360+0000 7f1b4b7fe700 1 -- 192.168.123.105:0/2064375553 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f1b50052ed0 con 0x7f1b54100530 2026-03-09T15:02:13.366 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.361+0000 7f1b4b7fe700 1 -- 192.168.123.105:0/2064375553 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f1b5000f3c0 con 0x7f1b54100530 2026-03-09T15:02:13.366 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.364+0000 7f1b59a67700 1 --2- 192.168.123.105:0/2064375553 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f1b40046490 0x7f1b40048940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:13.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.372+0000 7f1b59a67700 1 --2- 192.168.123.105:0/2064375553 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f1b40046490 0x7f1b40048940 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f1b44009fd0 tx=0x7f1b440058e0 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:13.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.569+0000 7f1b4b7fe700 1 -- 192.168.123.105:0/2064375553 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mgrmap(e 29) v1 ==== 50383+0+0 (secure 0 0 0) 0x7f1b50050730 con 0x7f1b54100530 
2026-03-09T15:02:13.576 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.574+0000 7f1b5c4cc700 1 -- 192.168.123.105:0/2064375553 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f1b38000bf0 con 0x7f1b40046490 2026-03-09T15:02:13.588 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:02:13.588 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (5m) 17s ago 6m 23.0M - 0.25.0 c8568f914cd2 35e160b8d1de 2026-03-09T15:02:13.588 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (6m) 17s ago 6m 8388k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:02:13.588 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (5m) 1s ago 5m 11.1M - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:02:13.588 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (6m) 17s ago 6m 7411k - 18.2.0 dc2bc1663786 1c577d7a0de0 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (5m) 1s ago 5m 7402k - 18.2.0 dc2bc1663786 9e4961442551 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (5m) 17s ago 5m 90.1M - 9.4.7 954c08fa6188 46e00e5e5b38 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (3m) 17s ago 3m 259M - 18.2.0 dc2bc1663786 ea3dca51957f 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (3m) 17s ago 3m 15.4M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (3m) 1s ago 3m 15.7M - 18.2.0 dc2bc1663786 6c77fb591d5a 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (3m) 1s ago 3m 299M - 
18.2.0 dc2bc1663786 b5ad1c71089a 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (21s) 17s ago 7m 48.5M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (41s) 1s ago 5m 176M - 19.2.3-678-ge911bdeb 654f31e6858e 9e4386df1493 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (7m) 17s ago 7m 50.4M 2048M 18.2.0 dc2bc1663786 c83e96b62251 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (5m) 1s ago 5m 42.1M 2048M 18.2.0 dc2bc1663786 7963792b5376 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (6m) 17s ago 6m 14.6M - 1.5.0 0da6a335fe13 925d94d1da6f 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (5m) 1s ago 5m 14.4M - 1.5.0 0da6a335fe13 e0b25e3a046e 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (5m) 17s ago 5m 300M 4096M 18.2.0 dc2bc1663786 50f3ca995318 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (4m) 17s ago 4m 303M 4096M 18.2.0 dc2bc1663786 23e35bdafe50 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (4m) 17s ago 4m 259M 4096M 18.2.0 dc2bc1663786 75097dc12979 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (4m) 1s ago 4m 351M 4096M 18.2.0 dc2bc1663786 e79644a0564f 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (4m) 1s ago 4m 322M 4096M 18.2.0 dc2bc1663786 4239752204df 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (4m) 1s ago 4m 289M 4096M 18.2.0 dc2bc1663786 85fde149396e 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (23s) 17s ago 
5m 38.8M - 2.43.0 a07b618ecd1d 9009e5813cd5 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.585+0000 7f1b4b7fe700 1 -- 192.168.123.105:0/2064375553 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3552 (secure 0 0 0) 0x7f1b38000bf0 con 0x7f1b40046490 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.588+0000 7f1b497fa700 1 -- 192.168.123.105:0/2064375553 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f1b40046490 msgr2=0x7f1b40048940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.588+0000 7f1b497fa700 1 --2- 192.168.123.105:0/2064375553 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f1b40046490 0x7f1b40048940 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f1b44009fd0 tx=0x7f1b440058e0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.588+0000 7f1b497fa700 1 -- 192.168.123.105:0/2064375553 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1b54100530 msgr2=0x7f1b54195e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:13.589 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.588+0000 7f1b497fa700 1 --2- 192.168.123.105:0/2064375553 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1b54100530 0x7f1b54195e80 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f1b5000b700 tx=0x7f1b5000bac0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.588+0000 7f1b497fa700 1 -- 192.168.123.105:0/2064375553 shutdown_connections 2026-03-09T15:02:13.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.588+0000 7f1b497fa700 1 --2- 192.168.123.105:0/2064375553 >> 
[v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f1b40046490 0x7f1b40048940 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.588+0000 7f1b497fa700 1 --2- 192.168.123.105:0/2064375553 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1b54100530 0x7f1b54195e80 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.588+0000 7f1b497fa700 1 --2- 192.168.123.105:0/2064375553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1b54101730 0x7f1b541963c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.588+0000 7f1b497fa700 1 -- 192.168.123.105:0/2064375553 >> 192.168.123.105:0/2064375553 conn(0x7f1b540fbaa0 msgr2=0x7f1b54104960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:13.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.590+0000 7f1b497fa700 1 -- 192.168.123.105:0/2064375553 shutdown_connections 2026-03-09T15:02:13.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.590+0000 7f1b497fa700 1 -- 192.168.123.105:0/2064375553 wait complete. 
2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 -- 192.168.123.105:0/1537792649 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5824072330 msgr2=0x7f58240770b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 --2- 192.168.123.105:0/1537792649 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5824072330 0x7f58240770b0 secure :-1 s=READY pgs=334 cs=0 l=1 rev1=1 crypto rx=0x7f581c00d3f0 tx=0x7f581c00d700 comp rx=0 tx=0).stop 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 -- 192.168.123.105:0/1537792649 shutdown_connections 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 --2- 192.168.123.105:0/1537792649 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5824072330 0x7f58240770b0 unknown :-1 s=CLOSED pgs=334 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 --2- 192.168.123.105:0/1537792649 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5824071950 0x7f5824071d60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 -- 192.168.123.105:0/1537792649 >> 192.168.123.105:0/1537792649 conn(0x7f582406d1a0 msgr2=0x7f582406f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 -- 192.168.123.105:0/1537792649 shutdown_connections 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 -- 192.168.123.105:0/1537792649 
wait complete. 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 Processor -- start 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 -- start start 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5824071950 0x7f58241312d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5824072330 0x7f5824131810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5824131e30 con 0x7f5824072330 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.693+0000 7f582b07b700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f582407f440 con 0x7f5824071950 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.694+0000 7f582a079700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5824071950 0x7f58241312d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.694+0000 7f582a079700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5824071950 0x7f58241312d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 
says I am v2:192.168.123.105:54192/0 (socket says 192.168.123.105:54192) 2026-03-09T15:02:13.694 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.694+0000 7f582a079700 1 -- 192.168.123.105:0/3233225606 learned_addr learned my addr 192.168.123.105:0/3233225606 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:02:13.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.694+0000 7f582a079700 1 -- 192.168.123.105:0/3233225606 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5824072330 msgr2=0x7f5824131810 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:13.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.694+0000 7f582a079700 1 --2- 192.168.123.105:0/3233225606 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5824072330 0x7f5824131810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.694+0000 7f582a079700 1 -- 192.168.123.105:0/3233225606 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f581c007ed0 con 0x7f5824071950 2026-03-09T15:02:13.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.694+0000 7f582a079700 1 --2- 192.168.123.105:0/3233225606 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5824071950 0x7f58241312d0 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7f582000d8d0 tx=0x7f582000dc90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:13.697 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.697+0000 7f581b7fe700 1 -- 192.168.123.105:0/3233225606 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5820009940 con 0x7f5824071950 2026-03-09T15:02:13.697 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.697+0000 
7f581b7fe700 1 -- 192.168.123.105:0/3233225606 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f5820010460 con 0x7f5824071950 2026-03-09T15:02:13.698 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.697+0000 7f581b7fe700 1 -- 192.168.123.105:0/3233225606 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f582000f5d0 con 0x7f5824071950 2026-03-09T15:02:13.698 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.697+0000 7f582b07b700 1 -- 192.168.123.105:0/3233225606 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f582407f720 con 0x7f5824071950 2026-03-09T15:02:13.698 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.697+0000 7f582b07b700 1 -- 192.168.123.105:0/3233225606 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f582407fc70 con 0x7f5824071950 2026-03-09T15:02:13.698 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.697+0000 7f58197fa700 1 -- 192.168.123.105:0/3233225606 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f582404ea50 con 0x7f5824071950 2026-03-09T15:02:13.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.699+0000 7f581b7fe700 1 -- 192.168.123.105:0/3233225606 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 29) v1 ==== 50383+0+0 (secure 0 0 0) 0x7f58200105d0 con 0x7f5824071950 2026-03-09T15:02:13.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.699+0000 7f581b7fe700 1 --2- 192.168.123.105:0/3233225606 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f581003df20 0x7f58100403d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:13.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.699+0000 7f581b7fe700 1 -- 
192.168.123.105:0/3233225606 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f58200537e0 con 0x7f5824071950 2026-03-09T15:02:13.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.699+0000 7f5829878700 1 --2- 192.168.123.105:0/3233225606 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f581003df20 0x7f58100403d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:13.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.700+0000 7f5829878700 1 --2- 192.168.123.105:0/3233225606 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f581003df20 0x7f58100403d0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f581c00db80 tx=0x7f581c007480 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:13.703 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.701+0000 7f581b7fe700 1 -- 192.168.123.105:0/3233225606 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f582000e7d0 con 0x7f5824071950 2026-03-09T15:02:13.831 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:13 vm09.local ceph-mon[59673]: [09/Mar/2026:15:02:12] ENGINE Bus STARTING 2026-03-09T15:02:13.831 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:13 vm09.local ceph-mon[59673]: [09/Mar/2026:15:02:12] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T15:02:13.831 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:13 vm09.local ceph-mon[59673]: [09/Mar/2026:15:02:12] ENGINE Client ('192.168.123.105', 40822) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:02:13.831 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:13 vm09.local ceph-mon[59673]: [09/Mar/2026:15:02:12] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T15:02:13.831 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:13 vm09.local ceph-mon[59673]: [09/Mar/2026:15:02:12] ENGINE Bus STARTED 2026-03-09T15:02:13.831 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:13 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:13.831 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:13 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:13.831 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:13 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:02:13.831 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:13 vm09.local ceph-mon[59673]: mgrmap e29: vm05.lhsexd(active, since 4s) 2026-03-09T15:02:13.982 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:13 vm05.local ceph-mon[50611]: [09/Mar/2026:15:02:12] ENGINE Bus STARTING 2026-03-09T15:02:13.982 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:13 vm05.local ceph-mon[50611]: [09/Mar/2026:15:02:12] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T15:02:13.982 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:13 vm05.local ceph-mon[50611]: [09/Mar/2026:15:02:12] ENGINE Client ('192.168.123.105', 40822) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:02:13.982 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:13 vm05.local ceph-mon[50611]: [09/Mar/2026:15:02:12] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T15:02:13.982 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:13 vm05.local 
ceph-mon[50611]: [09/Mar/2026:15:02:12] ENGINE Bus STARTED 2026-03-09T15:02:13.982 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:13 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:13.982 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:13 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:13.982 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:13 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:02:13.982 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:13 vm05.local ceph-mon[50611]: mgrmap e29: vm05.lhsexd(active, since 4s) 2026-03-09T15:02:13.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.979+0000 7f58197fa700 1 -- 192.168.123.105:0/3233225606 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f5824061960 con 0x7f5824071950 2026-03-09T15:02:13.985 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.985+0000 7f581b7fe700 1 -- 192.168.123.105:0/3233225606 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+679 (secure 0 0 0) 0x7f5820016070 con 0x7f5824071950 2026-03-09T15:02:13.985 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:02:13.985 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:02:13.985 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2 2026-03-09T15:02:13.985 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:02:13.985 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:02:13.985 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb 
(e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T15:02:13.986 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:02:13.986 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:02:13.986 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 6 2026-03-09T15:02:13.986 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:02:13.986 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:02:13.986 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4 2026-03-09T15:02:13.986 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:02:13.986 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:02:13.986 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 12, 2026-03-09T15:02:13.986 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T15:02:13.986 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:02:13.986 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:02:13.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.989+0000 7f582b07b700 1 -- 192.168.123.105:0/3233225606 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f581003df20 msgr2=0x7f58100403d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:13.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.989+0000 7f582b07b700 1 --2- 192.168.123.105:0/3233225606 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f581003df20 0x7f58100403d0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f581c00db80 tx=0x7f581c007480 comp rx=0 tx=0).stop 2026-03-09T15:02:13.989 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.989+0000 7f582b07b700 1 -- 192.168.123.105:0/3233225606 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5824071950 msgr2=0x7f58241312d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:13.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.989+0000 7f582b07b700 1 --2- 192.168.123.105:0/3233225606 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5824071950 0x7f58241312d0 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7f582000d8d0 tx=0x7f582000dc90 comp rx=0 tx=0).stop 2026-03-09T15:02:13.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.990+0000 7f582b07b700 1 -- 192.168.123.105:0/3233225606 shutdown_connections 2026-03-09T15:02:13.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.990+0000 7f582b07b700 1 --2- 192.168.123.105:0/3233225606 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f581003df20 0x7f58100403d0 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.990+0000 7f582b07b700 1 --2- 192.168.123.105:0/3233225606 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5824071950 0x7f58241312d0 unknown :-1 s=CLOSED pgs=104 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.990+0000 7f582b07b700 1 --2- 192.168.123.105:0/3233225606 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5824072330 0x7f5824131810 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:13.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.990+0000 7f582b07b700 1 -- 192.168.123.105:0/3233225606 >> 192.168.123.105:0/3233225606 conn(0x7f582406d1a0 msgr2=0x7f5824076660 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T15:02:13.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.993+0000 7f582b07b700 1 -- 192.168.123.105:0/3233225606 shutdown_connections 2026-03-09T15:02:13.992 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:13.993+0000 7f582b07b700 1 -- 192.168.123.105:0/3233225606 wait complete. 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.107+0000 7f9920fd1700 1 -- 192.168.123.105:0/1563890648 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f991c072360 msgr2=0x7f991c0770e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.107+0000 7f9920fd1700 1 --2- 192.168.123.105:0/1563890648 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f991c072360 0x7f991c0770e0 secure :-1 s=READY pgs=335 cs=0 l=1 rev1=1 crypto rx=0x7f991400d3f0 tx=0x7f991400d700 comp rx=0 tx=0).stop 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.107+0000 7f9920fd1700 1 -- 192.168.123.105:0/1563890648 shutdown_connections 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.107+0000 7f9920fd1700 1 --2- 192.168.123.105:0/1563890648 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f991c072360 0x7f991c0770e0 unknown :-1 s=CLOSED pgs=335 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.107+0000 7f9920fd1700 1 --2- 192.168.123.105:0/1563890648 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f991c071980 0x7f991c071d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.107+0000 7f9920fd1700 1 -- 192.168.123.105:0/1563890648 >> 192.168.123.105:0/1563890648 conn(0x7f991c06d1a0 msgr2=0x7f991c06f5f0 unknown 
:-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.107+0000 7f9920fd1700 1 -- 192.168.123.105:0/1563890648 shutdown_connections 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.107+0000 7f9920fd1700 1 -- 192.168.123.105:0/1563890648 wait complete. 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f9920fd1700 1 Processor -- start 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f9920fd1700 1 -- start start 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f9920fd1700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f991c071980 0x7f991c082530 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f9920fd1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f991c082a70 0x7f991c082ee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f9920fd1700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f991c1b2a90 con 0x7f991c082a70 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f9920fd1700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f991c1b2bd0 con 0x7f991c071980 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f9919d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f991c082a70 0x7f991c082ee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f9919d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f991c082a70 0x7f991c082ee0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:45820/0 (socket says 192.168.123.105:45820) 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f9919d9b700 1 -- 192.168.123.105:0/3635739930 learned_addr learned my addr 192.168.123.105:0/3635739930 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f991a59c700 1 --2- 192.168.123.105:0/3635739930 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f991c071980 0x7f991c082530 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:14.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f9919d9b700 1 -- 192.168.123.105:0/3635739930 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f991c071980 msgr2=0x7f991c082530 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:14.109 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f9919d9b700 1 --2- 192.168.123.105:0/3635739930 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f991c071980 0x7f991c082530 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:14.109 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.108+0000 7f9919d9b700 1 -- 192.168.123.105:0/3635739930 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9914007ed0 con 0x7f991c082a70 2026-03-09T15:02:14.109 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.109+0000 7f9919d9b700 1 --2- 192.168.123.105:0/3635739930 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f991c082a70 0x7f991c082ee0 secure :-1 s=READY pgs=336 cs=0 l=1 rev1=1 crypto rx=0x7f9914000f80 tx=0x7f9914004b40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:14.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.110+0000 7f990b7fe700 1 -- 192.168.123.105:0/3635739930 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f991401c070 con 0x7f991c082a70 2026-03-09T15:02:14.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.110+0000 7f9920fd1700 1 -- 192.168.123.105:0/3635739930 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f991c1b2d10 con 0x7f991c082a70 2026-03-09T15:02:14.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.110+0000 7f9920fd1700 1 -- 192.168.123.105:0/3635739930 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f991c1b3170 con 0x7f991c082a70 2026-03-09T15:02:14.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.110+0000 7f990b7fe700 1 -- 192.168.123.105:0/3635739930 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f991400deb0 con 0x7f991c082a70 2026-03-09T15:02:14.110 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.110+0000 7f990b7fe700 1 -- 192.168.123.105:0/3635739930 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9914017b80 con 0x7f991c082a70 2026-03-09T15:02:14.111 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.111+0000 7f990b7fe700 1 -- 192.168.123.105:0/3635739930 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 29) v1 ==== 50383+0+0 (secure 0 0 0) 0x7f9914017ce0 con 0x7f991c082a70 
2026-03-09T15:02:14.111 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.111+0000 7f990b7fe700 1 --2- 192.168.123.105:0/3635739930 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f990403df70 0x7f9904040420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:14.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.111+0000 7f991a59c700 1 --2- 192.168.123.105:0/3635739930 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f990403df70 0x7f9904040420 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:14.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.111+0000 7f990b7fe700 1 -- 192.168.123.105:0/3635739930 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f9914013070 con 0x7f991c082a70 2026-03-09T15:02:14.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.112+0000 7f9920fd1700 1 -- 192.168.123.105:0/3635739930 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f98fc005320 con 0x7f991c082a70 2026-03-09T15:02:14.116 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.115+0000 7f991a59c700 1 --2- 192.168.123.105:0/3635739930 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f990403df70 0x7f9904040420 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f990c005950 tx=0x7f990c00f820 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:14.119 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.117+0000 7f990b7fe700 1 -- 192.168.123.105:0/3635739930 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 
(secure 0 0 0) 0x7f9914025030 con 0x7f991c082a70 2026-03-09T15:02:14.304 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.302+0000 7f9920fd1700 1 -- 192.168.123.105:0/3635739930 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f98fc000bf0 con 0x7f990403df70 2026-03-09T15:02:14.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.307+0000 7f990b7fe700 1 -- 192.168.123.105:0/3635739930 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+329 (secure 0 0 0) 0x7f98fc000bf0 con 0x7f990403df70 2026-03-09T15:02:14.308 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:02:14.308 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T15:02:14.308 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-09T15:02:14.308 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) mgr", 2026-03-09T15:02:14.308 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [ 2026-03-09T15:02:14.308 INFO:teuthology.orchestra.run.vm05.stdout: "mgr" 2026-03-09T15:02:14.308 INFO:teuthology.orchestra.run.vm05.stdout: ], 2026-03-09T15:02:14.308 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "2/2 daemons upgraded", 2026-03-09T15:02:14.308 INFO:teuthology.orchestra.run.vm05.stdout: "message": "", 2026-03-09T15:02:14.308 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:02:14.308 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:02:14.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.311+0000 7f99097fa700 1 -- 192.168.123.105:0/3635739930 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f990403df70 msgr2=0x7f9904040420 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:14.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.311+0000 7f99097fa700 1 --2- 192.168.123.105:0/3635739930 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f990403df70 0x7f9904040420 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f990c005950 tx=0x7f990c00f820 comp rx=0 tx=0).stop 2026-03-09T15:02:14.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.311+0000 7f99097fa700 1 -- 192.168.123.105:0/3635739930 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f991c082a70 msgr2=0x7f991c082ee0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:14.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.311+0000 7f99097fa700 1 --2- 192.168.123.105:0/3635739930 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f991c082a70 0x7f991c082ee0 secure :-1 s=READY pgs=336 cs=0 l=1 rev1=1 crypto rx=0x7f9914000f80 tx=0x7f9914004b40 comp rx=0 tx=0).stop 2026-03-09T15:02:14.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.311+0000 7f99097fa700 1 -- 192.168.123.105:0/3635739930 shutdown_connections 2026-03-09T15:02:14.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.311+0000 7f99097fa700 1 --2- 192.168.123.105:0/3635739930 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f990403df70 0x7f9904040420 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:14.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.311+0000 7f99097fa700 1 --2- 192.168.123.105:0/3635739930 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f991c071980 0x7f991c082530 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:14.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.311+0000 7f99097fa700 1 --2- 
192.168.123.105:0/3635739930 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f991c082a70 0x7f991c082ee0 unknown :-1 s=CLOSED pgs=336 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:14.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.311+0000 7f99097fa700 1 -- 192.168.123.105:0/3635739930 >> 192.168.123.105:0/3635739930 conn(0x7f991c06d1a0 msgr2=0x7f991c0764c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:14.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.312+0000 7f99097fa700 1 -- 192.168.123.105:0/3635739930 shutdown_connections 2026-03-09T15:02:14.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:14.312+0000 7f99097fa700 1 -- 192.168.123.105:0/3635739930 wait complete. 2026-03-09T15:02:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:14 vm05.local ceph-mon[50611]: from='client.14674 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:14 vm05.local ceph-mon[50611]: pgmap v5: 65 pgs: 65 active+clean; 1.6 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail 2026-03-09T15:02:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:14 vm05.local ceph-mon[50611]: from='client.14678 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:14 vm05.local ceph-mon[50611]: from='client.24485 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:14 vm05.local ceph-mon[50611]: from='client.? 
192.168.123.105:0/3233225606' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:14 vm05.local ceph-mon[50611]: Standby manager daemon vm09.cfuwdz started 2026-03-09T15:02:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:14 vm05.local ceph-mon[50611]: from='mgr.? 192.168.123.109:0/1358535171' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/crt"}]: dispatch 2026-03-09T15:02:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:14 vm05.local ceph-mon[50611]: from='mgr.? 192.168.123.109:0/1358535171' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:02:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:14 vm05.local ceph-mon[50611]: from='mgr.? 192.168.123.109:0/1358535171' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/key"}]: dispatch 2026-03-09T15:02:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:14 vm05.local ceph-mon[50611]: from='mgr.? 
192.168.123.109:0/1358535171' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:02:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:14 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:14 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:14 vm09.local ceph-mon[59673]: from='client.14674 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:14 vm09.local ceph-mon[59673]: pgmap v5: 65 pgs: 65 active+clean; 1.6 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail 2026-03-09T15:02:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:14 vm09.local ceph-mon[59673]: from='client.14678 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:14 vm09.local ceph-mon[59673]: from='client.24485 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:14 vm09.local ceph-mon[59673]: from='client.? 192.168.123.105:0/3233225606' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:14 vm09.local ceph-mon[59673]: Standby manager daemon vm09.cfuwdz started 2026-03-09T15:02:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:14 vm09.local ceph-mon[59673]: from='mgr.? 
192.168.123.109:0/1358535171' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/crt"}]: dispatch 2026-03-09T15:02:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:14 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.109:0/1358535171' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:02:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:14 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.109:0/1358535171' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/key"}]: dispatch 2026-03-09T15:02:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:14 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.109:0/1358535171' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:02:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:14 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:14 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:16.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:15 vm09.local ceph-mon[59673]: from='client.14688 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:16.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:15 vm09.local ceph-mon[59673]: pgmap v6: 65 pgs: 65 active+clean; 1.6 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail 2026-03-09T15:02:16.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:15 vm09.local ceph-mon[59673]: mgrmap e30: vm05.lhsexd(active, since 6s), standbys: vm09.cfuwdz 2026-03-09T15:02:16.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:15 vm09.local ceph-mon[59673]: from='mgr.14652 
192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm09.cfuwdz", "id": "vm09.cfuwdz"}]: dispatch 2026-03-09T15:02:16.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:15 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:16.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:15 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:16.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:15 vm05.local ceph-mon[50611]: from='client.14688 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:16.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:15 vm05.local ceph-mon[50611]: pgmap v6: 65 pgs: 65 active+clean; 1.6 GiB data, 6.7 GiB used, 113 GiB / 120 GiB avail 2026-03-09T15:02:16.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:15 vm05.local ceph-mon[50611]: mgrmap e30: vm05.lhsexd(active, since 6s), standbys: vm09.cfuwdz 2026-03-09T15:02:16.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:15 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm09.cfuwdz", "id": "vm09.cfuwdz"}]: dispatch 2026-03-09T15:02:16.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:15 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:16.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:15 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 
vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: 
from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:17 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: Updating 
vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:17.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:17 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:18.822 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:18 vm05.local ceph-mon[50611]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:02:18.822 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:18 vm05.local ceph-mon[50611]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:02:18.822 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:18 vm05.local ceph-mon[50611]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T15:02:18.822 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:18 vm05.local ceph-mon[50611]: pgmap v7: 65 pgs: 65 active+clean; 1.4 GiB data, 5.8 GiB used, 114 GiB / 120 GiB avail; 601 KiB/s rd, 632 KiB/s wr, 73 op/s 2026-03-09T15:02:18.822 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:02:18 vm05.local ceph-mon[50611]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T15:02:18.822 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:18 vm05.local ceph-mon[50611]: Reconfiguring prometheus.vm05 (dependencies changed)... 2026-03-09T15:02:18.822 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:18 vm05.local ceph-mon[50611]: Reconfiguring daemon prometheus.vm05 on vm05 2026-03-09T15:02:18.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:18 vm09.local ceph-mon[59673]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:02:18.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:18 vm09.local ceph-mon[59673]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:02:18.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:18 vm09.local ceph-mon[59673]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T15:02:18.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:18 vm09.local ceph-mon[59673]: pgmap v7: 65 pgs: 65 active+clean; 1.4 GiB data, 5.8 GiB used, 114 GiB / 120 GiB avail; 601 KiB/s rd, 632 KiB/s wr, 73 op/s 2026-03-09T15:02:18.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:18 vm09.local ceph-mon[59673]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T15:02:18.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:18 vm09.local ceph-mon[59673]: Reconfiguring prometheus.vm05 (dependencies changed)... 
2026-03-09T15:02:18.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:18 vm09.local ceph-mon[59673]: Reconfiguring daemon prometheus.vm05 on vm05 2026-03-09T15:02:20.141 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:19 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:20.141 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:19 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:20.141 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:19 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:02:20.141 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:19 vm09.local ceph-mon[59673]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:02:20.141 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:19 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:20.141 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:19 vm09.local ceph-mon[59673]: pgmap v8: 65 pgs: 65 active+clean; 1.4 GiB data, 5.8 GiB used, 114 GiB / 120 GiB avail; 458 KiB/s rd, 482 KiB/s wr, 56 op/s 2026-03-09T15:02:20.141 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:19 vm09.local ceph-mon[59673]: Upgrade: Updating mgr.vm09.cfuwdz 2026-03-09T15:02:20.141 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:19 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.cfuwdz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:02:20.141 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:19 vm09.local 
ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:02:20.142 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:19 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:02:20.142 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:19 vm09.local ceph-mon[59673]: Deploying daemon mgr.vm09.cfuwdz on vm09 2026-03-09T15:02:20.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:19 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:20.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:19 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:20.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:19 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:02:20.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:19 vm05.local ceph-mon[50611]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:02:20.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:19 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:20.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:19 vm05.local ceph-mon[50611]: pgmap v8: 65 pgs: 65 active+clean; 1.4 GiB data, 5.8 GiB used, 114 GiB / 120 GiB avail; 458 KiB/s rd, 482 KiB/s wr, 56 op/s 2026-03-09T15:02:20.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:19 vm05.local ceph-mon[50611]: Upgrade: Updating mgr.vm09.cfuwdz 2026-03-09T15:02:20.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:19 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.cfuwdz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:02:20.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:19 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:02:20.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:19 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:02:20.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:19 vm05.local ceph-mon[50611]: Deploying daemon mgr.vm09.cfuwdz on vm09 2026-03-09T15:02:22.032 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:21 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:22.032 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:21 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 
2026-03-09T15:02:22.032 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:21 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:22.032 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:21 vm05.local ceph-mon[50611]: pgmap v9: 65 pgs: 65 active+clean; 1.1 GiB data, 5.1 GiB used, 115 GiB / 120 GiB avail; 982 KiB/s rd, 984 KiB/s wr, 94 op/s 2026-03-09T15:02:22.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:21 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:22.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:21 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:22.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:21 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:22.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:21 vm09.local ceph-mon[59673]: pgmap v9: 65 pgs: 65 active+clean; 1.1 GiB data, 5.1 GiB used, 115 GiB / 120 GiB avail; 982 KiB/s rd, 984 KiB/s wr, 94 op/s 2026-03-09T15:02:23.439 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:23 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:23.439 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:23 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:23.439 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:23 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:23.439 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:23 vm09.local ceph-mon[59673]: from='mgr.14652 
192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:23.439 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:23 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:23.439 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:23 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:23.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:23 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:23.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:23 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:23.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:23 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:23.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:23 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:23.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:23 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:23.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:23 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:24.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:24 vm05.local ceph-mon[50611]: pgmap v10: 65 pgs: 65 active+clean; 1.1 GiB data, 5.1 GiB used, 115 GiB / 120 GiB avail; 850 KiB/s rd, 852 KiB/s wr, 81 op/s 2026-03-09T15:02:24.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:24 vm09.local ceph-mon[59673]: pgmap v10: 65 pgs: 65 active+clean; 1.1 GiB data, 5.1 GiB used, 115 GiB / 120 GiB avail; 850 KiB/s rd, 852 KiB/s wr, 81 op/s 2026-03-09T15:02:25.962 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: Upgrade: Setting container_image for all mgr 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config 
rm", "name": "container_image", "who": "mgr.vm05.lhsexd"}]: dispatch 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm05.lhsexd"}]': finished 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm09.cfuwdz"}]: dispatch 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm09.cfuwdz"}]': finished 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: Upgrade: Setting container_image for all rgw 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: Upgrade: Setting container_image for all rbd-mirror 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' 
entity='mgr.vm05.lhsexd' 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: Upgrade: Setting container_image for all cephfs-mirror 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: Upgrade: Setting container_image for all iscsi 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: Upgrade: Setting container_image for all nfs 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:25.962 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:25.963 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 
vm09.local ceph-mon[59673]: Upgrade: Setting container_image for all nvmeof 2026-03-09T15:02:25.963 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:25.963 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:02:26.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:26.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:26.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:02:26.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:02:26.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:26.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: Upgrade: Setting container_image for all mgr 2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm05.lhsexd"}]: dispatch 2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm05.lhsexd"}]': finished 2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm09.cfuwdz"}]: dispatch 2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.vm09.cfuwdz"}]': finished 2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: Upgrade: Setting container_image for all rgw 2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: Upgrade: Setting container_image for all rbd-mirror
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: Upgrade: Setting container_image for all cephfs-mirror
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: Upgrade: Setting container_image for all iscsi
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: Upgrade: Setting container_image for all nfs
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: Upgrade: Setting container_image for all nvmeof
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:26.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:02:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:26 vm05.local ceph-mon[50611]: Upgrade: Updating node-exporter.vm05 (1/2)
2026-03-09T15:02:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:26 vm05.local ceph-mon[50611]: Deploying daemon node-exporter.vm05 on vm05
2026-03-09T15:02:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:26 vm05.local ceph-mon[50611]: pgmap v11: 65 pgs: 65 active+clean; 1.1 GiB data, 5.1 GiB used, 115 GiB / 120 GiB avail; 850 KiB/s rd, 852 KiB/s wr, 81 op/s
2026-03-09T15:02:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:26 vm05.local ceph-mon[50611]: Standby manager daemon vm09.cfuwdz restarted
2026-03-09T15:02:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:26 vm05.local ceph-mon[50611]: Standby manager daemon vm09.cfuwdz started
2026-03-09T15:02:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:26 vm05.local ceph-mon[50611]: from='mgr.? 192.168.123.109:0/3895071363' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/crt"}]: dispatch
2026-03-09T15:02:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:26 vm05.local ceph-mon[50611]: from='mgr.? 192.168.123.109:0/3895071363' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-09T15:02:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:26 vm05.local ceph-mon[50611]: from='mgr.? 192.168.123.109:0/3895071363' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/key"}]: dispatch
2026-03-09T15:02:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:26 vm05.local ceph-mon[50611]: from='mgr.? 192.168.123.109:0/3895071363' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-09T15:02:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:26 vm09.local ceph-mon[59673]: Upgrade: Updating node-exporter.vm05 (1/2)
2026-03-09T15:02:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:26 vm09.local ceph-mon[59673]: Deploying daemon node-exporter.vm05 on vm05
2026-03-09T15:02:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:26 vm09.local ceph-mon[59673]: pgmap v11: 65 pgs: 65 active+clean; 1.1 GiB data, 5.1 GiB used, 115 GiB / 120 GiB avail; 850 KiB/s rd, 852 KiB/s wr, 81 op/s
2026-03-09T15:02:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:26 vm09.local ceph-mon[59673]: Standby manager daemon vm09.cfuwdz restarted
2026-03-09T15:02:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:26 vm09.local ceph-mon[59673]: Standby manager daemon vm09.cfuwdz started
2026-03-09T15:02:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:26 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.109:0/3895071363' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/crt"}]: dispatch
2026-03-09T15:02:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:26 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.109:0/3895071363' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-09T15:02:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:26 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.109:0/3895071363' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/key"}]: dispatch
2026-03-09T15:02:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:26 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.109:0/3895071363' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-09T15:02:28.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:28 vm05.local ceph-mon[50611]: pgmap v12: 65 pgs: 65 active+clean; 785 MiB data, 4.1 GiB used, 116 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 119 op/s
2026-03-09T15:02:28.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:28 vm05.local ceph-mon[50611]: mgrmap e31: vm05.lhsexd(active, since 18s), standbys: vm09.cfuwdz
2026-03-09T15:02:28.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:28 vm09.local ceph-mon[59673]: pgmap v12: 65 pgs: 65 active+clean; 785 MiB data, 4.1 GiB used, 116 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 119 op/s
2026-03-09T15:02:28.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:28 vm09.local ceph-mon[59673]: mgrmap e31: vm05.lhsexd(active, since 18s), standbys: vm09.cfuwdz
2026-03-09T15:02:29.233 INFO:teuthology.orchestra.run:Running command with timeout 3600
2026-03-09T15:02:29.234 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-09T15:02:30.461 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:30 vm09.local ceph-mon[59673]: pgmap v13: 65 pgs: 65 active+clean; 785 MiB data, 4.1 GiB used, 116 GiB / 120 GiB avail; 914 KiB/s rd, 926 KiB/s wr, 80 op/s
2026-03-09T15:02:30.461 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:30 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:30.461 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:30 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:30 vm05.local ceph-mon[50611]: pgmap v13: 65 pgs: 65 active+clean; 785 MiB data, 4.1 GiB used, 116 GiB / 120 GiB avail; 914 KiB/s rd, 926 KiB/s wr, 80 op/s
2026-03-09T15:02:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:30 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:30 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:31.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:31 vm05.local ceph-mon[50611]: Upgrade: Updating node-exporter.vm09 (2/2)
2026-03-09T15:02:31.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:31 vm05.local ceph-mon[50611]: Deploying daemon node-exporter.vm09 on vm09
2026-03-09T15:02:31.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:31 vm09.local ceph-mon[59673]: Upgrade: Updating node-exporter.vm09 (2/2)
2026-03-09T15:02:31.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:31 vm09.local ceph-mon[59673]: Deploying daemon node-exporter.vm09 on vm09
2026-03-09T15:02:32.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:32 vm09.local ceph-mon[59673]: pgmap v14: 65 pgs: 65 active+clean; 424 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 130 op/s
2026-03-09T15:02:32.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:32 vm05.local ceph-mon[50611]: pgmap v14: 65 pgs: 65 active+clean; 424 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 130 op/s
2026-03-09T15:02:34.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:34 vm05.local ceph-mon[50611]: pgmap v15: 65 pgs: 65 active+clean; 424 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 993 KiB/s rd, 1.0 MiB/s wr, 88 op/s
2026-03-09T15:02:34.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:34 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:34.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:34 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:34.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:34 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:02:34.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:34 vm09.local ceph-mon[59673]: pgmap v15: 65 pgs: 65 active+clean; 424 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 993 KiB/s rd, 1.0 MiB/s wr, 88 op/s
2026-03-09T15:02:34.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:34 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:34.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:34 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:34.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:34 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:02:36.299 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:36 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:36.299 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:36 vm09.local ceph-mon[59673]: pgmap v16: 65 pgs: 65 active+clean; 424 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 993 KiB/s rd, 1.0 MiB/s wr, 88 op/s
2026-03-09T15:02:36.299 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:36 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:36.299 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:36 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:36.299 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:36 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:36 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:36 vm05.local ceph-mon[50611]: pgmap v16: 65 pgs: 65 active+clean; 424 MiB data, 3.2 GiB used, 117 GiB / 120 GiB avail; 993 KiB/s rd, 1.0 MiB/s wr, 88 op/s
2026-03-09T15:02:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:36 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:36 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:36.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:36 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:37.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:37 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:37.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:37 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:37.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:37 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:37.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:37 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:37.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:37 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:37.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:37 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:37.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:37 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:37.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:37 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:37.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:37 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:37.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:37 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:37.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:37 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:37.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:37 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:38.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:38 vm09.local ceph-mon[59673]: pgmap v17: 65 pgs: 65 active+clean; 296 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.6 MiB/s wr, 135 op/s
2026-03-09T15:02:38.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:38 vm05.local ceph-mon[50611]: pgmap v17: 65 pgs: 65 active+clean; 296 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.6 MiB/s wr, 135 op/s
2026-03-09T15:02:39.354 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:39.354 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:39.355 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:02:39.355 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:02:39.355 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:39.355 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:02:39.355 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.355 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.355 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.355 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.355 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.355 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.355 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.355 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:39 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:39.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:39.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:02:39.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:02:39.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd'
2026-03-09T15:02:39.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:02:39.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:39.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:39 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:02:40.405 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:40 vm05.local ceph-mon[50611]: Upgrade: Updating prometheus.vm05
2026-03-09T15:02:40.405 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:40 vm05.local ceph-mon[50611]: pgmap v18: 65 pgs: 65 active+clean; 296 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 97 op/s
2026-03-09T15:02:40.405 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:40 vm05.local ceph-mon[50611]: Deploying daemon prometheus.vm05 on vm05
2026-03-09T15:02:40.405 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:40 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:02:40.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:40 vm09.local ceph-mon[59673]: Upgrade: Updating prometheus.vm05
2026-03-09T15:02:40.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:40 vm09.local ceph-mon[59673]: pgmap v18: 65 pgs: 65 active+clean; 296 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 97 op/s
2026-03-09T15:02:40.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:40 vm09.local ceph-mon[59673]: Deploying daemon prometheus.vm05 on vm05
2026-03-09T15:02:40.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:40 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:02:42.321 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:42 vm05.local ceph-mon[50611]: pgmap v19: 65 pgs: 65 active+clean; 295 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 135 op/s
2026-03-09T15:02:42.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:42 vm09.local ceph-mon[59673]: pgmap v19: 65 pgs: 65 active+clean; 295 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 135 op/s
2026-03-09T15:02:44.494 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:44 vm05.local ceph-mon[50611]: pgmap v20: 65 pgs: 65 active+clean; 295 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 84 op/s
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.489+0000 7f0731bea700 1 -- 192.168.123.105:0/680526905 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f072c071950 msgr2=0x7f072c071d60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.489+0000 7f0731bea700 1 --2- 192.168.123.105:0/680526905 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f072c071950 0x7f072c071d60 secure :-1 s=READY pgs=338 cs=0 l=1 rev1=1 crypto rx=0x7f071c009ab0 tx=0x7f071c009dc0 comp rx=0 tx=0).stop
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.490+0000 7f0731bea700 1 -- 192.168.123.105:0/680526905 shutdown_connections
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.490+0000 7f0731bea700 1 --2- 192.168.123.105:0/680526905 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f072c072330 0x7f072c0770b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.490+0000 7f0731bea700 1 --2- 192.168.123.105:0/680526905 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f072c071950 0x7f072c071d60 unknown :-1 s=CLOSED pgs=338 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.490+0000 7f0731bea700 1 -- 192.168.123.105:0/680526905 >> 192.168.123.105:0/680526905 conn(0x7f072c06d1a0 msgr2=0x7f072c06f5f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.490+0000 7f0731bea700 1 -- 192.168.123.105:0/680526905 shutdown_connections
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.490+0000 7f0731bea700 1 -- 192.168.123.105:0/680526905 wait complete.
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.491+0000 7f0731bea700 1 Processor -- start
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.491+0000 7f0731bea700 1 -- start start
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.491+0000 7f0731bea700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f072c072330 0x7f072c0824d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.491+0000 7f0731bea700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f072c082a10 0x7f072c082e80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.491+0000 7f0731bea700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f072c1b2a90 con 0x7f072c082a10
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.491+0000 7f0731bea700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f072c1b2bd0 con 0x7f072c072330
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.491+0000 7f072bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f072c082a10 0x7f072c082e80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.491+0000 7f072bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f072c082a10 0x7f072c082e80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:54342/0 (socket says 192.168.123.105:54342)
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.491+0000 7f072bfff700 1 -- 192.168.123.105:0/2202623553 learned_addr learned my addr 192.168.123.105:0/2202623553 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.492+0000 7f072bfff700 1 -- 192.168.123.105:0/2202623553 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f072c072330 msgr2=0x7f072c0824d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.492+0000 7f072bfff700 1 --2- 192.168.123.105:0/2202623553 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f072c072330 0x7f072c0824d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.492+0000 7f072bfff700 1 -- 192.168.123.105:0/2202623553 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f071c009710 con 0x7f072c082a10
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.492+0000 7f072bfff700 1 --2- 192.168.123.105:0/2202623553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f072c082a10 0x7f072c082e80 secure :-1 s=READY pgs=339 cs=0 l=1 rev1=1 crypto rx=0x7f072400e500 tx=0x7f072400e8c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:02:44.494 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.494+0000 7f0729ffb700 1 -- 192.168.123.105:0/2202623553 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f07240090d0 con 0x7f072c082a10
2026-03-09T15:02:44.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.494+0000 7f0731bea700 1 -- 192.168.123.105:0/2202623553 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f072c1b2d10 con 0x7f072c082a10
2026-03-09T15:02:44.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.494+0000 7f0731bea700 1 -- 192.168.123.105:0/2202623553 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f072c1b31b0 con 0x7f072c082a10
2026-03-09T15:02:44.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.495+0000 7f0729ffb700 1 -- 192.168.123.105:0/2202623553 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f072400f040 con 0x7f072c082a10
2026-03-09T15:02:44.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.495+0000 7f0729ffb700 1 -- 192.168.123.105:0/2202623553 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0724014790 con 0x7f072c082a10
2026-03-09T15:02:44.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.496+0000 7f0731bea700 1 -- 192.168.123.105:0/2202623553 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0718005320 con 0x7f072c082a10
2026-03-09T15:02:44.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.497+0000 7f0729ffb700 1 -- 192.168.123.105:0/2202623553 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f0724009230 con 0x7f072c082a10
2026-03-09T15:02:44.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.498+0000 7f0729ffb700 1 --2- 192.168.123.105:0/2202623553 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f0714079990 0x7f071407be40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:02:44.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.498+0000 7f0729ffb700 1 -- 192.168.123.105:0/2202623553 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f0724099ee0 con 0x7f072c082a10
2026-03-09T15:02:44.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.498+0000 7f0730be8700 1 --2- 192.168.123.105:0/2202623553 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f0714079990 0x7f071407be40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:02:44.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.500+0000 7f0730be8700 1 --2- 192.168.123.105:0/2202623553 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f0714079990 0x7f071407be40 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f071c000c00 tx=0x7f071c005d20 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:02:44.502 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.502+0000 7f0729ffb700 1 -- 192.168.123.105:0/2202623553 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f0724062a70 con 0x7f072c082a10
2026-03-09T15:02:44.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:44 vm09.local ceph-mon[59673]: pgmap v20: 65 pgs: 65 active+clean; 295 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 84 op/s
2026-03-09T15:02:44.711 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.710+0000 7f0731bea700 1 -- 192.168.123.105:0/2202623553 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f0718000bf0 con 0x7f0714079990
2026-03-09T15:02:44.712 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.712+0000 7f0729ffb700 1 -- 192.168.123.105:0/2202623553 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+367 (secure 0 0 0) 0x7f0718000bf0 con 0x7f0714079990
2026-03-09T15:02:44.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.714+0000 7f07137fe700 1 -- 192.168.123.105:0/2202623553 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f0714079990 msgr2=0x7f071407be40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:02:44.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.714+0000 7f07137fe700 1 --2- 192.168.123.105:0/2202623553 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f0714079990 0x7f071407be40 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f071c000c00 tx=0x7f071c005d20 comp rx=0 tx=0).stop
2026-03-09T15:02:44.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.715+0000 7f07137fe700 1 -- 192.168.123.105:0/2202623553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f072c082a10 msgr2=0x7f072c082e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:02:44.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.715+0000 7f07137fe700 1 --2- 192.168.123.105:0/2202623553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f072c082a10 0x7f072c082e80 secure :-1 s=READY pgs=339 cs=0 l=1 rev1=1 crypto rx=0x7f072400e500 tx=0x7f072400e8c0 comp rx=0 tx=0).stop
2026-03-09T15:02:44.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.715+0000 7f07137fe700 1 -- 192.168.123.105:0/2202623553 shutdown_connections
2026-03-09T15:02:44.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.715+0000 7f07137fe700 1 --2- 192.168.123.105:0/2202623553 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f0714079990 0x7f071407be40 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:02:44.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.715+0000 7f07137fe700 1 --2- 192.168.123.105:0/2202623553 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f072c072330 0x7f072c0824d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:02:44.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.715+0000 7f07137fe700 1 --2- 192.168.123.105:0/2202623553 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f072c082a10 0x7f072c082e80 unknown :-1 s=CLOSED pgs=339 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:02:44.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.715+0000 7f07137fe700 1 -- 192.168.123.105:0/2202623553 >> 192.168.123.105:0/2202623553 conn(0x7f072c06d1a0 msgr2=0x7f072c070600 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:02:44.716 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.715+0000 7f07137fe700 1 -- 192.168.123.105:0/2202623553 shutdown_connections
2026-03-09T15:02:44.716
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.715+0000 7f07137fe700 1 -- 192.168.123.105:0/2202623553 wait complete. 2026-03-09T15:02:44.729 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:02:44.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.839+0000 7f06a24f4700 1 -- 192.168.123.105:0/3130855041 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f069c071980 msgr2=0x7f069c071d90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:44.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.839+0000 7f06a24f4700 1 --2- 192.168.123.105:0/3130855041 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f069c071980 0x7f069c071d90 secure :-1 s=READY pgs=340 cs=0 l=1 rev1=1 crypto rx=0x7f068c008790 tx=0x7f068c00ae50 comp rx=0 tx=0).stop 2026-03-09T15:02:44.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.839+0000 7f06a24f4700 1 -- 192.168.123.105:0/3130855041 shutdown_connections 2026-03-09T15:02:44.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.839+0000 7f06a24f4700 1 --2- 192.168.123.105:0/3130855041 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f069c072360 0x7f069c0770e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:44.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.839+0000 7f06a24f4700 1 --2- 192.168.123.105:0/3130855041 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f069c071980 0x7f069c071d90 unknown :-1 s=CLOSED pgs=340 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:44.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.839+0000 7f06a24f4700 1 -- 192.168.123.105:0/3130855041 >> 192.168.123.105:0/3130855041 conn(0x7f069c06d1a0 msgr2=0x7f069c06f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.839+0000 
7f06a24f4700 1 -- 192.168.123.105:0/3130855041 shutdown_connections 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.840+0000 7f06a24f4700 1 -- 192.168.123.105:0/3130855041 wait complete. 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.840+0000 7f06a24f4700 1 Processor -- start 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.840+0000 7f06a24f4700 1 -- start start 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.841+0000 7f06a24f4700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f069c072360 0x7f069c0824e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.841+0000 7f06a24f4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f069c082a20 0x7f069c082e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.841+0000 7f06a24f4700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f069c12dd80 con 0x7f069c082a20 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.841+0000 7f06a24f4700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f069c12def0 con 0x7f069c072360 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.841+0000 7f069b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f069c082a20 0x7f069c082e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.841+0000 7f069b7fe700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f069c082a20 0x7f069c082e90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:54358/0 (socket says 192.168.123.105:54358) 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.841+0000 7f069b7fe700 1 -- 192.168.123.105:0/3675443419 learned_addr learned my addr 192.168.123.105:0/3675443419 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.841+0000 7f069bfff700 1 --2- 192.168.123.105:0/3675443419 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f069c072360 0x7f069c0824e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.841+0000 7f069b7fe700 1 -- 192.168.123.105:0/3675443419 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f069c072360 msgr2=0x7f069c0824e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.841+0000 7f069b7fe700 1 --2- 192.168.123.105:0/3675443419 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f069c072360 0x7f069c0824e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.841+0000 7f069b7fe700 1 -- 192.168.123.105:0/3675443419 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f068c008440 con 0x7f069c082a20 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.842+0000 7f069b7fe700 1 --2- 192.168.123.105:0/3675443419 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f069c082a20 0x7f069c082e90 secure :-1 s=READY pgs=341 cs=0 l=1 rev1=1 crypto rx=0x7f0694009fc0 tx=0x7f06940076a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:44.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.843+0000 7f06997fa700 1 -- 192.168.123.105:0/3675443419 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0694010040 con 0x7f069c082a20 2026-03-09T15:02:44.844 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.843+0000 7f06a24f4700 1 -- 192.168.123.105:0/3675443419 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f069c12e1d0 con 0x7f069c082a20 2026-03-09T15:02:44.844 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.843+0000 7f06a24f4700 1 -- 192.168.123.105:0/3675443419 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f069c12e720 con 0x7f069c082a20 2026-03-09T15:02:44.844 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.844+0000 7f06997fa700 1 -- 192.168.123.105:0/3675443419 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f0694009240 con 0x7f069c082a20 2026-03-09T15:02:44.844 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.844+0000 7f06997fa700 1 -- 192.168.123.105:0/3675443419 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0694016830 con 0x7f069c082a20 2026-03-09T15:02:44.845 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.845+0000 7f06a24f4700 1 -- 192.168.123.105:0/3675443419 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0688005320 con 0x7f069c082a20 2026-03-09T15:02:44.848 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.847+0000 
7f06997fa700 1 -- 192.168.123.105:0/3675443419 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f0694004ad0 con 0x7f069c082a20 2026-03-09T15:02:44.848 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.847+0000 7f06997fa700 1 --2- 192.168.123.105:0/3675443419 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f06840776c0 0x7f0684079b70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:44.848 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.848+0000 7f06997fa700 1 -- 192.168.123.105:0/3675443419 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f0694099780 con 0x7f069c082a20 2026-03-09T15:02:44.848 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.848+0000 7f069bfff700 1 --2- 192.168.123.105:0/3675443419 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f06840776c0 0x7f0684079b70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:44.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.852+0000 7f069bfff700 1 --2- 192.168.123.105:0/3675443419 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f06840776c0 0x7f0684079b70 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f068c008760 tx=0x7f068c00b320 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:44.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:44.853+0000 7f06997fa700 1 -- 192.168.123.105:0/3675443419 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f06940623c0 con 0x7f069c082a20 2026-03-09T15:02:45.066 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.065+0000 7f06a24f4700 1 -- 192.168.123.105:0/3675443419 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f0688000bf0 con 0x7f06840776c0 2026-03-09T15:02:45.072 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.069+0000 7f06997fa700 1 -- 192.168.123.105:0/3675443419 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+367 (secure 0 0 0) 0x7f0688000bf0 con 0x7f06840776c0 2026-03-09T15:02:45.072 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.072+0000 7f06a24f4700 1 -- 192.168.123.105:0/3675443419 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f06840776c0 msgr2=0x7f0684079b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:45.072 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.072+0000 7f06a24f4700 1 --2- 192.168.123.105:0/3675443419 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f06840776c0 0x7f0684079b70 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f068c008760 tx=0x7f068c00b320 comp rx=0 tx=0).stop 2026-03-09T15:02:45.072 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.072+0000 7f06a24f4700 1 -- 192.168.123.105:0/3675443419 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f069c082a20 msgr2=0x7f069c082e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:45.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.072+0000 7f06a24f4700 1 --2- 192.168.123.105:0/3675443419 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f069c082a20 0x7f069c082e90 secure :-1 s=READY pgs=341 cs=0 l=1 rev1=1 crypto rx=0x7f0694009fc0 tx=0x7f06940076a0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.073 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.073+0000 7f06a24f4700 1 -- 192.168.123.105:0/3675443419 shutdown_connections 2026-03-09T15:02:45.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.073+0000 7f06a24f4700 1 --2- 192.168.123.105:0/3675443419 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f06840776c0 0x7f0684079b70 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.073+0000 7f06a24f4700 1 --2- 192.168.123.105:0/3675443419 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f069c072360 0x7f069c0824e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.073+0000 7f06a24f4700 1 --2- 192.168.123.105:0/3675443419 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f069c082a20 0x7f069c082e90 unknown :-1 s=CLOSED pgs=341 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.073+0000 7f06a24f4700 1 -- 192.168.123.105:0/3675443419 >> 192.168.123.105:0/3675443419 conn(0x7f069c06d1a0 msgr2=0x7f069c06e190 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:45.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.073+0000 7f06a24f4700 1 -- 192.168.123.105:0/3675443419 shutdown_connections 2026-03-09T15:02:45.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.073+0000 7f06a24f4700 1 -- 192.168.123.105:0/3675443419 wait complete. 
2026-03-09T15:02:45.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.219+0000 7f89e754b700 1 -- 192.168.123.105:0/4173394175 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f89e0072360 msgr2=0x7f89e00770e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.219+0000 7f89e754b700 1 --2- 192.168.123.105:0/4173394175 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f89e0072360 0x7f89e00770e0 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f89d800b210 tx=0x7f89d800b520 comp rx=0 tx=0).stop 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.219+0000 7f89e754b700 1 -- 192.168.123.105:0/4173394175 shutdown_connections 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.219+0000 7f89e754b700 1 --2- 192.168.123.105:0/4173394175 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f89e0072360 0x7f89e00770e0 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.219+0000 7f89e754b700 1 --2- 192.168.123.105:0/4173394175 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f89e0071980 0x7f89e0071d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.219+0000 7f89e754b700 1 -- 192.168.123.105:0/4173394175 >> 192.168.123.105:0/4173394175 conn(0x7f89e006d1a0 msgr2=0x7f89e006f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.219+0000 7f89e754b700 1 -- 192.168.123.105:0/4173394175 shutdown_connections 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.219+0000 7f89e754b700 1 -- 192.168.123.105:0/4173394175 
wait complete. 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.220+0000 7f89e754b700 1 Processor -- start 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.220+0000 7f89e754b700 1 -- start start 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.220+0000 7f89e754b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f89e0071980 0x7f89e01313e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.220+0000 7f89e754b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f89e0131920 0x7f89e007f580 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.220+0000 7f89e754b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f89e0131e20 con 0x7f89e0131920 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.220+0000 7f89e754b700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f89e0131f90 con 0x7f89e0071980 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.221+0000 7f89e52e7700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f89e0071980 0x7f89e01313e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.221+0000 7f89e52e7700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f89e0071980 0x7f89e01313e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 
says I am v2:192.168.123.105:34166/0 (socket says 192.168.123.105:34166) 2026-03-09T15:02:45.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.221+0000 7f89e52e7700 1 -- 192.168.123.105:0/2631373131 learned_addr learned my addr 192.168.123.105:0/2631373131 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:02:45.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.223+0000 7f89e52e7700 1 -- 192.168.123.105:0/2631373131 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f89e0131920 msgr2=0x7f89e007f580 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:45.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.223+0000 7f89e52e7700 1 --2- 192.168.123.105:0/2631373131 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f89e0131920 0x7f89e007f580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.223+0000 7f89e52e7700 1 -- 192.168.123.105:0/2631373131 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f89d8009e30 con 0x7f89e0071980 2026-03-09T15:02:45.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.224+0000 7f89e52e7700 1 --2- 192.168.123.105:0/2631373131 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f89e0071980 0x7f89e01313e0 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f89dc009d00 tx=0x7f89dc00e3b0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:45.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.226+0000 7f89d67fc700 1 -- 192.168.123.105:0/2631373131 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f89dc00a4f0 con 0x7f89e0071980 2026-03-09T15:02:45.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.226+0000 
7f89d67fc700 1 -- 192.168.123.105:0/2631373131 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f89dc010040 con 0x7f89e0071980 2026-03-09T15:02:45.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.226+0000 7f89d67fc700 1 -- 192.168.123.105:0/2631373131 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f89dc0136a0 con 0x7f89e0071980 2026-03-09T15:02:45.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.226+0000 7f89e754b700 1 -- 192.168.123.105:0/2631373131 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f89e007fb20 con 0x7f89e0071980 2026-03-09T15:02:45.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.226+0000 7f89e754b700 1 -- 192.168.123.105:0/2631373131 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f89e0080040 con 0x7f89e0071980 2026-03-09T15:02:45.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.226+0000 7f89cbfff700 1 -- 192.168.123.105:0/2631373131 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f89e004ea50 con 0x7f89e0071980 2026-03-09T15:02:45.234 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.230+0000 7f89d67fc700 1 -- 192.168.123.105:0/2631373131 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f89dc00ed00 con 0x7f89e0071980 2026-03-09T15:02:45.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.231+0000 7f89d67fc700 1 --2- 192.168.123.105:0/2631373131 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f89cc077680 0x7f89cc079b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:45.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.231+0000 7f89d67fc700 1 -- 
192.168.123.105:0/2631373131 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f89dc099c50 con 0x7f89e0071980 2026-03-09T15:02:45.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.232+0000 7f89e4ae6700 1 --2- 192.168.123.105:0/2631373131 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f89cc077680 0x7f89cc079b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:45.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.232+0000 7f89d67fc700 1 -- 192.168.123.105:0/2631373131 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f89dc0627e0 con 0x7f89e0071980 2026-03-09T15:02:45.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.232+0000 7f89e4ae6700 1 --2- 192.168.123.105:0/2631373131 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f89cc077680 0x7f89cc079b30 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f89e0072ff0 tx=0x7f89d8000f40 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:45.483 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:45 vm05.local ceph-mon[50611]: from='client.14694 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:45.509 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.507+0000 7f89cbfff700 1 -- 192.168.123.105:0/2631373131 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f89e00620c0 con 0x7f89cc077680 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID 
CONTAINER ID 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (5m) 9s ago 6m 23.1M - 0.25.0 c8568f914cd2 35e160b8d1de 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (6m) 9s ago 6m 8438k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (6m) 10s ago 6m 11.1M - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (6m) 9s ago 6m 7411k - 18.2.0 dc2bc1663786 1c577d7a0de0 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (6m) 10s ago 6m 7402k - 18.2.0 dc2bc1663786 9e4961442551 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (5m) 9s ago 6m 92.2M - 9.4.7 954c08fa6188 46e00e5e5b38 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (4m) 9s ago 4m 261M - 18.2.0 dc2bc1663786 ea3dca51957f 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (4m) 9s ago 4m 16.1M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (4m) 10s ago 4m 16.1M - 18.2.0 dc2bc1663786 6c77fb591d5a 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (4m) 10s ago 4m 294M - 18.2.0 dc2bc1663786 b5ad1c71089a 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (52s) 9s ago 7m 611M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (24s) 10s ago 5m 487M - 19.2.3-678-ge911bdeb 654f31e6858e acf5a6f3f804 2026-03-09T15:02:45.520 
INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (7m) 9s ago 7m 52.9M 2048M 18.2.0 dc2bc1663786 c83e96b62251 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (5m) 10s ago 5m 47.2M 2048M 18.2.0 dc2bc1663786 7963792b5376 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (16s) 9s ago 6m 8757k - 1.7.0 72c9c2088986 888d071c50d9 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (12s) 10s ago 5m 5372k - 1.7.0 72c9c2088986 22c96a576a60 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (5m) 9s ago 5m 285M 4096M 18.2.0 dc2bc1663786 50f3ca995318 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (5m) 9s ago 5m 282M 4096M 18.2.0 dc2bc1663786 23e35bdafe50 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (5m) 9s ago 5m 252M 4096M 18.2.0 dc2bc1663786 75097dc12979 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (5m) 10s ago 5m 366M 4096M 18.2.0 dc2bc1663786 e79644a0564f 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (4m) 10s ago 4m 310M 4096M 18.2.0 dc2bc1663786 4239752204df 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (4m) 10s ago 4m 278M 4096M 18.2.0 dc2bc1663786 85fde149396e 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (26s) 9s ago 6m 52.2M - 2.43.0 a07b618ecd1d e5fa7075da5a 2026-03-09T15:02:45.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.518+0000 7f89d67fc700 1 -- 192.168.123.105:0/2631373131 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3552 (secure 0 0 0) 0x7f89e00620c0 con 0x7f89cc077680 2026-03-09T15:02:45.523 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.523+0000 7f89e754b700 1 -- 192.168.123.105:0/2631373131 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f89cc077680 msgr2=0x7f89cc079b30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:45.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.523+0000 7f89e754b700 1 --2- 192.168.123.105:0/2631373131 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f89cc077680 0x7f89cc079b30 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f89e0072ff0 tx=0x7f89d8000f40 comp rx=0 tx=0).stop 2026-03-09T15:02:45.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.523+0000 7f89e754b700 1 -- 192.168.123.105:0/2631373131 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f89e0071980 msgr2=0x7f89e01313e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:45.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.523+0000 7f89e754b700 1 --2- 192.168.123.105:0/2631373131 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f89e0071980 0x7f89e01313e0 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7f89dc009d00 tx=0x7f89dc00e3b0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.528 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.524+0000 7f89e754b700 1 -- 192.168.123.105:0/2631373131 shutdown_connections 2026-03-09T15:02:45.528 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.524+0000 7f89e754b700 1 --2- 192.168.123.105:0/2631373131 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f89cc077680 0x7f89cc079b30 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.528 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.524+0000 7f89e754b700 1 --2- 192.168.123.105:0/2631373131 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 
conn(0x7f89e0071980 0x7f89e01313e0 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.528 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.524+0000 7f89e754b700 1 --2- 192.168.123.105:0/2631373131 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f89e0131920 0x7f89e007f580 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.528 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.524+0000 7f89e754b700 1 -- 192.168.123.105:0/2631373131 >> 192.168.123.105:0/2631373131 conn(0x7f89e006d1a0 msgr2=0x7f89e00764b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:45.529 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.530+0000 7f89e754b700 1 -- 192.168.123.105:0/2631373131 shutdown_connections 2026-03-09T15:02:45.530 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.530+0000 7f89e754b700 1 -- 192.168.123.105:0/2631373131 wait complete. 
2026-03-09T15:02:45.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:45 vm09.local ceph-mon[59673]: from='client.14694 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:45.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.682+0000 7f365d047700 1 -- 192.168.123.105:0/1377857 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3658072360 msgr2=0x7f36580770e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:45.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.682+0000 7f365d047700 1 --2- 192.168.123.105:0/1377857 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3658072360 0x7f36580770e0 secure :-1 s=READY pgs=342 cs=0 l=1 rev1=1 crypto rx=0x7f365000d3f0 tx=0x7f365000d700 comp rx=0 tx=0).stop 2026-03-09T15:02:45.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.687+0000 7f365d047700 1 -- 192.168.123.105:0/1377857 shutdown_connections 2026-03-09T15:02:45.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.687+0000 7f365d047700 1 --2- 192.168.123.105:0/1377857 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3658072360 0x7f36580770e0 unknown :-1 s=CLOSED pgs=342 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.687+0000 7f365d047700 1 --2- 192.168.123.105:0/1377857 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3658071980 0x7f3658071d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.687+0000 7f365d047700 1 -- 192.168.123.105:0/1377857 >> 192.168.123.105:0/1377857 conn(0x7f365806d1a0 msgr2=0x7f365806f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:45.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.691+0000 
7f365d047700 1 -- 192.168.123.105:0/1377857 shutdown_connections 2026-03-09T15:02:45.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.691+0000 7f365d047700 1 -- 192.168.123.105:0/1377857 wait complete. 2026-03-09T15:02:45.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.692+0000 7f365d047700 1 Processor -- start 2026-03-09T15:02:45.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.692+0000 7f365d047700 1 -- start start 2026-03-09T15:02:45.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.692+0000 7f365d047700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3658071980 0x7f36581312c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:45.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.692+0000 7f365d047700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3658072360 0x7f3658131800 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:45.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.692+0000 7f365d047700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3658131e20 con 0x7f3658071980 2026-03-09T15:02:45.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.692+0000 7f365d047700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f365807f460 con 0x7f3658072360 2026-03-09T15:02:45.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.693+0000 7f365659c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3658072360 0x7f3658131800 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:45.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.693+0000 7f365659c700 1 --2- >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3658072360 0x7f3658131800 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:34186/0 (socket says 192.168.123.105:34186) 2026-03-09T15:02:45.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.693+0000 7f365659c700 1 -- 192.168.123.105:0/3277855604 learned_addr learned my addr 192.168.123.105:0/3277855604 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:02:45.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.693+0000 7f365659c700 1 -- 192.168.123.105:0/3277855604 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3658071980 msgr2=0x7f36581312c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:45.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.693+0000 7f365659c700 1 --2- 192.168.123.105:0/3277855604 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3658071980 0x7f36581312c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.693+0000 7f365659c700 1 -- 192.168.123.105:0/3277855604 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3650007ed0 con 0x7f3658072360 2026-03-09T15:02:45.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.694+0000 7f365659c700 1 --2- 192.168.123.105:0/3277855604 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3658072360 0x7f3658131800 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f36500049d0 tx=0x7f3650004ab0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:45.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.694+0000 7f363ffff700 1 -- 192.168.123.105:0/3277855604 <== mon.1 
v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f365001d070 con 0x7f3658072360 2026-03-09T15:02:45.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.694+0000 7f363ffff700 1 -- 192.168.123.105:0/3277855604 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f3650017420 con 0x7f3658072360 2026-03-09T15:02:45.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.694+0000 7f363ffff700 1 -- 192.168.123.105:0/3277855604 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3650021620 con 0x7f3658072360 2026-03-09T15:02:45.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.695+0000 7f365d047700 1 -- 192.168.123.105:0/3277855604 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f365807f6e0 con 0x7f3658072360 2026-03-09T15:02:45.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.695+0000 7f365d047700 1 -- 192.168.123.105:0/3277855604 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f365807fbd0 con 0x7f3658072360 2026-03-09T15:02:45.702 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.702+0000 7f365d047700 1 -- 192.168.123.105:0/3277855604 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f365812b500 con 0x7f3658072360 2026-03-09T15:02:45.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.703+0000 7f363ffff700 1 -- 192.168.123.105:0/3277855604 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f365002b430 con 0x7f3658072360 2026-03-09T15:02:45.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.703+0000 7f363ffff700 1 --2- 192.168.123.105:0/3277855604 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f36400776d0 
0x7f3640079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:45.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.703+0000 7f363ffff700 1 -- 192.168.123.105:0/3277855604 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f3650013070 con 0x7f3658072360 2026-03-09T15:02:45.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.703+0000 7f3656d9d700 1 --2- 192.168.123.105:0/3277855604 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f36400776d0 0x7f3640079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:45.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.705+0000 7f3656d9d700 1 --2- 192.168.123.105:0/3277855604 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f36400776d0 0x7f3640079b80 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f3648005950 tx=0x7f36480058e0 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:45.707 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.707+0000 7f363ffff700 1 -- 192.168.123.105:0/3277855604 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f3650064e90 con 0x7f3658072360 2026-03-09T15:02:45.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.954+0000 7f365d047700 1 -- 192.168.123.105:0/3277855604 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f365804ea50 con 0x7f3658072360 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.959+0000 7f363ffff700 1 -- 192.168.123.105:0/3277855604 <== mon.1 v2:192.168.123.109:3300/0 7 ==== 
mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+679 (secure 0 0 0) 0x7f3650007480 con 0x7f3658072360 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 6 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 12, 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:02:45.959 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:02:45.963 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.964+0000 7f363dffb700 1 -- 192.168.123.105:0/3277855604 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f36400776d0 msgr2=0x7f3640079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:45.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.964+0000 7f363dffb700 1 --2- 192.168.123.105:0/3277855604 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f36400776d0 0x7f3640079b80 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f3648005950 tx=0x7f36480058e0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.964+0000 7f363dffb700 1 -- 192.168.123.105:0/3277855604 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3658072360 msgr2=0x7f3658131800 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:45.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.964+0000 7f363dffb700 1 --2- 192.168.123.105:0/3277855604 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3658072360 0x7f3658131800 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f36500049d0 tx=0x7f3650004ab0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.965+0000 7f363dffb700 1 -- 192.168.123.105:0/3277855604 shutdown_connections 2026-03-09T15:02:45.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.965+0000 7f363dffb700 1 --2- 192.168.123.105:0/3277855604 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f36400776d0 0x7f3640079b80 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.965+0000 7f363dffb700 1 --2- 192.168.123.105:0/3277855604 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7f3658071980 0x7f36581312c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.965+0000 7f363dffb700 1 --2- 192.168.123.105:0/3277855604 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3658072360 0x7f3658131800 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:45.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.965+0000 7f363dffb700 1 -- 192.168.123.105:0/3277855604 >> 192.168.123.105:0/3277855604 conn(0x7f365806d1a0 msgr2=0x7f3658076640 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:45.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.965+0000 7f363dffb700 1 -- 192.168.123.105:0/3277855604 shutdown_connections 2026-03-09T15:02:45.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:45.965+0000 7f363dffb700 1 -- 192.168.123.105:0/3277855604 wait complete. 
2026-03-09T15:02:46.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.096+0000 7f6e0717c700 1 -- 192.168.123.105:0/3098756093 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e00072330 msgr2=0x7f6e000770b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:46.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.096+0000 7f6e0717c700 1 --2- 192.168.123.105:0/3098756093 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e00072330 0x7f6e000770b0 secure :-1 s=READY pgs=343 cs=0 l=1 rev1=1 crypto rx=0x7f6df800d3f0 tx=0x7f6df800d700 comp rx=0 tx=0).stop 2026-03-09T15:02:46.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.096+0000 7f6e0717c700 1 -- 192.168.123.105:0/3098756093 shutdown_connections 2026-03-09T15:02:46.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.096+0000 7f6e0717c700 1 --2- 192.168.123.105:0/3098756093 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e00072330 0x7f6e000770b0 unknown :-1 s=CLOSED pgs=343 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:46.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.096+0000 7f6e0717c700 1 --2- 192.168.123.105:0/3098756093 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6e00071950 0x7f6e00071d60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:46.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.096+0000 7f6e0717c700 1 -- 192.168.123.105:0/3098756093 >> 192.168.123.105:0/3098756093 conn(0x7f6e0006d1a0 msgr2=0x7f6e0006f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:46.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.096+0000 7f6e0717c700 1 -- 192.168.123.105:0/3098756093 shutdown_connections 2026-03-09T15:02:46.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.096+0000 7f6e0717c700 1 -- 192.168.123.105:0/3098756093 
wait complete. 2026-03-09T15:02:46.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.097+0000 7f6e0717c700 1 Processor -- start 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.097+0000 7f6e0717c700 1 -- start start 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.097+0000 7f6e0717c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6e00071950 0x7f6e00131380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.097+0000 7f6e0717c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e001318c0 0x7f6e0007f520 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.097+0000 7f6e0717c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6e00131dc0 con 0x7f6e001318c0 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.097+0000 7f6e0717c700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6e00131f30 con 0x7f6e00071950 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.097+0000 7f6e05979700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e001318c0 0x7f6e0007f520 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.097+0000 7f6e05979700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e001318c0 0x7f6e0007f520 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.105:54404/0 (socket says 192.168.123.105:54404) 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.097+0000 7f6e05979700 1 -- 192.168.123.105:0/3792134201 learned_addr learned my addr 192.168.123.105:0/3792134201 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.097+0000 7f6e0617a700 1 --2- 192.168.123.105:0/3792134201 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6e00071950 0x7f6e00131380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.098+0000 7f6e05979700 1 -- 192.168.123.105:0/3792134201 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6e00071950 msgr2=0x7f6e00131380 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.098+0000 7f6e05979700 1 --2- 192.168.123.105:0/3792134201 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6e00071950 0x7f6e00131380 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.098+0000 7f6e05979700 1 -- 192.168.123.105:0/3792134201 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6df8007ed0 con 0x7f6e001318c0 2026-03-09T15:02:46.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.098+0000 7f6e05979700 1 --2- 192.168.123.105:0/3792134201 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e001318c0 0x7f6e0007f520 secure :-1 s=READY pgs=344 cs=0 l=1 rev1=1 crypto rx=0x7f6df8000f80 tx=0x7f6df8004b40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T15:02:46.100 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.099+0000 7f6df77fe700 1 -- 192.168.123.105:0/3792134201 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6df801c070 con 0x7f6e001318c0 2026-03-09T15:02:46.100 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.099+0000 7f6df77fe700 1 -- 192.168.123.105:0/3792134201 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1139+0+0 (secure 0 0 0) 0x7f6df800deb0 con 0x7f6e001318c0 2026-03-09T15:02:46.100 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.100+0000 7f6df77fe700 1 -- 192.168.123.105:0/3792134201 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6df8017b30 con 0x7f6e001318c0 2026-03-09T15:02:46.100 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.100+0000 7f6e0717c700 1 -- 192.168.123.105:0/3792134201 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6e0007fa60 con 0x7f6e001318c0 2026-03-09T15:02:46.100 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.100+0000 7f6e0717c700 1 -- 192.168.123.105:0/3792134201 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6e0007fed0 con 0x7f6e001318c0 2026-03-09T15:02:46.100 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.101+0000 7f6e0717c700 1 -- 192.168.123.105:0/3792134201 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6e0012b500 con 0x7f6e001318c0 2026-03-09T15:02:46.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.104+0000 7f6df77fe700 1 -- 192.168.123.105:0/3792134201 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f6df8017c90 con 0x7f6e001318c0 2026-03-09T15:02:46.104 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.104+0000 7f6df77fe700 1 --2- 192.168.123.105:0/3792134201 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f6dec077450 0x7f6dec079900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:02:46.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.104+0000 7f6df77fe700 1 -- 192.168.123.105:0/3792134201 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f6df8013070 con 0x7f6e001318c0 2026-03-09T15:02:46.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.105+0000 7f6e0617a700 1 --2- 192.168.123.105:0/3792134201 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f6dec077450 0x7f6dec079900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:02:46.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.105+0000 7f6e0617a700 1 --2- 192.168.123.105:0/3792134201 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f6dec077450 0x7f6dec079900 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f6dfc005950 tx=0x7f6dfc00b410 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:02:46.108 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.106+0000 7f6df77fe700 1 -- 192.168.123.105:0/3792134201 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f6df8064370 con 0x7f6e001318c0 2026-03-09T15:02:46.310 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.309+0000 7f6e0717c700 1 -- 192.168.123.105:0/3792134201 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch upgrade status", 
"target": ["mon-mgr", ""]}) v1 -- 0x7f6e0002cfd0 con 0x7f6dec077450 2026-03-09T15:02:46.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.312+0000 7f6df77fe700 1 -- 192.168.123.105:0/3792134201 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+367 (secure 0 0 0) 0x7f6e0002cfd0 con 0x7f6dec077450 2026-03-09T15:02:46.312 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:02:46.312 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T15:02:46.312 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-09T15:02:46.312 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) mgr", 2026-03-09T15:02:46.312 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [ 2026-03-09T15:02:46.312 INFO:teuthology.orchestra.run.vm05.stdout: "mgr" 2026-03-09T15:02:46.312 INFO:teuthology.orchestra.run.vm05.stdout: ], 2026-03-09T15:02:46.312 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "2/2 daemons upgraded", 2026-03-09T15:02:46.312 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Currently upgrading prometheus daemons", 2026-03-09T15:02:46.312 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:02:46.312 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:02:46.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.314+0000 7f6df57fa700 1 -- 192.168.123.105:0/3792134201 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f6dec077450 msgr2=0x7f6dec079900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:46.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.314+0000 7f6df57fa700 1 --2- 192.168.123.105:0/3792134201 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f6dec077450 0x7f6dec079900 secure :-1 
s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f6dfc005950 tx=0x7f6dfc00b410 comp rx=0 tx=0).stop 2026-03-09T15:02:46.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.314+0000 7f6df57fa700 1 -- 192.168.123.105:0/3792134201 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e001318c0 msgr2=0x7f6e0007f520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:02:46.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.314+0000 7f6df57fa700 1 --2- 192.168.123.105:0/3792134201 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e001318c0 0x7f6e0007f520 secure :-1 s=READY pgs=344 cs=0 l=1 rev1=1 crypto rx=0x7f6df8000f80 tx=0x7f6df8004b40 comp rx=0 tx=0).stop 2026-03-09T15:02:46.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.315+0000 7f6df57fa700 1 -- 192.168.123.105:0/3792134201 shutdown_connections 2026-03-09T15:02:46.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.315+0000 7f6df57fa700 1 --2- 192.168.123.105:0/3792134201 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f6dec077450 0x7f6dec079900 secure :-1 s=CLOSED pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f6dfc005950 tx=0x7f6dfc00b410 comp rx=0 tx=0).stop 2026-03-09T15:02:46.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.315+0000 7f6df57fa700 1 --2- 192.168.123.105:0/3792134201 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6e00071950 0x7f6e00131380 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:46.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.315+0000 7f6df57fa700 1 --2- 192.168.123.105:0/3792134201 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e001318c0 0x7f6e0007f520 unknown :-1 s=CLOSED pgs=344 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:02:46.315 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.315+0000 
7f6df57fa700 1 -- 192.168.123.105:0/3792134201 >> 192.168.123.105:0/3792134201 conn(0x7f6e0006d1a0 msgr2=0x7f6e00076490 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:02:46.316 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.316+0000 7f6df57fa700 1 -- 192.168.123.105:0/3792134201 shutdown_connections 2026-03-09T15:02:46.316 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:02:46.316+0000 7f6df57fa700 1 -- 192.168.123.105:0/3792134201 wait complete. 2026-03-09T15:02:46.438 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:46 vm05.local ceph-mon[50611]: pgmap v21: 65 pgs: 65 active+clean; 295 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 84 op/s 2026-03-09T15:02:46.438 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:46 vm05.local ceph-mon[50611]: from='client.14698 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:46.438 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:46 vm05.local ceph-mon[50611]: from='client.24507 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:46.438 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:46 vm05.local ceph-mon[50611]: from='client.? 
192.168.123.105:0/3277855604' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:46 vm09.local ceph-mon[59673]: pgmap v21: 65 pgs: 65 active+clean; 295 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 84 op/s 2026-03-09T15:02:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:46 vm09.local ceph-mon[59673]: from='client.14698 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:46 vm09.local ceph-mon[59673]: from='client.24507 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:46 vm09.local ceph-mon[59673]: from='client.? 192.168.123.105:0/3277855604' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:47.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:47 vm05.local ceph-mon[50611]: from='client.14706 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:47.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:47 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:47.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:47 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:47.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:47 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:47 vm09.local ceph-mon[59673]: from='client.14706 -' entity='client.admin' cmd=[{"prefix": "orch 
upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:02:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:47 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:47 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:47 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:48.408 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:48 vm05.local ceph-mon[50611]: pgmap v22: 65 pgs: 65 active+clean; 296 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 117 op/s 2026-03-09T15:02:48.608 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:48 vm09.local ceph-mon[59673]: pgmap v22: 65 pgs: 65 active+clean; 296 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 117 op/s 2026-03-09T15:02:50.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:50 vm09.local ceph-mon[59673]: pgmap v23: 65 pgs: 65 active+clean; 296 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 69 op/s 2026-03-09T15:02:50.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:50 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:50.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:50 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:50.733 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:50 vm05.local ceph-mon[50611]: pgmap v23: 65 pgs: 65 active+clean; 296 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 69 op/s 2026-03-09T15:02:50.733 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:50 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:50.733 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:50 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:51.590 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:51 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:51.590 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:51 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:51.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:51 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:51.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:51 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: pgmap v24: 65 pgs: 65 active+clean; 299 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 99 op/s 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: 
from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: 
from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:52.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:52.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:52 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: pgmap v24: 65 pgs: 65 active+clean; 299 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 99 op/s 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 
cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:53.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:52 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:53.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:53 vm05.local ceph-mon[50611]: Upgrade: Updating alertmanager.vm05 2026-03-09T15:02:53.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:53 vm05.local ceph-mon[50611]: Deploying daemon alertmanager.vm05 on vm05 2026-03-09T15:02:54.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:53 vm09.local ceph-mon[59673]: Upgrade: Updating alertmanager.vm05 2026-03-09T15:02:54.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:53 vm09.local ceph-mon[59673]: Deploying daemon alertmanager.vm05 on vm05 2026-03-09T15:02:54.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:54 vm05.local ceph-mon[50611]: pgmap v25: 65 pgs: 65 active+clean; 299 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 61 op/s 2026-03-09T15:02:54.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:54 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:54.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 
09 15:02:54 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:54.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:54 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:55.028 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:54 vm09.local ceph-mon[59673]: pgmap v25: 65 pgs: 65 active+clean; 299 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 61 op/s 2026-03-09T15:02:55.028 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:54 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:55.028 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:54 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:55.028 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:54 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:55.971 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:55 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:02:56.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:55 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:02:57.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:56 vm05.local ceph-mon[50611]: pgmap v26: 65 pgs: 65 active+clean; 299 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 61 op/s 2026-03-09T15:02:57.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:56 
vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:57.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:56 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:56 vm09.local ceph-mon[59673]: pgmap v26: 65 pgs: 65 active+clean; 299 MiB data, 2.8 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 61 op/s 2026-03-09T15:02:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:56 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:56 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:58.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:57 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:58.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:57 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:58.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:57 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:58.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:57 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: pgmap v27: 65 pgs: 65 active+clean; 304 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.6 MiB/s rd, 1.6 MiB/s wr, 94 op/s 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' 
entity='mgr.vm05.lhsexd' 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:58.783 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:58.784 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:58.784 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:58.784 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:58.784 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:02:58 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: pgmap v27: 65 pgs: 65 active+clean; 304 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.6 MiB/s rd, 1.6 MiB/s wr, 94 op/s 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:59.116 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:59.116 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:02:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:02:58 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:00.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:00 vm09.local ceph-mon[59673]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:03:00.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:00 vm09.local ceph-mon[59673]: Upgrade: Updating grafana.vm05 2026-03-09T15:03:00.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:00 vm09.local ceph-mon[59673]: Deploying daemon grafana.vm05 on vm05 2026-03-09T15:03:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:00 vm05.local ceph-mon[50611]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T15:03:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:00 vm05.local ceph-mon[50611]: Upgrade: Updating grafana.vm05 2026-03-09T15:03:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:00 vm05.local ceph-mon[50611]: Deploying daemon grafana.vm05 on vm05 2026-03-09T15:03:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:01 vm05.local ceph-mon[50611]: pgmap v28: 65 pgs: 65 active+clean; 304 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 61 op/s 2026-03-09T15:03:01.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:01 vm09.local ceph-mon[59673]: pgmap v28: 65 pgs: 65 active+clean; 304 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 61 op/s 2026-03-09T15:03:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:02 vm09.local ceph-mon[59673]: pgmap v29: 65 pgs: 65 active+clean; 309 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.6 MiB/s rd, 1.6 MiB/s wr, 106 op/s 2026-03-09T15:03:02.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:02 vm05.local ceph-mon[50611]: pgmap v29: 65 pgs: 65 active+clean; 309 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.6 MiB/s rd, 1.6 MiB/s wr, 106 op/s 2026-03-09T15:03:04.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:04 vm05.local ceph-mon[50611]: pgmap v30: 65 pgs: 65 active+clean; 309 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s 
rd, 1.0 MiB/s wr, 76 op/s 2026-03-09T15:03:04.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:04 vm09.local ceph-mon[59673]: pgmap v30: 65 pgs: 65 active+clean; 309 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.0 MiB/s wr, 76 op/s 2026-03-09T15:03:06.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:06 vm09.local ceph-mon[59673]: pgmap v31: 65 pgs: 65 active+clean; 309 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.0 MiB/s wr, 76 op/s 2026-03-09T15:03:06.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:06 vm05.local ceph-mon[50611]: pgmap v31: 65 pgs: 65 active+clean; 309 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.0 MiB/s rd, 1.0 MiB/s wr, 76 op/s 2026-03-09T15:03:08.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:08 vm05.local ceph-mon[50611]: pgmap v32: 65 pgs: 65 active+clean; 309 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 109 op/s 2026-03-09T15:03:09.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:08 vm09.local ceph-mon[59673]: pgmap v32: 65 pgs: 65 active+clean; 309 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 109 op/s 2026-03-09T15:03:10.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:10 vm05.local ceph-mon[50611]: pgmap v33: 65 pgs: 65 active+clean; 309 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 897 KiB/s rd, 954 KiB/s wr, 77 op/s 2026-03-09T15:03:10.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:10 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:10.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:10 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:10.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:10 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": 
"config dump", "format": "json"}]: dispatch 2026-03-09T15:03:10.953 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:10 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:03:11.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:10 vm09.local ceph-mon[59673]: pgmap v33: 65 pgs: 65 active+clean; 309 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 897 KiB/s rd, 954 KiB/s wr, 77 op/s 2026-03-09T15:03:11.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:10 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:11.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:10 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:11.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:10 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:11.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:10 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:03:13.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:12 vm05.local ceph-mon[50611]: pgmap v34: 65 pgs: 65 active+clean; 307 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 127 op/s 2026-03-09T15:03:13.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:12 vm09.local ceph-mon[59673]: pgmap v34: 65 pgs: 65 active+clean; 307 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 127 op/s 2026-03-09T15:03:14.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:13 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' 
entity='mgr.vm05.lhsexd' 2026-03-09T15:03:14.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:13 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:14.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:13 vm05.local ceph-mon[50611]: pgmap v35: 65 pgs: 65 active+clean; 307 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 939 KiB/s rd, 962 KiB/s wr, 83 op/s 2026-03-09T15:03:14.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:13 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:14.215 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:13 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:14.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:14 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:14.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:14 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:14.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:14 vm09.local ceph-mon[59673]: pgmap v35: 65 pgs: 65 active+clean; 307 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 939 KiB/s rd, 962 KiB/s wr, 83 op/s 2026-03-09T15:03:14.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:14 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:14.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:14 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:16.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.453+0000 7f275b13a700 1 -- 192.168.123.105:0/2703331844 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2754071950 msgr2=0x7f2754071d60 
secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:16.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.453+0000 7f275b13a700 1 --2- 192.168.123.105:0/2703331844 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2754071950 0x7f2754071d60 secure :-1 s=READY pgs=345 cs=0 l=1 rev1=1 crypto rx=0x7f2750007780 tx=0x7f2750007a90 comp rx=0 tx=0).stop 2026-03-09T15:03:16.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.454+0000 7f275b13a700 1 -- 192.168.123.105:0/2703331844 shutdown_connections 2026-03-09T15:03:16.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.454+0000 7f275b13a700 1 --2- 192.168.123.105:0/2703331844 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2754072330 0x7f27540770b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:16.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.454+0000 7f275b13a700 1 --2- 192.168.123.105:0/2703331844 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2754071950 0x7f2754071d60 unknown :-1 s=CLOSED pgs=345 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:16.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.454+0000 7f275b13a700 1 -- 192.168.123.105:0/2703331844 >> 192.168.123.105:0/2703331844 conn(0x7f275406d1a0 msgr2=0x7f275406f5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:16.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.454+0000 7f275b13a700 1 -- 192.168.123.105:0/2703331844 shutdown_connections 2026-03-09T15:03:16.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.454+0000 7f275b13a700 1 -- 192.168.123.105:0/2703331844 wait complete. 
2026-03-09T15:03:16.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.455+0000 7f275b13a700 1 Processor -- start 2026-03-09T15:03:16.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.455+0000 7f275b13a700 1 -- start start 2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.455+0000 7f275b13a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2754072330 0x7f2754082300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.455+0000 7f275b13a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2754082840 0x7f2754082cb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.455+0000 7f275b13a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2754083cb0 con 0x7f2754072330 2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.455+0000 7f275b13a700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f2754083e20 con 0x7f2754082840 2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.455+0000 7f275a138700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2754072330 0x7f2754082300 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.455+0000 7f275a138700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2754072330 0x7f2754082300 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:33932/0 (socket says 192.168.123.105:33932) 2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.455+0000 7f275a138700 1 -- 192.168.123.105:0/1018986707 learned_addr learned my addr 192.168.123.105:0/1018986707 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.455+0000 7f2759937700 1 --2- 192.168.123.105:0/1018986707 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2754082840 0x7f2754082cb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.456+0000 7f2759937700 1 -- 192.168.123.105:0/1018986707 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2754072330 msgr2=0x7f2754082300 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.456+0000 7f2759937700 1 --2- 192.168.123.105:0/1018986707 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2754072330 0x7f2754082300 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.456+0000 7f2759937700 1 -- 192.168.123.105:0/1018986707 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2750007430 con 0x7f2754082840 2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.456+0000 7f2759937700 1 --2- 192.168.123.105:0/1018986707 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2754082840 0x7f2754082cb0 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7f274c00c360 tx=0x7f274c00c670 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:03:16.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.457+0000 7f274b7fe700 1 -- 192.168.123.105:0/1018986707 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f274c00e030 con 0x7f2754082840 2026-03-09T15:03:16.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.457+0000 7f275b13a700 1 -- 192.168.123.105:0/1018986707 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f275412de40 con 0x7f2754082840 2026-03-09T15:03:16.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.457+0000 7f275b13a700 1 -- 192.168.123.105:0/1018986707 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f275412e360 con 0x7f2754082840 2026-03-09T15:03:16.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.458+0000 7f274b7fe700 1 -- 192.168.123.105:0/1018986707 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f274c00f040 con 0x7f2754082840 2026-03-09T15:03:16.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.458+0000 7f274b7fe700 1 -- 192.168.123.105:0/1018986707 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f274c014650 con 0x7f2754082840 2026-03-09T15:03:16.459 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.459+0000 7f274b7fe700 1 -- 192.168.123.105:0/1018986707 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f274c009110 con 0x7f2754082840 2026-03-09T15:03:16.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.461+0000 7f274b7fe700 1 --2- 192.168.123.105:0/1018986707 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f2740077790 0x7f2740079c40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:16.461 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.461+0000 7f274b7fe700 1 -- 192.168.123.105:0/1018986707 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f274c099650 con 0x7f2754082840 2026-03-09T15:03:16.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.461+0000 7f275b13a700 1 -- 192.168.123.105:0/1018986707 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2738005320 con 0x7f2754082840 2026-03-09T15:03:16.464 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.464+0000 7f275a138700 1 --2- 192.168.123.105:0/1018986707 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f2740077790 0x7f2740079c40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:16.465 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.465+0000 7f275a138700 1 --2- 192.168.123.105:0/1018986707 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f2740077790 0x7f2740079c40 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f2750007750 tx=0x7f2750007e00 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:16.466 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.466+0000 7f274b7fe700 1 -- 192.168.123.105:0/1018986707 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f274c0621e0 con 0x7f2754082840 2026-03-09T15:03:16.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.681+0000 7f275b13a700 1 -- 192.168.123.105:0/1018986707 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 
0x7f2738000bf0 con 0x7f2740077790 2026-03-09T15:03:16.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.685+0000 7f274b7fe700 1 -- 192.168.123.105:0/1018986707 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+175 (secure 0 0 0) 0x7f2738000bf0 con 0x7f2740077790 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: pgmap v36: 65 pgs: 65 active+clean; 307 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 939 KiB/s rd, 963 KiB/s wr, 83 op/s 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' 
entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: Upgrade: Finalizing container_image settings 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": 
"mgr"}]': finished 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 
192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": 
"container_image", "who": "client.nfs"}]': finished 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local 
ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: Upgrade: Complete! 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T15:03:16.688 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T15:03:16.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:16.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:16.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:16.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:16.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:16.689 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:16.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:16.689 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:16 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:16.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.688+0000 7f27497fa700 1 -- 192.168.123.105:0/1018986707 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f2740077790 msgr2=0x7f2740079c40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:16.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.688+0000 7f27497fa700 1 --2- 192.168.123.105:0/1018986707 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f2740077790 0x7f2740079c40 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f2750007750 tx=0x7f2750007e00 comp rx=0 tx=0).stop 2026-03-09T15:03:16.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.688+0000 7f27497fa700 1 -- 192.168.123.105:0/1018986707 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2754082840 msgr2=0x7f2754082cb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:16.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.688+0000 7f27497fa700 1 --2- 192.168.123.105:0/1018986707 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2754082840 0x7f2754082cb0 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7f274c00c360 tx=0x7f274c00c670 comp rx=0 tx=0).stop 2026-03-09T15:03:16.689 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.688+0000 7f27497fa700 1 -- 192.168.123.105:0/1018986707 shutdown_connections 2026-03-09T15:03:16.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.688+0000 7f27497fa700 1 --2- 192.168.123.105:0/1018986707 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f2740077790 0x7f2740079c40 secure :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f2750007750 tx=0x7f2750007e00 comp rx=0 tx=0).stop 2026-03-09T15:03:16.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.688+0000 7f27497fa700 1 --2- 192.168.123.105:0/1018986707 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2754072330 0x7f2754082300 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:16.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.688+0000 7f27497fa700 1 --2- 192.168.123.105:0/1018986707 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2754082840 0x7f2754082cb0 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:16.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.688+0000 7f27497fa700 1 -- 192.168.123.105:0/1018986707 >> 192.168.123.105:0/1018986707 conn(0x7f275406d1a0 msgr2=0x7f275406e0d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:16.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.689+0000 7f27497fa700 1 -- 192.168.123.105:0/1018986707 shutdown_connections 2026-03-09T15:03:16.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:16.689+0000 7f27497fa700 1 -- 192.168.123.105:0/1018986707 wait complete. 
2026-03-09T15:03:16.757 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mgr | length == 1'"'"'' 2026-03-09T15:03:16.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: pgmap v36: 65 pgs: 65 active+clean; 307 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 939 KiB/s rd, 963 KiB/s wr, 83 op/s 2026-03-09T15:03:16.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' 
entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: Upgrade: Finalizing container_image settings 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": 
"mgr"}]': finished 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 
192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": 
"container_image", "who": "client.nfs"}]': finished 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local 
ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: Upgrade: Complete! 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:16.867 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:16.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:16 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:17.037 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:03:17.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.456+0000 7ff7114fe700 1 -- 192.168.123.105:0/3265157083 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff70c071b60 msgr2=0x7ff70c071fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:17.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.456+0000 7ff7114fe700 1 --2- 192.168.123.105:0/3265157083 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff70c071b60 0x7ff70c071fd0 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7ff6fc00b3a0 tx=0x7ff6fc00b6b0 comp rx=0 tx=0).stop 2026-03-09T15:03:17.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.456+0000 7ff7114fe700 1 -- 192.168.123.105:0/3265157083 shutdown_connections 2026-03-09T15:03:17.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.456+0000 7ff7114fe700 1 --2- 192.168.123.105:0/3265157083 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff70c071b60 0x7ff70c071fd0 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:17.457 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.456+0000 7ff7114fe700 1 --2- 192.168.123.105:0/3265157083 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff70c10eab0 0x7ff70c10ee80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:17.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.456+0000 7ff7114fe700 1 -- 192.168.123.105:0/3265157083 >> 192.168.123.105:0/3265157083 conn(0x7ff70c06c6c0 msgr2=0x7ff70c06cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:17.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.457+0000 7ff7114fe700 1 -- 192.168.123.105:0/3265157083 shutdown_connections 2026-03-09T15:03:17.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.457+0000 7ff7114fe700 1 -- 192.168.123.105:0/3265157083 wait complete. 2026-03-09T15:03:17.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.457+0000 7ff7114fe700 1 Processor -- start 2026-03-09T15:03:17.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.457+0000 7ff7114fe700 1 -- start start 2026-03-09T15:03:17.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.457+0000 7ff7114fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff70c071b60 0x7ff70c117720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:17.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.457+0000 7ff7114fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff70c10eab0 0x7ff70c112720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:17.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.457+0000 7ff7114fe700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff70c112e00 con 0x7ff70c10eab0 2026-03-09T15:03:17.457 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.457+0000 7ff7114fe700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff70c112f40 con 0x7ff70c071b60 2026-03-09T15:03:17.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.458+0000 7ff70affd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff70c071b60 0x7ff70c117720 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:17.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.458+0000 7ff70affd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff70c071b60 0x7ff70c117720 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:58052/0 (socket says 192.168.123.105:58052) 2026-03-09T15:03:17.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.458+0000 7ff70affd700 1 -- 192.168.123.105:0/2446274514 learned_addr learned my addr 192.168.123.105:0/2446274514 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:17.462 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.459+0000 7ff70a7fc700 1 --2- 192.168.123.105:0/2446274514 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff70c10eab0 0x7ff70c112720 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:17.462 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.460+0000 7ff70affd700 1 -- 192.168.123.105:0/2446274514 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff70c10eab0 msgr2=0x7ff70c112720 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:17.462 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.460+0000 7ff70affd700 1 --2- 
192.168.123.105:0/2446274514 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff70c10eab0 0x7ff70c112720 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:17.462 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.460+0000 7ff70affd700 1 -- 192.168.123.105:0/2446274514 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff6fc00b050 con 0x7ff70c071b60 2026-03-09T15:03:17.466 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.463+0000 7ff70affd700 1 --2- 192.168.123.105:0/2446274514 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff70c071b60 0x7ff70c117720 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7ff70000b770 tx=0x7ff70000bb30 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:17.466 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.464+0000 7ff6f3fff700 1 -- 192.168.123.105:0/2446274514 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff70000f820 con 0x7ff70c071b60 2026-03-09T15:03:17.466 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.464+0000 7ff7114fe700 1 -- 192.168.123.105:0/2446274514 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff70c113220 con 0x7ff70c071b60 2026-03-09T15:03:17.466 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.464+0000 7ff7114fe700 1 -- 192.168.123.105:0/2446274514 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff70c1135a0 con 0x7ff70c071b60 2026-03-09T15:03:17.466 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.465+0000 7ff6f3fff700 1 -- 192.168.123.105:0/2446274514 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7ff70000fe60 con 0x7ff70c071b60 2026-03-09T15:03:17.466 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.465+0000 7ff6f3fff700 1 -- 192.168.123.105:0/2446274514 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff70000d610 con 0x7ff70c071b60 2026-03-09T15:03:17.466 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.466+0000 7ff7114fe700 1 -- 192.168.123.105:0/2446274514 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff6f8005320 con 0x7ff70c071b60 2026-03-09T15:03:17.466 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.466+0000 7ff6f3fff700 1 -- 192.168.123.105:0/2446274514 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7ff70000f980 con 0x7ff70c071b60 2026-03-09T15:03:17.467 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.467+0000 7ff6f3fff700 1 --2- 192.168.123.105:0/2446274514 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff6f4077450 0x7ff6f4079900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:17.467 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.467+0000 7ff6f3fff700 1 -- 192.168.123.105:0/2446274514 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7ff700099440 con 0x7ff70c071b60 2026-03-09T15:03:17.467 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.467+0000 7ff70a7fc700 1 --2- 192.168.123.105:0/2446274514 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff6f4077450 0x7ff6f4079900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:17.468 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.468+0000 7ff70a7fc700 1 --2- 192.168.123.105:0/2446274514 >> 
[v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff6f4077450 0x7ff6f4079900 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7ff6fc00bb30 tx=0x7ff6fc014040 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:17.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.474+0000 7ff6f3fff700 1 -- 192.168.123.105:0/2446274514 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7ff700061f50 con 0x7ff70c071b60 2026-03-09T15:03:17.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:17 vm05.local ceph-mon[50611]: from='client.24519 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:17.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.728+0000 7ff7114fe700 1 -- 192.168.123.105:0/2446274514 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7ff6f8005cc0 con 0x7ff70c071b60 2026-03-09T15:03:17.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.729+0000 7ff6f3fff700 1 -- 192.168.123.105:0/2446274514 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+679 (secure 0 0 0) 0x7ff7000616a0 con 0x7ff70c071b60 2026-03-09T15:03:17.736 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.735+0000 7ff7114fe700 1 -- 192.168.123.105:0/2446274514 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff6f4077450 msgr2=0x7ff6f4079900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:17.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.736+0000 7ff7114fe700 1 --2- 192.168.123.105:0/2446274514 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff6f4077450 0x7ff6f4079900 secure :-1 s=READY pgs=33 
cs=0 l=1 rev1=1 crypto rx=0x7ff6fc00bb30 tx=0x7ff6fc014040 comp rx=0 tx=0).stop 2026-03-09T15:03:17.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.736+0000 7ff7114fe700 1 -- 192.168.123.105:0/2446274514 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff70c071b60 msgr2=0x7ff70c117720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:17.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.736+0000 7ff7114fe700 1 --2- 192.168.123.105:0/2446274514 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff70c071b60 0x7ff70c117720 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7ff70000b770 tx=0x7ff70000bb30 comp rx=0 tx=0).stop 2026-03-09T15:03:17.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.736+0000 7ff7114fe700 1 -- 192.168.123.105:0/2446274514 shutdown_connections 2026-03-09T15:03:17.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.736+0000 7ff7114fe700 1 --2- 192.168.123.105:0/2446274514 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff6f4077450 0x7ff6f4079900 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:17.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.736+0000 7ff7114fe700 1 --2- 192.168.123.105:0/2446274514 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff70c071b60 0x7ff70c117720 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:17.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.736+0000 7ff7114fe700 1 --2- 192.168.123.105:0/2446274514 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff70c10eab0 0x7ff70c112720 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:17.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.736+0000 7ff7114fe700 1 -- 192.168.123.105:0/2446274514 >> 
192.168.123.105:0/2446274514 conn(0x7ff70c06c6c0 msgr2=0x7ff70c1181f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:17.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.736+0000 7ff7114fe700 1 -- 192.168.123.105:0/2446274514 shutdown_connections 2026-03-09T15:03:17.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:17.736+0000 7ff7114fe700 1 -- 192.168.123.105:0/2446274514 wait complete. 2026-03-09T15:03:17.754 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:03:17.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:17 vm09.local ceph-mon[59673]: from='client.24519 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:18.244 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mgr | keys'"'"' | grep $sha1' 2026-03-09T15:03:18.464 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:03:18.533 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:18 vm05.local ceph-mon[50611]: pgmap v37: 65 pgs: 65 active+clean; 300 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 114 op/s 2026-03-09T15:03:18.533 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:18 vm05.local ceph-mon[50611]: from='client.? 
192.168.123.105:0/2446274514' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:18.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:18 vm09.local ceph-mon[59673]: pgmap v37: 65 pgs: 65 active+clean; 300 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 114 op/s 2026-03-09T15:03:18.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:18 vm09.local ceph-mon[59673]: from='client.? 192.168.123.105:0/2446274514' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:18.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.927+0000 7f6729907700 1 -- 192.168.123.105:0/980715745 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f672410ed80 msgr2=0x7f672406d260 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:18.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.927+0000 7f6729907700 1 --2- 192.168.123.105:0/980715745 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f672410ed80 0x7f672406d260 secure :-1 s=READY pgs=346 cs=0 l=1 rev1=1 crypto rx=0x7f6718009b00 tx=0x7f6718009e10 comp rx=0 tx=0).stop 2026-03-09T15:03:18.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.929+0000 7f6729907700 1 -- 192.168.123.105:0/980715745 shutdown_connections 2026-03-09T15:03:18.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.929+0000 7f6729907700 1 --2- 192.168.123.105:0/980715745 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f672406d7a0 0x7f672406dc10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:18.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.929+0000 7f6729907700 1 --2- 192.168.123.105:0/980715745 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f672410ed80 0x7f672406d260 unknown :-1 s=CLOSED pgs=346 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:03:18.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.929+0000 7f6729907700 1 -- 192.168.123.105:0/980715745 >> 192.168.123.105:0/980715745 conn(0x7f672406c830 msgr2=0x7f6724071830 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:18.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.929+0000 7f6729907700 1 -- 192.168.123.105:0/980715745 shutdown_connections 2026-03-09T15:03:18.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.930+0000 7f6729907700 1 -- 192.168.123.105:0/980715745 wait complete. 2026-03-09T15:03:18.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.930+0000 7f6729907700 1 Processor -- start 2026-03-09T15:03:18.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.930+0000 7f6729907700 1 -- start start 2026-03-09T15:03:18.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.930+0000 7f6729907700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f672406d7a0 0x7f6724115960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:18.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.930+0000 7f6729907700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f672410ed80 0x7f6724115ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:18.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.930+0000 7f6729907700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6724119af0 con 0x7f672406d7a0 2026-03-09T15:03:18.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.930+0000 7f6729907700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f67241163e0 con 0x7f672410ed80 2026-03-09T15:03:18.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.931+0000 7f6722ffd700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f672406d7a0 0x7f6724115960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:18.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.931+0000 7f6722ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f672406d7a0 0x7f6724115960 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:57422/0 (socket says 192.168.123.105:57422) 2026-03-09T15:03:18.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.931+0000 7f6722ffd700 1 -- 192.168.123.105:0/3343119113 learned_addr learned my addr 192.168.123.105:0/3343119113 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:18.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.931+0000 7f6722ffd700 1 -- 192.168.123.105:0/3343119113 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f672410ed80 msgr2=0x7f6724115ea0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:18.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.931+0000 7f6722ffd700 1 --2- 192.168.123.105:0/3343119113 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f672410ed80 0x7f6724115ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:18.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.931+0000 7f6722ffd700 1 -- 192.168.123.105:0/3343119113 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f67180097e0 con 0x7f672406d7a0 2026-03-09T15:03:18.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.931+0000 7f6722ffd700 1 --2- 192.168.123.105:0/3343119113 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7f672406d7a0 0x7f6724115960 secure :-1 s=READY pgs=347 cs=0 l=1 rev1=1 crypto rx=0x7f671800b5c0 tx=0x7f6718004a20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:18.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.931+0000 7f6728905700 1 -- 192.168.123.105:0/3343119113 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f671801d070 con 0x7f672406d7a0 2026-03-09T15:03:18.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.931+0000 7f6728905700 1 -- 192.168.123.105:0/3343119113 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f6718004500 con 0x7f672406d7a0 2026-03-09T15:03:18.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.931+0000 7f6728905700 1 -- 192.168.123.105:0/3343119113 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6718022470 con 0x7f672406d7a0 2026-03-09T15:03:18.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.932+0000 7f6729907700 1 -- 192.168.123.105:0/3343119113 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f67241166c0 con 0x7f672406d7a0 2026-03-09T15:03:18.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.932+0000 7f6729907700 1 -- 192.168.123.105:0/3343119113 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f67241b7ba0 con 0x7f672406d7a0 2026-03-09T15:03:18.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.934+0000 7f6728905700 1 -- 192.168.123.105:0/3343119113 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f6718003f70 con 0x7f672406d7a0 2026-03-09T15:03:18.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.934+0000 7f6728905700 1 --2- 192.168.123.105:0/3343119113 >> 
[v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f670c077670 0x7f670c079b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:18.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.934+0000 7f6728905700 1 -- 192.168.123.105:0/3343119113 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f671809b320 con 0x7f672406d7a0 2026-03-09T15:03:18.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.935+0000 7f6729907700 1 -- 192.168.123.105:0/3343119113 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f672404ea50 con 0x7f672406d7a0 2026-03-09T15:03:18.935 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.935+0000 7f67227fc700 1 --2- 192.168.123.105:0/3343119113 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f670c077670 0x7f670c079b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:18.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.937+0000 7f67227fc700 1 --2- 192.168.123.105:0/3343119113 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f670c077670 0x7f670c079b20 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f6724117150 tx=0x7f6714009250 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:18.941 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:18.941+0000 7f6728905700 1 -- 192.168.123.105:0/3343119113 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f6718063e30 con 0x7f672406d7a0 2026-03-09T15:03:19.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.149+0000 
7f6729907700 1 -- 192.168.123.105:0/3343119113 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f6724116e90 con 0x7f672406d7a0 2026-03-09T15:03:19.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.149+0000 7f6728905700 1 -- 192.168.123.105:0/3343119113 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+679 (secure 0 0 0) 0x7f6718063580 con 0x7f672406d7a0 2026-03-09T15:03:19.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.155+0000 7f670a7fc700 1 -- 192.168.123.105:0/3343119113 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f670c077670 msgr2=0x7f670c079b20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:19.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.155+0000 7f670a7fc700 1 --2- 192.168.123.105:0/3343119113 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f670c077670 0x7f670c079b20 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f6724117150 tx=0x7f6714009250 comp rx=0 tx=0).stop 2026-03-09T15:03:19.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.155+0000 7f670a7fc700 1 -- 192.168.123.105:0/3343119113 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f672406d7a0 msgr2=0x7f6724115960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:19.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.155+0000 7f670a7fc700 1 --2- 192.168.123.105:0/3343119113 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f672406d7a0 0x7f6724115960 secure :-1 s=READY pgs=347 cs=0 l=1 rev1=1 crypto rx=0x7f671800b5c0 tx=0x7f6718004a20 comp rx=0 tx=0).stop 2026-03-09T15:03:19.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.155+0000 7f670a7fc700 1 -- 192.168.123.105:0/3343119113 shutdown_connections 2026-03-09T15:03:19.155 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.155+0000 7f670a7fc700 1 --2- 192.168.123.105:0/3343119113 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f670c077670 0x7f670c079b20 unknown :-1 s=CLOSED pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:19.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.155+0000 7f670a7fc700 1 --2- 192.168.123.105:0/3343119113 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f672406d7a0 0x7f6724115960 unknown :-1 s=CLOSED pgs=347 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:19.155 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.155+0000 7f670a7fc700 1 --2- 192.168.123.105:0/3343119113 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f672410ed80 0x7f6724115ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:19.156 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.155+0000 7f670a7fc700 1 -- 192.168.123.105:0/3343119113 >> 192.168.123.105:0/3343119113 conn(0x7f672406c830 msgr2=0x7f67240711c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:19.156 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.156+0000 7f670a7fc700 1 -- 192.168.123.105:0/3343119113 shutdown_connections 2026-03-09T15:03:19.156 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.156+0000 7f670a7fc700 1 -- 192.168.123.105:0/3343119113 wait complete. 
2026-03-09T15:03:19.168 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)" 2026-03-09T15:03:19.211 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | length == 2'"'"'' 2026-03-09T15:03:19.463 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:03:19.765 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:19 vm05.local ceph-mon[50611]: from='client.? 192.168.123.105:0/3343119113' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:19.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:19 vm09.local ceph-mon[59673]: from='client.? 
192.168.123.105:0/3343119113' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.884+0000 7ff60dd60700 1 -- 192.168.123.105:0/3561990994 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff608071e40 msgr2=0x7ff6080722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.884+0000 7ff60dd60700 1 --2- 192.168.123.105:0/3561990994 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff608071e40 0x7ff6080722b0 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7ff600009230 tx=0x7ff600009260 comp rx=0 tx=0).stop 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.884+0000 7ff60dd60700 1 -- 192.168.123.105:0/3561990994 shutdown_connections 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.884+0000 7ff60dd60700 1 --2- 192.168.123.105:0/3561990994 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff608071e40 0x7ff6080722b0 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.884+0000 7ff60dd60700 1 --2- 192.168.123.105:0/3561990994 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff60810c8b0 0x7ff60810cc80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.884+0000 7ff60dd60700 1 -- 192.168.123.105:0/3561990994 >> 192.168.123.105:0/3561990994 conn(0x7ff60806c6c0 msgr2=0x7ff60806cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.884+0000 7ff60dd60700 1 -- 192.168.123.105:0/3561990994 shutdown_connections 2026-03-09T15:03:19.885 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.884+0000 7ff60dd60700 1 -- 192.168.123.105:0/3561990994 wait complete. 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.885+0000 7ff60dd60700 1 Processor -- start 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.885+0000 7ff60dd60700 1 -- start start 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.885+0000 7ff60dd60700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff60810c8b0 0x7ff60807ce70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.885+0000 7ff60dd60700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60807d3b0 0x7ff60807d820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.885+0000 7ff60dd60700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff6080819f0 con 0x7ff60810c8b0 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.885+0000 7ff60dd60700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff608081b60 con 0x7ff60807d3b0 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.885+0000 7ff606ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60807d3b0 0x7ff60807d820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.885+0000 7ff606ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60807d3b0 0x7ff60807d820 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:40124/0 (socket says 192.168.123.105:40124) 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.885+0000 7ff606ffd700 1 -- 192.168.123.105:0/3409112075 learned_addr learned my addr 192.168.123.105:0/3409112075 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.886+0000 7ff606ffd700 1 -- 192.168.123.105:0/3409112075 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff60810c8b0 msgr2=0x7ff60807ce70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.886+0000 7ff606ffd700 1 --2- 192.168.123.105:0/3409112075 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff60810c8b0 0x7ff60807ce70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:19.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.886+0000 7ff606ffd700 1 -- 192.168.123.105:0/3409112075 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff600008ee0 con 0x7ff60807d3b0 2026-03-09T15:03:19.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.886+0000 7ff606ffd700 1 --2- 192.168.123.105:0/3409112075 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60807d3b0 0x7ff60807d820 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7ff600004740 tx=0x7ff600004820 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:19.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.886+0000 7ff604ff9700 1 -- 192.168.123.105:0/3409112075 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff60001d070 con 
0x7ff60807d3b0 2026-03-09T15:03:19.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.886+0000 7ff60dd60700 1 -- 192.168.123.105:0/3409112075 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff608081de0 con 0x7ff60807d3b0 2026-03-09T15:03:19.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.886+0000 7ff60dd60700 1 -- 192.168.123.105:0/3409112075 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff6080822d0 con 0x7ff60807d3b0 2026-03-09T15:03:19.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.887+0000 7ff604ff9700 1 -- 192.168.123.105:0/3409112075 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7ff60000ece0 con 0x7ff60807d3b0 2026-03-09T15:03:19.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.887+0000 7ff604ff9700 1 -- 192.168.123.105:0/3409112075 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff600016b40 con 0x7ff60807d3b0 2026-03-09T15:03:19.888 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.888+0000 7ff604ff9700 1 -- 192.168.123.105:0/3409112075 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7ff600016ca0 con 0x7ff60807d3b0 2026-03-09T15:03:19.888 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.888+0000 7ff604ff9700 1 --2- 192.168.123.105:0/3409112075 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff5f0077790 0x7ff5f0079c40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:19.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.889+0000 7ff60dd60700 1 -- 192.168.123.105:0/3409112075 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff60804f2a0 con 0x7ff60807d3b0 
2026-03-09T15:03:19.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.889+0000 7ff6077fe700 1 --2- 192.168.123.105:0/3409112075 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff5f0077790 0x7ff5f0079c40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:19.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.889+0000 7ff604ff9700 1 -- 192.168.123.105:0/3409112075 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7ff600012070 con 0x7ff60807d3b0 2026-03-09T15:03:19.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.890+0000 7ff6077fe700 1 --2- 192.168.123.105:0/3409112075 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff5f0077790 0x7ff5f0079c40 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7ff5f800bef0 tx=0x7ff5f800d040 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:19.892 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:19.892+0000 7ff604ff9700 1 -- 192.168.123.105:0/3409112075 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7ff600064b10 con 0x7ff60807d3b0 2026-03-09T15:03:20.115 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.114+0000 7ff60dd60700 1 -- 192.168.123.105:0/3409112075 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7ff60804ea50 con 0x7ff60807d3b0 2026-03-09T15:03:20.119 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.116+0000 7ff604ff9700 1 -- 192.168.123.105:0/3409112075 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+679 (secure 0 0 0) 0x7ff600064260 con 0x7ff60807d3b0 
2026-03-09T15:03:20.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.123+0000 7ff60dd60700 1 -- 192.168.123.105:0/3409112075 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff5f0077790 msgr2=0x7ff5f0079c40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:20.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.123+0000 7ff60dd60700 1 --2- 192.168.123.105:0/3409112075 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff5f0077790 0x7ff5f0079c40 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7ff5f800bef0 tx=0x7ff5f800d040 comp rx=0 tx=0).stop 2026-03-09T15:03:20.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.123+0000 7ff60dd60700 1 -- 192.168.123.105:0/3409112075 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60807d3b0 msgr2=0x7ff60807d820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:20.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.123+0000 7ff60dd60700 1 --2- 192.168.123.105:0/3409112075 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60807d3b0 0x7ff60807d820 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7ff600004740 tx=0x7ff600004820 comp rx=0 tx=0).stop 2026-03-09T15:03:20.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.123+0000 7ff60dd60700 1 -- 192.168.123.105:0/3409112075 shutdown_connections 2026-03-09T15:03:20.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.123+0000 7ff60dd60700 1 --2- 192.168.123.105:0/3409112075 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff5f0077790 0x7ff5f0079c40 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:20.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.123+0000 7ff60dd60700 1 --2- 192.168.123.105:0/3409112075 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff60810c8b0 0x7ff60807ce70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:20.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.123+0000 7ff60dd60700 1 --2- 192.168.123.105:0/3409112075 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff60807d3b0 0x7ff60807d820 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:20.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.123+0000 7ff60dd60700 1 -- 192.168.123.105:0/3409112075 >> 192.168.123.105:0/3409112075 conn(0x7ff60806c6c0 msgr2=0x7ff60806ff50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:20.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.124+0000 7ff60dd60700 1 -- 192.168.123.105:0/3409112075 shutdown_connections 2026-03-09T15:03:20.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.124+0000 7ff60dd60700 1 -- 192.168.123.105:0/3409112075 wait complete. 
2026-03-09T15:03:20.134 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:03:20.234 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '"'"'.up_to_date | length == 2'"'"'' 2026-03-09T15:03:20.488 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:03:20.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.897+0000 7ff72759e700 1 -- 192.168.123.105:0/759914179 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7281084e0 msgr2=0x7ff7281088b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:20.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.897+0000 7ff72759e700 1 --2- 192.168.123.105:0/759914179 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7281084e0 0x7ff7281088b0 secure :-1 s=READY pgs=348 cs=0 l=1 rev1=1 crypto rx=0x7ff718009b50 tx=0x7ff718009e60 comp rx=0 tx=0).stop 2026-03-09T15:03:20.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.898+0000 7ff72759e700 1 -- 192.168.123.105:0/759914179 shutdown_connections 2026-03-09T15:03:20.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.898+0000 7ff72759e700 1 --2- 192.168.123.105:0/759914179 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff7281024e0 0x7ff728102950 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:20.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.898+0000 7ff72759e700 1 --2- 192.168.123.105:0/759914179 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7281084e0 0x7ff7281088b0 unknown :-1 s=CLOSED 
pgs=348 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:20.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.898+0000 7ff72759e700 1 -- 192.168.123.105:0/759914179 >> 192.168.123.105:0/759914179 conn(0x7ff7280fe000 msgr2=0x7ff728100410 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:20.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.899+0000 7ff72759e700 1 -- 192.168.123.105:0/759914179 shutdown_connections 2026-03-09T15:03:20.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.899+0000 7ff72759e700 1 -- 192.168.123.105:0/759914179 wait complete. 2026-03-09T15:03:20.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.899+0000 7ff72759e700 1 Processor -- start 2026-03-09T15:03:20.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.900+0000 7ff72759e700 1 -- start start 2026-03-09T15:03:20.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.900+0000 7ff72759e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff7281024e0 0x7ff728075260 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:20.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.900+0000 7ff72759e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7281084e0 0x7ff7280757a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:20.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.900+0000 7ff72759e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff7280793f0 con 0x7ff7281084e0 2026-03-09T15:03:20.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.900+0000 7ff72759e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff728075ce0 con 0x7ff7281024e0 2026-03-09T15:03:20.900 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.900+0000 7ff725d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7281084e0 0x7ff7280757a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:20.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.900+0000 7ff725d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7281084e0 0x7ff7280757a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:57462/0 (socket says 192.168.123.105:57462) 2026-03-09T15:03:20.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.900+0000 7ff725d9b700 1 -- 192.168.123.105:0/517543487 learned_addr learned my addr 192.168.123.105:0/517543487 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:20.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.900+0000 7ff72659c700 1 --2- 192.168.123.105:0/517543487 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff7281024e0 0x7ff728075260 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:20.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.900+0000 7ff725d9b700 1 -- 192.168.123.105:0/517543487 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff7281024e0 msgr2=0x7ff728075260 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:20.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.901+0000 7ff725d9b700 1 --2- 192.168.123.105:0/517543487 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff7281024e0 0x7ff728075260 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:20.901 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.901+0000 7ff725d9b700 1 -- 192.168.123.105:0/517543487 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff7180097e0 con 0x7ff7281084e0 2026-03-09T15:03:20.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.901+0000 7ff725d9b700 1 --2- 192.168.123.105:0/517543487 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7281084e0 0x7ff7280757a0 secure :-1 s=READY pgs=349 cs=0 l=1 rev1=1 crypto rx=0x7ff71c00eb10 tx=0x7ff71c00eed0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:20.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.901+0000 7ff72659c700 1 --2- 192.168.123.105:0/517543487 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff7281024e0 0x7ff728075260 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:03:20.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.901+0000 7ff7177fe700 1 -- 192.168.123.105:0/517543487 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff71c00cca0 con 0x7ff7281084e0 2026-03-09T15:03:20.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.901+0000 7ff72759e700 1 -- 192.168.123.105:0/517543487 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff728075fc0 con 0x7ff7281084e0 2026-03-09T15:03:20.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.901+0000 7ff72759e700 1 -- 192.168.123.105:0/517543487 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff7281a69c0 con 0x7ff7281084e0 2026-03-09T15:03:20.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.901+0000 7ff7177fe700 1 -- 192.168.123.105:0/517543487 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7ff71c00ce00 con 0x7ff7281084e0 2026-03-09T15:03:20.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.903+0000 7ff7177fe700 1 -- 192.168.123.105:0/517543487 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff71c0189c0 con 0x7ff7281084e0 2026-03-09T15:03:20.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.903+0000 7ff7177fe700 1 -- 192.168.123.105:0/517543487 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7ff71c018b60 con 0x7ff7281084e0 2026-03-09T15:03:20.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.903+0000 7ff7177fe700 1 --2- 192.168.123.105:0/517543487 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff710077570 0x7ff710079a20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:20.904 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.903+0000 7ff7177fe700 1 -- 192.168.123.105:0/517543487 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7ff71c014070 con 0x7ff7281084e0 2026-03-09T15:03:20.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.903+0000 7ff72659c700 1 --2- 192.168.123.105:0/517543487 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff710077570 0x7ff710079a20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:20.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.904+0000 7ff72659c700 1 --2- 192.168.123.105:0/517543487 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff710077570 0x7ff710079a20 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7ff7180054c0 tx=0x7ff7180055f0 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:20.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.904+0000 7ff72759e700 1 -- 192.168.123.105:0/517543487 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff708005320 con 0x7ff7281084e0 2026-03-09T15:03:20.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:20.907+0000 7ff7177fe700 1 -- 192.168.123.105:0/517543487 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7ff71c063d90 con 0x7ff7281084e0 2026-03-09T15:03:21.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:20 vm05.local ceph-mon[50611]: pgmap v38: 65 pgs: 65 active+clean; 300 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 81 op/s 2026-03-09T15:03:21.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:03:20 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:21.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:20 vm05.local ceph-mon[50611]: from='client.? 192.168.123.105:0/3409112075' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:21.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:21.086+0000 7ff72759e700 1 -- 192.168.123.105:0/517543487 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}) v1 -- 0x7ff708000c90 con 0x7ff710077570 2026-03-09T15:03:21.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:20 vm09.local ceph-mon[59673]: pgmap v38: 65 pgs: 65 active+clean; 300 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 81 op/s 2026-03-09T15:03:21.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:20 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:21.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:20 vm09.local ceph-mon[59673]: from='client.? 
192.168.123.105:0/3409112075' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:22.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:21 vm05.local ceph-mon[50611]: pgmap v39: 65 pgs: 65 active+clean; 298 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 107 op/s 2026-03-09T15:03:22.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:21 vm05.local ceph-mon[50611]: from='client.14722 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:22.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:21 vm09.local ceph-mon[59673]: pgmap v39: 65 pgs: 65 active+clean; 298 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 107 op/s 2026-03-09T15:03:22.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:21 vm09.local ceph-mon[59673]: from='client.14722 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:23.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:23.298+0000 7ff7177fe700 1 -- 192.168.123.105:0/517543487 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+5308 (secure 0 0 0) 0x7ff708000c90 con 0x7ff710077570 2026-03-09T15:03:23.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:23.301+0000 7ff72759e700 1 -- 192.168.123.105:0/517543487 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff710077570 msgr2=0x7ff710079a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:23.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:23.301+0000 7ff72759e700 1 --2- 192.168.123.105:0/517543487 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] 
conn(0x7ff710077570 0x7ff710079a20 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7ff7180054c0 tx=0x7ff7180055f0 comp rx=0 tx=0).stop 2026-03-09T15:03:23.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:23.301+0000 7ff72759e700 1 -- 192.168.123.105:0/517543487 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7281084e0 msgr2=0x7ff7280757a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:23.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:23.301+0000 7ff72759e700 1 --2- 192.168.123.105:0/517543487 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7281084e0 0x7ff7280757a0 secure :-1 s=READY pgs=349 cs=0 l=1 rev1=1 crypto rx=0x7ff71c00eb10 tx=0x7ff71c00eed0 comp rx=0 tx=0).stop 2026-03-09T15:03:23.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:23.301+0000 7ff72759e700 1 -- 192.168.123.105:0/517543487 shutdown_connections 2026-03-09T15:03:23.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:23.301+0000 7ff72759e700 1 --2- 192.168.123.105:0/517543487 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff710077570 0x7ff710079a20 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:23.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:23.301+0000 7ff72759e700 1 --2- 192.168.123.105:0/517543487 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff7281024e0 0x7ff728075260 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:23.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:23.301+0000 7ff72759e700 1 --2- 192.168.123.105:0/517543487 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff7281084e0 0x7ff7280757a0 unknown :-1 s=CLOSED pgs=349 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:23.302 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:23.301+0000 7ff72759e700 1 -- 192.168.123.105:0/517543487 >> 192.168.123.105:0/517543487 conn(0x7ff7280fe000 msgr2=0x7ff7280fea50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:23.303 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:23.302+0000 7ff72759e700 1 -- 192.168.123.105:0/517543487 shutdown_connections 2026-03-09T15:03:23.303 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:23.302+0000 7ff72759e700 1 -- 192.168.123.105:0/517543487 wait complete. 2026-03-09T15:03:23.313 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:03:23.483 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-09T15:03:23.785 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:03:24.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.180+0000 7fb9be075700 1 -- 192.168.123.105:0/4065959456 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b810ed80 msgr2=0x7fb9b806d260 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:24.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.180+0000 7fb9b6ffd700 1 -- 192.168.123.105:0/4065959456 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb9a800ba20 con 0x7fb9b810ed80 2026-03-09T15:03:24.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.180+0000 7fb9be075700 1 --2- 192.168.123.105:0/4065959456 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b810ed80 0x7fb9b806d260 secure :-1 s=READY pgs=350 cs=0 l=1 rev1=1 crypto rx=0x7fb9a8009b50 tx=0x7fb9a8009e60 comp rx=0 tx=0).stop 
2026-03-09T15:03:24.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.180+0000 7fb9be075700 1 -- 192.168.123.105:0/4065959456 shutdown_connections 2026-03-09T15:03:24.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.180+0000 7fb9be075700 1 --2- 192.168.123.105:0/4065959456 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb9b806d7a0 0x7fb9b806dc10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:24.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.180+0000 7fb9be075700 1 --2- 192.168.123.105:0/4065959456 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b810ed80 0x7fb9b806d260 secure :-1 s=CLOSED pgs=350 cs=0 l=1 rev1=1 crypto rx=0x7fb9a8009b50 tx=0x7fb9a8009e60 comp rx=0 tx=0).stop 2026-03-09T15:03:24.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.180+0000 7fb9be075700 1 -- 192.168.123.105:0/4065959456 >> 192.168.123.105:0/4065959456 conn(0x7fb9b806c830 msgr2=0x7fb9b8071830 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:24.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.181+0000 7fb9be075700 1 -- 192.168.123.105:0/4065959456 shutdown_connections 2026-03-09T15:03:24.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.181+0000 7fb9be075700 1 -- 192.168.123.105:0/4065959456 wait complete. 
2026-03-09T15:03:24.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.182+0000 7fb9be075700 1 Processor -- start 2026-03-09T15:03:24.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.182+0000 7fb9be075700 1 -- start start 2026-03-09T15:03:24.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.182+0000 7fb9be075700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b806d7a0 0x7fb9b81179a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:24.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.182+0000 7fb9be075700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb9b8112950 0x7fb9b8112dc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:24.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.182+0000 7fb9be075700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb9b8113450 con 0x7fb9b806d7a0 2026-03-09T15:03:24.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.182+0000 7fb9be075700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb9b8113590 con 0x7fb9b8112950 2026-03-09T15:03:24.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.182+0000 7fb9b77fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b806d7a0 0x7fb9b81179a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:24.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.182+0000 7fb9b77fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b806d7a0 0x7fb9b81179a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:57472/0 (socket says 192.168.123.105:57472) 2026-03-09T15:03:24.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.182+0000 7fb9b77fe700 1 -- 192.168.123.105:0/1584798803 learned_addr learned my addr 192.168.123.105:0/1584798803 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:24.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.182+0000 7fb9aedff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb9b8112950 0x7fb9b8112dc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:24.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.183+0000 7fb9b77fe700 1 -- 192.168.123.105:0/1584798803 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb9b8112950 msgr2=0x7fb9b8112dc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:24.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.183+0000 7fb9b77fe700 1 --2- 192.168.123.105:0/1584798803 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb9b8112950 0x7fb9b8112dc0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:24.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.183+0000 7fb9b77fe700 1 -- 192.168.123.105:0/1584798803 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb9a80097e0 con 0x7fb9b806d7a0 2026-03-09T15:03:24.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.183+0000 7fb9b77fe700 1 --2- 192.168.123.105:0/1584798803 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b806d7a0 0x7fb9b81179a0 secure :-1 s=READY pgs=351 cs=0 l=1 rev1=1 crypto rx=0x7fb9a8005fd0 tx=0x7fb9a8005710 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:24.184 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.184+0000 7fb9b57fa700 1 -- 192.168.123.105:0/1584798803 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb9a801d070 con 0x7fb9b806d7a0 2026-03-09T15:03:24.184 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.184+0000 7fb9b57fa700 1 -- 192.168.123.105:0/1584798803 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fb9a8022470 con 0x7fb9b806d7a0 2026-03-09T15:03:24.184 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.184+0000 7fb9b57fa700 1 -- 192.168.123.105:0/1584798803 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb9a800f670 con 0x7fb9b806d7a0 2026-03-09T15:03:24.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.184+0000 7fb9be075700 1 -- 192.168.123.105:0/1584798803 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb9b81b7c70 con 0x7fb9b806d7a0 2026-03-09T15:03:24.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.185+0000 7fb9be075700 1 -- 192.168.123.105:0/1584798803 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb9b81b7fd0 con 0x7fb9b806d7a0 2026-03-09T15:03:24.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.188+0000 7fb9b57fa700 1 -- 192.168.123.105:0/1584798803 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7fb9a8022aa0 con 0x7fb9b806d7a0 2026-03-09T15:03:24.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.188+0000 7fb9b57fa700 1 --2- 192.168.123.105:0/1584798803 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fb9a40776c0 0x7fb9a4079b70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:24.189 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.188+0000 7fb9b57fa700 1 -- 192.168.123.105:0/1584798803 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7fb9a809ace0 con 0x7fb9b806d7a0 2026-03-09T15:03:24.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.189+0000 7fb9be075700 1 -- 192.168.123.105:0/1584798803 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb9b8109e10 con 0x7fb9b806d7a0 2026-03-09T15:03:24.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.189+0000 7fb9aedff700 1 --2- 192.168.123.105:0/1584798803 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fb9a40776c0 0x7fb9a4079b70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:24.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.190+0000 7fb9aedff700 1 --2- 192.168.123.105:0/1584798803 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fb9a40776c0 0x7fb9a4079b70 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fb9a0005950 tx=0x7fb9a000a300 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:24.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.193+0000 7fb9b57fa700 1 -- 192.168.123.105:0/1584798803 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7fb9a80637f0 con 0x7fb9b806d7a0 2026-03-09T15:03:24.334 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.334+0000 7fb9be075700 1 -- 192.168.123.105:0/1584798803 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fb9b8113e80 
con 0x7fb9a40776c0 2026-03-09T15:03:24.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.340+0000 7fb9b57fa700 1 -- 192.168.123.105:0/1584798803 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3552 (secure 0 0 0) 0x7fb9b8113e80 con 0x7fb9a40776c0 2026-03-09T15:03:24.340 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:03:24.340 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (30s) 11s ago 7m 16.5M - 0.25.0 c8568f914cd2 7635cece310c 2026-03-09T15:03:24.340 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (7m) 11s ago 7m 8493k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:03:24.340 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (6m) 49s ago 6m 11.1M - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:03:24.340 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (7m) 11s ago 7m 7411k - 18.2.0 dc2bc1663786 1c577d7a0de0 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (6m) 49s ago 6m 7402k - 18.2.0 dc2bc1663786 9e4961442551 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (15s) 11s ago 6m 39.9M - 10.4.0 c8b91775d855 eb6431f63d88 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (5m) 11s ago 4m 237M - 18.2.0 dc2bc1663786 ea3dca51957f 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (4m) 11s ago 4m 16.4M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (4m) 49s ago 4m 16.1M - 18.2.0 dc2bc1663786 6c77fb591d5a 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (4m) 49s ago 4m 294M - 18.2.0 
dc2bc1663786 b5ad1c71089a 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (91s) 11s ago 8m 622M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (63s) 49s ago 6m 487M - 19.2.3-678-ge911bdeb 654f31e6858e acf5a6f3f804 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (8m) 11s ago 8m 54.9M 2048M 18.2.0 dc2bc1663786 c83e96b62251 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (6m) 49s ago 6m 47.2M 2048M 18.2.0 dc2bc1663786 7963792b5376 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (55s) 11s ago 7m 9126k - 1.7.0 72c9c2088986 888d071c50d9 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (51s) 49s ago 6m 5372k - 1.7.0 72c9c2088986 22c96a576a60 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (6m) 11s ago 6m 298M 4096M 18.2.0 dc2bc1663786 50f3ca995318 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (6m) 11s ago 6m 297M 4096M 18.2.0 dc2bc1663786 23e35bdafe50 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (5m) 11s ago 5m 261M 4096M 18.2.0 dc2bc1663786 75097dc12979 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (5m) 49s ago 5m 366M 4096M 18.2.0 dc2bc1663786 e79644a0564f 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (5m) 49s ago 5m 310M 4096M 18.2.0 dc2bc1663786 4239752204df 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (5m) 49s ago 5m 278M 4096M 18.2.0 dc2bc1663786 85fde149396e 2026-03-09T15:03:24.341 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (37s) 11s ago 
6m 48.9M - 2.51.0 1d3b7f56885b e6f470b0ba11 2026-03-09T15:03:24.348 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.346+0000 7fb9ae5fe700 1 -- 192.168.123.105:0/1584798803 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fb9a40776c0 msgr2=0x7fb9a4079b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:24.348 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.346+0000 7fb9ae5fe700 1 --2- 192.168.123.105:0/1584798803 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fb9a40776c0 0x7fb9a4079b70 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fb9a0005950 tx=0x7fb9a000a300 comp rx=0 tx=0).stop 2026-03-09T15:03:24.348 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.346+0000 7fb9ae5fe700 1 -- 192.168.123.105:0/1584798803 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b806d7a0 msgr2=0x7fb9b81179a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:24.348 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.346+0000 7fb9ae5fe700 1 --2- 192.168.123.105:0/1584798803 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b806d7a0 0x7fb9b81179a0 secure :-1 s=READY pgs=351 cs=0 l=1 rev1=1 crypto rx=0x7fb9a8005fd0 tx=0x7fb9a8005710 comp rx=0 tx=0).stop 2026-03-09T15:03:24.348 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.348+0000 7fb9ae5fe700 1 -- 192.168.123.105:0/1584798803 shutdown_connections 2026-03-09T15:03:24.348 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.348+0000 7fb9ae5fe700 1 --2- 192.168.123.105:0/1584798803 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fb9a40776c0 0x7fb9a4079b70 secure :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fb9a0005950 tx=0x7fb9a000a300 comp rx=0 tx=0).stop 2026-03-09T15:03:24.348 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.348+0000 7fb9ae5fe700 1 --2- 
192.168.123.105:0/1584798803 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb9b806d7a0 0x7fb9b81179a0 unknown :-1 s=CLOSED pgs=351 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:24.348 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.348+0000 7fb9ae5fe700 1 --2- 192.168.123.105:0/1584798803 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb9b8112950 0x7fb9b8112dc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:24.348 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.348+0000 7fb9ae5fe700 1 -- 192.168.123.105:0/1584798803 >> 192.168.123.105:0/1584798803 conn(0x7fb9b806c830 msgr2=0x7fb9b8071830 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:24.348 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.348+0000 7fb9ae5fe700 1 -- 192.168.123.105:0/1584798803 shutdown_connections 2026-03-09T15:03:24.348 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:24.348+0000 7fb9ae5fe700 1 -- 192.168.123.105:0/1584798803 wait complete. 2026-03-09T15:03:24.413 INFO:teuthology.task.sequential:In sequential, running task cephadm.shell... 
2026-03-09T15:03:24.413 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-09T15:03:24.413 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mgr mgr/orchestrator/fail_fs true' 2026-03-09T15:03:24.440 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:24 vm05.local ceph-mon[50611]: pgmap v40: 65 pgs: 65 active+clean; 298 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1002 KiB/s rd, 968 KiB/s wr, 57 op/s 2026-03-09T15:03:24.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:24 vm09.local ceph-mon[59673]: pgmap v40: 65 pgs: 65 active+clean; 298 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1002 KiB/s rd, 968 KiB/s wr, 57 op/s 2026-03-09T15:03:24.680 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:03:25.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.164+0000 7ff563fff700 1 -- 192.168.123.105:0/500550300 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff564071e40 msgr2=0x7ff5640722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:25.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.164+0000 7ff563fff700 1 --2- 192.168.123.105:0/500550300 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff564071e40 0x7ff5640722b0 secure :-1 s=READY pgs=352 cs=0 l=1 rev1=1 crypto rx=0x7ff55c00d3e0 tx=0x7ff55c00d6f0 comp rx=0 tx=0).stop 2026-03-09T15:03:25.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.164+0000 7ff563fff700 1 -- 192.168.123.105:0/500550300 shutdown_connections 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.164+0000 7ff563fff700 1 --2- 
192.168.123.105:0/500550300 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff564071e40 0x7ff5640722b0 unknown :-1 s=CLOSED pgs=352 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.164+0000 7ff563fff700 1 --2- 192.168.123.105:0/500550300 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff56410c8f0 0x7ff56410ccc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.164+0000 7ff563fff700 1 -- 192.168.123.105:0/500550300 >> 192.168.123.105:0/500550300 conn(0x7ff56406c6c0 msgr2=0x7ff56406cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.164+0000 7ff563fff700 1 -- 192.168.123.105:0/500550300 shutdown_connections 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.164+0000 7ff563fff700 1 -- 192.168.123.105:0/500550300 wait complete. 
2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.165+0000 7ff563fff700 1 Processor -- start 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.165+0000 7ff563fff700 1 -- start start 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.165+0000 7ff563fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff56410c8f0 0x7ff56407cea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.165+0000 7ff563fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff56407d3e0 0x7ff56407d850 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.165+0000 7ff563fff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff5640819d0 con 0x7ff56407d3e0 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.165+0000 7ff563fff700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff564081b40 con 0x7ff56410c8f0 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.165+0000 7ff562ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff56410c8f0 0x7ff56407cea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.165+0000 7ff562ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff56410c8f0 0x7ff56407cea0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:40162/0 (socket says 192.168.123.105:40162) 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.165+0000 7ff562ffd700 1 -- 192.168.123.105:0/3838545137 learned_addr learned my addr 192.168.123.105:0/3838545137 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.165+0000 7ff5627fc700 1 --2- 192.168.123.105:0/3838545137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff56407d3e0 0x7ff56407d850 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.166+0000 7ff562ffd700 1 -- 192.168.123.105:0/3838545137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff56407d3e0 msgr2=0x7ff56407d850 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.166+0000 7ff562ffd700 1 --2- 192.168.123.105:0/3838545137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff56407d3e0 0x7ff56407d850 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:25.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.166+0000 7ff562ffd700 1 -- 192.168.123.105:0/3838545137 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff55c00d090 con 0x7ff56410c8f0 2026-03-09T15:03:25.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.166+0000 7ff562ffd700 1 --2- 192.168.123.105:0/3838545137 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff56410c8f0 0x7ff56407cea0 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7ff554008ca0 tx=0x7ff55400e130 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:03:25.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.166+0000 7ff568a1e700 1 -- 192.168.123.105:0/3838545137 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff554019070 con 0x7ff56410c8f0 2026-03-09T15:03:25.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.166+0000 7ff563fff700 1 -- 192.168.123.105:0/3838545137 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff564081e20 con 0x7ff56410c8f0 2026-03-09T15:03:25.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.166+0000 7ff563fff700 1 -- 192.168.123.105:0/3838545137 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff564082370 con 0x7ff56410c8f0 2026-03-09T15:03:25.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.167+0000 7ff568a1e700 1 -- 192.168.123.105:0/3838545137 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7ff554004d10 con 0x7ff56410c8f0 2026-03-09T15:03:25.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.167+0000 7ff568a1e700 1 -- 192.168.123.105:0/3838545137 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff55400f040 con 0x7ff56410c8f0 2026-03-09T15:03:25.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.168+0000 7ff568a1e700 1 -- 192.168.123.105:0/3838545137 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7ff554004e80 con 0x7ff56410c8f0 2026-03-09T15:03:25.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.168+0000 7ff568a1e700 1 --2- 192.168.123.105:0/3838545137 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff54c0776c0 0x7ff54c079b70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:25.169 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.170+0000 7ff568a1e700 1 -- 192.168.123.105:0/3838545137 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7ff554099690 con 0x7ff56410c8f0 2026-03-09T15:03:25.170 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.170+0000 7ff563fff700 1 -- 192.168.123.105:0/3838545137 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff550005320 con 0x7ff56410c8f0 2026-03-09T15:03:25.170 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.171+0000 7ff5627fc700 1 --2- 192.168.123.105:0/3838545137 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff54c0776c0 0x7ff54c079b70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:25.171 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.171+0000 7ff5627fc700 1 --2- 192.168.123.105:0/3838545137 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff54c0776c0 0x7ff54c079b70 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7ff55c0095a0 tx=0x7ff55c00bec0 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:25.176 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.174+0000 7ff568a1e700 1 -- 192.168.123.105:0/3838545137 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7ff5540621a0 con 0x7ff56410c8f0 2026-03-09T15:03:25.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.325+0000 7ff563fff700 1 -- 192.168.123.105:0/3838545137 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command([{prefix=config set, name=mgr/orchestrator/fail_fs}] v 0) v1 -- 0x7ff550005cc0 con 
0x7ff56410c8f0 2026-03-09T15:03:25.552 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.552+0000 7ff568a1e700 1 -- 192.168.123.105:0/3838545137 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/orchestrator/fail_fs}]=0 v37) v1 ==== 125+0+0 (secure 0 0 0) 0x7ff5540618f0 con 0x7ff56410c8f0 2026-03-09T15:03:25.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.555+0000 7ff563fff700 1 -- 192.168.123.105:0/3838545137 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff54c0776c0 msgr2=0x7ff54c079b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:25.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.555+0000 7ff563fff700 1 --2- 192.168.123.105:0/3838545137 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff54c0776c0 0x7ff54c079b70 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7ff55c0095a0 tx=0x7ff55c00bec0 comp rx=0 tx=0).stop 2026-03-09T15:03:25.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.555+0000 7ff563fff700 1 -- 192.168.123.105:0/3838545137 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff56410c8f0 msgr2=0x7ff56407cea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:25.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.555+0000 7ff563fff700 1 --2- 192.168.123.105:0/3838545137 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff56410c8f0 0x7ff56407cea0 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7ff554008ca0 tx=0x7ff55400e130 comp rx=0 tx=0).stop 2026-03-09T15:03:25.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.555+0000 7ff563fff700 1 -- 192.168.123.105:0/3838545137 shutdown_connections 2026-03-09T15:03:25.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.555+0000 7ff563fff700 1 --2- 192.168.123.105:0/3838545137 >> 
[v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7ff54c0776c0 0x7ff54c079b70 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:25.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.555+0000 7ff563fff700 1 --2- 192.168.123.105:0/3838545137 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff56410c8f0 0x7ff56407cea0 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:25.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.555+0000 7ff563fff700 1 --2- 192.168.123.105:0/3838545137 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff56407d3e0 0x7ff56407d850 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:25.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.555+0000 7ff563fff700 1 -- 192.168.123.105:0/3838545137 >> 192.168.123.105:0/3838545137 conn(0x7ff56406c6c0 msgr2=0x7ff564070a00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:25.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.555+0000 7ff563fff700 1 -- 192.168.123.105:0/3838545137 shutdown_connections 2026-03-09T15:03:25.555 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:25.555+0000 7ff563fff700 1 -- 192.168.123.105:0/3838545137 wait complete. 
2026-03-09T15:03:25.574 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:25 vm05.local ceph-mon[50611]: from='client.14726 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:25.574 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:25 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:03:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:25 vm09.local ceph-mon[59673]: from='client.14726 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:25 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:03:25.629 INFO:teuthology.task.sequential:In sequential, running task cephadm.shell... 
2026-03-09T15:03:25.630 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-09T15:03:25.630 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force' 2026-03-09T15:03:25.899 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:03:26.262 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.260+0000 7fd5ac341700 1 -- 192.168.123.105:0/2119890784 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd5a4071b60 msgr2=0x7fd5a4071fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:26.262 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.260+0000 7fd5ac341700 1 --2- 192.168.123.105:0/2119890784 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd5a4071b60 0x7fd5a4071fd0 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fd594009a60 tx=0x7fd594009d70 comp rx=0 tx=0).stop 2026-03-09T15:03:26.262 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.263+0000 7fd5ac341700 1 -- 192.168.123.105:0/2119890784 shutdown_connections 2026-03-09T15:03:26.262 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.263+0000 7fd5ac341700 1 --2- 192.168.123.105:0/2119890784 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd5a4071b60 0x7fd5a4071fd0 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:26.262 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.263+0000 7fd5ac341700 1 --2- 192.168.123.105:0/2119890784 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd5a410eab0 0x7fd5a410ee80 unknown :-1 
s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:26.262 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.263+0000 7fd5ac341700 1 -- 192.168.123.105:0/2119890784 >> 192.168.123.105:0/2119890784 conn(0x7fd5a406c6c0 msgr2=0x7fd5a406cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:26.263 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.263+0000 7fd5ac341700 1 -- 192.168.123.105:0/2119890784 shutdown_connections 2026-03-09T15:03:26.263 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.263+0000 7fd5ac341700 1 -- 192.168.123.105:0/2119890784 wait complete. 2026-03-09T15:03:26.263 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.263+0000 7fd5ac341700 1 Processor -- start 2026-03-09T15:03:26.263 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.263+0000 7fd5ac341700 1 -- start start 2026-03-09T15:03:26.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.264+0000 7fd5ac341700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd5a4071b60 0x7fd5a41a4e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:26.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.264+0000 7fd5ac341700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd5a410eab0 0x7fd5a41a5390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:26.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.264+0000 7fd5ac341700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd5a41a5a70 con 0x7fd5a410eab0 2026-03-09T15:03:26.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.264+0000 7fd5ac341700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd5a41a9800 con 0x7fd5a4071b60 2026-03-09T15:03:26.264 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.264+0000 7fd5aa0dd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd5a4071b60 0x7fd5a41a4e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:26.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.264+0000 7fd5aa0dd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd5a4071b60 0x7fd5a41a4e50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:40182/0 (socket says 192.168.123.105:40182) 2026-03-09T15:03:26.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.264+0000 7fd5aa0dd700 1 -- 192.168.123.105:0/1279439888 learned_addr learned my addr 192.168.123.105:0/1279439888 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:26.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.264+0000 7fd5aa0dd700 1 -- 192.168.123.105:0/1279439888 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd5a410eab0 msgr2=0x7fd5a41a5390 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:26.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.264+0000 7fd5aa0dd700 1 --2- 192.168.123.105:0/1279439888 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd5a410eab0 0x7fd5a41a5390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:26.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.264+0000 7fd5aa0dd700 1 -- 192.168.123.105:0/1279439888 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd594009710 con 0x7fd5a4071b60 2026-03-09T15:03:26.267 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.264+0000 7fd5aa0dd700 1 --2- 
192.168.123.105:0/1279439888 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd5a4071b60 0x7fd5a41a4e50 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fd5a000ea30 tx=0x7fd5a000edf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:26.267 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.267+0000 7fd59b7fe700 1 -- 192.168.123.105:0/1279439888 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd5a000cc40 con 0x7fd5a4071b60 2026-03-09T15:03:26.267 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.267+0000 7fd5ac341700 1 -- 192.168.123.105:0/1279439888 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd5a41a9ae0 con 0x7fd5a4071b60 2026-03-09T15:03:26.267 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.267+0000 7fd5ac341700 1 -- 192.168.123.105:0/1279439888 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd5a41aa030 con 0x7fd5a4071b60 2026-03-09T15:03:26.269 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.268+0000 7fd59b7fe700 1 -- 192.168.123.105:0/1279439888 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fd5a000cda0 con 0x7fd5a4071b60 2026-03-09T15:03:26.269 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.268+0000 7fd59b7fe700 1 -- 192.168.123.105:0/1279439888 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd5a0010430 con 0x7fd5a4071b60 2026-03-09T15:03:26.269 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.269+0000 7fd59b7fe700 1 -- 192.168.123.105:0/1279439888 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7fd5a0004830 con 0x7fd5a4071b60 2026-03-09T15:03:26.269 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.269+0000 
7fd5997fa700 1 -- 192.168.123.105:0/1279439888 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd588005320 con 0x7fd5a4071b60 2026-03-09T15:03:26.270 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.270+0000 7fd59b7fe700 1 --2- 192.168.123.105:0/1279439888 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fd590077660 0x7fd590079b10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:26.270 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.270+0000 7fd59b7fe700 1 -- 192.168.123.105:0/1279439888 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7fd5a0014070 con 0x7fd5a4071b60 2026-03-09T15:03:26.270 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.270+0000 7fd5a98dc700 1 --2- 192.168.123.105:0/1279439888 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fd590077660 0x7fd590079b10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:26.270 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.270+0000 7fd5a98dc700 1 --2- 192.168.123.105:0/1279439888 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fd590077660 0x7fd590079b10 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fd5a41a6470 tx=0x7fd59400b540 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:26.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.273+0000 7fd59b7fe700 1 -- 192.168.123.105:0/1279439888 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7fd5a0062710 con 0x7fd5a4071b60 2026-03-09T15:03:26.392 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.391+0000 7fd5997fa700 1 -- 192.168.123.105:0/1279439888 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim}] v 0) v1 -- 0x7fd588005190 con 0x7fd5a4071b60 2026-03-09T15:03:26.393 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.393+0000 7fd59b7fe700 1 -- 192.168.123.105:0/1279439888 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim}]=0 v37) v1 ==== 155+0+0 (secure 0 0 0) 0x7fd5a0061e60 con 0x7fd5a4071b60 2026-03-09T15:03:26.395 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.396+0000 7fd5997fa700 1 -- 192.168.123.105:0/1279439888 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fd590077660 msgr2=0x7fd590079b10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:26.396 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.396+0000 7fd5997fa700 1 --2- 192.168.123.105:0/1279439888 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fd590077660 0x7fd590079b10 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7fd5a41a6470 tx=0x7fd59400b540 comp rx=0 tx=0).stop 2026-03-09T15:03:26.396 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.396+0000 7fd5997fa700 1 -- 192.168.123.105:0/1279439888 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd5a4071b60 msgr2=0x7fd5a41a4e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:26.396 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.396+0000 7fd5997fa700 1 --2- 192.168.123.105:0/1279439888 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd5a4071b60 0x7fd5a41a4e50 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fd5a000ea30 tx=0x7fd5a000edf0 comp rx=0 tx=0).stop 2026-03-09T15:03:26.396 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.396+0000 7fd5997fa700 1 -- 192.168.123.105:0/1279439888 shutdown_connections 2026-03-09T15:03:26.396 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.396+0000 7fd5997fa700 1 --2- 192.168.123.105:0/1279439888 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fd590077660 0x7fd590079b10 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:26.396 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.396+0000 7fd5997fa700 1 --2- 192.168.123.105:0/1279439888 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd5a4071b60 0x7fd5a41a4e50 secure :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fd5a000ea30 tx=0x7fd5a000edf0 comp rx=0 tx=0).stop 2026-03-09T15:03:26.396 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.396+0000 7fd5997fa700 1 --2- 192.168.123.105:0/1279439888 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd5a410eab0 0x7fd5a41a5390 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:26.396 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.396+0000 7fd5997fa700 1 -- 192.168.123.105:0/1279439888 >> 192.168.123.105:0/1279439888 conn(0x7fd5a406c6c0 msgr2=0x7fd5a40703d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:26.396 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.396+0000 7fd5997fa700 1 -- 192.168.123.105:0/1279439888 shutdown_connections 2026-03-09T15:03:26.396 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:26.396+0000 7fd5997fa700 1 -- 192.168.123.105:0/1279439888 wait complete. 
2026-03-09T15:03:26.447 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force' 2026-03-09T15:03:26.713 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:03:26.746 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:26 vm05.local ceph-mon[50611]: pgmap v41: 65 pgs: 65 active+clean; 298 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1002 KiB/s rd, 968 KiB/s wr, 57 op/s 2026-03-09T15:03:26.746 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:26 vm05.local ceph-mon[50611]: from='client.? ' entity='client.admin' 2026-03-09T15:03:26.746 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:26 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:26.746 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:26 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:26.746 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:26 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:26.746 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:26 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:26 vm09.local ceph-mon[59673]: pgmap v41: 65 pgs: 65 active+clean; 298 MiB data, 2.9 GiB used, 
117 GiB / 120 GiB avail; 1002 KiB/s rd, 968 KiB/s wr, 57 op/s 2026-03-09T15:03:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:26 vm09.local ceph-mon[59673]: from='client.? ' entity='client.admin' 2026-03-09T15:03:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:26 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:26 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:26 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:26 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:27.057 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.057+0000 7fa88ca1e700 1 -- 192.168.123.105:0/1948453016 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa888071b60 msgr2=0x7fa888071fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:27.057 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.057+0000 7fa88ca1e700 1 --2- 192.168.123.105:0/1948453016 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa888071b60 0x7fa888071fd0 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7fa87c009b00 tx=0x7fa87c009e10 comp rx=0 tx=0).stop 2026-03-09T15:03:27.057 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.057+0000 7fa88ca1e700 1 -- 192.168.123.105:0/1948453016 shutdown_connections 2026-03-09T15:03:27.057 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.057+0000 7fa88ca1e700 1 --2- 192.168.123.105:0/1948453016 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa888071b60 0x7fa888071fd0 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:27.057 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.057+0000 7fa88ca1e700 1 --2- 192.168.123.105:0/1948453016 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa88810e9e0 0x7fa88810edb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:27.057 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.057+0000 7fa88ca1e700 1 -- 192.168.123.105:0/1948453016 >> 192.168.123.105:0/1948453016 conn(0x7fa88806c6c0 msgr2=0x7fa88806cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:27.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.058+0000 7fa88ca1e700 1 -- 192.168.123.105:0/1948453016 shutdown_connections 2026-03-09T15:03:27.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.058+0000 7fa88ca1e700 1 -- 192.168.123.105:0/1948453016 wait complete. 
2026-03-09T15:03:27.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.058+0000 7fa88ca1e700 1 Processor -- start 2026-03-09T15:03:27.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.059+0000 7fa88ca1e700 1 -- start start 2026-03-09T15:03:27.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.059+0000 7fa88ca1e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa888071b60 0x7fa8881158f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:27.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.059+0000 7fa88ca1e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa88810e9e0 0x7fa888115e30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:27.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.059+0000 7fa88ca1e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa888119a80 con 0x7fa888071b60 2026-03-09T15:03:27.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.059+0000 7fa88ca1e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa888116370 con 0x7fa88810e9e0 2026-03-09T15:03:27.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.059+0000 7fa886ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa88810e9e0 0x7fa888115e30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:27.060 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.059+0000 7fa886ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa88810e9e0 0x7fa888115e30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:40208/0 (socket says 192.168.123.105:40208) 2026-03-09T15:03:27.060 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.059+0000 7fa886ffd700 1 -- 192.168.123.105:0/913439876 learned_addr learned my addr 192.168.123.105:0/913439876 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:27.060 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.059+0000 7fa8877fe700 1 --2- 192.168.123.105:0/913439876 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa888071b60 0x7fa8881158f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:27.060 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.059+0000 7fa886ffd700 1 -- 192.168.123.105:0/913439876 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa888071b60 msgr2=0x7fa8881158f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:27.060 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.059+0000 7fa886ffd700 1 --2- 192.168.123.105:0/913439876 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa888071b60 0x7fa8881158f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:27.060 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.059+0000 7fa886ffd700 1 -- 192.168.123.105:0/913439876 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa87c0097e0 con 0x7fa88810e9e0 2026-03-09T15:03:27.060 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.060+0000 7fa886ffd700 1 --2- 192.168.123.105:0/913439876 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa88810e9e0 0x7fa888115e30 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7fa87c004930 tx=0x7fa87c004a10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:03:27.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.061+0000 7fa884ff9700 1 -- 192.168.123.105:0/913439876 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa87c01d070 con 0x7fa88810e9e0 2026-03-09T15:03:27.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.061+0000 7fa884ff9700 1 -- 192.168.123.105:0/913439876 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fa87c00bc50 con 0x7fa88810e9e0 2026-03-09T15:03:27.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.061+0000 7fa884ff9700 1 -- 192.168.123.105:0/913439876 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa87c00f790 con 0x7fa88810e9e0 2026-03-09T15:03:27.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.062+0000 7fa88ca1e700 1 -- 192.168.123.105:0/913439876 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa8881165f0 con 0x7fa88810e9e0 2026-03-09T15:03:27.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.062+0000 7fa88ca1e700 1 -- 192.168.123.105:0/913439876 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa8881b7980 con 0x7fa88810e9e0 2026-03-09T15:03:27.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.062+0000 7fa88ca1e700 1 -- 192.168.123.105:0/913439876 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa88804f2a0 con 0x7fa88810e9e0 2026-03-09T15:03:27.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.064+0000 7fa884ff9700 1 -- 192.168.123.105:0/913439876 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7fa87c022470 con 0x7fa88810e9e0 2026-03-09T15:03:27.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.064+0000 
7fa884ff9700 1 --2- 192.168.123.105:0/913439876 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa870077670 0x7fa870079b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:27.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.065+0000 7fa884ff9700 1 -- 192.168.123.105:0/913439876 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7fa87c09b9c0 con 0x7fa88810e9e0 2026-03-09T15:03:27.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.066+0000 7fa884ff9700 1 -- 192.168.123.105:0/913439876 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7fa87c064550 con 0x7fa88810e9e0 2026-03-09T15:03:27.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.067+0000 7fa8877fe700 1 --2- 192.168.123.105:0/913439876 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa870077670 0x7fa870079b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:27.070 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.069+0000 7fa8877fe700 1 --2- 192.168.123.105:0/913439876 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa870077670 0x7fa870079b20 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fa878006fd0 tx=0x7fa878008040 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:27.191 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.191+0000 7fa88ca1e700 1 -- 192.168.123.105:0/913439876 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim_allowed}] v 0) v1 -- 0x7fa88804ea50 con 0x7fa88810e9e0 
2026-03-09T15:03:27.193 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.193+0000 7fa884ff9700 1 -- 192.168.123.105:0/913439876 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mon_warn_on_insecure_global_id_reclaim_allowed}]=0 v37) v1 ==== 163+0+0 (secure 0 0 0) 0x7fa87c063ca0 con 0x7fa88810e9e0 2026-03-09T15:03:27.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.197+0000 7fa86e7fc700 1 -- 192.168.123.105:0/913439876 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa870077670 msgr2=0x7fa870079b20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:27.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.197+0000 7fa86e7fc700 1 --2- 192.168.123.105:0/913439876 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa870077670 0x7fa870079b20 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fa878006fd0 tx=0x7fa878008040 comp rx=0 tx=0).stop 2026-03-09T15:03:27.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.197+0000 7fa86e7fc700 1 -- 192.168.123.105:0/913439876 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa88810e9e0 msgr2=0x7fa888115e30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:27.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.197+0000 7fa86e7fc700 1 --2- 192.168.123.105:0/913439876 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa88810e9e0 0x7fa888115e30 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7fa87c004930 tx=0x7fa87c004a10 comp rx=0 tx=0).stop 2026-03-09T15:03:27.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.198+0000 7fa86e7fc700 1 -- 192.168.123.105:0/913439876 shutdown_connections 2026-03-09T15:03:27.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.198+0000 7fa86e7fc700 1 --2- 192.168.123.105:0/913439876 >> 
[v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa870077670 0x7fa870079b20 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:27.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.198+0000 7fa86e7fc700 1 --2- 192.168.123.105:0/913439876 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa888071b60 0x7fa8881158f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:27.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.198+0000 7fa86e7fc700 1 --2- 192.168.123.105:0/913439876 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa88810e9e0 0x7fa888115e30 secure :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7fa87c004930 tx=0x7fa87c004a10 comp rx=0 tx=0).stop 2026-03-09T15:03:27.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.198+0000 7fa86e7fc700 1 -- 192.168.123.105:0/913439876 >> 192.168.123.105:0/913439876 conn(0x7fa88806c6c0 msgr2=0x7fa88806d020 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:27.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.198+0000 7fa86e7fc700 1 -- 192.168.123.105:0/913439876 shutdown_connections 2026-03-09T15:03:27.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.200+0000 7fa86e7fc700 1 -- 192.168.123.105:0/913439876 wait complete. 
2026-03-09T15:03:27.266 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set global log_to_journald false --force' 2026-03-09T15:03:27.461 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:03:27.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.865+0000 7fa151db2700 1 -- 192.168.123.105:0/2142898254 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa14c071e40 msgr2=0x7fa14c0722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:27.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.865+0000 7fa151db2700 1 --2- 192.168.123.105:0/2142898254 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa14c071e40 0x7fa14c0722b0 secure :-1 s=READY pgs=353 cs=0 l=1 rev1=1 crypto rx=0x7fa14400d3f0 tx=0x7fa14400d700 comp rx=0 tx=0).stop 2026-03-09T15:03:27.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.866+0000 7fa151db2700 1 -- 192.168.123.105:0/2142898254 shutdown_connections 2026-03-09T15:03:27.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.866+0000 7fa151db2700 1 --2- 192.168.123.105:0/2142898254 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa14c071e40 0x7fa14c0722b0 unknown :-1 s=CLOSED pgs=353 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:27.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.866+0000 7fa151db2700 1 --2- 192.168.123.105:0/2142898254 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa14c10c8f0 0x7fa14c10ccc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:27.866 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.866+0000 7fa151db2700 1 -- 192.168.123.105:0/2142898254 >> 192.168.123.105:0/2142898254 conn(0x7fa14c06c6c0 msgr2=0x7fa14c06cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:27.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.867+0000 7fa151db2700 1 -- 192.168.123.105:0/2142898254 shutdown_connections 2026-03-09T15:03:27.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.867+0000 7fa151db2700 1 -- 192.168.123.105:0/2142898254 wait complete. 2026-03-09T15:03:27.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.867+0000 7fa151db2700 1 Processor -- start 2026-03-09T15:03:27.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.867+0000 7fa151db2700 1 -- start start 2026-03-09T15:03:27.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.867+0000 7fa151db2700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa14c10c8f0 0x7fa14c07ceb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:27.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.867+0000 7fa151db2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa14c07d3f0 0x7fa14c07d860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:27.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.867+0000 7fa151db2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa14c081a30 con 0x7fa14c07d3f0 2026-03-09T15:03:27.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.867+0000 7fa151db2700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa14c081b70 con 0x7fa14c10c8f0 2026-03-09T15:03:27.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.867+0000 7fa14bfff700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa14c07d3f0 0x7fa14c07d860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:27.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.867+0000 7fa14bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa14c07d3f0 0x7fa14c07d860 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:57546/0 (socket says 192.168.123.105:57546) 2026-03-09T15:03:27.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.867+0000 7fa14bfff700 1 -- 192.168.123.105:0/3472073244 learned_addr learned my addr 192.168.123.105:0/3472073244 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:27.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.868+0000 7fa150db0700 1 --2- 192.168.123.105:0/3472073244 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa14c10c8f0 0x7fa14c07ceb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:27.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.868+0000 7fa14bfff700 1 -- 192.168.123.105:0/3472073244 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa14c10c8f0 msgr2=0x7fa14c07ceb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:27.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.868+0000 7fa14bfff700 1 --2- 192.168.123.105:0/3472073244 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa14c10c8f0 0x7fa14c07ceb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:27.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.868+0000 7fa14bfff700 1 -- 
192.168.123.105:0/3472073244 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa144007ed0 con 0x7fa14c07d3f0 2026-03-09T15:03:27.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.868+0000 7fa14bfff700 1 --2- 192.168.123.105:0/3472073244 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa14c07d3f0 0x7fa14c07d860 secure :-1 s=READY pgs=354 cs=0 l=1 rev1=1 crypto rx=0x7fa144006270 tx=0x7fa144004b40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:27.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.869+0000 7fa149ffb700 1 -- 192.168.123.105:0/3472073244 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa144003f80 con 0x7fa14c07d3f0 2026-03-09T15:03:27.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.870+0000 7fa151db2700 1 -- 192.168.123.105:0/3472073244 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa14c081d90 con 0x7fa14c07d3f0 2026-03-09T15:03:27.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.870+0000 7fa151db2700 1 -- 192.168.123.105:0/3472073244 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa14c082280 con 0x7fa14c07d3f0 2026-03-09T15:03:27.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.871+0000 7fa149ffb700 1 -- 192.168.123.105:0/3472073244 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fa1440040e0 con 0x7fa14c07d3f0 2026-03-09T15:03:27.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.871+0000 7fa149ffb700 1 -- 192.168.123.105:0/3472073244 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa144017420 con 0x7fa14c07d3f0 2026-03-09T15:03:27.872 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.872+0000 7fa149ffb700 1 -- 192.168.123.105:0/3472073244 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7fa144017580 con 0x7fa14c07d3f0 2026-03-09T15:03:27.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.873+0000 7fa149ffb700 1 --2- 192.168.123.105:0/3472073244 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa134077780 0x7fa134079c30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:27.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.873+0000 7fa1337fe700 1 -- 192.168.123.105:0/3472073244 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa138005320 con 0x7fa14c07d3f0 2026-03-09T15:03:27.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.873+0000 7fa150db0700 1 --2- 192.168.123.105:0/3472073244 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa134077780 0x7fa134079c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:27.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.874+0000 7fa150db0700 1 --2- 192.168.123.105:0/3472073244 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa134077780 0x7fa134079c30 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fa13c005950 tx=0x7fa13c0058e0 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:27.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.874+0000 7fa149ffb700 1 -- 192.168.123.105:0/3472073244 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7fa144013070 con 0x7fa14c07d3f0 2026-03-09T15:03:27.877 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:27.877+0000 7fa149ffb700 1 -- 192.168.123.105:0/3472073244 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7fa14406d5c0 con 0x7fa14c07d3f0 2026-03-09T15:03:28.013 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.012+0000 7fa1337fe700 1 -- 192.168.123.105:0/3472073244 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command([{prefix=config set, name=log_to_journald}] v 0) v1 -- 0x7fa138005cc0 con 0x7fa14c07d3f0 2026-03-09T15:03:28.013 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.013+0000 7fa149ffb700 1 -- 192.168.123.105:0/3472073244 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{prefix=config set, name=log_to_journald}]=0 v37) v1 ==== 135+0+0 (secure 0 0 0) 0x7fa14406cd10 con 0x7fa14c07d3f0 2026-03-09T15:03:28.016 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.016+0000 7fa151db2700 1 -- 192.168.123.105:0/3472073244 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa134077780 msgr2=0x7fa134079c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:28.016 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.017+0000 7fa151db2700 1 --2- 192.168.123.105:0/3472073244 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa134077780 0x7fa134079c30 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7fa13c005950 tx=0x7fa13c0058e0 comp rx=0 tx=0).stop 2026-03-09T15:03:28.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.017+0000 7fa151db2700 1 -- 192.168.123.105:0/3472073244 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa14c07d3f0 msgr2=0x7fa14c07d860 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:28.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.017+0000 7fa151db2700 1 
--2- 192.168.123.105:0/3472073244 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa14c07d3f0 0x7fa14c07d860 secure :-1 s=READY pgs=354 cs=0 l=1 rev1=1 crypto rx=0x7fa144006270 tx=0x7fa144004b40 comp rx=0 tx=0).stop 2026-03-09T15:03:28.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.017+0000 7fa151db2700 1 -- 192.168.123.105:0/3472073244 shutdown_connections 2026-03-09T15:03:28.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.017+0000 7fa151db2700 1 --2- 192.168.123.105:0/3472073244 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fa134077780 0x7fa134079c30 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:28.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.018+0000 7fa151db2700 1 --2- 192.168.123.105:0/3472073244 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa14c10c8f0 0x7fa14c07ceb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:28.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.018+0000 7fa151db2700 1 --2- 192.168.123.105:0/3472073244 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa14c07d3f0 0x7fa14c07d860 unknown :-1 s=CLOSED pgs=354 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:28.018 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.018+0000 7fa151db2700 1 -- 192.168.123.105:0/3472073244 >> 192.168.123.105:0/3472073244 conn(0x7fa14c06c6c0 msgr2=0x7fa14c070040 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:28.018 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.018+0000 7fa151db2700 1 -- 192.168.123.105:0/3472073244 shutdown_connections 2026-03-09T15:03:28.018 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.018+0000 7fa151db2700 1 -- 192.168.123.105:0/3472073244 wait complete. 
2026-03-09T15:03:28.114 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1' 2026-03-09T15:03:28.320 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:03:28.625 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:28 vm05.local ceph-mon[50611]: pgmap v42: 65 pgs: 65 active+clean; 293 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 100 op/s 2026-03-09T15:03:28.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.705+0000 7f5e956d6700 1 -- 192.168.123.105:0/3818227012 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5e9010c8f0 msgr2=0x7f5e9010ccc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:28.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.705+0000 7f5e956d6700 1 --2- 192.168.123.105:0/3818227012 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5e9010c8f0 0x7f5e9010ccc0 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f5e80007780 tx=0x7f5e8000c050 comp rx=0 tx=0).stop 2026-03-09T15:03:28.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.705+0000 7f5e956d6700 1 -- 192.168.123.105:0/3818227012 shutdown_connections 2026-03-09T15:03:28.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.705+0000 7f5e956d6700 1 --2- 192.168.123.105:0/3818227012 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5e90071e40 0x7f5e900722b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:28.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.705+0000 7f5e956d6700 1 --2- 
192.168.123.105:0/3818227012 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5e9010c8f0 0x7f5e9010ccc0 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:28.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.705+0000 7f5e956d6700 1 -- 192.168.123.105:0/3818227012 >> 192.168.123.105:0/3818227012 conn(0x7f5e9006c6c0 msgr2=0x7f5e9006cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.705+0000 7f5e956d6700 1 -- 192.168.123.105:0/3818227012 shutdown_connections 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.705+0000 7f5e956d6700 1 -- 192.168.123.105:0/3818227012 wait complete. 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e956d6700 1 Processor -- start 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e956d6700 1 -- start start 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e956d6700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5e90071e40 0x7f5e9007cf20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e956d6700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5e9007d460 0x7f5e9007d8d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e956d6700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5e90081a00 con 0x7f5e90071e40 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e956d6700 1 -- --> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f5e90081b70 con 0x7f5e9007d460 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e8f7fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5e9007d460 0x7f5e9007d8d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e8f7fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5e9007d460 0x7f5e9007d8d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:34332/0 (socket says 192.168.123.105:34332) 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e8f7fe700 1 -- 192.168.123.105:0/749950769 learned_addr learned my addr 192.168.123.105:0/749950769 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e8f7fe700 1 -- 192.168.123.105:0/749950769 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5e90071e40 msgr2=0x7f5e9007cf20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e8f7fe700 1 --2- 192.168.123.105:0/749950769 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5e90071e40 0x7f5e9007cf20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e8f7fe700 1 -- 192.168.123.105:0/749950769 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f5e80007430 con 
0x7f5e9007d460 2026-03-09T15:03:28.706 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.706+0000 7f5e8f7fe700 1 --2- 192.168.123.105:0/749950769 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5e9007d460 0x7f5e9007d8d0 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f5e88012ed0 tx=0x7f5e8800c9e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:28.707 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.707+0000 7f5e8d7fa700 1 -- 192.168.123.105:0/749950769 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5e8800e040 con 0x7f5e9007d460 2026-03-09T15:03:28.707 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.707+0000 7f5e956d6700 1 -- 192.168.123.105:0/749950769 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5e90081e50 con 0x7f5e9007d460 2026-03-09T15:03:28.709 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.707+0000 7f5e956d6700 1 -- 192.168.123.105:0/749950769 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f5e900823a0 con 0x7f5e9007d460 2026-03-09T15:03:28.709 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.710+0000 7f5e8d7fa700 1 -- 192.168.123.105:0/749950769 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f5e88012480 con 0x7f5e9007d460 2026-03-09T15:03:28.709 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.710+0000 7f5e8d7fa700 1 -- 192.168.123.105:0/749950769 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5e88007c70 con 0x7f5e9007d460 2026-03-09T15:03:28.710 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.710+0000 7f5e8d7fa700 1 -- 192.168.123.105:0/749950769 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 
0x7f5e88025460 con 0x7f5e9007d460 2026-03-09T15:03:28.711 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.711+0000 7f5e8d7fa700 1 --2- 192.168.123.105:0/749950769 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f5e78077850 0x7f5e78079d00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:28.711 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.711+0000 7f5e8d7fa700 1 -- 192.168.123.105:0/749950769 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f5e880a2e70 con 0x7f5e9007d460 2026-03-09T15:03:28.711 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.711+0000 7f5e956d6700 1 -- 192.168.123.105:0/749950769 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f5e7c005320 con 0x7f5e9007d460 2026-03-09T15:03:28.713 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.713+0000 7f5e8ffff700 1 --2- 192.168.123.105:0/749950769 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f5e78077850 0x7f5e78079d00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:28.714 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.714+0000 7f5e8ffff700 1 --2- 192.168.123.105:0/749950769 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f5e78077850 0x7f5e78079d00 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f5e8000c420 tx=0x7f5e800058e0 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:28.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.715+0000 7f5e8d7fa700 1 -- 192.168.123.105:0/749950769 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f5e88067020 con 0x7f5e9007d460 2026-03-09T15:03:28.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:28 vm09.local ceph-mon[59673]: pgmap v42: 65 pgs: 65 active+clean; 293 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 100 op/s 2026-03-09T15:03:28.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.883+0000 7f5e956d6700 1 -- 192.168.123.105:0/749950769 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}) v1 -- 0x7f5e7c000c90 con 0x7f5e78077850 2026-03-09T15:03:28.892 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.892+0000 7f5e8d7fa700 1 -- 192.168.123.105:0/749950769 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+89 (secure 0 0 0) 0x7f5e7c000c90 con 0x7f5e78077850 2026-03-09T15:03:28.892 INFO:teuthology.orchestra.run.vm05.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:03:28.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.895+0000 7f5e76ffd700 1 -- 192.168.123.105:0/749950769 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f5e78077850 msgr2=0x7f5e78079d00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:28.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.895+0000 7f5e76ffd700 1 --2- 192.168.123.105:0/749950769 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f5e78077850 0x7f5e78079d00 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f5e8000c420 tx=0x7f5e800058e0 comp rx=0 tx=0).stop 2026-03-09T15:03:28.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.895+0000 7f5e76ffd700 1 -- 
192.168.123.105:0/749950769 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5e9007d460 msgr2=0x7f5e9007d8d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:28.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.895+0000 7f5e76ffd700 1 --2- 192.168.123.105:0/749950769 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5e9007d460 0x7f5e9007d8d0 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f5e88012ed0 tx=0x7f5e8800c9e0 comp rx=0 tx=0).stop 2026-03-09T15:03:28.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.895+0000 7f5e76ffd700 1 -- 192.168.123.105:0/749950769 shutdown_connections 2026-03-09T15:03:28.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.895+0000 7f5e76ffd700 1 --2- 192.168.123.105:0/749950769 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f5e78077850 0x7f5e78079d00 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:28.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.895+0000 7f5e76ffd700 1 --2- 192.168.123.105:0/749950769 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5e90071e40 0x7f5e9007cf20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:28.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.895+0000 7f5e76ffd700 1 --2- 192.168.123.105:0/749950769 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5e9007d460 0x7f5e9007d8d0 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:28.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.895+0000 7f5e76ffd700 1 -- 192.168.123.105:0/749950769 >> 192.168.123.105:0/749950769 conn(0x7f5e9006c6c0 msgr2=0x7f5e90070150 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:28.895 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.896+0000 7f5e76ffd700 1 -- 192.168.123.105:0/749950769 shutdown_connections 2026-03-09T15:03:28.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:28.896+0000 7f5e76ffd700 1 -- 192.168.123.105:0/749950769 wait complete. 2026-03-09T15:03:28.953 INFO:teuthology.task.sequential:In sequential, running task cephadm.shell... 2026-03-09T15:03:28.953 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-09T15:03:28.954 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph fs dump; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done' 2026-03-09T15:03:29.176 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:03:29.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.736+0000 7f2337675700 1 -- 192.168.123.105:0/940030903 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f233010c8f0 msgr2=0x7f233010ccc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:29.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.736+0000 7f2337675700 1 --2- 192.168.123.105:0/940030903 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f233010c8f0 0x7f233010ccc0 secure :-1 s=READY pgs=355 cs=0 l=1 rev1=1 crypto rx=0x7f2324007780 tx=0x7f232400c050 comp rx=0 tx=0).stop 2026-03-09T15:03:29.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.736+0000 7f2337675700 1 -- 192.168.123.105:0/940030903 shutdown_connections 
2026-03-09T15:03:29.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.736+0000 7f2337675700 1 --2- 192.168.123.105:0/940030903 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2330071e40 0x7f23300722b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:29.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.736+0000 7f2337675700 1 --2- 192.168.123.105:0/940030903 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f233010c8f0 0x7f233010ccc0 unknown :-1 s=CLOSED pgs=355 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:29.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.736+0000 7f2337675700 1 -- 192.168.123.105:0/940030903 >> 192.168.123.105:0/940030903 conn(0x7f233006c6c0 msgr2=0x7f233006cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:29.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.736+0000 7f2337675700 1 -- 192.168.123.105:0/940030903 shutdown_connections 2026-03-09T15:03:29.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.736+0000 7f2337675700 1 -- 192.168.123.105:0/940030903 wait complete. 
2026-03-09T15:03:29.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.737+0000 7f2337675700 1 Processor -- start 2026-03-09T15:03:29.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.737+0000 7f2337675700 1 -- start start 2026-03-09T15:03:29.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.741+0000 7f2337675700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2330071e40 0x7f2330132710 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:29.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.741+0000 7f2337675700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2330132c50 0x7f23301330c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:29.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.741+0000 7f2337675700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f233007eef0 con 0x7f2330071e40 2026-03-09T15:03:29.741 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.741+0000 7f2337675700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f233007f060 con 0x7f2330132c50 2026-03-09T15:03:29.742 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.742+0000 7f2335e72700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2330132c50 0x7f23301330c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:29.742 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.742+0000 7f2335e72700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2330132c50 0x7f23301330c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:34362/0 (socket says 192.168.123.105:34362) 2026-03-09T15:03:29.742 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.742+0000 7f2335e72700 1 -- 192.168.123.105:0/4178368473 learned_addr learned my addr 192.168.123.105:0/4178368473 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:29.742 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.742+0000 7f2336673700 1 --2- 192.168.123.105:0/4178368473 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2330071e40 0x7f2330132710 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:29.742 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.742+0000 7f2335e72700 1 -- 192.168.123.105:0/4178368473 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2330071e40 msgr2=0x7f2330132710 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:29.742 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.742+0000 7f2335e72700 1 --2- 192.168.123.105:0/4178368473 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2330071e40 0x7f2330132710 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:29.742 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.742+0000 7f2335e72700 1 -- 192.168.123.105:0/4178368473 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f2324007430 con 0x7f2330132c50 2026-03-09T15:03:29.742 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.742+0000 7f2335e72700 1 --2- 192.168.123.105:0/4178368473 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2330132c50 0x7f23301330c0 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f232c00bf40 tx=0x7f232c00bf70 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:03:29.743 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.743+0000 7f23237fe700 1 -- 192.168.123.105:0/4178368473 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f232c00cbc0 con 0x7f2330132c50 2026-03-09T15:03:29.743 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.743+0000 7f2337675700 1 -- 192.168.123.105:0/4178368473 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f233007f2e0 con 0x7f2330132c50 2026-03-09T15:03:29.744 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.743+0000 7f2337675700 1 -- 192.168.123.105:0/4178368473 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f233007f830 con 0x7f2330132c50 2026-03-09T15:03:29.744 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.744+0000 7f23237fe700 1 -- 192.168.123.105:0/4178368473 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f232c00cd20 con 0x7f2330132c50 2026-03-09T15:03:29.744 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.744+0000 7f23237fe700 1 -- 192.168.123.105:0/4178368473 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f232c0078c0 con 0x7f2330132c50 2026-03-09T15:03:29.744 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.744+0000 7f2337675700 1 -- 192.168.123.105:0/4178368473 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f2314005320 con 0x7f2330132c50 2026-03-09T15:03:29.746 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.746+0000 7f23237fe700 1 -- 192.168.123.105:0/4178368473 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f232c007a20 con 0x7f2330132c50 2026-03-09T15:03:29.746 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.747+0000 
7f23237fe700 1 --2- 192.168.123.105:0/4178368473 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f231c077780 0x7f231c079c30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:29.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.747+0000 7f2336673700 1 --2- 192.168.123.105:0/4178368473 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f231c077780 0x7f231c079c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:29.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.747+0000 7f23237fe700 1 -- 192.168.123.105:0/4178368473 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f232c0990e0 con 0x7f2330132c50 2026-03-09T15:03:29.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.748+0000 7f2336673700 1 --2- 192.168.123.105:0/4178368473 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f231c077780 0x7f231c079c30 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f2324007400 tx=0x7f2324015040 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:29.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.748+0000 7f23237fe700 1 -- 192.168.123.105:0/4178368473 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f232c061c70 con 0x7f2330132c50 2026-03-09T15:03:29.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.937+0000 7f2337675700 1 -- 192.168.123.105:0/4178368473 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f2314000bf0 con 0x7f231c077780 
2026-03-09T15:03:29.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.938+0000 7f23237fe700 1 -- 192.168.123.105:0/4178368473 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7f2314000bf0 con 0x7f231c077780 2026-03-09T15:03:29.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.942+0000 7f2337675700 1 -- 192.168.123.105:0/4178368473 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f231c077780 msgr2=0x7f231c079c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:29.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.942+0000 7f2337675700 1 --2- 192.168.123.105:0/4178368473 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f231c077780 0x7f231c079c30 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f2324007400 tx=0x7f2324015040 comp rx=0 tx=0).stop 2026-03-09T15:03:29.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.942+0000 7f2337675700 1 -- 192.168.123.105:0/4178368473 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2330132c50 msgr2=0x7f23301330c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:29.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.942+0000 7f2337675700 1 --2- 192.168.123.105:0/4178368473 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2330132c50 0x7f23301330c0 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f232c00bf40 tx=0x7f232c00bf70 comp rx=0 tx=0).stop 2026-03-09T15:03:29.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.942+0000 7f2337675700 1 -- 192.168.123.105:0/4178368473 shutdown_connections 2026-03-09T15:03:29.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.942+0000 7f2337675700 1 --2- 192.168.123.105:0/4178368473 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] 
conn(0x7f231c077780 0x7f231c079c30 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:29.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.942+0000 7f2337675700 1 --2- 192.168.123.105:0/4178368473 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2330071e40 0x7f2330132710 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:29.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.942+0000 7f2337675700 1 --2- 192.168.123.105:0/4178368473 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f2330132c50 0x7f23301330c0 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:29.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.942+0000 7f2337675700 1 -- 192.168.123.105:0/4178368473 >> 192.168.123.105:0/4178368473 conn(0x7f233006c6c0 msgr2=0x7f233006ffd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:29.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.943+0000 7f2337675700 1 -- 192.168.123.105:0/4178368473 shutdown_connections 2026-03-09T15:03:29.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:29.943+0000 7f2337675700 1 -- 192.168.123.105:0/4178368473 wait complete. 
2026-03-09T15:03:29.952 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:03:30.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 -- 192.168.123.105:0/2445821502 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0594071b60 msgr2=0x7f0594071fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 --2- 192.168.123.105:0/2445821502 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0594071b60 0x7f0594071fd0 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f058c00b3a0 tx=0x7f058c00b6b0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 -- 192.168.123.105:0/2445821502 shutdown_connections 2026-03-09T15:03:30.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 --2- 192.168.123.105:0/2445821502 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0594071b60 0x7f0594071fd0 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 --2- 192.168.123.105:0/2445821502 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f059410eab0 0x7f059410ee80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 -- 192.168.123.105:0/2445821502 >> 192.168.123.105:0/2445821502 conn(0x7f059406c6c0 msgr2=0x7f059406cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:30.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 -- 192.168.123.105:0/2445821502 shutdown_connections 2026-03-09T15:03:30.027 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 -- 192.168.123.105:0/2445821502 wait complete. 2026-03-09T15:03:30.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 Processor -- start 2026-03-09T15:03:30.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 -- start start 2026-03-09T15:03:30.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0594071b60 0x7f0594117600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:30.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f059410eab0 0x7f0594112600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:30.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.027+0000 7f059b055700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0594112bd0 con 0x7f0594071b60 2026-03-09T15:03:30.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.028+0000 7f059b055700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0594112d40 con 0x7f059410eab0 2026-03-09T15:03:30.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.028+0000 7f0593fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f059410eab0 0x7f0594112600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:30.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.028+0000 7f0593fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f059410eab0 0x7f0594112600 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:34386/0 (socket says 192.168.123.105:34386) 2026-03-09T15:03:30.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.028+0000 7f0593fff700 1 -- 192.168.123.105:0/1537963950 learned_addr learned my addr 192.168.123.105:0/1537963950 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:30.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.028+0000 7f0593fff700 1 -- 192.168.123.105:0/1537963950 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0594071b60 msgr2=0x7f0594117600 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.028+0000 7f0593fff700 1 --2- 192.168.123.105:0/1537963950 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0594071b60 0x7f0594117600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.028+0000 7f0593fff700 1 -- 192.168.123.105:0/1537963950 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f058c00b050 con 0x7f059410eab0 2026-03-09T15:03:30.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.029+0000 7f0593fff700 1 --2- 192.168.123.105:0/1537963950 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f059410eab0 0x7f0594112600 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7f058c00b370 tx=0x7f058c012710 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:30.030 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.029+0000 7f0591ffb700 1 -- 192.168.123.105:0/1537963950 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f058c00e040 con 
0x7f059410eab0 2026-03-09T15:03:30.030 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.029+0000 7f0591ffb700 1 -- 192.168.123.105:0/1537963950 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f058c012e90 con 0x7f059410eab0 2026-03-09T15:03:30.030 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.030+0000 7f0591ffb700 1 -- 192.168.123.105:0/1537963950 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f058c004770 con 0x7f059410eab0 2026-03-09T15:03:30.030 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.030+0000 7f059b055700 1 -- 192.168.123.105:0/1537963950 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0594112fc0 con 0x7f059410eab0 2026-03-09T15:03:30.030 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.030+0000 7f059b055700 1 -- 192.168.123.105:0/1537963950 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0594113480 con 0x7f059410eab0 2026-03-09T15:03:30.030 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.031+0000 7f059b055700 1 -- 192.168.123.105:0/1537963950 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f059404f2a0 con 0x7f059410eab0 2026-03-09T15:03:30.032 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.032+0000 7f0591ffb700 1 -- 192.168.123.105:0/1537963950 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f058c003c10 con 0x7f059410eab0 2026-03-09T15:03:30.032 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.032+0000 7f0591ffb700 1 --2- 192.168.123.105:0/1537963950 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f057c077450 0x7f057c079900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 
2026-03-09T15:03:30.032 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.032+0000 7f0591ffb700 1 -- 192.168.123.105:0/1537963950 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f058c09aec0 con 0x7f059410eab0 2026-03-09T15:03:30.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.033+0000 7f0598df1700 1 --2- 192.168.123.105:0/1537963950 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f057c077450 0x7f057c079900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:30.034 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.034+0000 7f0598df1700 1 --2- 192.168.123.105:0/1537963950 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f057c077450 0x7f057c079900 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f0584005950 tx=0x7f05840058e0 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:30.034 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.034+0000 7f0591ffb700 1 -- 192.168.123.105:0/1537963950 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f058c063a50 con 0x7f059410eab0 2026-03-09T15:03:30.193 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.193+0000 7f059b055700 1 -- 192.168.123.105:0/1537963950 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f0594113c00 con 0x7f057c077450 2026-03-09T15:03:30.193 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:29 vm05.local ceph-mon[50611]: from='client.24551 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": 
"quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:30.193 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:29 vm05.local ceph-mon[50611]: Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:03:30.193 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:29 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:30.193 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:29 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:30.193 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:29 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:30.193 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:29 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:30.193 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:29 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:30.193 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:29 vm05.local ceph-mon[50611]: Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:03:30.194 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:29 vm05.local ceph-mon[50611]: pgmap v43: 65 pgs: 65 active+clean; 293 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 961 KiB/s rd, 970 KiB/s wr, 69 op/s 2026-03-09T15:03:30.197 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.197+0000 7f0591ffb700 1 -- 192.168.123.105:0/1537963950 <== mgr.14652 
v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7f0594113c00 con 0x7f057c077450 2026-03-09T15:03:30.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.201+0000 7f057b7fe700 1 -- 192.168.123.105:0/1537963950 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f057c077450 msgr2=0x7f057c079900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.201+0000 7f057b7fe700 1 --2- 192.168.123.105:0/1537963950 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f057c077450 0x7f057c079900 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f0584005950 tx=0x7f05840058e0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.201+0000 7f057b7fe700 1 -- 192.168.123.105:0/1537963950 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f059410eab0 msgr2=0x7f0594112600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.201+0000 7f057b7fe700 1 --2- 192.168.123.105:0/1537963950 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f059410eab0 0x7f0594112600 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7f058c00b370 tx=0x7f058c012710 comp rx=0 tx=0).stop 2026-03-09T15:03:30.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.202+0000 7f057b7fe700 1 -- 192.168.123.105:0/1537963950 shutdown_connections 2026-03-09T15:03:30.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.202+0000 7f057b7fe700 1 --2- 192.168.123.105:0/1537963950 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f057c077450 0x7f057c079900 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.202 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.202+0000 7f057b7fe700 1 --2- 192.168.123.105:0/1537963950 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0594071b60 0x7f0594117600 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.202+0000 7f057b7fe700 1 --2- 192.168.123.105:0/1537963950 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f059410eab0 0x7f0594112600 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.202+0000 7f057b7fe700 1 -- 192.168.123.105:0/1537963950 >> 192.168.123.105:0/1537963950 conn(0x7f059406c6c0 msgr2=0x7f0594070290 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:30.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.203+0000 7f057b7fe700 1 -- 192.168.123.105:0/1537963950 shutdown_connections 2026-03-09T15:03:30.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.203+0000 7f057b7fe700 1 -- 192.168.123.105:0/1537963950 wait complete. 
2026-03-09T15:03:30.248 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:29 vm09.local ceph-mon[59673]: from='client.24551 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:30.248 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:29 vm09.local ceph-mon[59673]: Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:03:30.248 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:29 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:30.248 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:29 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:30.248 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:29 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:30.248 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:29 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:30.248 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:29 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:30.248 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:29 vm09.local ceph-mon[59673]: Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T15:03:30.248 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:29 vm09.local ceph-mon[59673]: pgmap v43: 65 pgs: 65 active+clean; 293 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 
961 KiB/s rd, 970 KiB/s wr, 69 op/s 2026-03-09T15:03:30.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.304+0000 7f72ba3c2700 1 -- 192.168.123.105:0/210236354 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f72b4100f20 msgr2=0x7f72b4104ef0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.304+0000 7f72ba3c2700 1 --2- 192.168.123.105:0/210236354 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f72b4100f20 0x7f72b4104ef0 secure :-1 s=READY pgs=356 cs=0 l=1 rev1=1 crypto rx=0x7f72a8009a60 tx=0x7f72a8009d70 comp rx=0 tx=0).stop 2026-03-09T15:03:30.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.305+0000 7f72ba3c2700 1 -- 192.168.123.105:0/210236354 shutdown_connections 2026-03-09T15:03:30.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.305+0000 7f72ba3c2700 1 --2- 192.168.123.105:0/210236354 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f72b4100f20 0x7f72b4104ef0 unknown :-1 s=CLOSED pgs=356 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.305+0000 7f72ba3c2700 1 --2- 192.168.123.105:0/210236354 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b4100580 0x7f72b4100950 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.305+0000 7f72ba3c2700 1 -- 192.168.123.105:0/210236354 >> 192.168.123.105:0/210236354 conn(0x7f72b40fbe40 msgr2=0x7f72b40fe250 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.308+0000 7f72ba3c2700 1 -- 192.168.123.105:0/210236354 shutdown_connections 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.308+0000 7f72ba3c2700 1 -- 
192.168.123.105:0/210236354 wait complete. 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.308+0000 7f72ba3c2700 1 Processor -- start 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.308+0000 7f72ba3c2700 1 -- start start 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.309+0000 7f72ba3c2700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b4100580 0x7f72b4195f60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.309+0000 7f72ba3c2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f72b4100f20 0x7f72b41964a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.309+0000 7f72ba3c2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f72b4196b80 con 0x7f72b4100f20 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.309+0000 7f72ba3c2700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f72b419a910 con 0x7f72b4100580 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.309+0000 7f72b3fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b4100580 0x7f72b4195f60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.309+0000 7f72b3fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b4100580 0x7f72b4195f60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer 
v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:34418/0 (socket says 192.168.123.105:34418) 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.309+0000 7f72b3fff700 1 -- 192.168.123.105:0/527580353 learned_addr learned my addr 192.168.123.105:0/527580353 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.309+0000 7f72b37fe700 1 --2- 192.168.123.105:0/527580353 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f72b4100f20 0x7f72b41964a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.309+0000 7f72b3fff700 1 -- 192.168.123.105:0/527580353 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f72b4100f20 msgr2=0x7f72b41964a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.309+0000 7f72b3fff700 1 --2- 192.168.123.105:0/527580353 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f72b4100f20 0x7f72b41964a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.309+0000 7f72b3fff700 1 -- 192.168.123.105:0/527580353 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f72a8009710 con 0x7f72b4100580 2026-03-09T15:03:30.309 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.309+0000 7f72b3fff700 1 --2- 192.168.123.105:0/527580353 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b4100580 0x7f72b4195f60 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f72a400ea30 tx=0x7f72a400edf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 
server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:30.310 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.310+0000 7f72b17fa700 1 -- 192.168.123.105:0/527580353 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f72a400cc40 con 0x7f72b4100580 2026-03-09T15:03:30.310 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.310+0000 7f72ba3c2700 1 -- 192.168.123.105:0/527580353 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f72b419abf0 con 0x7f72b4100580 2026-03-09T15:03:30.310 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.310+0000 7f72ba3c2700 1 -- 192.168.123.105:0/527580353 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f72b419b140 con 0x7f72b4100580 2026-03-09T15:03:30.310 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.310+0000 7f72b17fa700 1 -- 192.168.123.105:0/527580353 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f72a400cda0 con 0x7f72b4100580 2026-03-09T15:03:30.310 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.311+0000 7f72b17fa700 1 -- 192.168.123.105:0/527580353 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f72a4010430 con 0x7f72b4100580 2026-03-09T15:03:30.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.312+0000 7f72b17fa700 1 -- 192.168.123.105:0/527580353 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f72a4010650 con 0x7f72b4100580 2026-03-09T15:03:30.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.312+0000 7f72b17fa700 1 --2- 192.168.123.105:0/527580353 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f729c0776d0 0x7f729c079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:30.312 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.312+0000 7f72b17fa700 1 -- 192.168.123.105:0/527580353 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f72a4014070 con 0x7f72b4100580 2026-03-09T15:03:30.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.313+0000 7f729affd700 1 -- 192.168.123.105:0/527580353 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f72a0005320 con 0x7f72b4100580 2026-03-09T15:03:30.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.313+0000 7f72b37fe700 1 --2- 192.168.123.105:0/527580353 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f729c0776d0 0x7f729c079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:30.313 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.313+0000 7f72b37fe700 1 --2- 192.168.123.105:0/527580353 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f729c0776d0 0x7f729c079b80 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f72a8000c00 tx=0x7f72a8003680 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:30.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.317+0000 7f72b17fa700 1 -- 192.168.123.105:0/527580353 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f72a4063800 con 0x7f72b4100580 2026-03-09T15:03:30.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.450+0000 7f729affd700 1 -- 192.168.123.105:0/527580353 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f72a0000bf0 con 
0x7f729c0776d0 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.457+0000 7f72b17fa700 1 -- 192.168.123.105:0/527580353 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3552 (secure 0 0 0) 0x7f72a0000bf0 con 0x7f729c0776d0 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (36s) 17s ago 7m 16.5M - 0.25.0 c8568f914cd2 7635cece310c 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (7m) 17s ago 7m 8493k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (6m) 55s ago 6m 11.1M - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (7m) 17s ago 7m 7411k - 18.2.0 dc2bc1663786 1c577d7a0de0 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (6m) 55s ago 6m 7402k - 18.2.0 dc2bc1663786 9e4961442551 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (21s) 17s ago 7m 39.9M - 10.4.0 c8b91775d855 eb6431f63d88 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (5m) 17s ago 5m 237M - 18.2.0 dc2bc1663786 ea3dca51957f 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (5m) 17s ago 5m 16.4M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (5m) 55s ago 5m 16.1M - 18.2.0 dc2bc1663786 6c77fb591d5a 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (5m) 55s ago 5m 294M - 18.2.0 
dc2bc1663786 b5ad1c71089a 2026-03-09T15:03:30.457 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (97s) 17s ago 8m 622M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e 2026-03-09T15:03:30.458 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (69s) 55s ago 6m 487M - 19.2.3-678-ge911bdeb 654f31e6858e acf5a6f3f804 2026-03-09T15:03:30.458 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (8m) 17s ago 8m 54.9M 2048M 18.2.0 dc2bc1663786 c83e96b62251 2026-03-09T15:03:30.458 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (6m) 55s ago 6m 47.2M 2048M 18.2.0 dc2bc1663786 7963792b5376 2026-03-09T15:03:30.458 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (61s) 17s ago 7m 9126k - 1.7.0 72c9c2088986 888d071c50d9 2026-03-09T15:03:30.458 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (57s) 55s ago 6m 5372k - 1.7.0 72c9c2088986 22c96a576a60 2026-03-09T15:03:30.458 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (6m) 17s ago 6m 298M 4096M 18.2.0 dc2bc1663786 50f3ca995318 2026-03-09T15:03:30.458 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (6m) 17s ago 6m 297M 4096M 18.2.0 dc2bc1663786 23e35bdafe50 2026-03-09T15:03:30.458 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (6m) 17s ago 5m 261M 4096M 18.2.0 dc2bc1663786 75097dc12979 2026-03-09T15:03:30.458 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (5m) 55s ago 5m 366M 4096M 18.2.0 dc2bc1663786 e79644a0564f 2026-03-09T15:03:30.458 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (5m) 55s ago 5m 310M 4096M 18.2.0 dc2bc1663786 4239752204df 2026-03-09T15:03:30.458 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (5m) 55s ago 5m 278M 4096M 18.2.0 dc2bc1663786 85fde149396e 2026-03-09T15:03:30.458 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (43s) 17s ago 
6m 48.9M - 2.51.0 1d3b7f56885b e6f470b0ba11 2026-03-09T15:03:30.460 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.460+0000 7f72ba3c2700 1 -- 192.168.123.105:0/527580353 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f729c0776d0 msgr2=0x7f729c079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.460 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.460+0000 7f72ba3c2700 1 --2- 192.168.123.105:0/527580353 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f729c0776d0 0x7f729c079b80 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f72a8000c00 tx=0x7f72a8003680 comp rx=0 tx=0).stop 2026-03-09T15:03:30.460 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.460+0000 7f72ba3c2700 1 -- 192.168.123.105:0/527580353 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b4100580 msgr2=0x7f72b4195f60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.460 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.460+0000 7f72ba3c2700 1 --2- 192.168.123.105:0/527580353 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b4100580 0x7f72b4195f60 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f72a400ea30 tx=0x7f72a400edf0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.460 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.461+0000 7f72ba3c2700 1 -- 192.168.123.105:0/527580353 shutdown_connections 2026-03-09T15:03:30.460 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.461+0000 7f72ba3c2700 1 --2- 192.168.123.105:0/527580353 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f729c0776d0 0x7f729c079b80 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.460 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.461+0000 7f72ba3c2700 1 --2- 192.168.123.105:0/527580353 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f72b4100580 0x7f72b4195f60 unknown :-1 s=CLOSED pgs=124 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.461+0000 7f72ba3c2700 1 --2- 192.168.123.105:0/527580353 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f72b4100f20 0x7f72b41964a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.461+0000 7f72ba3c2700 1 -- 192.168.123.105:0/527580353 >> 192.168.123.105:0/527580353 conn(0x7f72b40fbe40 msgr2=0x7f72b4104760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:30.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.461+0000 7f72ba3c2700 1 -- 192.168.123.105:0/527580353 shutdown_connections 2026-03-09T15:03:30.461 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.461+0000 7f72ba3c2700 1 -- 192.168.123.105:0/527580353 wait complete. 
2026-03-09T15:03:30.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 -- 192.168.123.105:0/4001126615 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67a010c8b0 msgr2=0x7f67a010cc80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 --2- 192.168.123.105:0/4001126615 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67a010c8b0 0x7f67a010cc80 secure :-1 s=READY pgs=357 cs=0 l=1 rev1=1 crypto rx=0x7f6790007780 tx=0x7f6790007a90 comp rx=0 tx=0).stop 2026-03-09T15:03:30.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 -- 192.168.123.105:0/4001126615 shutdown_connections 2026-03-09T15:03:30.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 --2- 192.168.123.105:0/4001126615 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f67a0071e40 0x7f67a00722b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 --2- 192.168.123.105:0/4001126615 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67a010c8b0 0x7f67a010cc80 unknown :-1 s=CLOSED pgs=357 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 -- 192.168.123.105:0/4001126615 >> 192.168.123.105:0/4001126615 conn(0x7f67a006c6c0 msgr2=0x7f67a006cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:30.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 -- 192.168.123.105:0/4001126615 shutdown_connections 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 -- 192.168.123.105:0/4001126615 
wait complete. 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 Processor -- start 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 -- start start 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67a0071e40 0x7f67a007cd50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f67a007d290 0x7f67a007d700 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f67a0083dc0 con 0x7f67a0071e40 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.535+0000 7f67a6793700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f67a00818d0 con 0x7f67a007d290 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.536+0000 7f679ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67a0071e40 0x7f67a007cd50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.536+0000 7f679ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67a0071e40 0x7f67a007cd50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.105:37802/0 (socket says 192.168.123.105:37802) 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.536+0000 7f679ffff700 1 -- 192.168.123.105:0/1753736393 learned_addr learned my addr 192.168.123.105:0/1753736393 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.536+0000 7f679f7fe700 1 --2- 192.168.123.105:0/1753736393 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f67a007d290 0x7f67a007d700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.536+0000 7f679ffff700 1 -- 192.168.123.105:0/1753736393 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f67a007d290 msgr2=0x7f67a007d700 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.536+0000 7f679ffff700 1 --2- 192.168.123.105:0/1753736393 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f67a007d290 0x7f67a007d700 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.536+0000 7f679ffff700 1 -- 192.168.123.105:0/1753736393 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6790007430 con 0x7f67a0071e40 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.536+0000 7f679ffff700 1 --2- 192.168.123.105:0/1753736393 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67a0071e40 0x7f67a007cd50 secure :-1 s=READY pgs=358 cs=0 l=1 rev1=1 crypto rx=0x7f679000a9f0 tx=0x7f679000aa20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T15:03:30.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.537+0000 7f679d7fa700 1 -- 192.168.123.105:0/1753736393 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6790004210 con 0x7f67a0071e40 2026-03-09T15:03:30.537 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.537+0000 7f67a6793700 1 -- 192.168.123.105:0/1753736393 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f67a0081af0 con 0x7f67a0071e40 2026-03-09T15:03:30.537 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.537+0000 7f67a6793700 1 -- 192.168.123.105:0/1753736393 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f67a0081fe0 con 0x7f67a0071e40 2026-03-09T15:03:30.537 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.537+0000 7f679d7fa700 1 -- 192.168.123.105:0/1753736393 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f6790004370 con 0x7f67a0071e40 2026-03-09T15:03:30.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.537+0000 7f679d7fa700 1 -- 192.168.123.105:0/1753736393 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f679001b790 con 0x7f67a0071e40 2026-03-09T15:03:30.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.539+0000 7f679d7fa700 1 -- 192.168.123.105:0/1753736393 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f6790022020 con 0x7f67a0071e40 2026-03-09T15:03:30.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.540+0000 7f679d7fa700 1 --2- 192.168.123.105:0/1753736393 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f6788077780 0x7f6788079c30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:30.540 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.540+0000 7f679f7fe700 1 --2- 192.168.123.105:0/1753736393 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f6788077780 0x7f6788079c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:30.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.540+0000 7f679d7fa700 1 -- 192.168.123.105:0/1753736393 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f679009b9d0 con 0x7f67a0071e40 2026-03-09T15:03:30.541 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.541+0000 7f67a6793700 1 -- 192.168.123.105:0/1753736393 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f678c005320 con 0x7f67a0071e40 2026-03-09T15:03:30.541 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.542+0000 7f679f7fe700 1 --2- 192.168.123.105:0/1753736393 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f6788077780 0x7f6788079c30 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f6798003eb0 tx=0x7f679800b040 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:30.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.544+0000 7f679d7fa700 1 -- 192.168.123.105:0/1753736393 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f679000a070 con 0x7f67a0071e40 2026-03-09T15:03:30.759 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.759+0000 7f67a6793700 1 -- 192.168.123.105:0/1753736393 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f678c005cc0 con 0x7f67a0071e40 2026-03-09T15:03:30.765 
INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 6 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 12, 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:03:30.765 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:03:30.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.763+0000 7f679d7fa700 1 -- 192.168.123.105:0/1753736393 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 
v0) v1 ==== 56+0+679 (secure 0 0 0) 0x7f6790064560 con 0x7f67a0071e40 2026-03-09T15:03:30.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.766+0000 7f6786ffd700 1 -- 192.168.123.105:0/1753736393 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f6788077780 msgr2=0x7f6788079c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.766+0000 7f6786ffd700 1 --2- 192.168.123.105:0/1753736393 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f6788077780 0x7f6788079c30 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f6798003eb0 tx=0x7f679800b040 comp rx=0 tx=0).stop 2026-03-09T15:03:30.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.766+0000 7f6786ffd700 1 -- 192.168.123.105:0/1753736393 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67a0071e40 msgr2=0x7f67a007cd50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.766+0000 7f6786ffd700 1 --2- 192.168.123.105:0/1753736393 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67a0071e40 0x7f67a007cd50 secure :-1 s=READY pgs=358 cs=0 l=1 rev1=1 crypto rx=0x7f679000a9f0 tx=0x7f679000aa20 comp rx=0 tx=0).stop 2026-03-09T15:03:30.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.766+0000 7f6786ffd700 1 -- 192.168.123.105:0/1753736393 shutdown_connections 2026-03-09T15:03:30.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.766+0000 7f6786ffd700 1 --2- 192.168.123.105:0/1753736393 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f6788077780 0x7f6788079c30 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.766+0000 7f6786ffd700 1 --2- 
192.168.123.105:0/1753736393 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67a0071e40 0x7f67a007cd50 unknown :-1 s=CLOSED pgs=358 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.766+0000 7f6786ffd700 1 --2- 192.168.123.105:0/1753736393 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f67a007d290 0x7f67a007d700 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.766+0000 7f6786ffd700 1 -- 192.168.123.105:0/1753736393 >> 192.168.123.105:0/1753736393 conn(0x7f67a006c6c0 msgr2=0x7f67a00708e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:30.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.766+0000 7f6786ffd700 1 -- 192.168.123.105:0/1753736393 shutdown_connections 2026-03-09T15:03:30.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.767+0000 7f6786ffd700 1 -- 192.168.123.105:0/1753736393 wait complete. 
2026-03-09T15:03:30.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.853+0000 7f8bf2a23700 1 -- 192.168.123.105:0/3242241134 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8bec10eab0 msgr2=0x7f8bec10ee80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.853+0000 7f8bf2a23700 1 --2- 192.168.123.105:0/3242241134 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8bec10eab0 0x7f8bec10ee80 secure :-1 s=READY pgs=359 cs=0 l=1 rev1=1 crypto rx=0x7f8bdc009b00 tx=0x7f8bdc009e10 comp rx=0 tx=0).stop 2026-03-09T15:03:30.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.853+0000 7f8bf2a23700 1 -- 192.168.123.105:0/3242241134 shutdown_connections 2026-03-09T15:03:30.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.853+0000 7f8bf2a23700 1 --2- 192.168.123.105:0/3242241134 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8bec071b60 0x7f8bec071fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.853+0000 7f8bf2a23700 1 --2- 192.168.123.105:0/3242241134 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8bec10eab0 0x7f8bec10ee80 unknown :-1 s=CLOSED pgs=359 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.853+0000 7f8bf2a23700 1 -- 192.168.123.105:0/3242241134 >> 192.168.123.105:0/3242241134 conn(0x7f8bec06c6c0 msgr2=0x7f8bec06cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:30.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.854+0000 7f8bf2a23700 1 -- 192.168.123.105:0/3242241134 shutdown_connections 2026-03-09T15:03:30.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.854+0000 7f8bf2a23700 1 -- 192.168.123.105:0/3242241134 
wait complete. 2026-03-09T15:03:30.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.854+0000 7f8bf2a23700 1 Processor -- start 2026-03-09T15:03:30.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.854+0000 7f8bf2a23700 1 -- start start 2026-03-09T15:03:30.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.855+0000 7f8bf2a23700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8bec071b60 0x7f8bec1a4d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:30.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.855+0000 7f8bf2a23700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8bec10eab0 0x7f8bec1a5270 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:30.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.855+0000 7f8bf2a23700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8bec1a5950 con 0x7f8bec10eab0 2026-03-09T15:03:30.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.855+0000 7f8bf2a23700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8bec1a96e0 con 0x7f8bec071b60 2026-03-09T15:03:30.857 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.855+0000 7f8beb7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8bec10eab0 0x7f8bec1a5270 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:30.857 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.855+0000 7f8beb7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8bec10eab0 0x7f8bec1a5270 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.105:37822/0 (socket says 192.168.123.105:37822) 2026-03-09T15:03:30.857 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.855+0000 7f8beb7fe700 1 -- 192.168.123.105:0/1337444310 learned_addr learned my addr 192.168.123.105:0/1337444310 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:30.857 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.855+0000 7f8bebfff700 1 --2- 192.168.123.105:0/1337444310 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8bec071b60 0x7f8bec1a4d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:30.857 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.856+0000 7f8beb7fe700 1 -- 192.168.123.105:0/1337444310 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8bec071b60 msgr2=0x7f8bec1a4d30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:30.857 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.856+0000 7f8beb7fe700 1 --2- 192.168.123.105:0/1337444310 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8bec071b60 0x7f8bec1a4d30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:30.857 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.856+0000 7f8beb7fe700 1 -- 192.168.123.105:0/1337444310 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8bdc0097e0 con 0x7f8bec10eab0 2026-03-09T15:03:30.857 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.856+0000 7f8beb7fe700 1 --2- 192.168.123.105:0/1337444310 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8bec10eab0 0x7f8bec1a5270 secure :-1 s=READY pgs=360 cs=0 l=1 rev1=1 crypto rx=0x7f8be400d350 tx=0x7f8be400d710 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 
out_seq=0 2026-03-09T15:03:30.857 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.857+0000 7f8be97fa700 1 -- 192.168.123.105:0/1337444310 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8be40155b0 con 0x7f8bec10eab0 2026-03-09T15:03:30.858 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.857+0000 7f8bf2a23700 1 -- 192.168.123.105:0/1337444310 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8bec1a99c0 con 0x7f8bec10eab0 2026-03-09T15:03:30.858 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.858+0000 7f8bf2a23700 1 -- 192.168.123.105:0/1337444310 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8bec1a9f10 con 0x7f8bec10eab0 2026-03-09T15:03:30.859 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.859+0000 7f8be97fa700 1 -- 192.168.123.105:0/1337444310 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f8be400f040 con 0x7f8bec10eab0 2026-03-09T15:03:30.859 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.860+0000 7f8be97fa700 1 -- 192.168.123.105:0/1337444310 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8be40149c0 con 0x7f8bec10eab0 2026-03-09T15:03:30.860 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.860+0000 7f8be97fa700 1 -- 192.168.123.105:0/1337444310 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7f8be4014c20 con 0x7f8bec10eab0 2026-03-09T15:03:30.861 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.861+0000 7f8be97fa700 1 --2- 192.168.123.105:0/1337444310 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f8bd4077860 0x7f8bd4079d10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:30.861 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.861+0000 7f8bebfff700 1 --2- 192.168.123.105:0/1337444310 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f8bd4077860 0x7f8bd4079d10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:30.861 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.862+0000 7f8be97fa700 1 -- 192.168.123.105:0/1337444310 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7f8be409a4a0 con 0x7f8bec10eab0 2026-03-09T15:03:30.862 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.862+0000 7f8bebfff700 1 --2- 192.168.123.105:0/1337444310 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f8bd4077860 0x7f8bd4079d10 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f8bdc006010 tx=0x7f8bdc005c00 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:30.862 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.862+0000 7f8bf2a23700 1 -- 192.168.123.105:0/1337444310 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8bd8005320 con 0x7f8bec10eab0 2026-03-09T15:03:30.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:30.867+0000 7f8be97fa700 1 -- 192.168.123.105:0/1337444310 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7f8be40630e0 con 0x7f8bec10eab0 2026-03-09T15:03:31.018 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.018+0000 7f8bf2a23700 1 -- 192.168.123.105:0/1337444310 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f8bd8005cc0 con 0x7f8bec10eab0 2026-03-09T15:03:31.081 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.080+0000 7f8be97fa700 1 -- 192.168.123.105:0/1337444310 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1853 (secure 0 0 0) 0x7f8be401a750 con 0x7f8bec10eab0 2026-03-09T15:03:31.083 INFO:teuthology.orchestra.run.vm05.stdout:e11 2026-03-09T15:03:31.083 INFO:teuthology.orchestra.run.vm05.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T15:03:31.083 INFO:teuthology.orchestra.run.vm05.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:03:31.083 INFO:teuthology.orchestra.run.vm05.stdout:legacy client fscid: 1 2026-03-09T15:03:31.083 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:03:31.083 INFO:teuthology.orchestra.run.vm05.stdout:Filesystem 'cephfs' (1) 2026-03-09T15:03:31.083 INFO:teuthology.orchestra.run.vm05.stdout:fs_name cephfs 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:epoch 9 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:created 2026-03-09T14:58:23.182447+0000 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:modified 2026-03-09T14:58:30.215642+0000 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:tableserver 0 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:root 0 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:session_timeout 60 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:session_autoclose 300 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:max_file_size 1099511627776 2026-03-09T15:03:31.084 
INFO:teuthology.orchestra.run.vm05.stdout:required_client_features {} 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:last_failure 0 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:last_failure_osd_epoch 0 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:max_mds 1 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:in 0 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:up {0=14502} 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:failed 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:damaged 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:stopped 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:data_pools [3] 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:metadata_pool 2 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:inline_data enabled 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:balancer 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:bal_rank_mask -1 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:standby_count_wanted 1 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.nrocqt{0:14502} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.ohmitn{0:14510} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] compat 
{c=[1],r=[1],i=[7ff]}] 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:Standby daemons: 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.rrcyql{-1:14518} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.jrhwzz{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.084+0000 7f8bf2a23700 1 -- 192.168.123.105:0/1337444310 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f8bd4077860 msgr2=0x7f8bd4079d10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.084+0000 7f8bf2a23700 1 --2- 192.168.123.105:0/1337444310 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f8bd4077860 0x7f8bd4079d10 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f8bdc006010 tx=0x7f8bdc005c00 comp rx=0 tx=0).stop 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.084+0000 7f8bf2a23700 1 -- 192.168.123.105:0/1337444310 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8bec10eab0 msgr2=0x7f8bec1a5270 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.084+0000 7f8bf2a23700 1 --2- 192.168.123.105:0/1337444310 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8bec10eab0 0x7f8bec1a5270 
secure :-1 s=READY pgs=360 cs=0 l=1 rev1=1 crypto rx=0x7f8be400d350 tx=0x7f8be400d710 comp rx=0 tx=0).stop 2026-03-09T15:03:31.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.084+0000 7f8bf2a23700 1 -- 192.168.123.105:0/1337444310 shutdown_connections 2026-03-09T15:03:31.085 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.084+0000 7f8bf2a23700 1 --2- 192.168.123.105:0/1337444310 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7f8bd4077860 0x7f8bd4079d10 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.085 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.084+0000 7f8bf2a23700 1 --2- 192.168.123.105:0/1337444310 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8bec071b60 0x7f8bec1a4d30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.085 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.084+0000 7f8bf2a23700 1 --2- 192.168.123.105:0/1337444310 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8bec10eab0 0x7f8bec1a5270 unknown :-1 s=CLOSED pgs=360 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.086 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.084+0000 7f8bf2a23700 1 -- 192.168.123.105:0/1337444310 >> 192.168.123.105:0/1337444310 conn(0x7f8bec06c6c0 msgr2=0x7f8bec0702b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:31.086 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.085+0000 7f8bf2a23700 1 -- 192.168.123.105:0/1337444310 shutdown_connections 2026-03-09T15:03:31.086 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.085+0000 7f8bf2a23700 1 -- 192.168.123.105:0/1337444310 wait complete. 
2026-03-09T15:03:31.090 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 11 2026-03-09T15:03:31.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.189+0000 7eff80338700 1 -- 192.168.123.105:0/2540476353 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff78071e40 msgr2=0x7eff780722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:31.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.189+0000 7eff80338700 1 --2- 192.168.123.105:0/2540476353 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff78071e40 0x7eff780722b0 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7eff7000cd40 tx=0x7eff7000a320 comp rx=0 tx=0).stop 2026-03-09T15:03:31.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.189+0000 7eff80338700 1 -- 192.168.123.105:0/2540476353 shutdown_connections 2026-03-09T15:03:31.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.189+0000 7eff80338700 1 --2- 192.168.123.105:0/2540476353 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff78071e40 0x7eff780722b0 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.189+0000 7eff80338700 1 --2- 192.168.123.105:0/2540476353 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff7810c8b0 0x7eff7810cc80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.189 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.189+0000 7eff80338700 1 -- 192.168.123.105:0/2540476353 >> 192.168.123.105:0/2540476353 conn(0x7eff7806c6c0 msgr2=0x7eff7806cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:31.191 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.189+0000 7eff80338700 1 -- 192.168.123.105:0/2540476353 shutdown_connections 2026-03-09T15:03:31.191 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.189+0000 7eff80338700 1 -- 192.168.123.105:0/2540476353 wait complete. 2026-03-09T15:03:31.191 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.190+0000 7eff80338700 1 Processor -- start 2026-03-09T15:03:31.191 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.191+0000 7eff80338700 1 -- start start 2026-03-09T15:03:31.191 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.191+0000 7eff80338700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff7810c8b0 0x7eff7807ce00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:31.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.191+0000 7eff80338700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff7807d340 0x7eff7807d7b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:31.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.191+0000 7eff80338700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7eff78081980 con 0x7eff7807d340 2026-03-09T15:03:31.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.191+0000 7eff80338700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7eff78081af0 con 0x7eff7810c8b0 2026-03-09T15:03:31.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.191+0000 7eff7e0d4700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff7810c8b0 0x7eff7807ce00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:31.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.191+0000 7eff7e0d4700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff7810c8b0 0x7eff7807ce00 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:34454/0 (socket says 192.168.123.105:34454) 2026-03-09T15:03:31.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.191+0000 7eff7e0d4700 1 -- 192.168.123.105:0/3610724773 learned_addr learned my addr 192.168.123.105:0/3610724773 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:31.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.191+0000 7eff7e0d4700 1 -- 192.168.123.105:0/3610724773 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff7807d340 msgr2=0x7eff7807d7b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:31.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.191+0000 7eff7e0d4700 1 --2- 192.168.123.105:0/3610724773 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff7807d340 0x7eff7807d7b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.191+0000 7eff7e0d4700 1 -- 192.168.123.105:0/3610724773 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7eff7000c9f0 con 0x7eff7810c8b0 2026-03-09T15:03:31.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.191+0000 7eff7e0d4700 1 --2- 192.168.123.105:0/3610724773 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff7810c8b0 0x7eff7807ce00 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7eff74011ce0 tx=0x7eff74009520 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:31.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.192+0000 7eff6f7fe700 1 -- 192.168.123.105:0/3610724773 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7eff740193f0 con 
0x7eff7810c8b0 2026-03-09T15:03:31.192 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.192+0000 7eff80338700 1 -- 192.168.123.105:0/3610724773 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7eff78081dd0 con 0x7eff7810c8b0 2026-03-09T15:03:31.194 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.192+0000 7eff80338700 1 -- 192.168.123.105:0/3610724773 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7eff78082320 con 0x7eff7810c8b0 2026-03-09T15:03:31.194 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.193+0000 7eff6f7fe700 1 -- 192.168.123.105:0/3610724773 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7eff74019a30 con 0x7eff7810c8b0 2026-03-09T15:03:31.194 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.193+0000 7eff6f7fe700 1 -- 192.168.123.105:0/3610724773 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7eff74017b20 con 0x7eff7810c8b0 2026-03-09T15:03:31.194 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.194+0000 7eff6f7fe700 1 -- 192.168.123.105:0/3610724773 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7eff74017c80 con 0x7eff7810c8b0 2026-03-09T15:03:31.194 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.194+0000 7eff6f7fe700 1 --2- 192.168.123.105:0/3610724773 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7eff64077780 0x7eff64079c30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:31.197 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.197+0000 7eff7d8d3700 1 --2- 192.168.123.105:0/3610724773 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7eff64077780 0x7eff64079c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 
crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:31.197 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.197+0000 7eff6f7fe700 1 -- 192.168.123.105:0/3610724773 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7eff7409e290 con 0x7eff7810c8b0 2026-03-09T15:03:31.197 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.197+0000 7eff7d8d3700 1 --2- 192.168.123.105:0/3610724773 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7eff64077780 0x7eff64079c30 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7eff7000cd40 tx=0x7eff7000a760 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:31.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.197+0000 7eff80338700 1 -- 192.168.123.105:0/3610724773 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7eff5c005320 con 0x7eff7810c8b0 2026-03-09T15:03:31.205 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.201+0000 7eff6f7fe700 1 -- 192.168.123.105:0/3610724773 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 (secure 0 0 0) 0x7eff74066ed0 con 0x7eff7810c8b0 2026-03-09T15:03:31.272 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:31 vm05.local ceph-mon[50611]: from='client.24553 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:31.272 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:31 vm05.local ceph-mon[50611]: from='client.24557 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:31.272 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:31 vm05.local ceph-mon[50611]: from='client.24561 
-' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:31.272 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:31 vm05.local ceph-mon[50611]: from='client.? 192.168.123.105:0/1753736393' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:31.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:31 vm09.local ceph-mon[59673]: from='client.24553 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:31.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:31 vm09.local ceph-mon[59673]: from='client.24557 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:31.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:31 vm09.local ceph-mon[59673]: from='client.24561 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:31.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:31 vm09.local ceph-mon[59673]: from='client.? 
192.168.123.105:0/1753736393' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:31.389 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.389+0000 7eff80338700 1 -- 192.168.123.105:0/3610724773 --> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7eff5c000bf0 con 0x7eff64077780 2026-03-09T15:03:31.393 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.393+0000 7eff6f7fe700 1 -- 192.168.123.105:0/3610724773 <== mgr.14652 v2:192.168.123.105:6800/456689610 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+358 (secure 0 0 0) 0x7eff5c000bf0 con 0x7eff64077780 2026-03-09T15:03:31.393 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:03:31.393 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T15:03:31.393 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-09T15:03:31.393 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T15:03:31.393 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [], 2026-03-09T15:03:31.393 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "", 2026-03-09T15:03:31.393 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image", 2026-03-09T15:03:31.393 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:03:31.393 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:03:31.401 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.401+0000 7eff6d7ba700 1 -- 192.168.123.105:0/3610724773 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7eff64077780 msgr2=0x7eff64079c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:31.401 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.401+0000 7eff6d7ba700 1 --2- 192.168.123.105:0/3610724773 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7eff64077780 0x7eff64079c30 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7eff7000cd40 tx=0x7eff7000a760 comp rx=0 tx=0).stop 2026-03-09T15:03:31.401 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.401+0000 7eff6d7ba700 1 -- 192.168.123.105:0/3610724773 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff7810c8b0 msgr2=0x7eff7807ce00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:31.401 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.401+0000 7eff6d7ba700 1 --2- 192.168.123.105:0/3610724773 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff7810c8b0 0x7eff7807ce00 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7eff74011ce0 tx=0x7eff74009520 comp rx=0 tx=0).stop 2026-03-09T15:03:31.401 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.402+0000 7eff6d7ba700 1 -- 192.168.123.105:0/3610724773 shutdown_connections 2026-03-09T15:03:31.401 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.402+0000 7eff6d7ba700 1 --2- 192.168.123.105:0/3610724773 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7eff64077780 0x7eff64079c30 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.401 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.402+0000 7eff6d7ba700 1 --2- 192.168.123.105:0/3610724773 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff7810c8b0 0x7eff7807ce00 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.401 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.402+0000 7eff6d7ba700 1 --2- 192.168.123.105:0/3610724773 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7eff7807d340 0x7eff7807d7b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.402+0000 7eff6d7ba700 1 -- 192.168.123.105:0/3610724773 >> 192.168.123.105:0/3610724773 conn(0x7eff7806c6c0 msgr2=0x7eff7806ff50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:31.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.402+0000 7eff6d7ba700 1 -- 192.168.123.105:0/3610724773 shutdown_connections 2026-03-09T15:03:31.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.402+0000 7eff6d7ba700 1 -- 192.168.123.105:0/3610724773 wait complete. 2026-03-09T15:03:31.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.521+0000 7fbfe1b32700 1 -- 192.168.123.105:0/428329466 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbfdc10c8b0 msgr2=0x7fbfdc10cc80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:31.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.521+0000 7fbfe1b32700 1 --2- 192.168.123.105:0/428329466 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbfdc10c8b0 0x7fbfdc10cc80 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7fbfcc009a60 tx=0x7fbfcc009d70 comp rx=0 tx=0).stop 2026-03-09T15:03:31.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.521+0000 7fbfe1b32700 1 -- 192.168.123.105:0/428329466 shutdown_connections 2026-03-09T15:03:31.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.521+0000 7fbfe1b32700 1 --2- 192.168.123.105:0/428329466 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbfdc071e40 0x7fbfdc0722b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.521+0000 7fbfe1b32700 1 --2- 192.168.123.105:0/428329466 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbfdc10c8b0 0x7fbfdc10cc80 unknown :-1 s=CLOSED pgs=127 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.521+0000 7fbfe1b32700 1 -- 192.168.123.105:0/428329466 >> 192.168.123.105:0/428329466 conn(0x7fbfdc06c6c0 msgr2=0x7fbfdc06cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:31.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.522+0000 7fbfe1b32700 1 -- 192.168.123.105:0/428329466 shutdown_connections 2026-03-09T15:03:31.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.522+0000 7fbfe1b32700 1 -- 192.168.123.105:0/428329466 wait complete. 2026-03-09T15:03:31.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.522+0000 7fbfe1b32700 1 Processor -- start 2026-03-09T15:03:31.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.522+0000 7fbfe1b32700 1 -- start start 2026-03-09T15:03:31.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.522+0000 7fbfe1b32700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbfdc071e40 0x7fbfdc07ce70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:31.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.522+0000 7fbfe1b32700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbfdc07d3b0 0x7fbfdc07d820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:31.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.522+0000 7fbfe1b32700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbfdc0819f0 con 0x7fbfdc071e40 2026-03-09T15:03:31.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.522+0000 7fbfe1b32700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7fbfdc081b60 con 0x7fbfdc07d3b0 2026-03-09T15:03:31.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.522+0000 7fbfdaffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbfdc07d3b0 0x7fbfdc07d820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:31.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.522+0000 7fbfdaffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbfdc07d3b0 0x7fbfdc07d820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:34476/0 (socket says 192.168.123.105:34476) 2026-03-09T15:03:31.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.522+0000 7fbfdaffd700 1 -- 192.168.123.105:0/2973255374 learned_addr learned my addr 192.168.123.105:0/2973255374 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:03:31.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.523+0000 7fbfdaffd700 1 -- 192.168.123.105:0/2973255374 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbfdc071e40 msgr2=0x7fbfdc07ce70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:31.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.523+0000 7fbfdaffd700 1 --2- 192.168.123.105:0/2973255374 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbfdc071e40 0x7fbfdc07ce70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.523+0000 7fbfdaffd700 1 -- 192.168.123.105:0/2973255374 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbfcc009710 con 0x7fbfdc07d3b0 2026-03-09T15:03:31.523 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.523+0000 7fbfdaffd700 1 --2- 192.168.123.105:0/2973255374 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbfdc07d3b0 0x7fbfdc07d820 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7fbfd400deb0 tx=0x7fbfd400df90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:31.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.523+0000 7fbfd8ff9700 1 -- 192.168.123.105:0/2973255374 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbfd400cdf0 con 0x7fbfdc07d3b0 2026-03-09T15:03:31.524 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.523+0000 7fbfe1b32700 1 -- 192.168.123.105:0/2973255374 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbfdc081e40 con 0x7fbfdc07d3b0 2026-03-09T15:03:31.524 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.523+0000 7fbfe1b32700 1 -- 192.168.123.105:0/2973255374 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbfdc082390 con 0x7fbfdc07d3b0 2026-03-09T15:03:31.525 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.525+0000 7fbfd8ff9700 1 -- 192.168.123.105:0/2973255374 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fbfd4012650 con 0x7fbfdc07d3b0 2026-03-09T15:03:31.525 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.525+0000 7fbfd8ff9700 1 -- 192.168.123.105:0/2973255374 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbfd400f7f0 con 0x7fbfdc07d3b0 2026-03-09T15:03:31.525 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.525+0000 7fbfd8ff9700 1 -- 192.168.123.105:0/2973255374 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 31) v1 ==== 100066+0+0 (secure 0 0 0) 0x7fbfd400f9d0 con 0x7fbfdc07d3b0 
2026-03-09T15:03:31.526 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.526+0000 7fbfd8ff9700 1 --2- 192.168.123.105:0/2973255374 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fbfc4077850 0x7fbfc4079d00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:03:31.527 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.526+0000 7fbfd8ff9700 1 -- 192.168.123.105:0/2973255374 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(40..40 src has 1..40) v4 ==== 5792+0+0 (secure 0 0 0) 0x7fbfd40a3470 con 0x7fbfdc07d3b0 2026-03-09T15:03:31.527 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.527+0000 7fbfdb7fe700 1 --2- 192.168.123.105:0/2973255374 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fbfc4077850 0x7fbfc4079d00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:03:31.527 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.527+0000 7fbfe1b32700 1 -- 192.168.123.105:0/2973255374 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbfc8005320 con 0x7fbfdc07d3b0 2026-03-09T15:03:31.528 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.529+0000 7fbfdb7fe700 1 --2- 192.168.123.105:0/2973255374 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fbfc4077850 0x7fbfc4079d00 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7fbfcc009a60 tx=0x7fbfcc00b540 comp rx=0 tx=0).ready entity=mgr.14652 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:03:31.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.533+0000 7fbfd8ff9700 1 -- 192.168.123.105:0/2973255374 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+191525 
(secure 0 0 0) 0x7fbfd406c000 con 0x7fbfdc07d3b0 2026-03-09T15:03:31.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.841+0000 7fbfe1b32700 1 -- 192.168.123.105:0/2973255374 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fbfc8005190 con 0x7fbfdc07d3b0 2026-03-09T15:03:31.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.843+0000 7fbfd8ff9700 1 -- 192.168.123.105:0/2973255374 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+201 (secure 0 0 0) 0x7fbfd406b750 con 0x7fbfdc07d3b0 2026-03-09T15:03:31.843 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_WARN 1 filesystem with deprecated feature inline_data 2026-03-09T15:03:31.843 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data 2026-03-09T15:03:31.843 INFO:teuthology.orchestra.run.vm05.stdout: fs cephfs has deprecated feature inline_data enabled. 
2026-03-09T15:03:31.847 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.845+0000 7fbfc27fc700 1 -- 192.168.123.105:0/2973255374 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fbfc4077850 msgr2=0x7fbfc4079d00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:31.847 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.846+0000 7fbfc27fc700 1 --2- 192.168.123.105:0/2973255374 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fbfc4077850 0x7fbfc4079d00 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7fbfcc009a60 tx=0x7fbfcc00b540 comp rx=0 tx=0).stop 2026-03-09T15:03:31.847 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.846+0000 7fbfc27fc700 1 -- 192.168.123.105:0/2973255374 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbfdc07d3b0 msgr2=0x7fbfdc07d820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:03:31.847 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.846+0000 7fbfc27fc700 1 --2- 192.168.123.105:0/2973255374 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbfdc07d3b0 0x7fbfdc07d820 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7fbfd400deb0 tx=0x7fbfd400df90 comp rx=0 tx=0).stop 2026-03-09T15:03:31.847 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.846+0000 7fbfc27fc700 1 -- 192.168.123.105:0/2973255374 shutdown_connections 2026-03-09T15:03:31.847 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.846+0000 7fbfc27fc700 1 --2- 192.168.123.105:0/2973255374 >> [v2:192.168.123.105:6800/456689610,v1:192.168.123.105:6801/456689610] conn(0x7fbfc4077850 0x7fbfc4079d00 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.847 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.846+0000 7fbfc27fc700 1 --2- 192.168.123.105:0/2973255374 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbfdc071e40 0x7fbfdc07ce70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.847 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.846+0000 7fbfc27fc700 1 --2- 192.168.123.105:0/2973255374 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbfdc07d3b0 0x7fbfdc07d820 unknown :-1 s=CLOSED pgs=128 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:03:31.847 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.846+0000 7fbfc27fc700 1 -- 192.168.123.105:0/2973255374 >> 192.168.123.105:0/2973255374 conn(0x7fbfdc06c6c0 msgr2=0x7fbfdc06ff80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:03:31.847 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.846+0000 7fbfc27fc700 1 -- 192.168.123.105:0/2973255374 shutdown_connections 2026-03-09T15:03:31.847 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:03:31.846+0000 7fbfc27fc700 1 -- 192.168.123.105:0/2973255374 wait complete. 2026-03-09T15:03:32.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local ceph-mon[50611]: pgmap v44: 65 pgs: 65 active+clean; 288 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 112 op/s 2026-03-09T15:03:32.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local ceph-mon[50611]: from='client.? 
192.168.123.105:0/1337444310' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T15:03:32.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local ceph-mon[50611]: from='client.24573 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:32.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:32.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local ceph-mon[50611]: Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T15:03:32.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local ceph-mon[50611]: Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T15:03:32.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:32.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:32.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local ceph-mon[50611]: Upgrade: Setting container_image for all mgr 2026-03-09T15:03:32.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:32.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local ceph-mon[50611]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 
cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T15:03:32.307 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local ceph-mon[50611]: from='client.? 192.168.123.105:0/2973255374' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:03:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:32 vm09.local ceph-mon[59673]: pgmap v44: 65 pgs: 65 active+clean; 288 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 112 op/s 2026-03-09T15:03:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:32 vm09.local ceph-mon[59673]: from='client.? 192.168.123.105:0/1337444310' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T15:03:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:32 vm09.local ceph-mon[59673]: from='client.24573 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:03:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:32 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:32 vm09.local ceph-mon[59673]: Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-09T15:03:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:32 vm09.local ceph-mon[59673]: Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T15:03:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:32 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:32 vm09.local 
ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:32 vm09.local ceph-mon[59673]: Upgrade: Setting container_image for all mgr 2026-03-09T15:03:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:32 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:32 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T15:03:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:32 vm09.local ceph-mon[59673]: from='client.? 192.168.123.105:0/2973255374' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:03:33.025 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:32 vm05.local systemd[1]: Stopping Ceph mon.vm05 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 
2026-03-09T15:03:33.026 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05[50607]: 2026-03-09T15:03:33.009+0000 7f3e5a9af700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm05 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T15:03:33.026 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05[50607]: 2026-03-09T15:03:33.009+0000 7f3e5a9af700 -1 mon.vm05@0(leader) e2 *** Got Signal Terminated *** 2026-03-09T15:03:33.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local podman[116381]: 2026-03-09 15:03:33.084899052 +0000 UTC m=+0.122721221 container died c83e96b622518bee42ad8f809a026a817b70dbacd70f6f3ad1494d52d8c535e1 (image=quay.io/ceph/ceph:v18.2.0, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, org.label-schema.build-date=20231212, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.schema-version=1.0, RELEASE=HEAD, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=-18.2.0, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, io.buildah.version=1.29.1, GIT_BRANCH=HEAD, GIT_CLEAN=True) 2026-03-09T15:03:33.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local podman[116381]: 2026-03-09 15:03:33.123005265 +0000 UTC m=+0.160827434 container remove c83e96b622518bee42ad8f809a026a817b70dbacd70f6f3ad1494d52d8c535e1 (image=quay.io/ceph/ceph:v18.2.0, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05, org.label-schema.name=CentOS Stream 8 Base Image, CEPH_POINT_RELEASE=-18.2.0, GIT_CLEAN=True, 
RELEASE=HEAD, ceph=True, org.label-schema.vendor=CentOS, GIT_BRANCH=HEAD, org.label-schema.schema-version=1.0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, io.buildah.version=1.29.1, maintainer=Guillaume Abrioux , org.label-schema.build-date=20231212, org.label-schema.license=GPLv2) 2026-03-09T15:03:33.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local bash[116381]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05 2026-03-09T15:03:33.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm05.service: Deactivated successfully. 2026-03-09T15:03:33.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local systemd[1]: Stopped Ceph mon.vm05 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T15:03:33.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm05.service: Consumed 8.282s CPU time. 2026-03-09T15:03:33.782 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local systemd[1]: Starting Ceph mon.vm05 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 
2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local podman[116502]: 2026-03-09 15:03:33.782086279 +0000 UTC m=+0.032161796 container create 1e11655f7d871829cc856e4fc8c352479aebc0264ee3492b595a07374b03ad0d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0) 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local podman[116502]: 2026-03-09 15:03:33.836481725 +0000 UTC m=+0.086557252 container init 1e11655f7d871829cc856e4fc8c352479aebc0264ee3492b595a07374b03ad0d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, 
org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local podman[116502]: 2026-03-09 15:03:33.844174676 +0000 UTC m=+0.094250203 container start 1e11655f7d871829cc856e4fc8c352479aebc0264ee3492b595a07374b03ad0d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20260223) 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local bash[116502]: 1e11655f7d871829cc856e4fc8c352479aebc0264ee3492b595a07374b03ad0d 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local podman[116502]: 2026-03-09 15:03:33.766115322 +0000 UTC m=+0.016190849 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local systemd[1]: Started Ceph mon.vm05 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 
2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: set uid:gid to 167:167 (ceph:ceph) 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: pidfile_write: ignore empty --pid-file 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: load: jerasure load: lrc 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: RocksDB version: 7.9.2 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Git sha 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: DB SUMMARY 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: DB Session ID: DAVSKV47N8VZQO7USMRW 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: CURRENT file: CURRENT 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: IDENTITY file: IDENTITY 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: MANIFEST file: MANIFEST-000015 size: 1029 Bytes 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm05/store.db 
dir, Total Num: 1, files: 000026.sst 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm05/store.db: 000024.log size: 1978972 ; 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.error_if_exists: 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.create_if_missing: 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.paranoid_checks: 1 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.env: 0x56256b5f7dc0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.fs: PosixFileSystem 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.info_log: 0x56256c9cd900 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_file_opening_threads: 16 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.statistics: (nil) 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 
09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.use_fsync: 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_log_file_size: 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.keep_log_file_num: 1000 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.recycle_log_file_num: 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.allow_fallocate: 1 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.allow_mmap_reads: 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.allow_mmap_writes: 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.use_direct_reads: 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.create_missing_column_families: 0 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.db_log_dir: 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local 
ceph-mon[116516]: rocksdb: Options.wal_dir: 2026-03-09T15:03:34.057 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.advise_random_on_open: 1 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.db_write_buffer_size: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.write_buffer_manager: 0x56256c9d1900 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T15:03:34.058 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.rate_limiter: (nil) 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.wal_recovery_mode: 2 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.enable_thread_tracking: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.enable_pipelined_write: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.unordered_write: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.row_cache: None 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.wal_filter: None 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.avoid_flush_during_recovery: 0 
2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.allow_ingest_behind: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.two_write_queues: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.manual_wal_flush: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.wal_compression: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.atomic_flush: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.log_readahead_size: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.best_efforts_recovery: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.bgerror_resume_retry_interval: 
1000000 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.allow_data_in_errors: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.db_host_id: __hostname__ 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_background_jobs: 2 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_background_compactions: -1 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_subcompactions: 1 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_total_wal_size: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local 
ceph-mon[116516]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_open_files: -1 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.bytes_per_sync: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compaction_readahead_size: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_background_flushes: -1 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Compression algorithms supported: 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: kZSTD supported: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: kXpressCompression supported: 0 2026-03-09T15:03:34.058 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: kBZip2Compression supported: 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local 
ceph-mon[116516]: rocksdb: kLZ4Compression supported: 1 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: kZlibCompression supported: 1 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: kLZ4HCCompression supported: 1 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: kSnappyCompression supported: 1 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm05/store.db/MANIFEST-000015 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.merge_operator: 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compaction_filter: None 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compaction_filter_factory: None 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: 
Options.sst_partitioner_factory: None 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56256c9cd580) 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: cache_index_and_filter_blocks: 1 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: pin_top_level_index_and_filter: 1 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: index_type: 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: data_block_index_type: 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: index_shortening: 1 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: checksum: 4 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: no_block_cache: 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_cache: 0x56256c9f09b0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_cache_name: BinnedLRUCache 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_cache_options: 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: capacity : 536870912 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: num_shard_bits : 4 
2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: strict_capacity_limit : 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: high_pri_pool_ratio: 0.000 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_cache_compressed: (nil) 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: persistent_cache: (nil) 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_size: 4096 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_size_deviation: 10 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_restart_interval: 16 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: index_block_restart_interval: 1 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: metadata_block_size: 4096 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: partition_filters: 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: use_delta_encoding: 1 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: filter_policy: bloomfilter 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: whole_key_filtering: 1 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: verify_compression: 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: read_amp_bytes_per_bit: 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: format_version: 5 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: enable_index_compression: 1 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: block_align: 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: max_auto_readahead_size: 262144 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: prepopulate_block_cache: 0 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout: initial_auto_readahead_size: 8192 2026-03-09T15:03:34.059 
INFO:journalctl@ceph.mon.vm05.vm05.stdout: num_file_reads_for_auto_readahead: 2 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.write_buffer_size: 33554432 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_write_buffer_number: 2 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compression: NoCompression 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.bottommost_compression: Disabled 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.prefix_extractor: nullptr 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.num_levels: 7 2026-03-09T15:03:34.059 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: 
Options.bottommost_compression_opts.level: 32767 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compression_opts.level: 32767 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compression_opts.strategy: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compression_opts.enabled: false 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.target_file_size_base: 67108864 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local 
ceph-mon[116516]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.arena_block_size: 1048576 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.disable_auto_compactions: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: 
rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.inplace_update_support: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.bloom_locality: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.max_successive_merges: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.paranoid_file_checks: 0 2026-03-09T15:03:34.060 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 
vm05.local ceph-mon[116516]: rocksdb: Options.force_consistency_checks: 1 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.report_bg_io_stats: 0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.ttl: 2592000 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.enable_blob_files: false 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.min_blob_size: 0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.blob_file_size: 268435456 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 
2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.blob_file_starting_level: 0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm05/store.db/MANIFEST-000015 succeeded,manifest_file_number is 15, next_file_number is 28, last_sequence is 9764, log_number is 24,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 24 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 24 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4f82c324-c7ea-4fb7-862b-89fcdd638479 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773068613891346, "job": 1, "event": "recovery_started", "wal_files": [24]} 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #24 mode 2 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:03:33 vm05.local ceph-mon[116516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773068613905953, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 29, "file_size": 1651860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9765, "largest_seqno": 11057, "table_properties": {"data_size": 1645507, "index_size": 3839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 13643, "raw_average_key_size": 23, "raw_value_size": 1633078, "raw_average_value_size": 2860, "num_data_blocks": 179, "num_entries": 571, "num_filter_entries": 571, "num_deletions": 6, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773068613, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4f82c324-c7ea-4fb7-862b-89fcdd638479", "db_session_id": "DAVSKV47N8VZQO7USMRW", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773068613906069, "job": 1, "event": "recovery_finished"} 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: [db/version_set.cc:5047] Creating manifest 31 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:03:33 vm05.local ceph-mon[116516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm05/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56256c9f2e00 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: DB pointer 0x56256ca02000 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: starting mon.vm05 rank 0 at public addrs [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] at bind addrs [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon_data /var/lib/ceph/mon/ceph-vm05 fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ** DB Stats ** 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 
2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ** Compaction Stats [default] ** 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: L0 1/0 1.58 MB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 190.1 0.01 0.00 1 0.008 0 0 0.0 0.0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: L6 1/0 7.03 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Sum 2/0 8.60 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 190.1 0.01 0.00 1 0.008 0 0 0.0 0.0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 190.1 0.01 0.00 1 0.008 0 0 0.0 0.0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ** Compaction 
Stats [default] ** 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 190.1 0.01 0.00 1 0.008 0 0 0.0 0.0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Flush(GB): cumulative 0.002, interval 0.002 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Cumulative compaction: 0.00 GB write, 70.82 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T15:03:34.061 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Interval compaction: 0.00 GB write, 70.82 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T15:03:34.062 
INFO:journalctl@ceph.mon.vm05.vm05.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Block cache BinnedLRUCache@0x56256c9f09b0#2 capacity: 512.00 MB usage: 5.61 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1e-05 secs_since: 0 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Block cache entry stats(count,size,portion): FilterBlock(1,1.53 KB,0.000292063%) IndexBlock(1,4.08 KB,0.000777841%) Misc(1,0.00 KB,0%) 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: mon.vm05@-1(???) 
e2 preinit fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: mon.vm05@-1(???).mds e11 new map 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: mon.vm05@-1(???).mds e11 print_map 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: e11 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: btime 1970-01-01T00:00:00:000000+0000 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: legacy client fscid: 1 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Filesystem 'cephfs' (1) 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: fs_name cephfs 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: epoch 9 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: created 2026-03-09T14:58:23.182447+0000 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: modified 2026-03-09T14:58:30.215642+0000 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: tableserver 0 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: root 0 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: session_timeout 60 2026-03-09T15:03:34.062 
INFO:journalctl@ceph.mon.vm05.vm05.stdout: session_autoclose 300 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: max_file_size 1099511627776 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: max_xattr_size 65536 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: required_client_features {} 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: last_failure 0 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: last_failure_osd_epoch 0 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: max_mds 1 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: in 0 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: up {0=14502} 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: failed 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: damaged 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: stopped 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: data_pools [3] 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: metadata_pool 2 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: inline_data enabled 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: balancer 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: bal_rank_mask -1 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: standby_count_wanted 1 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: qdb_cluster leader: 0 members: 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 
[mds.cephfs.vm05.nrocqt{0:14502} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: [mds.cephfs.vm09.ohmitn{0:14510} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: Standby daemons: 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: [mds.cephfs.vm05.rrcyql{-1:14518} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout: [mds.cephfs.vm09.jrhwzz{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: mon.vm05@-1(???).osd e40 crush map has features 3314933000852226048, adjusting msgr requires 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: mon.vm05@-1(???).osd e40 crush map has features 288514051259236352, adjusting msgr requires 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: mon.vm05@-1(???).osd e40 crush map has features 288514051259236352, adjusting msgr requires 2026-03-09T15:03:34.062 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: mon.vm05@-1(???).osd e40 crush map has features 288514051259236352, adjusting 
msgr requires 2026-03-09T15:03:34.063 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:33 vm05.local ceph-mon[116516]: mon.vm05@-1(???).paxosservice(auth 1..22) refresh upgraded, format 0 -> 3 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: pgmap v45: 65 pgs: 65 active+clean; 288 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 825 KiB/s rd, 870 KiB/s wr, 86 op/s 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: mon.vm05 calling monitor election 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: mon.vm05 is new leader, mons vm05,vm09 in quorum (ranks 0,1) 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: monmap epoch 2 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: last_changed 2026-03-09T14:56:49.179978+0000 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: created 2026-03-09T14:55:09.447382+0000 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: min_mon_release 18 (reef) 2026-03-09T15:03:35.258 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: election_strategy: 1 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.vm09 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: osdmap e40: 6 total, 6 up, 6 in 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: mgrmap e31: vm05.lhsexd(active, since 86s), standbys: vm09.cfuwdz 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: Health detail: HEALTH_WARN 1 filesystem with deprecated feature inline_data 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: [WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: fs cephfs has deprecated feature inline_data enabled. 
2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: from='mgr.14652 ' entity='' 2026-03-09T15:03:35.258 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:35 vm05.local ceph-mon[116516]: mgrmap e32: vm05.lhsexd(active, since 86s), standbys: vm09.cfuwdz 2026-03-09T15:03:35.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T15:03:35.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: from='mgr.14652 192.168.123.105:0/552345553' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T15:03:35.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: pgmap v45: 65 pgs: 65 active+clean; 288 MiB data, 2.9 GiB used, 117 GiB / 120 GiB avail; 825 KiB/s rd, 870 KiB/s wr, 86 op/s 2026-03-09T15:03:35.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: mon.vm05 calling monitor election 2026-03-09T15:03:35.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: mon.vm05 is new leader, mons vm05,vm09 in quorum (ranks 0,1) 2026-03-09T15:03:35.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: monmap epoch 2 2026-03-09T15:03:35.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T15:03:35.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: last_changed 2026-03-09T14:56:49.179978+0000 2026-03-09T15:03:35.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: created 2026-03-09T14:55:09.447382+0000 2026-03-09T15:03:35.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 
15:03:35 vm09.local ceph-mon[59673]: min_mon_release 18 (reef) 2026-03-09T15:03:35.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: election_strategy: 1 2026-03-09T15:03:35.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05 2026-03-09T15:03:35.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.vm09 2026-03-09T15:03:35.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T15:03:35.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: osdmap e40: 6 total, 6 up, 6 in 2026-03-09T15:03:35.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: mgrmap e31: vm05.lhsexd(active, since 86s), standbys: vm09.cfuwdz 2026-03-09T15:03:35.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: Health detail: HEALTH_WARN 1 filesystem with deprecated feature inline_data 2026-03-09T15:03:35.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: [WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data 2026-03-09T15:03:35.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: fs cephfs has deprecated feature inline_data enabled. 
2026-03-09T15:03:35.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: from='mgr.14652 ' entity='' 2026-03-09T15:03:35.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:35 vm09.local ceph-mon[59673]: mgrmap e32: vm05.lhsexd(active, since 86s), standbys: vm09.cfuwdz 2026-03-09T15:03:39.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:39 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.109:0/3269153734' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/crt"}]: dispatch 2026-03-09T15:03:39.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:39 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.109:0/3269153734' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:03:39.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:39 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.109:0/3269153734' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/key"}]: dispatch 2026-03-09T15:03:39.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:39 vm09.local ceph-mon[59673]: from='mgr.? 192.168.123.109:0/3269153734' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:03:39.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:39 vm09.local ceph-mon[59673]: Standby manager daemon vm09.cfuwdz restarted 2026-03-09T15:03:39.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:39 vm09.local ceph-mon[59673]: Standby manager daemon vm09.cfuwdz started 2026-03-09T15:03:40.036 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:39 vm05.local ceph-mon[116516]: from='mgr.? 
192.168.123.109:0/3269153734' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/crt"}]: dispatch 2026-03-09T15:03:40.036 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:39 vm05.local ceph-mon[116516]: from='mgr.? 192.168.123.109:0/3269153734' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T15:03:40.036 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:39 vm05.local ceph-mon[116516]: from='mgr.? 192.168.123.109:0/3269153734' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.cfuwdz/key"}]: dispatch 2026-03-09T15:03:40.036 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:39 vm05.local ceph-mon[116516]: from='mgr.? 192.168.123.109:0/3269153734' entity='mgr.vm09.cfuwdz' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T15:03:40.036 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:39 vm05.local ceph-mon[116516]: Standby manager daemon vm09.cfuwdz restarted 2026-03-09T15:03:40.036 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:39 vm05.local ceph-mon[116516]: Standby manager daemon vm09.cfuwdz started 2026-03-09T15:03:41.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:40 vm05.local ceph-mon[116516]: mgrmap e33: vm05.lhsexd(active, since 90s), standbys: vm09.cfuwdz 2026-03-09T15:03:41.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:40 vm09.local ceph-mon[59673]: mgrmap e33: vm05.lhsexd(active, since 90s), standbys: vm09.cfuwdz 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: Active manager daemon vm05.lhsexd restarted 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: Activating manager daemon vm05.lhsexd 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: osdmap e41: 6 
total, 6 up, 6 in 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: mgrmap e34: vm05.lhsexd(active, starting, since 0.0273084s), standbys: vm09.cfuwdz 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.nrocqt"}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.rrcyql"}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.jrhwzz"}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm05.lhsexd", "id": "vm05.lhsexd"}]: dispatch 2026-03-09T15:03:41.969 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm09.cfuwdz", "id": "vm09.cfuwdz"}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: Manager daemon vm05.lhsexd is now available 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:41.969 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:03:41.970 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/trash_purge_schedule"}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: Active manager daemon vm05.lhsexd restarted 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: Activating manager daemon vm05.lhsexd 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: osdmap e41: 6 total, 6 up, 6 in 
2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: mgrmap e34: vm05.lhsexd(active, starting, since 0.0273084s), standbys: vm09.cfuwdz 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.nrocqt"}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.rrcyql"}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.jrhwzz"}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm05.lhsexd", "id": "vm05.lhsexd"}]: dispatch 2026-03-09T15:03:42.116 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr metadata", "who": "vm09.cfuwdz", "id": "vm09.cfuwdz"}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T15:03:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: Manager daemon vm05.lhsexd is now available 2026-03-09T15:03:42.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:03:42.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:42.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/mirror_snapshot_schedule"}]: dispatch 2026-03-09T15:03:42.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:41 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm05.lhsexd/trash_purge_schedule"}]: dispatch 2026-03-09T15:03:43.341 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:43 vm09.local ceph-mon[59673]: mgrmap e35: vm05.lhsexd(active, since 1.03363s), standbys: vm09.cfuwdz 2026-03-09T15:03:43.341 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:43 vm09.local ceph-mon[59673]: pgmap v3: 65 pgs: 65 active+clean; 283 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail 2026-03-09T15:03:43.529 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:03:43 vm05.local ceph-mon[116516]: mgrmap e35: vm05.lhsexd(active, since 1.03363s), standbys: vm09.cfuwdz 2026-03-09T15:03:43.530 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:43 vm05.local ceph-mon[116516]: pgmap v3: 65 pgs: 65 active+clean; 283 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail 2026-03-09T15:03:44.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:44 vm05.local ceph-mon[116516]: mgrmap e36: vm05.lhsexd(active, since 2s), standbys: vm09.cfuwdz 2026-03-09T15:03:44.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:44 vm05.local ceph-mon[116516]: pgmap v4: 65 pgs: 65 active+clean; 283 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail 2026-03-09T15:03:44.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:44 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:44.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:44 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:44.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:44 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:44.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:44 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:44.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:44 vm09.local ceph-mon[59673]: mgrmap e36: vm05.lhsexd(active, since 2s), standbys: vm09.cfuwdz 2026-03-09T15:03:44.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:44 vm09.local ceph-mon[59673]: pgmap v4: 65 pgs: 65 active+clean; 283 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail 2026-03-09T15:03:44.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:44 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:44.604 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:44 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:44.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:44 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:44.604 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:44 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.253 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:46 vm05.local ceph-mon[116516]: [09/Mar/2026:15:03:44] ENGINE Bus STARTING 2026-03-09T15:03:46.254 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:46 vm05.local ceph-mon[116516]: [09/Mar/2026:15:03:44] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T15:03:46.254 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:46 vm05.local ceph-mon[116516]: [09/Mar/2026:15:03:44] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T15:03:46.254 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:46 vm05.local ceph-mon[116516]: [09/Mar/2026:15:03:44] ENGINE Bus STARTED 2026-03-09T15:03:46.254 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:46 vm05.local ceph-mon[116516]: [09/Mar/2026:15:03:44] ENGINE Client ('192.168.123.105', 53464) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:03:46.254 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.254 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.254 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:46 vm05.local ceph-mon[116516]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.254 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.254 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.254 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.254 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:03:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:46 vm09.local ceph-mon[59673]: [09/Mar/2026:15:03:44] ENGINE Bus STARTING 2026-03-09T15:03:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:46 vm09.local ceph-mon[59673]: [09/Mar/2026:15:03:44] ENGINE Serving on http://192.168.123.105:8765 2026-03-09T15:03:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:46 vm09.local ceph-mon[59673]: [09/Mar/2026:15:03:44] ENGINE Serving on https://192.168.123.105:7150 2026-03-09T15:03:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:46 vm09.local ceph-mon[59673]: [09/Mar/2026:15:03:44] ENGINE Bus STARTED 2026-03-09T15:03:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:46 vm09.local ceph-mon[59673]: [09/Mar/2026:15:03:44] ENGINE Client ('192.168.123.105', 53464) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T15:03:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:46 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' 
entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:46 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:46 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:46 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:46 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:46 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:46 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:03:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:47 vm05.local ceph-mon[116516]: pgmap v5: 65 pgs: 65 active+clean; 283 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail 2026-03-09T15:03:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:47 vm05.local ceph-mon[116516]: Detected new or changed devices on vm09 2026-03-09T15:03:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:47 vm05.local ceph-mon[116516]: mgrmap e37: vm05.lhsexd(active, since 4s), standbys: vm09.cfuwdz 2026-03-09T15:03:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:47 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:47 
vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:47 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-09T15:03:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:47 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:47 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:47.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:47 vm09.local ceph-mon[59673]: pgmap v5: 65 pgs: 65 active+clean; 283 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail 2026-03-09T15:03:47.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:47 vm09.local ceph-mon[59673]: Detected new or changed devices on vm09 2026-03-09T15:03:47.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:47 vm09.local ceph-mon[59673]: mgrmap e37: vm05.lhsexd(active, since 4s), standbys: vm09.cfuwdz 2026-03-09T15:03:47.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:47 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:47.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:47 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:47.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:47 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": 
"osd_memory_target"}]: dispatch 2026-03-09T15:03:47.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:47 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:47.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:47 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:48.099 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:48 vm09.local ceph-mon[59673]: Detected new or changed devices on vm05 2026-03-09T15:03:48.099 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:48 vm09.local ceph-mon[59673]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T15:03:48.099 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:48 vm09.local ceph-mon[59673]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:03:48.099 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:48 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:48.099 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:48 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:48.099 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:48 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:48.099 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:48 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:48.099 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:48 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:48.099 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:48 vm09.local ceph-mon[59673]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:48.099 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:48 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:48.099 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:48 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T15:03:48.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:48 vm05.local ceph-mon[116516]: Detected new or changed devices on vm05 2026-03-09T15:03:48.560 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:48 vm05.local ceph-mon[116516]: Updating vm05:/etc/ceph/ceph.conf 2026-03-09T15:03:48.560 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:48 vm05.local ceph-mon[116516]: Updating vm09:/etc/ceph/ceph.conf 2026-03-09T15:03:48.560 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:48 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:48.560 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:48 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:48.560 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:48 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:48.560 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:48 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:48.560 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:48 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:48.560 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:48 
vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:48.560 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:48 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:03:48.560 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:48 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T15:03:49.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-mon[59673]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T15:03:49.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-mon[59673]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T15:03:49.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-mon[59673]: pgmap v6: 65 pgs: 65 active+clean; 283 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 0 B/s wr, 7 op/s 2026-03-09T15:03:49.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-mon[59673]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:03:49.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-mon[59673]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:03:49.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-mon[59673]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T15:03:49.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-mon[59673]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T15:03:49.273 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:49.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:03:49.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:03:49.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-mon[59673]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:49 vm05.local ceph-mon[116516]: Updating vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T15:03:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:49 vm05.local ceph-mon[116516]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.conf 2026-03-09T15:03:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:49 vm05.local ceph-mon[116516]: pgmap v6: 65 pgs: 65 active+clean; 283 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 0 B/s wr, 7 op/s 2026-03-09T15:03:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:49 vm05.local ceph-mon[116516]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:03:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:49 vm05.local ceph-mon[116516]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-09T15:03:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:49 vm05.local ceph-mon[116516]: Updating 
vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T15:03:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:49 vm05.local ceph-mon[116516]: Updating vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/config/ceph.client.admin.keyring 2026-03-09T15:03:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:49 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:49 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:03:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:49 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:03:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:49 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:49.599 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local systemd[1]: Stopping Ceph mon.vm09 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 
2026-03-09T15:03:49.599 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm09[59669]: 2026-03-09T15:03:49.363+0000 7fe876a4c700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm09 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T15:03:49.599 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm09[59669]: 2026-03-09T15:03:49.363+0000 7fe876a4c700 -1 mon.vm09@1(peon) e2 *** Got Signal Terminated *** 2026-03-09T15:03:49.599 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local podman[98617]: 2026-03-09 15:03:49.599852618 +0000 UTC m=+0.256354939 container died 7963792b53765d467da8060a20a5e4cbc9a6c10cab893b14e6219ecae2d49632 (image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm09, ceph=True, maintainer=Guillaume Abrioux , org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, org.label-schema.build-date=20231212, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.schema-version=1.0, CEPH_POINT_RELEASE=-18.2.0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.29.1, GIT_BRANCH=HEAD, GIT_CLEAN=True, RELEASE=HEAD) 2026-03-09T15:03:49.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local podman[98617]: 2026-03-09 15:03:49.620894429 +0000 UTC m=+0.277396750 container remove 7963792b53765d467da8060a20a5e4cbc9a6c10cab893b14e6219ecae2d49632 (image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, 
name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm09, org.label-schema.license=GPLv2, maintainer=Guillaume Abrioux , ceph=True, io.buildah.version=1.29.1, org.label-schema.build-date=20231212, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_POINT_RELEASE=-18.2.0, GIT_BRANCH=HEAD, GIT_CLEAN=True, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=HEAD) 2026-03-09T15:03:49.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local bash[98617]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm09 2026-03-09T15:03:49.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm09.service: Deactivated successfully. 2026-03-09T15:03:49.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local systemd[1]: Stopped Ceph mon.vm09 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T15:03:49.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm09.service: Consumed 4.605s CPU time. 2026-03-09T15:03:50.237 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:49 vm09.local systemd[1]: Starting Ceph mon.vm09 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 2026-03-09T15:03:50.470 INFO:tasks.workunit:Stopping ['suites/fsstress.sh'] on client.1... 
2026-03-09T15:03:50.470 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.1 /home/ubuntu/cephtest/clone.client.1 2026-03-09T15:03:50.506 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local podman[98728]: 2026-03-09 15:03:50.237311511 +0000 UTC m=+0.050268432 container create d1f0309f4d585eff230eb77c9c3fa7c79ad70118b35252900063ed4b79dd67ab (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm09, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T15:03:50.506 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local podman[98728]: 2026-03-09 15:03:50.282343242 +0000 UTC m=+0.095300172 container init d1f0309f4d585eff230eb77c9c3fa7c79ad70118b35252900063ed4b79dd67ab (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm09, org.label-schema.build-date=20260223, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T15:03:50.506 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local podman[98728]: 2026-03-09 15:03:50.286400444 +0000 UTC m=+0.099357365 container start d1f0309f4d585eff230eb77c9c3fa7c79ad70118b35252900063ed4b79dd67ab (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm09, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T15:03:50.506 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local bash[98728]: d1f0309f4d585eff230eb77c9c3fa7c79ad70118b35252900063ed4b79dd67ab 2026-03-09T15:03:50.506 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local podman[98728]: 2026-03-09 15:03:50.202336202 +0000 UTC m=+0.015293132 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:03:50.506 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local systemd[1]: Started Ceph mon.vm09 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 
2026-03-09T15:03:50.506 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: set uid:gid to 167:167 (ceph:ceph) 2026-03-09T15:03:50.506 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-09T15:03:50.506 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: pidfile_write: ignore empty --pid-file 2026-03-09T15:03:50.506 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: load: jerasure load: lrc 2026-03-09T15:03:50.506 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: RocksDB version: 7.9.2 2026-03-09T15:03:50.506 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Git sha 0 2026-03-09T15:03:50.508 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T15:03:50.508 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: DB SUMMARY 2026-03-09T15:03:50.508 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: DB Session ID: 9VM9FK11OSTXHWXK9MQS 2026-03-09T15:03:50.508 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: CURRENT file: CURRENT 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: IDENTITY file: IDENTITY 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: MANIFEST file: MANIFEST-000010 size: 922 Bytes 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm09/store.db dir, Total Num: 
1, files: 000021.sst 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm09/store.db: 000019.log size: 6230864 ; 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.error_if_exists: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.create_if_missing: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.paranoid_checks: 1 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.env: 0x564ddac4bdc0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.fs: PosixFileSystem 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.info_log: 0x564ddba75900 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_file_opening_threads: 16 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.statistics: (nil) 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local 
ceph-mon[98742]: rocksdb: Options.use_fsync: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_log_file_size: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.keep_log_file_num: 1000 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.recycle_log_file_num: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.allow_fallocate: 1 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.allow_mmap_reads: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.allow_mmap_writes: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.use_direct_reads: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.create_missing_column_families: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.db_log_dir: 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: 
Options.wal_dir: 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.advise_random_on_open: 1 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.db_write_buffer_size: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.write_buffer_manager: 0x564ddba79900 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local 
ceph-mon[98742]: rocksdb: Options.rate_limiter: (nil) 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.wal_recovery_mode: 2 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.enable_thread_tracking: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.enable_pipelined_write: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.unordered_write: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.row_cache: None 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.wal_filter: None 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 
vm09.local ceph-mon[98742]: rocksdb: Options.allow_ingest_behind: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.two_write_queues: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.manual_wal_flush: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.wal_compression: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.atomic_flush: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T15:03:50.509 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.log_readahead_size: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.best_efforts_recovery: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 
vm09.local ceph-mon[98742]: rocksdb: Options.allow_data_in_errors: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.db_host_id: __hostname__ 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_background_jobs: 2 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_background_compactions: -1 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_subcompactions: 1 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_total_wal_size: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T15:03:50.510 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_open_files: -1 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.bytes_per_sync: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_readahead_size: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_background_flushes: -1 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Compression algorithms supported: 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: kZSTD supported: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: kXpressCompression supported: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: kBZip2Compression supported: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: kLZ4Compression supported: 1 2026-03-09T15:03:50.510 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: kZlibCompression supported: 1 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: kLZ4HCCompression supported: 1 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: kSnappyCompression supported: 1 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm09/store.db/MANIFEST-000010 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.merge_operator: 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_filter: None 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_filter_factory: None 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.sst_partitioner_factory: None 2026-03-09T15:03:50.510 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T15:03:50.510 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564ddba75580) 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: cache_index_and_filter_blocks: 1 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: pin_top_level_index_and_filter: 1 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: index_type: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: data_block_index_type: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: index_shortening: 1 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: checksum: 4 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: no_block_cache: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: block_cache: 0x564ddba989b0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: block_cache_name: BinnedLRUCache 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: block_cache_options: 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: capacity : 536870912 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: num_shard_bits : 4 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: 
strict_capacity_limit : 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: high_pri_pool_ratio: 0.000 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: block_cache_compressed: (nil) 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: persistent_cache: (nil) 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: block_size: 4096 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: block_size_deviation: 10 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: block_restart_interval: 16 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: index_block_restart_interval: 1 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: metadata_block_size: 4096 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: partition_filters: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: use_delta_encoding: 1 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: filter_policy: bloomfilter 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: whole_key_filtering: 1 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: verify_compression: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: read_amp_bytes_per_bit: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: format_version: 5 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: enable_index_compression: 1 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: block_align: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: max_auto_readahead_size: 262144 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: prepopulate_block_cache: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: initial_auto_readahead_size: 8192 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout: num_file_reads_for_auto_readahead: 2 2026-03-09T15:03:50.511 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.write_buffer_size: 33554432 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_write_buffer_number: 2 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compression: NoCompression 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.bottommost_compression: Disabled 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.prefix_extractor: nullptr 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.num_levels: 7 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local 
ceph-mon[98742]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T15:03:50.511 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compression_opts.level: 32767 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compression_opts.strategy: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T15:03:50.512 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compression_opts.enabled: false 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.target_file_size_base: 67108864 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T15:03:50.512 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.arena_block_size: 1048576 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 
2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.disable_auto_compactions: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T15:03:50.512 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.inplace_update_support: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.bloom_locality: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.max_successive_merges: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.paranoid_file_checks: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.force_consistency_checks: 1 2026-03-09T15:03:50.512 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.report_bg_io_stats: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.ttl: 2592000 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.enable_blob_files: false 2026-03-09T15:03:50.512 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.min_blob_size: 0 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.blob_file_size: 268435456 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: 
Options.blob_compaction_readahead_size: 0 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.blob_file_starting_level: 0 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm09/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 23, last_sequence is 9751, log_number is 19,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 19 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 19 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 923fd0e8-6d22-4674-95e7-026a1c43d332 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773068630334209, "job": 1, "event": "recovery_started", "wal_files": [19]} 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #19 mode 2 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773068630352670, "cf_name": "default", "job": 1, 
"event": "table_file_creation", "file_number": 24, "file_size": 3677999, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9756, "largest_seqno": 11634, "table_properties": {"data_size": 3669500, "index_size": 5468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 18477, "raw_average_key_size": 23, "raw_value_size": 3652474, "raw_average_value_size": 4712, "num_data_blocks": 254, "num_entries": 775, "num_filter_entries": 775, "num_deletions": 6, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773068630, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "923fd0e8-6d22-4674-95e7-026a1c43d332", "db_session_id": "9VM9FK11OSTXHWXK9MQS", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773068630352799, "job": 1, "event": "recovery_finished"} 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [db/version_set.cc:5047] Creating manifest 26 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. 
max_bytes_for_level_multiplier may not be guaranteed. 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm09/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564ddba9ae00 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: DB pointer 0x564ddbaaa000 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: starting mon.vm09 rank 1 at public addrs [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] at bind addrs [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon_data /var/lib/ceph/mon/ceph-vm09 fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: ** DB Stats ** 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 
2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: ** Compaction Stats [default] ** 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: L0 1/0 3.51 MB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 298.6 0.01 0.00 1 0.012 0 0 0.0 0.0 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: L6 1/0 7.03 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Sum 2/0 10.54 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 298.6 0.01 0.00 1 0.012 0 0 0.0 0.0 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 298.6 0.01 0.00 1 0.012 0 0 0.0 0.0 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: ** Compaction Stats [default] ** 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Priority Files Size Score 
Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 298.6 0.01 0.00 1 0.012 0 0 0.0 0.0 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T15:03:50.513 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Flush(GB): cumulative 0.003, interval 0.003 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Cumulative compaction: 0.00 GB write, 130.38 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Interval compaction: 0.00 GB write, 130.38 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 
level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Block cache BinnedLRUCache@0x564ddba989b0#2 capacity: 512.00 MB usage: 49.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7e-06 secs_since: 0 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Block cache entry stats(count,size,portion): DataBlock(3,17.23 KB,0.0032872%) FilterBlock(2,10.06 KB,0.00191927%) IndexBlock(2,22.16 KB,0.00422597%) Misc(1,0.00 KB,0%) 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: mon.vm09@-1(???) e2 preinit fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: mon.vm09@-1(???).mds e11 new map 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: mon.vm09@-1(???).mds e11 print_map 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: e11 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: btime 1970-01-01T00:00:00:000000+0000 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:03:50.514 
INFO:journalctl@ceph.mon.vm09.vm09.stdout: legacy client fscid: 1 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Filesystem 'cephfs' (1) 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: fs_name cephfs 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: epoch 9 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: created 2026-03-09T14:58:23.182447+0000 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: modified 2026-03-09T14:58:30.215642+0000 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: tableserver 0 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: root 0 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: session_timeout 60 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: session_autoclose 300 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: max_file_size 1099511627776 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: max_xattr_size 65536 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: required_client_features {} 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: last_failure 0 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: last_failure_osd_epoch 0 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: max_mds 1 2026-03-09T15:03:50.514 
INFO:journalctl@ceph.mon.vm09.vm09.stdout: in 0 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: up {0=14502} 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: failed 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: damaged 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: stopped 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: data_pools [3] 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: metadata_pool 2 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: inline_data enabled 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: balancer 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: bal_rank_mask -1 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: standby_count_wanted 1 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: qdb_cluster leader: 0 members: 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: [mds.cephfs.vm05.nrocqt{0:14502} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: [mds.cephfs.vm09.ohmitn{0:14510} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: Standby daemons: 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout: [mds.cephfs.vm05.rrcyql{-1:14518} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:03:50.514 
INFO:journalctl@ceph.mon.vm09.vm09.stdout: [mds.cephfs.vm09.jrhwzz{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:03:50.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: mon.vm09@-1(???).osd e41 crush map has features 3314933000852226048, adjusting msgr requires 2026-03-09T15:03:50.515 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: mon.vm09@-1(???).osd e41 crush map has features 288514051259236352, adjusting msgr requires 2026-03-09T15:03:50.515 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: mon.vm09@-1(???).osd e41 crush map has features 288514051259236352, adjusting msgr requires 2026-03-09T15:03:50.515 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: mon.vm09@-1(???).osd e41 crush map has features 288514051259236352, adjusting msgr requires 2026-03-09T15:03:50.515 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:50 vm09.local ceph-mon[98742]: mon.vm09@-1(???).paxosservice(auth 1..23) refresh upgraded, format 0 -> 3 2026-03-09T15:03:50.963 DEBUG:teuthology.parallel:result is None 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: mon.vm05 calling monitor election 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local 
ceph-mon[98742]: mon.vm09 calling monitor election 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: mon.vm05 is new leader, mons vm05,vm09 in quorum (ranks 0,1) 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: monmap epoch 3 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: last_changed 2026-03-09T15:03:50.690536+0000 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: created 2026-03-09T14:55:09.447382+0000 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: min_mon_release 19 (squid) 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: election_strategy: 1 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.vm09 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: osdmap e41: 6 total, 6 up, 6 in 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: mgrmap e37: vm05.lhsexd(active, since 9s), standbys: vm09.cfuwdz 2026-03-09T15:03:52.007 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: Health detail: HEALTH_WARN 1 filesystem with deprecated feature inline_data 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: [WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: fs cephfs has deprecated feature inline_data enabled. 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:52.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:51 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm05"}]: dispatch 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: mon.vm05 calling monitor election 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: mon.vm09 calling monitor election 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: 
mon.vm05 is new leader, mons vm05,vm09 in quorum (ranks 0,1) 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: monmap epoch 3 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: last_changed 2026-03-09T15:03:50.690536+0000 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: created 2026-03-09T14:55:09.447382+0000 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: min_mon_release 19 (squid) 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: election_strategy: 1 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.vm05 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.vm09 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: osdmap e41: 6 total, 6 up, 6 in 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: mgrmap e37: vm05.lhsexd(active, since 9s), standbys: vm09.cfuwdz 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: Health detail: HEALTH_WARN 1 filesystem with deprecated feature inline_data 2026-03-09T15:03:52.054 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: [WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: fs cephfs has deprecated feature inline_data enabled. 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:52.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:51 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:03:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:53 vm05.local ceph-mon[116516]: pgmap v8: 65 pgs: 65 active+clean; 280 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 433 KiB/s rd, 416 KiB/s wr, 61 op/s 2026-03-09T15:03:53.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:53 vm09.local ceph-mon[98742]: pgmap v8: 65 pgs: 65 active+clean; 280 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 433 KiB/s rd, 416 KiB/s wr, 61 op/s 2026-03-09T15:03:54.059 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:54.059 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:54.059 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:54.059 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:54.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:55.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:55 vm05.local ceph-mon[116516]: pgmap v9: 65 pgs: 65 active+clean; 277 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 885 KiB/s rd, 903 KiB/s wr, 117 op/s 2026-03-09T15:03:55.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:55 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:55.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:55 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:55.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:55 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:55.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:55 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:03:55.288 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:55 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:55.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:55 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:03:55.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:55 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:03:55.288 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:55 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:55 vm09.local ceph-mon[98742]: pgmap v9: 65 pgs: 65 active+clean; 277 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 885 KiB/s rd, 903 KiB/s wr, 117 op/s 2026-03-09T15:03:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:55 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:55 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:55 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:55 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T15:03:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:55 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:55 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:03:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:55 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:03:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:55 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:56.134 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: Reconfiguring mon.vm05 (monmap changed)... 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: Reconfiguring daemon mon.vm05 on vm05 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: Reconfiguring mgr.vm05.lhsexd (monmap changed)... 
2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.lhsexd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: Reconfiguring daemon mgr.vm05.lhsexd on vm05 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", 
"allow r", "osd", "allow r"]}]: dispatch 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.ceph-exporter.vm05"}]: dispatch 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T15:03:56.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: Reconfiguring mon.vm05 (monmap changed)... 
2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: Reconfiguring daemon mon.vm05 on vm05 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: Reconfiguring mgr.vm05.lhsexd (monmap changed)... 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm05.lhsexd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: Reconfiguring daemon mgr.vm05.lhsexd on vm05 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 
2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.ceph-exporter.vm05"}]: dispatch 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:56.617 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T15:03:56.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: pgmap v10: 65 pgs: 65 active+clean; 277 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 810 KiB/s rd, 827 KiB/s wr, 107 op/s 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: Reconfiguring ceph-exporter.vm05 (monmap changed)... 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: Unable to update caps for client.ceph-exporter.vm05 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: Reconfiguring daemon ceph-exporter.vm05 on vm05 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: Reconfiguring crash.vm05 (monmap changed)... 
2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: Reconfiguring daemon crash.vm05 on vm05 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T15:03:57.306 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:57 vm05.local ceph-mon[116516]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: pgmap v10: 65 pgs: 65 active+clean; 277 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 810 KiB/s rd, 827 KiB/s wr, 107 op/s 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: Reconfiguring ceph-exporter.vm05 (monmap changed)... 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: Unable to update caps for client.ceph-exporter.vm05 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: Reconfiguring daemon ceph-exporter.vm05 on vm05 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: Reconfiguring crash.vm05 (monmap changed)... 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: Reconfiguring daemon crash.vm05 on vm05 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T15:03:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: Reconfiguring osd.0 (monmap changed)... 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: Reconfiguring daemon osd.0 on vm05 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: Reconfiguring osd.1 (monmap changed)... 
2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: Reconfiguring daemon osd.1 on vm05 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.nrocqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:58.179 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.rrcyql", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T15:03:58.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:58 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: Reconfiguring osd.0 (monmap changed)... 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: Reconfiguring daemon osd.0 on vm05 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: Reconfiguring osd.1 (monmap changed)... 
2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: Reconfiguring daemon osd.1 on vm05 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.nrocqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:58.617 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.rrcyql", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T15:03:58.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:58 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: Reconfiguring osd.2 (monmap changed)... 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: Reconfiguring daemon osd.2 on vm05 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: pgmap v11: 65 pgs: 65 active+clean; 277 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 810 KiB/s rd, 826 KiB/s wr, 107 op/s 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: Reconfiguring mds.cephfs.vm05.nrocqt (monmap changed)... 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: Reconfiguring daemon mds.cephfs.vm05.nrocqt on vm05 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: Reconfiguring mds.cephfs.vm05.rrcyql (monmap changed)... 
2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: Reconfiguring daemon mds.cephfs.vm05.rrcyql on vm05 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.ceph-exporter.vm09"}]: dispatch 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T15:03:59.368 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:03:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: Reconfiguring osd.2 (monmap changed)... 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: Reconfiguring daemon osd.2 on vm05 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: pgmap v11: 65 pgs: 65 active+clean; 277 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 810 KiB/s rd, 826 KiB/s wr, 107 op/s 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: Reconfiguring mds.cephfs.vm05.nrocqt (monmap changed)... 
2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: Reconfiguring daemon mds.cephfs.vm05.nrocqt on vm05 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: Reconfiguring mds.cephfs.vm05.rrcyql (monmap changed)... 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: Reconfiguring daemon mds.cephfs.vm05.rrcyql on vm05 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "auth caps", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-09T15:03:59.554 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.ceph-exporter.vm09"}]: dispatch 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:03:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-09T15:03:59.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:03:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: Reconfiguring ceph-exporter.vm09 (monmap changed)... 
2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: Unable to update caps for client.ceph-exporter.vm09 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: Reconfiguring daemon ceph-exporter.vm09 on vm09 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: Reconfiguring crash.vm09 (monmap changed)... 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: Reconfiguring daemon crash.vm09 on vm09 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.cfuwdz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:00.452 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T15:04:00.452 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:00 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: Reconfiguring ceph-exporter.vm09 (monmap changed)... 
2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: Unable to update caps for client.ceph-exporter.vm09 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: Reconfiguring daemon ceph-exporter.vm09 on vm09 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: Reconfiguring crash.vm09 (monmap changed)... 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: Reconfiguring daemon crash.vm09 on vm09 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.cfuwdz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:00.554 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:00.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:00.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T15:04:00.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:00 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:01.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: Reconfiguring mgr.vm09.cfuwdz (monmap changed)... 
2026-03-09T15:04:01.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: Reconfiguring daemon mgr.vm09.cfuwdz on vm09 2026-03-09T15:04:01.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: pgmap v12: 65 pgs: 65 active+clean; 268 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 168 op/s 2026-03-09T15:04:01.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: Reconfiguring mon.vm09 (monmap changed)... 2026-03-09T15:04:01.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: Reconfiguring daemon mon.vm09 on vm09 2026-03-09T15:04:01.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: Reconfiguring osd.3 (monmap changed)... 2026-03-09T15:04:01.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: Reconfiguring daemon osd.3 on vm09 2026-03-09T15:04:01.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:01.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:01.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T15:04:01.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:01.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' 
entity='mgr.vm05.lhsexd' 2026-03-09T15:04:01.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:01.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T15:04:01.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: Reconfiguring mgr.vm09.cfuwdz (monmap changed)... 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: Reconfiguring daemon mgr.vm09.cfuwdz on vm09 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: pgmap v12: 65 pgs: 65 active+clean; 268 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 168 op/s 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: Reconfiguring mon.vm09 (monmap changed)... 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: Reconfiguring daemon mon.vm09 on vm09 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: Reconfiguring osd.3 (monmap changed)... 
2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: Reconfiguring daemon osd.3 on vm09 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T15:04:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:01 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:01.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.944+0000 7fe7bd310700 1 -- 192.168.123.105:0/1343762216 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe7b8108780 msgr2=0x7fe7b8110dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:01.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.944+0000 7fe7bd310700 1 --2- 192.168.123.105:0/1343762216 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe7b8108780 0x7fe7b8110dc0 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7fe7b000b240 tx=0x7fe7b000b550 comp rx=0 tx=0).stop 2026-03-09T15:04:01.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.944+0000 7fe7bd310700 1 -- 192.168.123.105:0/1343762216 shutdown_connections 2026-03-09T15:04:01.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.944+0000 7fe7bd310700 1 --2- 192.168.123.105:0/1343762216 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe7b8108780 0x7fe7b8110dc0 unknown :-1 s=CLOSED pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:01.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.944+0000 7fe7bd310700 1 --2- 192.168.123.105:0/1343762216 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe7b8107de0 0x7fe7b81081b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:01.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.944+0000 7fe7bd310700 1 -- 192.168.123.105:0/1343762216 >> 192.168.123.105:0/1343762216 conn(0x7fe7b806cb20 msgr2=0x7fe7b806cf20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:01.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.944+0000 7fe7bd310700 1 -- 192.168.123.105:0/1343762216 shutdown_connections 2026-03-09T15:04:01.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.945+0000 7fe7bd310700 1 -- 192.168.123.105:0/1343762216 wait complete. 
2026-03-09T15:04:01.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.945+0000 7fe7bd310700 1 Processor -- start 2026-03-09T15:04:01.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.945+0000 7fe7bd310700 1 -- start start 2026-03-09T15:04:01.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.945+0000 7fe7bd310700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe7b8107de0 0x7fe7b8120710 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:01.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.945+0000 7fe7bd310700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe7b8117660 0x7fe7b8117ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:01.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.945+0000 7fe7bd310700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe7b8118010 con 0x7fe7b8117660 2026-03-09T15:04:01.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.945+0000 7fe7bd310700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe7b8118150 con 0x7fe7b8107de0 2026-03-09T15:04:01.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.946+0000 7fe7b67fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe7b8117660 0x7fe7b8117ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:01.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.946+0000 7fe7b67fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe7b8117660 0x7fe7b8117ad0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:41486/0 (socket says 192.168.123.105:41486) 2026-03-09T15:04:01.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.946+0000 7fe7b67fc700 1 -- 192.168.123.105:0/2615112458 learned_addr learned my addr 192.168.123.105:0/2615112458 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:04:01.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.946+0000 7fe7b6ffd700 1 --2- 192.168.123.105:0/2615112458 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe7b8107de0 0x7fe7b8120710 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:01.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.946+0000 7fe7b67fc700 1 -- 192.168.123.105:0/2615112458 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe7b8107de0 msgr2=0x7fe7b8120710 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:01.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.946+0000 7fe7b67fc700 1 --2- 192.168.123.105:0/2615112458 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe7b8107de0 0x7fe7b8120710 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:01.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.946+0000 7fe7b67fc700 1 -- 192.168.123.105:0/2615112458 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe7b0009e30 con 0x7fe7b8117660 2026-03-09T15:04:01.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.946+0000 7fe7b67fc700 1 --2- 192.168.123.105:0/2615112458 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe7b8117660 0x7fe7b8117ad0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fe7b0003c40 tx=0x7fe7b0003d20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:04:01.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.948+0000 7fe79ffff700 1 -- 192.168.123.105:0/2615112458 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe7b000e050 con 0x7fe7b8117660 2026-03-09T15:04:01.949 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.948+0000 7fe7bd310700 1 -- 192.168.123.105:0/2615112458 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe7b81183d0 con 0x7fe7b8117660 2026-03-09T15:04:01.949 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.948+0000 7fe7bd310700 1 -- 192.168.123.105:0/2615112458 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe7b811c3a0 con 0x7fe7b8117660 2026-03-09T15:04:01.949 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.949+0000 7fe7bd310700 1 -- 192.168.123.105:0/2615112458 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe7b804f2a0 con 0x7fe7b8117660 2026-03-09T15:04:01.949 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.949+0000 7fe79ffff700 1 -- 192.168.123.105:0/2615112458 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fe7b0004600 con 0x7fe7b8117660 2026-03-09T15:04:01.949 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.949+0000 7fe79ffff700 1 -- 192.168.123.105:0/2615112458 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe7b001b430 con 0x7fe7b8117660 2026-03-09T15:04:01.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.951+0000 7fe79ffff700 1 -- 192.168.123.105:0/2615112458 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fe7b0019040 con 0x7fe7b8117660 2026-03-09T15:04:01.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.952+0000 
7fe79ffff700 1 --2- 192.168.123.105:0/2615112458 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe7a0077a00 0x7fe7a0079eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:01.953 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.953+0000 7fe7b6ffd700 1 --2- 192.168.123.105:0/2615112458 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe7a0077a00 0x7fe7a0079eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:01.953 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.953+0000 7fe79ffff700 1 -- 192.168.123.105:0/2615112458 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 6136+0+0 (secure 0 0 0) 0x7fe7b0067960 con 0x7fe7b8117660 2026-03-09T15:04:01.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.954+0000 7fe79ffff700 1 -- 192.168.123.105:0/2615112458 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe7b0063380 con 0x7fe7b8117660 2026-03-09T15:04:01.976 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:01.976+0000 7fe7b6ffd700 1 --2- 192.168.123.105:0/2615112458 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe7a0077a00 0x7fe7a0079eb0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fe7ac00afd0 tx=0x7fe7ac00c040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:02.122 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.122+0000 7fe7bd310700 1 -- 192.168.123.105:0/2615112458 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fe7b8113910 con 
0x7fe7a0077a00 2026-03-09T15:04:02.123 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.124+0000 7fe79ffff700 1 -- 192.168.123.105:0/2615112458 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+383 (secure 0 0 0) 0x7fe7b8113910 con 0x7fe7a0077a00 2026-03-09T15:04:02.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.127+0000 7fe79dffb700 1 -- 192.168.123.105:0/2615112458 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe7a0077a00 msgr2=0x7fe7a0079eb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:02.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.127+0000 7fe79dffb700 1 --2- 192.168.123.105:0/2615112458 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe7a0077a00 0x7fe7a0079eb0 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fe7ac00afd0 tx=0x7fe7ac00c040 comp rx=0 tx=0).stop 2026-03-09T15:04:02.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.127+0000 7fe79dffb700 1 -- 192.168.123.105:0/2615112458 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe7b8117660 msgr2=0x7fe7b8117ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:02.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.127+0000 7fe79dffb700 1 --2- 192.168.123.105:0/2615112458 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe7b8117660 0x7fe7b8117ad0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fe7b0003c40 tx=0x7fe7b0003d20 comp rx=0 tx=0).stop 2026-03-09T15:04:02.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.127+0000 7fe79dffb700 1 -- 192.168.123.105:0/2615112458 shutdown_connections 2026-03-09T15:04:02.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.127+0000 7fe79dffb700 1 --2- 192.168.123.105:0/2615112458 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe7a0077a00 0x7fe7a0079eb0 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:02.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.127+0000 7fe79dffb700 1 --2- 192.168.123.105:0/2615112458 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe7b8107de0 0x7fe7b8120710 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:02.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.127+0000 7fe79dffb700 1 --2- 192.168.123.105:0/2615112458 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe7b8117660 0x7fe7b8117ad0 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:02.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.127+0000 7fe79dffb700 1 -- 192.168.123.105:0/2615112458 >> 192.168.123.105:0/2615112458 conn(0x7fe7b806cb20 msgr2=0x7fe7b810b600 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:02.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.127+0000 7fe79dffb700 1 -- 192.168.123.105:0/2615112458 shutdown_connections 2026-03-09T15:04:02.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.128+0000 7fe79dffb700 1 -- 192.168.123.105:0/2615112458 wait complete. 
2026-03-09T15:04:02.143 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.287+0000 7fa032972700 1 -- 192.168.123.105:0/3379115172 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa02c10c8b0 msgr2=0x7fa02c10cc80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.287+0000 7fa032972700 1 --2- 192.168.123.105:0/3379115172 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa02c10c8b0 0x7fa02c10cc80 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fa01c009ab0 tx=0x7fa01c009dc0 comp rx=0 tx=0).stop 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.287+0000 7fa032972700 1 -- 192.168.123.105:0/3379115172 shutdown_connections 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.287+0000 7fa032972700 1 --2- 192.168.123.105:0/3379115172 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa02c071e40 0x7fa02c0722b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.287+0000 7fa032972700 1 --2- 192.168.123.105:0/3379115172 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa02c10c8b0 0x7fa02c10cc80 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.287+0000 7fa032972700 1 -- 192.168.123.105:0/3379115172 >> 192.168.123.105:0/3379115172 conn(0x7fa02c06c6c0 msgr2=0x7fa02c06cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.288+0000 7fa032972700 1 -- 192.168.123.105:0/3379115172 shutdown_connections 2026-03-09T15:04:02.290 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.288+0000 7fa032972700 1 -- 192.168.123.105:0/3379115172 wait complete. 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.288+0000 7fa032972700 1 Processor -- start 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.288+0000 7fa032972700 1 -- start start 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.288+0000 7fa032972700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa02c071e40 0x7fa02c07ceb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.288+0000 7fa032972700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa02c07d3f0 0x7fa02c07d860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.288+0000 7fa032972700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa02c081a30 con 0x7fa02c07d3f0 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.288+0000 7fa032972700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa02c081ba0 con 0x7fa02c071e40 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.288+0000 7fa02b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa02c07d3f0 0x7fa02c07d860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.288+0000 7fa02b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa02c07d3f0 0x7fa02c07d860 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:41502/0 (socket says 192.168.123.105:41502) 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.288+0000 7fa02b7fe700 1 -- 192.168.123.105:0/3190484928 learned_addr learned my addr 192.168.123.105:0/3190484928 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.289+0000 7fa02bfff700 1 --2- 192.168.123.105:0/3190484928 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa02c071e40 0x7fa02c07ceb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.289+0000 7fa02b7fe700 1 -- 192.168.123.105:0/3190484928 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa02c071e40 msgr2=0x7fa02c07ceb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.289+0000 7fa02b7fe700 1 --2- 192.168.123.105:0/3190484928 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa02c071e40 0x7fa02c07ceb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.289+0000 7fa02b7fe700 1 -- 192.168.123.105:0/3190484928 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa01c009710 con 0x7fa02c07d3f0 2026-03-09T15:04:02.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.289+0000 7fa02b7fe700 1 --2- 192.168.123.105:0/3190484928 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa02c07d3f0 0x7fa02c07d860 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto 
rx=0x7fa02400e3f0 tx=0x7fa02400e7b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:02.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.324+0000 7fa0297fa700 1 -- 192.168.123.105:0/3190484928 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa0240090d0 con 0x7fa02c07d3f0 2026-03-09T15:04:02.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.324+0000 7fa032972700 1 -- 192.168.123.105:0/3190484928 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa02c081e20 con 0x7fa02c07d3f0 2026-03-09T15:04:02.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.324+0000 7fa032972700 1 -- 192.168.123.105:0/3190484928 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa02c082370 con 0x7fa02c07d3f0 2026-03-09T15:04:02.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.325+0000 7fa0297fa700 1 -- 192.168.123.105:0/3190484928 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fa02400f040 con 0x7fa02c07d3f0 2026-03-09T15:04:02.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.325+0000 7fa0297fa700 1 -- 192.168.123.105:0/3190484928 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa024014700 con 0x7fa02c07d3f0 2026-03-09T15:04:02.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.326+0000 7fa0297fa700 1 -- 192.168.123.105:0/3190484928 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fa024009230 con 0x7fa02c07d3f0 2026-03-09T15:04:02.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.327+0000 7fa0297fa700 1 --2- 192.168.123.105:0/3190484928 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa014079d10 0x7fa01407c1c0 unknown :-1 
s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:02.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.327+0000 7fa0297fa700 1 -- 192.168.123.105:0/3190484928 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 6136+0+0 (secure 0 0 0) 0x7fa024099d60 con 0x7fa02c07d3f0 2026-03-09T15:04:02.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.327+0000 7fa032972700 1 -- 192.168.123.105:0/3190484928 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa018005320 con 0x7fa02c07d3f0 2026-03-09T15:04:02.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.330+0000 7fa02bfff700 1 --2- 192.168.123.105:0/3190484928 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa014079d10 0x7fa01407c1c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:02.337 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.330+0000 7fa0297fa700 1 -- 192.168.123.105:0/3190484928 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fa0240623f0 con 0x7fa02c07d3f0 2026-03-09T15:04:02.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.362+0000 7fa02bfff700 1 --2- 192.168.123.105:0/3190484928 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa014079d10 0x7fa01407c1c0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fa01c000c00 tx=0x7fa01c011040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: Reconfiguring osd.4 (monmap changed)... 
2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: Reconfiguring daemon osd.4 on vm09 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: Reconfiguring osd.5 (monmap changed)... 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: Reconfiguring daemon osd.5 on vm09 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: Reconfiguring mds.cephfs.vm09.ohmitn (monmap changed)... 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.ohmitn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: Reconfiguring daemon mds.cephfs.vm09.ohmitn on vm09 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: pgmap v13: 65 pgs: 65 active+clean; 268 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 782 KiB/s rd, 805 KiB/s wr, 125 op/s 2026-03-09T15:04:02.555 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: Reconfiguring mds.cephfs.vm09.jrhwzz (monmap changed)... 2026-03-09T15:04:02.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.jrhwzz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: Reconfiguring daemon mds.cephfs.vm09.jrhwzz on vm09 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='client.34126 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: Upgrade: Setting container_image for all mon 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.vm05"}]: dispatch 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.vm05"}]': finished 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.vm09"}]: dispatch 2026-03-09T15:04:02.556 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.vm09"}]': finished 2026-03-09T15:04:02.564 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.563+0000 7fa032972700 1 -- 192.168.123.105:0/3190484928 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fa018000bf0 con 0x7fa014079d10 2026-03-09T15:04:02.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.569+0000 7fa0297fa700 1 -- 192.168.123.105:0/3190484928 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+385 (secure 0 0 0) 0x7fa018000bf0 con 0x7fa014079d10 2026-03-09T15:04:02.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.572+0000 7fa032972700 1 -- 192.168.123.105:0/3190484928 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa014079d10 msgr2=0x7fa01407c1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:02.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.572+0000 7fa032972700 1 --2- 192.168.123.105:0/3190484928 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa014079d10 0x7fa01407c1c0 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fa01c000c00 tx=0x7fa01c011040 comp rx=0 tx=0).stop 2026-03-09T15:04:02.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.572+0000 7fa032972700 1 -- 192.168.123.105:0/3190484928 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa02c07d3f0 msgr2=0x7fa02c07d860 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:02.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.572+0000 7fa032972700 1 --2- 192.168.123.105:0/3190484928 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa02c07d3f0 0x7fa02c07d860 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7fa02400e3f0 tx=0x7fa02400e7b0 comp rx=0 tx=0).stop 2026-03-09T15:04:02.574 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.572+0000 7fa032972700 1 -- 192.168.123.105:0/3190484928 shutdown_connections 2026-03-09T15:04:02.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.572+0000 7fa032972700 1 --2- 192.168.123.105:0/3190484928 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa014079d10 0x7fa01407c1c0 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:02.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.572+0000 7fa032972700 1 --2- 192.168.123.105:0/3190484928 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa02c071e40 0x7fa02c07ceb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:02.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.572+0000 7fa032972700 1 --2- 192.168.123.105:0/3190484928 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa02c07d3f0 0x7fa02c07d860 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:02.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.572+0000 7fa032972700 1 -- 192.168.123.105:0/3190484928 >> 192.168.123.105:0/3190484928 conn(0x7fa02c06c6c0 msgr2=0x7fa02c06ffd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:02.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.572+0000 7fa032972700 1 -- 192.168.123.105:0/3190484928 shutdown_connections 2026-03-09T15:04:02.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.572+0000 7fa032972700 1 -- 192.168.123.105:0/3190484928 wait complete. 2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: Reconfiguring osd.4 (monmap changed)... 
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: Reconfiguring daemon osd.4 on vm09
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: Reconfiguring osd.5 (monmap changed)...
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: Reconfiguring daemon osd.5 on vm09
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: Reconfiguring mds.cephfs.vm09.ohmitn (monmap changed)...
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.ohmitn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: Reconfiguring daemon mds.cephfs.vm09.ohmitn on vm09
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: pgmap v13: 65 pgs: 65 active+clean; 268 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 782 KiB/s rd, 805 KiB/s wr, 125 op/s
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: Reconfiguring mds.cephfs.vm09.jrhwzz (monmap changed)...
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.jrhwzz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: Reconfiguring daemon mds.cephfs.vm09.jrhwzz on vm09
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:04:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='client.34126 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:04:02.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:04:02.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: Upgrade: Setting container_image for all mon
2026-03-09T15:04:02.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:02.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.vm05"}]: dispatch
2026-03-09T15:04:02.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.vm05"}]': finished
2026-03-09T15:04:02.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.vm09"}]: dispatch
2026-03-09T15:04:02.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.vm09"}]': finished
2026-03-09T15:04:02.679 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.678+0000 7fdb1abcd700 1 -- 192.168.123.105:0/585272305 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb14071e40 msgr2=0x7fdb140722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:02.679 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.678+0000 7fdb1abcd700 1 --2- 192.168.123.105:0/585272305 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb14071e40 0x7fdb140722b0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fdb0c00d3f0 tx=0x7fdb0c00d700 comp rx=0 tx=0).stop
2026-03-09T15:04:02.679 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.678+0000 7fdb1abcd700 1 -- 192.168.123.105:0/585272305 shutdown_connections
2026-03-09T15:04:02.679 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.678+0000 7fdb1abcd700 1 --2- 192.168.123.105:0/585272305 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb14071e40 0x7fdb140722b0 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:02.679 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.678+0000 7fdb1abcd700 1 --2- 192.168.123.105:0/585272305 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdb1410c8b0 0x7fdb1410cc80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:02.679 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.678+0000 7fdb1abcd700 1 -- 192.168.123.105:0/585272305 >> 192.168.123.105:0/585272305 conn(0x7fdb1406c6c0 msgr2=0x7fdb1406cac0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.679+0000 7fdb1abcd700 1 -- 192.168.123.105:0/585272305 shutdown_connections
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.679+0000 7fdb1abcd700 1 -- 192.168.123.105:0/585272305 wait complete.
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.679+0000 7fdb1abcd700 1 Processor -- start
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.679+0000 7fdb1abcd700 1 -- start start
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.679+0000 7fdb1abcd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdb1410c8b0 0x7fdb1407cef0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.679+0000 7fdb1abcd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb1407d430 0x7fdb1407d8a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.679+0000 7fdb1abcd700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdb14081a70 con 0x7fdb1407d430
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.679+0000 7fdb1abcd700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdb14081be0 con 0x7fdb1410c8b0
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.679+0000 7fdb18969700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdb1410c8b0 0x7fdb1407cef0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.680+0000 7fdb18969700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdb1410c8b0 0x7fdb1407cef0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45836/0 (socket says 192.168.123.105:45836)
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.680+0000 7fdb18969700 1 -- 192.168.123.105:0/2175143767 learned_addr learned my addr 192.168.123.105:0/2175143767 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.680+0000 7fdb18969700 1 -- 192.168.123.105:0/2175143767 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb1407d430 msgr2=0x7fdb1407d8a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.680+0000 7fdb18969700 1 --2- 192.168.123.105:0/2175143767 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb1407d430 0x7fdb1407d8a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.680+0000 7fdb18969700 1 -- 192.168.123.105:0/2175143767 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fdb0c007ed0 con 0x7fdb1410c8b0
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.680+0000 7fdb18969700 1 --2- 192.168.123.105:0/2175143767 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdb1410c8b0 0x7fdb1407cef0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fdb0400d8d0 tx=0x7fdb0400dc90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:04:02.680 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.680+0000 7fdb11ffb700 1 -- 192.168.123.105:0/2175143767 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdb04009940 con 0x7fdb1410c8b0
2026-03-09T15:04:02.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.680+0000 7fdb1abcd700 1 -- 192.168.123.105:0/2175143767 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fdb14081ec0 con 0x7fdb1410c8b0
2026-03-09T15:04:02.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.680+0000 7fdb1abcd700 1 -- 192.168.123.105:0/2175143767 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fdb14082410 con 0x7fdb1410c8b0
2026-03-09T15:04:02.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.681+0000 7fdb11ffb700 1 -- 192.168.123.105:0/2175143767 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fdb04010460 con 0x7fdb1410c8b0
2026-03-09T15:04:02.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.681+0000 7fdb11ffb700 1 -- 192.168.123.105:0/2175143767 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdb0400f5d0 con 0x7fdb1410c8b0
2026-03-09T15:04:02.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.684+0000 7fdb11ffb700 1 -- 192.168.123.105:0/2175143767 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fdb0400f7e0 con 0x7fdb1410c8b0
2026-03-09T15:04:02.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.687+0000 7fdb11ffb700 1 --2- 192.168.123.105:0/2175143767 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fdafc079c60 0x7fdafc07c110 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:04:02.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.687+0000 7fdb13fff700 1 --2- 192.168.123.105:0/2175143767 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fdafc079c60 0x7fdafc07c110 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:04:02.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.687+0000 7fdb11ffb700 1 -- 192.168.123.105:0/2175143767 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 6136+0+0 (secure 0 0 0) 0x7fdb0409ac70 con 0x7fdb1410c8b0
2026-03-09T15:04:02.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.690+0000 7fdb1abcd700 1 -- 192.168.123.105:0/2175143767 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fdb00005320 con 0x7fdb1410c8b0
2026-03-09T15:04:02.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.690+0000 7fdb13fff700 1 --2- 192.168.123.105:0/2175143767 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fdafc079c60 0x7fdafc07c110 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7fdb0c00db80 tx=0x7fdb0c006040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:04:02.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.693+0000 7fdb11ffb700 1 -- 192.168.123.105:0/2175143767 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fdb04062bb0 con 0x7fdb1410c8b0
2026-03-09T15:04:02.935 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.933+0000 7fdb1abcd700 1 -- 192.168.123.105:0/2175143767 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fdb00000bf0 con 0x7fdafc079c60
2026-03-09T15:04:02.944 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T15:04:02.944 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (69s) 18s ago 8m 16.6M - 0.25.0 c8568f914cd2 7635cece310c
2026-03-09T15:04:02.944 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (8m) 18s ago 8m 8552k - 18.2.0 dc2bc1663786 d3853bf87871
2026-03-09T15:04:02.944 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (7m) 10s ago 7m 11.2M - 18.2.0 dc2bc1663786 e86718d7b18a
2026-03-09T15:04:02.944 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (8m) 18s ago 8m 7411k - 18.2.0 dc2bc1663786 1c577d7a0de0
2026-03-09T15:04:02.944 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (7m) 10s ago 7m 7402k - 18.2.0 dc2bc1663786 9e4961442551
2026-03-09T15:04:02.944 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (53s) 18s ago 7m 79.2M - 10.4.0 c8b91775d855 eb6431f63d88
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (5m) 18s ago 5m 219M - 18.2.0 dc2bc1663786 ea3dca51957f
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (5m) 18s ago 5m 17.1M - 18.2.0 dc2bc1663786 08b2826cd233
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (5m) 10s ago 5m 16.9M - 18.2.0 dc2bc1663786 6c77fb591d5a
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (5m) 10s ago 5m 157M - 18.2.0 dc2bc1663786 b5ad1c71089a
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (2m) 18s ago 8m 590M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (102s) 10s ago 7m 494M - 19.2.3-678-ge911bdeb 654f31e6858e acf5a6f3f804
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (29s) 18s ago 8m 46.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1e11655f7d87
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (12s) 10s ago 7m 37.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e d1f0309f4d58
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (93s) 18s ago 8m 9.83M - 1.7.0 72c9c2088986 888d071c50d9
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (89s) 10s ago 7m 9445k - 1.7.0 72c9c2088986 22c96a576a60
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (6m) 18s ago 6m 334M 4096M 18.2.0 dc2bc1663786 50f3ca995318
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (6m) 18s ago 6m 318M 4096M 18.2.0 dc2bc1663786 23e35bdafe50
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (6m) 18s ago 6m 281M 4096M 18.2.0 dc2bc1663786 75097dc12979
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (6m) 10s ago 6m 423M 4096M 18.2.0 dc2bc1663786 e79644a0564f
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (6m) 10s ago 6m 371M 4096M 18.2.0 dc2bc1663786 4239752204df
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (6m) 10s ago 6m 331M 4096M 18.2.0 dc2bc1663786 85fde149396e
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (75s) 18s ago 7m 49.3M - 2.51.0 1d3b7f56885b e6f470b0ba11
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.942+0000 7fdb11ffb700 1 -- 192.168.123.105:0/2175143767 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7fdb00000bf0 con 0x7fdafc079c60
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.944+0000 7fdafb7fe700 1 -- 192.168.123.105:0/2175143767 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fdafc079c60 msgr2=0x7fdafc07c110 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.944+0000 7fdafb7fe700 1 --2- 192.168.123.105:0/2175143767 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fdafc079c60 0x7fdafc07c110 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7fdb0c00db80 tx=0x7fdb0c006040 comp rx=0 tx=0).stop
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.944+0000 7fdafb7fe700 1 -- 192.168.123.105:0/2175143767 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdb1410c8b0 msgr2=0x7fdb1407cef0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.944+0000 7fdafb7fe700 1 --2- 192.168.123.105:0/2175143767 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdb1410c8b0 0x7fdb1407cef0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fdb0400d8d0 tx=0x7fdb0400dc90 comp rx=0 tx=0).stop
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.944+0000 7fdafb7fe700 1 -- 192.168.123.105:0/2175143767 shutdown_connections
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.944+0000 7fdafb7fe700 1 --2- 192.168.123.105:0/2175143767 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fdafc079c60 0x7fdafc07c110 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.944+0000 7fdafb7fe700 1 --2- 192.168.123.105:0/2175143767 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdb1410c8b0 0x7fdb1407cef0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.944+0000 7fdafb7fe700 1 --2- 192.168.123.105:0/2175143767 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdb1407d430 0x7fdb1407d8a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:02.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.944+0000 7fdafb7fe700 1 -- 192.168.123.105:0/2175143767 >> 192.168.123.105:0/2175143767 conn(0x7fdb1406c6c0 msgr2=0x7fdb1406fff0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:04:02.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.944+0000 7fdafb7fe700 1 -- 192.168.123.105:0/2175143767 shutdown_connections
2026-03-09T15:04:02.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:02.945+0000 7fdafb7fe700 1 -- 192.168.123.105:0/2175143767 wait complete.
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.093+0000 7f9840841700 1 -- 192.168.123.105:0/696444430 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f983810c8b0 msgr2=0x7f983810cc80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.093+0000 7f9840841700 1 --2- 192.168.123.105:0/696444430 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f983810c8b0 0x7f983810cc80 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f983400d3f0 tx=0x7f983400d700 comp rx=0 tx=0).stop
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.093+0000 7f9840841700 1 -- 192.168.123.105:0/696444430 shutdown_connections
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.093+0000 7f9840841700 1 --2- 192.168.123.105:0/696444430 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9838071e40 0x7f98380722b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.093+0000 7f9840841700 1 --2- 192.168.123.105:0/696444430 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f983810c8b0 0x7f983810cc80 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.093+0000 7f9840841700 1 -- 192.168.123.105:0/696444430 >> 192.168.123.105:0/696444430 conn(0x7f983806c6c0 msgr2=0x7f983806cac0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.093+0000 7f9840841700 1 -- 192.168.123.105:0/696444430 shutdown_connections
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.093+0000 7f9840841700 1 -- 192.168.123.105:0/696444430 wait complete.
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.096+0000 7f9840841700 1 Processor -- start
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.096+0000 7f9840841700 1 -- start start
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.096+0000 7f9840841700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9838071e40 0x7f983807d170 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.096+0000 7f9840841700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f983807d6b0 0x7f9838081b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.096+0000 7f9840841700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f983807db20 con 0x7f9838071e40
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.096+0000 7f9840841700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f983807dc60 con 0x7f983807d6b0
2026-03-09T15:04:03.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.097+0000 7f983dddc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f983807d6b0 0x7f9838081b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:04:03.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.097+0000 7f983dddc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f983807d6b0 0x7f9838081b20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45858/0 (socket says 192.168.123.105:45858)
2026-03-09T15:04:03.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.097+0000 7f983dddc700 1 -- 192.168.123.105:0/210919579 learned_addr learned my addr 192.168.123.105:0/210919579 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T15:04:03.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.097+0000 7f983e5dd700 1 --2- 192.168.123.105:0/210919579 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9838071e40 0x7f983807d170 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:04:03.099 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.099+0000 7f983dddc700 1 -- 192.168.123.105:0/210919579 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9838071e40 msgr2=0x7f983807d170 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:03.099 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.099+0000 7f983dddc700 1 --2- 192.168.123.105:0/210919579 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9838071e40 0x7f983807d170 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:03.099 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.099+0000 7f983dddc700 1 -- 192.168.123.105:0/210919579 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9834007ed0 con 0x7f983807d6b0
2026-03-09T15:04:03.099 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.099+0000 7f983dddc700 1 --2- 192.168.123.105:0/210919579 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f983807d6b0 0x7f9838081b20 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f982c00eb10 tx=0x7f982c00ee20 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:04:03.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.099+0000 7f982b7fe700 1 -- 192.168.123.105:0/210919579 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f982c00cc40 con 0x7f983807d6b0
2026-03-09T15:04:03.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.099+0000 7f9840841700 1 -- 192.168.123.105:0/210919579 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9838082060 con 0x7f983807d6b0
2026-03-09T15:04:03.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.099+0000 7f9840841700 1 -- 192.168.123.105:0/210919579 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f98380825b0 con 0x7f983807d6b0
2026-03-09T15:04:03.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.100+0000 7f982b7fe700 1 -- 192.168.123.105:0/210919579 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f982c00cda0 con 0x7f983807d6b0
2026-03-09T15:04:03.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.100+0000 7f982b7fe700 1 -- 192.168.123.105:0/210919579 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f982c018810 con 0x7f983807d6b0
2026-03-09T15:04:03.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.101+0000 7f982b7fe700 1 -- 192.168.123.105:0/210919579 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f982c018aa0 con 0x7f983807d6b0
2026-03-09T15:04:03.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.101+0000 7f982b7fe700 1 --2- 192.168.123.105:0/210919579 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9824077a50 0x7f9824079f00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:04:03.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.102+0000 7f982b7fe700 1 -- 192.168.123.105:0/210919579 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f982c014070 con 0x7f983807d6b0
2026-03-09T15:04:03.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.102+0000 7f98297fa700 1 -- 192.168.123.105:0/210919579 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f981c005320 con 0x7f983807d6b0
2026-03-09T15:04:03.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.102+0000 7f983e5dd700 1 --2- 192.168.123.105:0/210919579 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9824077a50 0x7f9824079f00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:04:03.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.103+0000 7f983e5dd700 1 --2- 192.168.123.105:0/210919579 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9824077a50 0x7f9824079f00 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f983400dad0 tx=0x7f983401b040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:04:03.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.107+0000 7f982b7fe700 1 -- 192.168.123.105:0/210919579 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f982c063d00 con 0x7f983807d6b0
2026-03-09T15:04:03.484 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.483+0000 7f98297fa700 1 -- 192.168.123.105:0/210919579 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f981c005cc0 con 0x7f983807d6b0
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 6
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: "mds": {
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 10,
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-09T15:04:03.486 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.486+0000 7f982b7fe700 1 -- 192.168.123.105:0/210919579 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+694 (secure 0 0 0) 0x7f982c063450 con 0x7f983807d6b0
2026-03-09T15:04:03.489 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.489+0000 7f9840841700 1 -- 192.168.123.105:0/210919579 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9824077a50 msgr2=0x7f9824079f00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:03.489 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.489+0000 7f9840841700 1 --2- 192.168.123.105:0/210919579 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9824077a50 0x7f9824079f00 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f983400dad0 tx=0x7f983401b040 comp rx=0 tx=0).stop
2026-03-09T15:04:03.489 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.489+0000 7f9840841700 1 -- 192.168.123.105:0/210919579 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f983807d6b0 msgr2=0x7f9838081b20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:03.489 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.489+0000 7f9840841700 1 --2- 192.168.123.105:0/210919579 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f983807d6b0 0x7f9838081b20 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f982c00eb10 tx=0x7f982c00ee20 comp rx=0 tx=0).stop
2026-03-09T15:04:03.489 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.489+0000 7f9840841700 1 -- 192.168.123.105:0/210919579 shutdown_connections
2026-03-09T15:04:03.489 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.489+0000 7f9840841700 1 --2- 192.168.123.105:0/210919579 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9824077a50 0x7f9824079f00 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:03.489 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.489+0000 7f9840841700 1 --2- 192.168.123.105:0/210919579 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9838071e40 0x7f983807d170 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:03.489 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.489+0000 7f9840841700 1 --2- 192.168.123.105:0/210919579 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f983807d6b0 0x7f9838081b20 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:03.489 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.489+0000 7f9840841700 1 -- 192.168.123.105:0/210919579 >> 192.168.123.105:0/210919579 conn(0x7f983806c6c0 msgr2=0x7f9838070920 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:04:03.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.492+0000 7f9840841700 1 -- 192.168.123.105:0/210919579 shutdown_connections
2026-03-09T15:04:03.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.492+0000 7f9840841700 1 -- 192.168.123.105:0/210919579 wait complete.
2026-03-09T15:04:03.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.654+0000 7f6a0deea700 1 -- 192.168.123.105:0/237068202 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6a08071e40 msgr2=0x7f6a080722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:03.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.654+0000 7f6a0deea700 1 --2- 192.168.123.105:0/237068202 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6a08071e40 0x7f6a080722b0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f6a00009230 tx=0x7f6a00009260 comp rx=0 tx=0).stop 2026-03-09T15:04:03.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.654+0000 7f6a0deea700 1 -- 192.168.123.105:0/237068202 shutdown_connections 2026-03-09T15:04:03.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.654+0000 7f6a0deea700 1 --2- 192.168.123.105:0/237068202 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6a08071e40 0x7f6a080722b0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:03.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.654+0000 7f6a0deea700 1 --2- 192.168.123.105:0/237068202 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6a0810c8b0 0x7f6a0810cc80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:03.654 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.654+0000 7f6a0deea700 1 -- 192.168.123.105:0/237068202 >> 192.168.123.105:0/237068202 conn(0x7f6a0806c6c0 msgr2=0x7f6a0806cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:03.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.654+0000 7f6a0deea700 1 -- 192.168.123.105:0/237068202 shutdown_connections 2026-03-09T15:04:03.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.654+0000 7f6a0deea700 1 -- 192.168.123.105:0/237068202 wait 
complete. 2026-03-09T15:04:03.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.654+0000 7f6a0deea700 1 Processor -- start 2026-03-09T15:04:03.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.654+0000 7f6a0deea700 1 -- start start 2026-03-09T15:04:03.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.655+0000 7f6a0deea700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6a0810c8b0 0x7f6a0807ce30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:03.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.655+0000 7f6a0deea700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6a0807d370 0x7f6a0807d7e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:03.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.655+0000 7f6a0deea700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6a080819b0 con 0x7f6a0810c8b0 2026-03-09T15:04:03.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.655+0000 7f6a0deea700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6a08081b20 con 0x7f6a0807d370 2026-03-09T15:04:03.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.655+0000 7f6a06ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6a0807d370 0x7f6a0807d7e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:03.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.655+0000 7f6a06ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6a0807d370 0x7f6a0807d7e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I 
am v2:192.168.123.105:45866/0 (socket says 192.168.123.105:45866) 2026-03-09T15:04:03.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.655+0000 7f6a06ffd700 1 -- 192.168.123.105:0/3384289733 learned_addr learned my addr 192.168.123.105:0/3384289733 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:04:03.658 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.658+0000 7f6a077fe700 1 --2- 192.168.123.105:0/3384289733 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6a0810c8b0 0x7f6a0807ce30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:03.658 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.658+0000 7f6a06ffd700 1 -- 192.168.123.105:0/3384289733 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6a0810c8b0 msgr2=0x7f6a0807ce30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:03.658 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.658+0000 7f6a06ffd700 1 --2- 192.168.123.105:0/3384289733 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6a0810c8b0 0x7f6a0807ce30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:03.658 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.658+0000 7f6a06ffd700 1 -- 192.168.123.105:0/3384289733 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6a00008ee0 con 0x7f6a0807d370 2026-03-09T15:04:03.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.660+0000 7f6a06ffd700 1 --2- 192.168.123.105:0/3384289733 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6a0807d370 0x7f6a0807d7e0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f6a00004740 tx=0x7f6a00004820 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:04:03.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.660+0000 7f6a04ff9700 1 -- 192.168.123.105:0/3384289733 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6a0001d070 con 0x7f6a0807d370 2026-03-09T15:04:03.661 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.660+0000 7f6a0deea700 1 -- 192.168.123.105:0/3384289733 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6a08081da0 con 0x7f6a0807d370 2026-03-09T15:04:03.661 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.660+0000 7f6a0deea700 1 -- 192.168.123.105:0/3384289733 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6a08082290 con 0x7f6a0807d370 2026-03-09T15:04:03.661 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.661+0000 7f6a04ff9700 1 -- 192.168.123.105:0/3384289733 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f6a0000ece0 con 0x7f6a0807d370 2026-03-09T15:04:03.661 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.661+0000 7f6a04ff9700 1 -- 192.168.123.105:0/3384289733 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6a00016b40 con 0x7f6a0807d370 2026-03-09T15:04:03.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.662+0000 7f6a04ff9700 1 -- 192.168.123.105:0/3384289733 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f6a00016ca0 con 0x7f6a0807d370 2026-03-09T15:04:03.663 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.663+0000 7f6a04ff9700 1 --2- 192.168.123.105:0/3384289733 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f69f0077a40 0x7f69f0079ef0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:03.663 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.663+0000 7f6a077fe700 1 --2- 192.168.123.105:0/3384289733 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f69f0077a40 0x7f69f0079ef0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:03.663 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.663+0000 7f6a04ff9700 1 -- 192.168.123.105:0/3384289733 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f6a00012070 con 0x7f6a0807d370 2026-03-09T15:04:03.663 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.664+0000 7f6a0deea700 1 -- 192.168.123.105:0/3384289733 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f69f4005320 con 0x7f6a0807d370 2026-03-09T15:04:03.665 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.665+0000 7f6a077fe700 1 --2- 192.168.123.105:0/3384289733 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f69f0077a40 0x7f69f0079ef0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f69f800b3c0 tx=0x7f69f800d040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:03.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.667+0000 7f6a04ff9700 1 -- 192.168.123.105:0/3384289733 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f6a00064bc0 con 0x7f6a0807d370 2026-03-09T15:04:03.842 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.842+0000 7f6a0deea700 1 -- 192.168.123.105:0/3384289733 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f69f4005cc0 con 0x7f6a0807d370 2026-03-09T15:04:03.843 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.842+0000 7f6a04ff9700 1 -- 192.168.123.105:0/3384289733 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1944 (secure 0 0 0) 0x7f6a00026020 con 0x7f6a0807d370 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:e11 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:legacy client fscid: 1 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:Filesystem 'cephfs' (1) 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:fs_name cephfs 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:epoch 9 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:created 2026-03-09T14:58:23.182447+0000 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:modified 2026-03-09T14:58:30.215642+0000 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:tableserver 0 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:root 0 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:session_timeout 60 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:session_autoclose 300 
2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:max_file_size 1099511627776 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:max_xattr_size 65536 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:required_client_features {} 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:last_failure 0 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:last_failure_osd_epoch 0 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:max_mds 1 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:in 0 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:up {0=14502} 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:failed 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:damaged 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:stopped 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:data_pools [3] 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:metadata_pool 2 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:inline_data enabled 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:balancer 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:bal_rank_mask -1 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:standby_count_wanted 1 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:qdb_cluster leader: 0 members: 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.nrocqt{0:14502} state up:active seq 3 join_fscid=1 addr 
[v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.ohmitn{0:14510} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:04:03.844 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:04:03.845 INFO:teuthology.orchestra.run.vm05.stdout:Standby daemons: 2026-03-09T15:04:03.845 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:04:03.845 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.rrcyql{-1:14518} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:04:03.845 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.jrhwzz{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:04:03.849 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.849+0000 7f69ee7fc700 1 -- 192.168.123.105:0/3384289733 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f69f0077a40 msgr2=0x7f69f0079ef0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:03.849 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.849+0000 7f69ee7fc700 1 --2- 192.168.123.105:0/3384289733 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f69f0077a40 0x7f69f0079ef0 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f69f800b3c0 tx=0x7f69f800d040 comp rx=0 tx=0).stop 2026-03-09T15:04:03.849 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.849+0000 7f69ee7fc700 1 -- 192.168.123.105:0/3384289733 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 
conn(0x7f6a0807d370 msgr2=0x7f6a0807d7e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:03.849 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.849+0000 7f69ee7fc700 1 --2- 192.168.123.105:0/3384289733 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6a0807d370 0x7f6a0807d7e0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f6a00004740 tx=0x7f6a00004820 comp rx=0 tx=0).stop 2026-03-09T15:04:03.850 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.850+0000 7f69ee7fc700 1 -- 192.168.123.105:0/3384289733 shutdown_connections 2026-03-09T15:04:03.850 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.850+0000 7f69ee7fc700 1 --2- 192.168.123.105:0/3384289733 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f69f0077a40 0x7f69f0079ef0 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:03.850 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.850+0000 7f69ee7fc700 1 --2- 192.168.123.105:0/3384289733 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6a0810c8b0 0x7f6a0807ce30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:03.850 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.850+0000 7f69ee7fc700 1 --2- 192.168.123.105:0/3384289733 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6a0807d370 0x7f6a0807d7e0 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:03.850 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.850+0000 7f69ee7fc700 1 -- 192.168.123.105:0/3384289733 >> 192.168.123.105:0/3384289733 conn(0x7f6a0806c6c0 msgr2=0x7f6a08070840 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:03.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.850+0000 7f69ee7fc700 1 -- 192.168.123.105:0/3384289733 shutdown_connections 
2026-03-09T15:04:03.851 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.851+0000 7f69ee7fc700 1 -- 192.168.123.105:0/3384289733 wait complete. 2026-03-09T15:04:03.851 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 11 2026-03-09T15:04:03.980 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.979+0000 7f9241e9e700 1 -- 192.168.123.105:0/1285197292 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f923c071e40 msgr2=0x7f923c0722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:03.980 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.979+0000 7f9241e9e700 1 --2- 192.168.123.105:0/1285197292 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f923c071e40 0x7f923c0722b0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f923400b3a0 tx=0x7f923400b6b0 comp rx=0 tx=0).stop 2026-03-09T15:04:03.980 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.979+0000 7f9241e9e700 1 -- 192.168.123.105:0/1285197292 shutdown_connections 2026-03-09T15:04:03.980 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.979+0000 7f9241e9e700 1 --2- 192.168.123.105:0/1285197292 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f923c071e40 0x7f923c0722b0 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:03.980 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.979+0000 7f9241e9e700 1 --2- 192.168.123.105:0/1285197292 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f923c10c8b0 0x7f923c10cc80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:03.980 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.979+0000 7f9241e9e700 1 -- 192.168.123.105:0/1285197292 >> 192.168.123.105:0/1285197292 conn(0x7f923c06c6c0 msgr2=0x7f923c06cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:03.981 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.979+0000 7f9241e9e700 1 -- 192.168.123.105:0/1285197292 shutdown_connections 2026-03-09T15:04:03.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.979+0000 7f9241e9e700 1 -- 192.168.123.105:0/1285197292 wait complete. 2026-03-09T15:04:03.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.980+0000 7f9241e9e700 1 Processor -- start 2026-03-09T15:04:03.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.980+0000 7f9241e9e700 1 -- start start 2026-03-09T15:04:03.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.980+0000 7f9241e9e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f923c10c8b0 0x7f923c07ce70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:03.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.980+0000 7f9241e9e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f923c07d3b0 0x7f923c07d820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:03.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.980+0000 7f9241e9e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f923c083e50 con 0x7f923c10c8b0 2026-03-09T15:04:03.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.980+0000 7f9241e9e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f923c0819f0 con 0x7f923c07d3b0 2026-03-09T15:04:03.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.981+0000 7f923affd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f923c07d3b0 0x7f923c07d820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:03.982 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.981+0000 7f923affd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f923c07d3b0 0x7f923c07d820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45880/0 (socket says 192.168.123.105:45880) 2026-03-09T15:04:03.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.981+0000 7f923affd700 1 -- 192.168.123.105:0/3136976682 learned_addr learned my addr 192.168.123.105:0/3136976682 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:04:03.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.981+0000 7f923b7fe700 1 --2- 192.168.123.105:0/3136976682 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f923c10c8b0 0x7f923c07ce70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:03.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.982+0000 7f923affd700 1 -- 192.168.123.105:0/3136976682 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f923c10c8b0 msgr2=0x7f923c07ce70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:03.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.982+0000 7f923affd700 1 --2- 192.168.123.105:0/3136976682 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f923c10c8b0 0x7f923c07ce70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:03.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.982+0000 7f923affd700 1 -- 192.168.123.105:0/3136976682 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f923400b050 con 0x7f923c07d3b0 2026-03-09T15:04:03.982 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.982+0000 7f923affd700 1 --2- 192.168.123.105:0/3136976682 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f923c07d3b0 0x7f923c07d820 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f9234007b60 tx=0x7f9234009700 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:03.983 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.983+0000 7f9238ff9700 1 -- 192.168.123.105:0/3136976682 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f923400e050 con 0x7f923c07d3b0 2026-03-09T15:04:03.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.983+0000 7f9241e9e700 1 -- 192.168.123.105:0/3136976682 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f923c081c70 con 0x7f923c07d3b0 2026-03-09T15:04:03.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.983+0000 7f9241e9e700 1 -- 192.168.123.105:0/3136976682 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f923c0821c0 con 0x7f923c07d3b0 2026-03-09T15:04:03.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.984+0000 7f9238ff9700 1 -- 192.168.123.105:0/3136976682 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f923401f070 con 0x7f923c07d3b0 2026-03-09T15:04:03.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.984+0000 7f9238ff9700 1 -- 192.168.123.105:0/3136976682 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9234012a10 con 0x7f923c07d3b0 2026-03-09T15:04:03.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.985+0000 7f9241e9e700 1 -- 192.168.123.105:0/3136976682 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 
0x7f923c04f2a0 con 0x7f923c07d3b0 2026-03-09T15:04:03.985 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.985+0000 7f9238ff9700 1 -- 192.168.123.105:0/3136976682 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f9234019070 con 0x7f923c07d3b0 2026-03-09T15:04:03.986 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.986+0000 7f9238ff9700 1 --2- 192.168.123.105:0/3136976682 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9224077b00 0x7f9224079fb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:03.986 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.986+0000 7f923b7fe700 1 --2- 192.168.123.105:0/3136976682 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9224077b00 0x7f9224079fb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:03.987 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.987+0000 7f923b7fe700 1 --2- 192.168.123.105:0/3136976682 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9224077b00 0x7f9224079fb0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f922c013d30 tx=0x7f922c016040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:03.987 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.987+0000 7f9238ff9700 1 -- 192.168.123.105:0/3136976682 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f923409aef0 con 0x7f923c07d3b0 2026-03-09T15:04:03.994 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:03.993+0000 7f9238ff9700 1 -- 192.168.123.105:0/3136976682 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 
72+0+195034 (secure 0 0 0) 0x7f92340645b0 con 0x7f923c07d3b0
2026-03-09T15:04:04.178 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.178+0000 7f9241e9e700 1 -- 192.168.123.105:0/3136976682 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f923c07e440 con 0x7f9224077b00
2026-03-09T15:04:04.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:03 vm05.local ceph-mon[116516]: from='client.34130 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:04:04.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:03 vm05.local ceph-mon[116516]: Upgrade: Updating crash.vm05 (1/2)
2026-03-09T15:04:04.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:03 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:04.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:03 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-09T15:04:04.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:03 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:04:04.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:03 vm05.local ceph-mon[116516]: Deploying daemon crash.vm05 on vm05
2026-03-09T15:04:04.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:03 vm05.local ceph-mon[116516]: from='client.44109 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:04:04.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:03 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/210919579' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:04:04.179 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:03 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3384289733' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stdout:    "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stdout:    "in_progress": true,
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stdout:    "which": "Upgrading all daemon types on all hosts",
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stdout:    "services_complete": [
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stdout:        "mgr",
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stdout:        "mon"
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stdout:    ],
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stdout:    "progress": "4/23 daemons upgraded",
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stdout:    "message": "Currently upgrading crash daemons",
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stdout:    "is_paused": false
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.182+0000 7f9238ff9700 1 -- 192.168.123.105:0/3136976682 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+385 (secure 0 0 0) 0x7f923c07e440 con 0x7f9224077b00
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.184+0000 7f92227fc700 1 -- 192.168.123.105:0/3136976682 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9224077b00 msgr2=0x7f9224079fb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.184+0000 7f92227fc700 1 --2- 192.168.123.105:0/3136976682 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9224077b00 0x7f9224079fb0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f922c013d30 tx=0x7f922c016040 comp rx=0 tx=0).stop
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.184+0000 7f92227fc700 1 -- 192.168.123.105:0/3136976682 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f923c07d3b0 msgr2=0x7f923c07d820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:04.184 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.184+0000 7f92227fc700 1 --2- 192.168.123.105:0/3136976682 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f923c07d3b0 0x7f923c07d820 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f9234007b60 tx=0x7f9234009700 comp rx=0 tx=0).stop
2026-03-09T15:04:04.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.185+0000 7f92227fc700 1 -- 192.168.123.105:0/3136976682 shutdown_connections
2026-03-09T15:04:04.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.185+0000 7f92227fc700 1 --2- 192.168.123.105:0/3136976682 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9224077b00 0x7f9224079fb0 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:04.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.185+0000 7f92227fc700 1 --2- 192.168.123.105:0/3136976682 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f923c10c8b0 0x7f923c07ce70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:04.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.185+0000 7f92227fc700 1 --2- 192.168.123.105:0/3136976682 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f923c07d3b0 0x7f923c07d820 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:04.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.185+0000 7f92227fc700 1 -- 192.168.123.105:0/3136976682 >> 192.168.123.105:0/3136976682 conn(0x7f923c06c6c0 msgr2=0x7f923c06ff50 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:04:04.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.185+0000 7f92227fc700 1 -- 192.168.123.105:0/3136976682 shutdown_connections
2026-03-09T15:04:04.185 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.185+0000 7f92227fc700 1 -- 192.168.123.105:0/3136976682 wait complete.
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 -- 192.168.123.105:0/747587494 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b30071e40 msgr2=0x7f8b300722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 --2- 192.168.123.105:0/747587494 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b30071e40 0x7f8b300722b0 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f8b2800d3e0 tx=0x7f8b2800d6f0 comp rx=0 tx=0).stop
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 -- 192.168.123.105:0/747587494 shutdown_connections
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 --2- 192.168.123.105:0/747587494 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b30071e40 0x7f8b300722b0 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 --2- 192.168.123.105:0/747587494 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b3010c8b0 0x7f8b3010cc80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 -- 192.168.123.105:0/747587494 >> 192.168.123.105:0/747587494 conn(0x7f8b3006c6c0 msgr2=0x7f8b3006cac0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 -- 192.168.123.105:0/747587494 shutdown_connections
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 -- 192.168.123.105:0/747587494 wait complete.
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 Processor -- start
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 -- start start
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b3010c8b0 0x7f8b3007ce00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b3007d340 0x7f8b3007d7b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8b30081980 con 0x7f8b3010c8b0
2026-03-09T15:04:04.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.291+0000 7f8b38108700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8b30081af0 con 0x7f8b3007d340
2026-03-09T15:04:04.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.292+0000 7f8b356a3700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b3007d340 0x7f8b3007d7b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:04:04.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.292+0000 7f8b356a3700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b3007d340 0x7f8b3007d7b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45900/0 (socket says 192.168.123.105:45900)
2026-03-09T15:04:04.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.292+0000 7f8b356a3700 1 -- 192.168.123.105:0/3686787583 learned_addr learned my addr 192.168.123.105:0/3686787583 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T15:04:04.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.292+0000 7f8b356a3700 1 -- 192.168.123.105:0/3686787583 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b3010c8b0 msgr2=0x7f8b3007ce00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:04.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.292+0000 7f8b356a3700 1 --2- 192.168.123.105:0/3686787583 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b3010c8b0 0x7f8b3007ce00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:04.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.292+0000 7f8b356a3700 1 -- 192.168.123.105:0/3686787583 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8b2800d090 con 0x7f8b3007d340
2026-03-09T15:04:04.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.292+0000 7f8b356a3700 1 --2- 192.168.123.105:0/3686787583 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b3007d340 0x7f8b3007d7b0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f8b2800dac0 tx=0x7f8b2800afc0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:04:04.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.293+0000 7f8b26ffd700 1 -- 192.168.123.105:0/3686787583 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8b28010040 con 0x7f8b3007d340
2026-03-09T15:04:04.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.293+0000 7f8b38108700 1 -- 192.168.123.105:0/3686787583 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8b30081d70 con 0x7f8b3007d340
2026-03-09T15:04:04.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.293+0000 7f8b38108700 1 -- 192.168.123.105:0/3686787583 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8b300822c0 con 0x7f8b3007d340
2026-03-09T15:04:04.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.293+0000 7f8b26ffd700 1 -- 192.168.123.105:0/3686787583 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f8b2800b640 con 0x7f8b3007d340
2026-03-09T15:04:04.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.293+0000 7f8b26ffd700 1 -- 192.168.123.105:0/3686787583 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8b2801ec20 con 0x7f8b3007d340
2026-03-09T15:04:04.294 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.295+0000 7f8b26ffd700 1 -- 192.168.123.105:0/3686787583 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f8b28004250 con 0x7f8b3007d340
2026-03-09T15:04:04.295 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.295+0000 7f8b26ffd700 1 --2- 192.168.123.105:0/3686787583 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f8b1c077a40 0x7f8b1c079ef0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:04:04.295 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.295+0000 7f8b26ffd700 1 -- 192.168.123.105:0/3686787583 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(41..41 src has 1..41) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f8b2802c030 con 0x7f8b3007d340
2026-03-09T15:04:04.296 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.297+0000 7f8b35ea4700 1 --2- 192.168.123.105:0/3686787583 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f8b1c077a40 0x7f8b1c079ef0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:04:04.297 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.297+0000 7f8b38108700 1 -- 192.168.123.105:0/3686787583 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8b14005320 con 0x7f8b3007d340
2026-03-09T15:04:04.297 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.298+0000 7f8b35ea4700 1 --2- 192.168.123.105:0/3686787583 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f8b1c077a40 0x7f8b1c079ef0 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f8b2c004510 tx=0x7f8b2c00ae50 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:04:04.303 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.300+0000 7f8b26ffd700 1 -- 192.168.123.105:0/3686787583 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f8b28065b80 con 0x7f8b3007d340
2026-03-09T15:04:04.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:03 vm09.local ceph-mon[98742]: from='client.34130 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:04:04.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:03 vm09.local ceph-mon[98742]: Upgrade: Updating crash.vm05 (1/2)
2026-03-09T15:04:04.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:03 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:04.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:03 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm05", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-09T15:04:04.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:03 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:04:04.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:03 vm09.local ceph-mon[98742]: Deploying daemon crash.vm05 on vm05
2026-03-09T15:04:04.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:03 vm09.local ceph-mon[98742]: from='client.44109 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:04:04.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:03 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/210919579' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:04:04.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:03 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3384289733' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-09T15:04:04.507 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.506+0000 7f8b38108700 1 -- 192.168.123.105:0/3686787583 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f8b14005190 con 0x7f8b3007d340
2026-03-09T15:04:04.510 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.510+0000 7f8b26ffd700 1 -- 192.168.123.105:0/3686787583 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+201 (secure 0 0 0) 0x7f8b280220e0 con 0x7f8b3007d340
2026-03-09T15:04:04.512 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_WARN 1 filesystem with deprecated feature inline_data
2026-03-09T15:04:04.512 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data
2026-03-09T15:04:04.513 INFO:teuthology.orchestra.run.vm05.stdout:    fs cephfs has deprecated feature inline_data enabled.
2026-03-09T15:04:04.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.515+0000 7f8b24ff9700 1 -- 192.168.123.105:0/3686787583 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f8b1c077a40 msgr2=0x7f8b1c079ef0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:04.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.515+0000 7f8b24ff9700 1 --2- 192.168.123.105:0/3686787583 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f8b1c077a40 0x7f8b1c079ef0 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f8b2c004510 tx=0x7f8b2c00ae50 comp rx=0 tx=0).stop
2026-03-09T15:04:04.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.515+0000 7f8b24ff9700 1 -- 192.168.123.105:0/3686787583 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b3007d340 msgr2=0x7f8b3007d7b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:04:04.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.515+0000 7f8b24ff9700 1 --2- 192.168.123.105:0/3686787583 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b3007d340 0x7f8b3007d7b0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f8b2800dac0 tx=0x7f8b2800afc0 comp rx=0 tx=0).stop
2026-03-09T15:04:04.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.515+0000 7f8b24ff9700 1 -- 192.168.123.105:0/3686787583 shutdown_connections
2026-03-09T15:04:04.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.515+0000 7f8b24ff9700 1 --2- 192.168.123.105:0/3686787583 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f8b1c077a40 0x7f8b1c079ef0 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:04.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.515+0000 7f8b24ff9700 1 --2- 192.168.123.105:0/3686787583 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8b3010c8b0 0x7f8b3007ce00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:04.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.515+0000 7f8b24ff9700 1 --2- 192.168.123.105:0/3686787583 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8b3007d340 0x7f8b3007d7b0 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:04:04.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.515+0000 7f8b24ff9700 1 -- 192.168.123.105:0/3686787583 >> 192.168.123.105:0/3686787583 conn(0x7f8b3006c6c0 msgr2=0x7f8b3006ff50 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:04:04.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.516+0000 7f8b24ff9700 1 -- 192.168.123.105:0/3686787583 shutdown_connections
2026-03-09T15:04:04.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:04.516+0000 7f8b24ff9700 1 -- 192.168.123.105:0/3686787583 wait complete.
2026-03-09T15:04:04.934 INFO:tasks.workunit:Stopping ['suites/fsstress.sh'] on client.0...
2026-03-09T15:04:04.934 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2026-03-09T15:04:05.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:04 vm05.local ceph-mon[116516]: pgmap v14: 65 pgs: 65 active+clean; 264 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 192 op/s
2026-03-09T15:04:05.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:04 vm05.local ceph-mon[116516]: from='client.44121 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:04:05.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:04 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3686787583' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T15:04:05.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:04 vm09.local ceph-mon[98742]: pgmap v14: 65 pgs: 65 active+clean; 264 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 192 op/s
2026-03-09T15:04:05.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:04 vm09.local ceph-mon[98742]: from='client.44121 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:04:05.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:04 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3686787583' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T15:04:05.391 DEBUG:teuthology.parallel:result is None
2026-03-09T15:04:05.391 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0
2026-03-09T15:04:05.441 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0
2026-03-09T15:04:05.441 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.1/client.1
2026-03-09T15:04:05.504 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.1/client.1
2026-03-09T15:04:05.504 DEBUG:teuthology.parallel:result is None
2026-03-09T15:04:06.363 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:06 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:06.363 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:06 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:06.363 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:06 vm09.local ceph-mon[98742]: pgmap v15: 65 pgs: 65 active+clean; 264 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 598 KiB/s rd, 639 KiB/s wr, 130 op/s
2026-03-09T15:04:06.363 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:06 vm09.local ceph-mon[98742]: Upgrade: Updating crash.vm09 (2/2)
2026-03-09T15:04:06.363 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:06 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:06.363 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:06 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-09T15:04:06.363 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:06 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:04:06.363 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:06 vm09.local ceph-mon[98742]: Deploying daemon crash.vm09 on vm09
2026-03-09T15:04:06.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:06 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:06.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:06 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:06.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:06 vm05.local ceph-mon[116516]: pgmap v15: 65 pgs: 65 active+clean; 264 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 598 KiB/s rd, 639 KiB/s wr, 130 op/s
2026-03-09T15:04:06.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:06 vm05.local ceph-mon[116516]: Upgrade: Updating crash.vm09 (2/2)
2026-03-09T15:04:06.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:06 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:06.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:06 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-09T15:04:06.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:06 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:04:06.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:06 vm05.local ceph-mon[116516]: Deploying daemon crash.vm09 on vm09
2026-03-09T15:04:07.991 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:07 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:07.991 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:07 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:07.991 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:07 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:04:08.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:07 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:08.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:07 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:08.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:07 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:04:09.245 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:09 vm09.local ceph-mon[98742]: pgmap v16: 65 pgs: 65 active+clean; 261 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 598 KiB/s rd, 639 KiB/s wr, 138 op/s
2026-03-09T15:04:09.245 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:09 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:09.246 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:09 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:09.246 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:09 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:09.246 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:09 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:09.246 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:09 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:09.246 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:09 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:09.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:09 vm05.local ceph-mon[116516]: pgmap v16: 65 pgs: 65 active+clean; 261 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 598 KiB/s rd, 639 KiB/s wr, 138 op/s
2026-03-09T15:04:09.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:09 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:09.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:09 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:09.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:09 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:09.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:09 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:09.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:09 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:09.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:09 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:10.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:10 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:10.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:10 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:10.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:10 vm05.local ceph-mon[116516]: pgmap v17: 65 pgs: 65 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 808 KiB/s rd, 820 KiB/s wr, 163 op/s
2026-03-09T15:04:10.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:10 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:10.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:10 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:10.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:10 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:10.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:10 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:10.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:10 vm09.local ceph-mon[98742]: pgmap v17: 65 pgs: 65 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 808 KiB/s rd, 820 KiB/s wr, 163 op/s
2026-03-09T15:04:10.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:10 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:10.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:10 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:11.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:11.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:11.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:04:11.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:04:11.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:11.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:04:11.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:04:11.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: Upgrade: Setting container_image for all crash
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm05"}]: dispatch
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm05"}]': finished
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm09"}]: dispatch
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm09"}]': finished
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: Upgrade: osd.0 is safe to restart
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: Upgrade: Updating osd.0
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: Deploying daemon osd.0 on vm05
2026-03-09T15:04:11.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:04:11.882 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:11.882 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: Upgrade: Setting container_image for all crash
2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:04:11.883
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm05"}]: dispatch 2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm05"}]': finished 2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm09"}]: dispatch 2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash.vm09"}]': finished 2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: Upgrade: osd.0 is safe to restart 2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: Upgrade: Updating osd.0 2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: Deploying daemon osd.0 on vm05 2026-03-09T15:04:11.883 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:04:12.304 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:11 vm05.local systemd[1]: Stopping Ceph osd.0 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 
2026-03-09T15:04:12.304 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[68904]: 2026-03-09T15:04:11.992+0000 7f106a07d700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T15:04:12.304 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[68904]: 2026-03-09T15:04:11.992+0000 7f106a07d700 -1 osd.0 41 *** Got signal Terminated *** 2026-03-09T15:04:12.304 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:11 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[68904]: 2026-03-09T15:04:11.992+0000 7f106a07d700 -1 osd.0 41 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T15:04:12.944 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:12 vm05.local podman[125248]: 2026-03-09 15:04:12.750703902 +0000 UTC m=+0.775575051 container died 50f3ca995318d5057650dd6aecaf16486cdfba0adb8acb7ab2811c6a463634dd (image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.label-schema.build-date=20231212, GIT_CLEAN=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.schema-version=1.0, GIT_BRANCH=HEAD, RELEASE=HEAD, ceph=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=-18.2.0, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, io.buildah.version=1.29.1, org.label-schema.vendor=CentOS) 2026-03-09T15:04:12.944 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:12 vm05.local podman[125248]: 2026-03-09 15:04:12.775882906 +0000 UTC m=+0.800754055 container remove 50f3ca995318d5057650dd6aecaf16486cdfba0adb8acb7ab2811c6a463634dd 
(image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, org.label-schema.vendor=CentOS, maintainer=Guillaume Abrioux , org.label-schema.license=GPLv2, CEPH_POINT_RELEASE=-18.2.0, GIT_BRANCH=HEAD, GIT_REPO=https://github.com/ceph/ceph-container.git, org.label-schema.build-date=20231212, org.label-schema.schema-version=1.0, RELEASE=HEAD, ceph=True, io.buildah.version=1.29.1, org.label-schema.name=CentOS Stream 8 Base Image, GIT_CLEAN=True) 2026-03-09T15:04:12.944 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:12 vm05.local bash[125248]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0 2026-03-09T15:04:12.944 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:12 vm05.local podman[125314]: 2026-03-09 15:04:12.917709758 +0000 UTC m=+0.016371598 container create cd9f61c60b00b84b1cf445e3e3c7f2d340755b5811de00941cd11390707caf0c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-deactivate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T15:04:12.944 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:12 vm05.local ceph-mon[116516]: pgmap v18: 65 pgs: 65 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 511 KiB/s 
rd, 530 KiB/s wr, 99 op/s 2026-03-09T15:04:12.944 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:12 vm05.local ceph-mon[116516]: osd.0 marked itself down and dead 2026-03-09T15:04:13.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:12 vm09.local ceph-mon[98742]: pgmap v18: 65 pgs: 65 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 511 KiB/s rd, 530 KiB/s wr, 99 op/s 2026-03-09T15:04:13.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:12 vm09.local ceph-mon[98742]: osd.0 marked itself down and dead 2026-03-09T15:04:13.211 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:12 vm05.local podman[125314]: 2026-03-09 15:04:12.963154036 +0000 UTC m=+0.061815887 container init cd9f61c60b00b84b1cf445e3e3c7f2d340755b5811de00941cd11390707caf0c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-deactivate, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T15:04:13.211 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:12 vm05.local podman[125314]: 2026-03-09 15:04:12.966392238 +0000 UTC m=+0.065054078 container start cd9f61c60b00b84b1cf445e3e3c7f2d340755b5811de00941cd11390707caf0c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-deactivate, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T15:04:13.211 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:12 vm05.local podman[125314]: 2026-03-09 15:04:12.969609711 +0000 UTC m=+0.068271561 container attach cd9f61c60b00b84b1cf445e3e3c7f2d340755b5811de00941cd11390707caf0c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-deactivate, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True) 2026-03-09T15:04:13.211 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local podman[125314]: 2026-03-09 15:04:12.911333181 +0000 UTC m=+0.009995021 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:04:13.211 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local podman[125314]: 2026-03-09 15:04:13.098329727 +0000 UTC m=+0.196991567 container died cd9f61c60b00b84b1cf445e3e3c7f2d340755b5811de00941cd11390707caf0c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T15:04:13.211 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local podman[125314]: 2026-03-09 15:04:13.115950202 +0000 UTC m=+0.214612042 container remove cd9f61c60b00b84b1cf445e3e3c7f2d340755b5811de00941cd11390707caf0c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-deactivate, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default) 2026-03-09T15:04:13.211 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.0.service: Deactivated successfully. 2026-03-09T15:04:13.211 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local systemd[1]: Stopped Ceph osd.0 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T15:04:13.211 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.0.service: Consumed 38.938s CPU time. 2026-03-09T15:04:13.582 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local systemd[1]: Starting Ceph osd.0 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 2026-03-09T15:04:13.582 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local podman[125415]: 2026-03-09 15:04:13.445728687 +0000 UTC m=+0.024400737 container create 709895b0305756f6a908b37c721f3d0011670967ed6f59b21685a753043dbdc1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid) 2026-03-09T15:04:13.582 
INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local podman[125415]: 2026-03-09 15:04:13.494884303 +0000 UTC m=+0.073556364 container init 709895b0305756f6a908b37c721f3d0011670967ed6f59b21685a753043dbdc1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2) 2026-03-09T15:04:13.582 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local podman[125415]: 2026-03-09 15:04:13.498172218 +0000 UTC m=+0.076844268 container start 709895b0305756f6a908b37c721f3d0011670967ed6f59b21685a753043dbdc1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, 
OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) 2026-03-09T15:04:13.582 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local podman[125415]: 2026-03-09 15:04:13.501458249 +0000 UTC m=+0.080130299 container attach 709895b0305756f6a908b37c721f3d0011670967ed6f59b21685a753043dbdc1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS) 2026-03-09T15:04:13.582 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local podman[125415]: 2026-03-09 15:04:13.433076282 +0000 UTC m=+0.011748343 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:04:14.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:13 vm05.local ceph-mon[116516]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T15:04:14.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:13 vm05.local ceph-mon[116516]: osdmap e42: 6 total, 5 up, 6 in 2026-03-09T15:04:14.054 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate[125426]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:14.054 
INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local bash[125415]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:14.054 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate[125426]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:14.054 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:13 vm05.local bash[125415]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:14.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:13 vm09.local ceph-mon[98742]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T15:04:14.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:13 vm09.local ceph-mon[98742]: osdmap e42: 6 total, 5 up, 6 in 2026-03-09T15:04:14.432 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate[125426]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T15:04:14.432 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate[125426]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:14.432 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local bash[125415]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T15:04:14.432 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local bash[125415]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:14.432 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate[125426]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:14.432 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local bash[125415]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:14.432 
INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate[125426]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T15:04:14.432 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local bash[125415]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T15:04:14.432 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate[125426]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-4a70fab4-2583-4d35-97c1-c54cbe7d7db4/osd-block-1022e816-4ad0-4a27-9052-07d4015a684e --path /var/lib/ceph/osd/ceph-0 --no-mon-config 2026-03-09T15:04:14.432 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local bash[125415]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-4a70fab4-2583-4d35-97c1-c54cbe7d7db4/osd-block-1022e816-4ad0-4a27-9052-07d4015a684e --path /var/lib/ceph/osd/ceph-0 --no-mon-config 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate[125426]: Running command: /usr/bin/ln -snf /dev/ceph-4a70fab4-2583-4d35-97c1-c54cbe7d7db4/osd-block-1022e816-4ad0-4a27-9052-07d4015a684e /var/lib/ceph/osd/ceph-0/block 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local bash[125415]: Running command: /usr/bin/ln -snf /dev/ceph-4a70fab4-2583-4d35-97c1-c54cbe7d7db4/osd-block-1022e816-4ad0-4a27-9052-07d4015a684e /var/lib/ceph/osd/ceph-0/block 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate[125426]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local bash[125415]: 
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate[125426]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local bash[125415]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate[125426]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local bash[125415]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate[125426]: --> ceph-volume lvm activate successful for osd ID: 0 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local bash[125415]: --> ceph-volume lvm activate successful for osd ID: 0 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local podman[125638]: 2026-03-09 15:04:14.499004958 +0000 UTC m=+0.013401989 container died 709895b0305756f6a908b37c721f3d0011670967ed6f59b21685a753043dbdc1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, 
org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local podman[125638]: 2026-03-09 15:04:14.515852415 +0000 UTC m=+0.030249446 container remove 709895b0305756f6a908b37c721f3d0011670967ed6f59b21685a753043dbdc1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3) 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local podman[125678]: 2026-03-09 15:04:14.638745574 +0000 UTC m=+0.019910241 container create f2883abca2d23322474e24e6f3effa5ec059a26f9c2ae1c66fe51430c364ec7b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local podman[125678]: 2026-03-09 15:04:14.672438786 +0000 UTC m=+0.053603454 container init f2883abca2d23322474e24e6f3effa5ec059a26f9c2ae1c66fe51430c364ec7b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS) 2026-03-09T15:04:14.683 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local podman[125678]: 2026-03-09 15:04:14.677356261 +0000 UTC m=+0.058520939 container start f2883abca2d23322474e24e6f3effa5ec059a26f9c2ae1c66fe51430c364ec7b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, ceph=True) 2026-03-09T15:04:14.684 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local bash[125678]: f2883abca2d23322474e24e6f3effa5ec059a26f9c2ae1c66fe51430c364ec7b 2026-03-09T15:04:14.684 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local podman[125678]: 2026-03-09 15:04:14.629962421 +0000 UTC m=+0.011127110 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:04:15.017 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:14 vm05.local ceph-mon[116516]: pgmap v20: 65 pgs: 9 stale+active+clean, 56 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 266 KiB/s rd, 232 KiB/s wr, 41 op/s 2026-03-09T15:04:15.017 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:14 vm05.local ceph-mon[116516]: osdmap e43: 6 total, 5 up, 6 in 2026-03-09T15:04:15.017 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:14 vm05.local systemd[1]: Started Ceph osd.0 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 
2026-03-09T15:04:15.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:14 vm09.local ceph-mon[98742]: pgmap v20: 65 pgs: 9 stale+active+clean, 56 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 266 KiB/s rd, 232 KiB/s wr, 41 op/s 2026-03-09T15:04:15.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:14 vm09.local ceph-mon[98742]: osdmap e43: 6 total, 5 up, 6 in 2026-03-09T15:04:15.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:15 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:15.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:15 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:15.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:15 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:04:15.872 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:15 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:15.872 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:15 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:15.872 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:15 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:04:15.873 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:15 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[125688]: 2026-03-09T15:04:15.793+0000 7f405a310740 -1 Falling back to public interface 2026-03-09T15:04:17.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:16 vm05.local ceph-mon[116516]: pgmap v22: 65 pgs: 9 stale+active+clean, 56 
active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 333 KiB/s rd, 290 KiB/s wr, 40 op/s 2026-03-09T15:04:17.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:17.029 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:17.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:16 vm09.local ceph-mon[98742]: pgmap v22: 65 pgs: 9 stale+active+clean, 56 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 333 KiB/s rd, 290 KiB/s wr, 40 op/s 2026-03-09T15:04:17.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:17.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:17.784 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:17.784 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:18.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:18.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: pgmap v23: 65 pgs: 7 active+undersized, 4 stale+active+clean, 4 
active+undersized+degraded, 50 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.1 KiB/s rd, 18 KiB/s wr, 3 op/s; 9/264 objects degraded (3.409%) 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: Health check failed: Degraded data redundancy: 9/264 objects degraded (3.409%), 4 pgs degraded (PG_DEGRADED) 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: 
from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T15:04:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:18 vm05.local ceph-mon[116516]: Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline) 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: pgmap v23: 65 pgs: 7 active+undersized, 4 stale+active+clean, 4 active+undersized+degraded, 50 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.1 KiB/s rd, 18 KiB/s wr, 3 op/s; 9/264 objects degraded (3.409%) 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: Health check failed: Degraded data redundancy: 9/264 objects degraded (3.409%), 4 pgs degraded (PG_DEGRADED) 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local 
ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T15:04:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:18 vm09.local ceph-mon[98742]: Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline) 2026-03-09T15:04:19.554 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:19 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[125688]: 2026-03-09T15:04:19.271+0000 7f405a310740 -1 osd.0 0 read_superblock omap replica is missing. 2026-03-09T15:04:19.554 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:19 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[125688]: 2026-03-09T15:04:19.435+0000 7f405a310740 -1 osd.0 41 log_to_monitors true 2026-03-09T15:04:20.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:19 vm05.local ceph-mon[116516]: from='osd.0 [v2:192.168.123.105:6802/2888355487,v1:192.168.123.105:6803/2888355487]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T15:04:20.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:19 vm09.local ceph-mon[98742]: from='osd.0 [v2:192.168.123.105:6802/2888355487,v1:192.168.123.105:6803/2888355487]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T15:04:21.165 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:20 vm09.local ceph-mon[98742]: pgmap v24: 65 pgs: 15 active+undersized, 19 active+undersized+degraded, 31 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.4 KiB/s rd, 18 KiB/s wr, 3 op/s; 50/264 objects degraded (18.939%) 2026-03-09T15:04:21.165 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:20 vm09.local ceph-mon[98742]: from='osd.0 [v2:192.168.123.105:6802/2888355487,v1:192.168.123.105:6803/2888355487]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T15:04:21.165 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:20 vm09.local ceph-mon[98742]: osdmap e44: 6 total, 5 up, 6 in 2026-03-09T15:04:21.165 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:20 vm09.local ceph-mon[98742]: from='osd.0 [v2:192.168.123.105:6802/2888355487,v1:192.168.123.105:6803/2888355487]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T15:04:21.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:20 vm05.local ceph-mon[116516]: pgmap v24: 65 pgs: 15 active+undersized, 19 active+undersized+degraded, 31 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 1.4 KiB/s rd, 18 KiB/s wr, 3 op/s; 50/264 objects degraded (18.939%) 2026-03-09T15:04:21.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:20 vm05.local ceph-mon[116516]: from='osd.0 [v2:192.168.123.105:6802/2888355487,v1:192.168.123.105:6803/2888355487]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T15:04:21.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:20 vm05.local ceph-mon[116516]: osdmap e44: 6 total, 5 up, 6 in 2026-03-09T15:04:21.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:20 vm05.local ceph-mon[116516]: from='osd.0 [v2:192.168.123.105:6802/2888355487,v1:192.168.123.105:6803/2888355487]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T15:04:21.804 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:04:21 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[125688]: 2026-03-09T15:04:21.377+0000 7f40518a9640 -1 osd.0 41 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T15:04:22.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:22 vm05.local ceph-mon[116516]: pgmap v26: 65 pgs: 15 active+undersized, 19 
active+undersized+degraded, 31 active+clean; 257 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s; 50/264 objects degraded (18.939%) 2026-03-09T15:04:22.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:22 vm05.local ceph-mon[116516]: from='osd.0 [v2:192.168.123.105:6802/2888355487,v1:192.168.123.105:6803/2888355487]' entity='osd.0' 2026-03-09T15:04:22.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:22 vm09.local ceph-mon[98742]: pgmap v26: 65 pgs: 15 active+undersized, 19 active+undersized+degraded, 31 active+clean; 257 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s; 50/264 objects degraded (18.939%) 2026-03-09T15:04:22.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:22 vm09.local ceph-mon[98742]: from='osd.0 [v2:192.168.123.105:6802/2888355487,v1:192.168.123.105:6803/2888355487]' entity='osd.0' 2026-03-09T15:04:23.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:23 vm05.local ceph-mon[116516]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T15:04:23.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:23 vm05.local ceph-mon[116516]: osd.0 [v2:192.168.123.105:6802/2888355487,v1:192.168.123.105:6803/2888355487] boot 2026-03-09T15:04:23.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:23 vm05.local ceph-mon[116516]: osdmap e45: 6 total, 6 up, 6 in 2026-03-09T15:04:23.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:23 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:04:23.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:23 vm05.local ceph-mon[116516]: osdmap e46: 6 total, 6 up, 6 in 2026-03-09T15:04:23.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:23 vm09.local ceph-mon[98742]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T15:04:23.866 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:23 vm09.local ceph-mon[98742]: osd.0 [v2:192.168.123.105:6802/2888355487,v1:192.168.123.105:6803/2888355487] boot 2026-03-09T15:04:23.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:23 vm09.local ceph-mon[98742]: osdmap e45: 6 total, 6 up, 6 in 2026-03-09T15:04:23.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:23 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T15:04:23.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:23 vm09.local ceph-mon[98742]: osdmap e46: 6 total, 6 up, 6 in 2026-03-09T15:04:24.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:24 vm05.local ceph-mon[116516]: pgmap v28: 65 pgs: 15 active+undersized, 19 active+undersized+degraded, 31 active+clean; 257 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 255 B/s wr, 1 op/s; 50/264 objects degraded (18.939%) 2026-03-09T15:04:24.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:24 vm05.local ceph-mon[116516]: Health check update: Degraded data redundancy: 50/264 objects degraded (18.939%), 19 pgs degraded (PG_DEGRADED) 2026-03-09T15:04:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:24 vm09.local ceph-mon[98742]: pgmap v28: 65 pgs: 15 active+undersized, 19 active+undersized+degraded, 31 active+clean; 257 MiB data, 2.6 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 255 B/s wr, 1 op/s; 50/264 objects degraded (18.939%) 2026-03-09T15:04:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:24 vm09.local ceph-mon[98742]: Health check update: Degraded data redundancy: 50/264 objects degraded (18.939%), 19 pgs degraded (PG_DEGRADED) 2026-03-09T15:04:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:26 vm09.local ceph-mon[98742]: pgmap v30: 65 pgs: 9 active+undersized, 16 active+undersized+degraded, 40 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 
120 GiB avail; 511 B/s rd, 0 op/s; 43/264 objects degraded (16.288%); 199 B/s, 0 objects/s recovering 2026-03-09T15:04:26.948 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:26 vm05.local ceph-mon[116516]: pgmap v30: 65 pgs: 9 active+undersized, 16 active+undersized+degraded, 40 active+clean; 257 MiB data, 3.0 GiB used, 117 GiB / 120 GiB avail; 511 B/s rd, 0 op/s; 43/264 objects degraded (16.288%); 199 B/s, 0 objects/s recovering 2026-03-09T15:04:28.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:27 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:28.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:27 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:04:28.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:27 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:28.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:27 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:04:29.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:28 vm05.local ceph-mon[116516]: pgmap v31: 65 pgs: 5 active+undersized, 14 active+undersized+degraded, 46 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 405 B/s rd, 0 op/s; 38/264 objects degraded (14.394%); 158 B/s, 0 objects/s recovering 2026-03-09T15:04:29.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:28 vm09.local ceph-mon[98742]: pgmap v31: 65 pgs: 5 active+undersized, 14 active+undersized+degraded, 46 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 405 B/s rd, 0 op/s; 38/264 objects degraded (14.394%); 158 B/s, 0 objects/s recovering 2026-03-09T15:04:30.054 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:29 vm05.local ceph-mon[116516]: Health check update: Degraded data redundancy: 38/264 objects degraded (14.394%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T15:04:30.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:29 vm09.local ceph-mon[98742]: Health check update: Degraded data redundancy: 38/264 objects degraded (14.394%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T15:04:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:30 vm05.local ceph-mon[116516]: pgmap v32: 65 pgs: 65 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 895 B/s rd, 0 op/s; 149 B/s, 0 objects/s recovering 2026-03-09T15:04:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:30 vm05.local ceph-mon[116516]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 38/264 objects degraded (14.394%), 14 pgs degraded) 2026-03-09T15:04:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:30 vm09.local ceph-mon[98742]: pgmap v32: 65 pgs: 65 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 895 B/s rd, 0 op/s; 149 B/s, 0 objects/s recovering 2026-03-09T15:04:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:30 vm09.local ceph-mon[98742]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 38/264 objects degraded (14.394%), 14 pgs degraded) 2026-03-09T15:04:32.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:32 vm05.local ceph-mon[116516]: pgmap v33: 65 pgs: 65 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 229 B/s rd, 0 op/s; 134 B/s, 0 objects/s recovering 2026-03-09T15:04:32.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:32 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:32.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:32 vm09.local ceph-mon[98742]: pgmap v33: 65 pgs: 65 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 
120 GiB avail; 229 B/s rd, 0 op/s; 134 B/s, 0 objects/s recovering 2026-03-09T15:04:32.875 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:32 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:33.549 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:33 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T15:04:33.549 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:33 vm05.local ceph-mon[116516]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T15:04:33.549 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:33 vm05.local ceph-mon[116516]: Upgrade: osd.1 is safe to restart 2026-03-09T15:04:33.549 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:33 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:33.549 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:33 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T15:04:33.549 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:33 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:33 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T15:04:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:33 vm09.local ceph-mon[98742]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T15:04:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:33 vm09.local ceph-mon[98742]: Upgrade: osd.1 is safe to restart 2026-03-09T15:04:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:33 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:33 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T15:04:33.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:33 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:34.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.608+0000 7f65ecd05700 1 -- 192.168.123.105:0/3004659331 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65e8068490 msgr2=0x7f65e8068900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:34.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.608+0000 7f65ecd05700 1 --2- 192.168.123.105:0/3004659331 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65e8068490 0x7f65e8068900 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f65d8009b00 tx=0x7f65d8009e10 comp rx=0 tx=0).stop 2026-03-09T15:04:34.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.610+0000 7f65ecd05700 1 -- 192.168.123.105:0/3004659331 shutdown_connections 2026-03-09T15:04:34.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.610+0000 7f65ecd05700 1 --2- 192.168.123.105:0/3004659331 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65e8068490 0x7f65e8068900 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:04:34.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.610+0000 7f65ecd05700 1 --2- 192.168.123.105:0/3004659331 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f65e81066c0 0x7f65e8106a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:34.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.610+0000 7f65ecd05700 1 -- 192.168.123.105:0/3004659331 >> 192.168.123.105:0/3004659331 conn(0x7f65e80754a0 msgr2=0x7f65e80758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:34.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.610+0000 7f65ecd05700 1 -- 192.168.123.105:0/3004659331 shutdown_connections 2026-03-09T15:04:34.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.610+0000 7f65ecd05700 1 -- 192.168.123.105:0/3004659331 wait complete. 2026-03-09T15:04:34.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.611+0000 7f65ecd05700 1 Processor -- start 2026-03-09T15:04:34.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.611+0000 7f65ecd05700 1 -- start start 2026-03-09T15:04:34.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.611+0000 7f65ecd05700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f65e8068490 0x7f65e8198e00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:34.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.611+0000 7f65ecd05700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65e81066c0 0x7f65e8199360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:34.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.611+0000 7f65ecd05700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f65e819eb90 con 0x7f65e81066c0 2026-03-09T15:04:34.611 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.611+0000 7f65ecd05700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f65e819ed00 con 0x7f65e8068490 2026-03-09T15:04:34.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.611+0000 7f65e659c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f65e8068490 0x7f65e8198e00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:34.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.612+0000 7f65e5d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65e81066c0 0x7f65e8199360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:34.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.612+0000 7f65e5d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65e81066c0 0x7f65e8199360 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:60492/0 (socket says 192.168.123.105:60492) 2026-03-09T15:04:34.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.612+0000 7f65e5d9b700 1 -- 192.168.123.105:0/2905408064 learned_addr learned my addr 192.168.123.105:0/2905408064 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:04:34.612 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.612+0000 7f65e5d9b700 1 -- 192.168.123.105:0/2905408064 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f65e8068490 msgr2=0x7f65e8198e00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:34.612 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.612+0000 7f65e5d9b700 1 --2- 192.168.123.105:0/2905408064 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f65e8068490 0x7f65e8198e00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:34.612 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.612+0000 7f65e5d9b700 1 -- 192.168.123.105:0/2905408064 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f65d80097e0 con 0x7f65e81066c0 2026-03-09T15:04:34.612 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.612+0000 7f65e5d9b700 1 --2- 192.168.123.105:0/2905408064 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65e81066c0 0x7f65e8199360 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f65d8004cb0 tx=0x7f65d8005dc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:34.612 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.613+0000 7f65df7fe700 1 -- 192.168.123.105:0/2905408064 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f65d801d070 con 0x7f65e81066c0 2026-03-09T15:04:34.613 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.613+0000 7f65df7fe700 1 -- 192.168.123.105:0/2905408064 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f65d800bc30 con 0x7f65e81066c0 2026-03-09T15:04:34.613 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.613+0000 7f65df7fe700 1 -- 192.168.123.105:0/2905408064 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f65d800f780 con 0x7f65e81066c0 2026-03-09T15:04:34.613 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.613+0000 7f65ecd05700 1 -- 192.168.123.105:0/2905408064 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f65e819ef00 con 0x7f65e81066c0 2026-03-09T15:04:34.614 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.613+0000 7f65ecd05700 1 -- 192.168.123.105:0/2905408064 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f65e819f3f0 con 0x7f65e81066c0 2026-03-09T15:04:34.615 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.615+0000 7f65ecd05700 1 -- 192.168.123.105:0/2905408064 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f65e804ea50 con 0x7f65e81066c0 2026-03-09T15:04:34.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.615+0000 7f65df7fe700 1 -- 192.168.123.105:0/2905408064 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f65d8022470 con 0x7f65e81066c0 2026-03-09T15:04:34.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.615+0000 7f65df7fe700 1 --2- 192.168.123.105:0/2905408064 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f65d40779f0 0x7f65d4079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:34.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.616+0000 7f65df7fe700 1 -- 192.168.123.105:0/2905408064 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(46..46 src has 1..46) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f65d809b4f0 con 0x7f65e81066c0 2026-03-09T15:04:34.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.616+0000 7f65e659c700 1 --2- 192.168.123.105:0/2905408064 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f65d40779f0 0x7f65d4079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:34.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.616+0000 7f65e659c700 1 --2- 192.168.123.105:0/2905408064 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f65d40779f0 0x7f65d4079ea0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f65d0006fd0 tx=0x7f65d0008040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:34.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.619+0000 7f65df7fe700 1 -- 192.168.123.105:0/2905408064 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f65d8063b00 con 0x7f65e81066c0 2026-03-09T15:04:34.751 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:34 vm05.local ceph-mon[116516]: Upgrade: Updating osd.1 2026-03-09T15:04:34.751 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:34 vm05.local ceph-mon[116516]: Deploying daemon osd.1 on vm05 2026-03-09T15:04:34.751 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:34 vm05.local ceph-mon[116516]: pgmap v34: 65 pgs: 65 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 716 B/s rd, 0 op/s; 119 B/s, 0 objects/s recovering 2026-03-09T15:04:34.751 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:34 vm05.local ceph-mon[116516]: osd.1 marked itself down and dead 2026-03-09T15:04:34.751 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:34 vm05.local systemd[1]: Stopping Ceph osd.1 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 
2026-03-09T15:04:34.751 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:34 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[75070]: 2026-03-09T15:04:34.509+0000 7f168f8c3700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T15:04:34.751 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:34 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[75070]: 2026-03-09T15:04:34.509+0000 7f168f8c3700 -1 osd.1 46 *** Got signal Terminated *** 2026-03-09T15:04:34.751 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:34 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[75070]: 2026-03-09T15:04:34.509+0000 7f168f8c3700 -1 osd.1 46 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T15:04:34.751 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.750+0000 7f65ecd05700 1 -- 192.168.123.105:0/2905408064 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f65e8195450 con 0x7f65d40779f0 2026-03-09T15:04:34.754 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.754+0000 7f65df7fe700 1 -- 192.168.123.105:0/2905408064 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f65e8195450 con 0x7f65d40779f0 2026-03-09T15:04:34.757 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.757+0000 7f65ecd05700 1 -- 192.168.123.105:0/2905408064 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f65d40779f0 msgr2=0x7f65d4079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:34.757 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.757+0000 7f65ecd05700 1 --2- 192.168.123.105:0/2905408064 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f65d40779f0 0x7f65d4079ea0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f65d0006fd0 tx=0x7f65d0008040 comp rx=0 tx=0).stop 2026-03-09T15:04:34.757 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.757+0000 7f65ecd05700 1 -- 192.168.123.105:0/2905408064 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65e81066c0 msgr2=0x7f65e8199360 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:34.757 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.757+0000 7f65ecd05700 1 --2- 192.168.123.105:0/2905408064 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65e81066c0 0x7f65e8199360 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f65d8004cb0 tx=0x7f65d8005dc0 comp rx=0 tx=0).stop 2026-03-09T15:04:34.757 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.757+0000 7f65ecd05700 1 -- 192.168.123.105:0/2905408064 shutdown_connections 2026-03-09T15:04:34.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.757+0000 7f65ecd05700 1 --2- 192.168.123.105:0/2905408064 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f65d40779f0 0x7f65d4079ea0 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:34.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.757+0000 7f65ecd05700 1 --2- 192.168.123.105:0/2905408064 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f65e8068490 0x7f65e8198e00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:34.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.757+0000 7f65ecd05700 1 --2- 192.168.123.105:0/2905408064 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f65e81066c0 0x7f65e8199360 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:04:34.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.758+0000 7f65ecd05700 1 -- 192.168.123.105:0/2905408064 >> 192.168.123.105:0/2905408064 conn(0x7f65e80754a0 msgr2=0x7f65e81002e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:34.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.758+0000 7f65ecd05700 1 -- 192.168.123.105:0/2905408064 shutdown_connections 2026-03-09T15:04:34.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.758+0000 7f65ecd05700 1 -- 192.168.123.105:0/2905408064 wait complete. 2026-03-09T15:04:34.768 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:04:34.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.839+0000 7f9cecf70700 1 -- 192.168.123.105:0/2482056005 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9ce8069000 msgr2=0x7f9ce81051e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:34.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.839+0000 7f9cecf70700 1 --2- 192.168.123.105:0/2482056005 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9ce8069000 0x7f9ce81051e0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f9cd8009b50 tx=0x7f9cd8009e60 comp rx=0 tx=0).stop 2026-03-09T15:04:34.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.840+0000 7f9cecf70700 1 -- 192.168.123.105:0/2482056005 shutdown_connections 2026-03-09T15:04:34.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.840+0000 7f9cecf70700 1 --2- 192.168.123.105:0/2482056005 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9ce8069000 0x7f9ce81051e0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:34.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.840+0000 7f9cecf70700 1 --2- 192.168.123.105:0/2482056005 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ce80686f0 
0x7f9ce8068ac0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:34.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.840+0000 7f9cecf70700 1 -- 192.168.123.105:0/2482056005 >> 192.168.123.105:0/2482056005 conn(0x7f9ce80754a0 msgr2=0x7f9ce80758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:34.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.840+0000 7f9cecf70700 1 -- 192.168.123.105:0/2482056005 shutdown_connections 2026-03-09T15:04:34.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.840+0000 7f9cecf70700 1 -- 192.168.123.105:0/2482056005 wait complete. 2026-03-09T15:04:34.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.840+0000 7f9cecf70700 1 Processor -- start 2026-03-09T15:04:34.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.840+0000 7f9cecf70700 1 -- start start 2026-03-09T15:04:34.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.841+0000 7f9cecf70700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9ce80686f0 0x7f9ce81962e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:34.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.841+0000 7f9ce659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9ce80686f0 0x7f9ce81962e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:34.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.841+0000 7f9ce659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9ce80686f0 0x7f9ce81962e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:60506/0 (socket says 192.168.123.105:60506) 2026-03-09T15:04:34.841 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.841+0000 7f9ce659c700 1 -- 192.168.123.105:0/3882841103 learned_addr learned my addr 192.168.123.105:0/3882841103 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:04:34.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.841+0000 7f9cecf70700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ce8069000 0x7f9ce8196820 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:34.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.841+0000 7f9cecf70700 1 -- 192.168.123.105:0/3882841103 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9ce8196f00 con 0x7f9ce80686f0 2026-03-09T15:04:34.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.841+0000 7f9cecf70700 1 -- 192.168.123.105:0/3882841103 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9ce819ac90 con 0x7f9ce8069000 2026-03-09T15:04:34.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.841+0000 7f9ce5d9b700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ce8069000 0x7f9ce8196820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:34.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.841+0000 7f9ce659c700 1 -- 192.168.123.105:0/3882841103 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ce8069000 msgr2=0x7f9ce8196820 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:34.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.841+0000 7f9ce659c700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ce8069000 0x7f9ce8196820 unknown :-1 s=AUTH_CONNECTING 
pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:34.842 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.841+0000 7f9ce659c700 1 -- 192.168.123.105:0/3882841103 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9cd80097e0 con 0x7f9ce80686f0 2026-03-09T15:04:34.842 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.842+0000 7f9ce5d9b700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ce8069000 0x7f9ce8196820 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:04:34.842 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.842+0000 7f9ce659c700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9ce80686f0 0x7f9ce81962e0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f9cd000d8d0 tx=0x7f9cd000dc90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:34.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.842+0000 7f9cdf7fe700 1 -- 192.168.123.105:0/3882841103 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9cd0009940 con 0x7f9ce80686f0 2026-03-09T15:04:34.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.842+0000 7f9cdf7fe700 1 -- 192.168.123.105:0/3882841103 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f9cd0010460 con 0x7f9ce80686f0 2026-03-09T15:04:34.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.842+0000 7f9cdf7fe700 1 -- 192.168.123.105:0/3882841103 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9cd000f5d0 con 0x7f9ce80686f0 2026-03-09T15:04:34.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.842+0000 
7f9cecf70700 1 -- 192.168.123.105:0/3882841103 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9ce819ae90 con 0x7f9ce80686f0 2026-03-09T15:04:34.843 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.842+0000 7f9cecf70700 1 -- 192.168.123.105:0/3882841103 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9ce819b3e0 con 0x7f9ce80686f0 2026-03-09T15:04:34.847 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.844+0000 7f9cdf7fe700 1 -- 192.168.123.105:0/3882841103 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f9cd000f730 con 0x7f9ce80686f0 2026-03-09T15:04:34.848 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.844+0000 7f9cecf70700 1 -- 192.168.123.105:0/3882841103 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9ce8108b10 con 0x7f9ce80686f0 2026-03-09T15:04:34.848 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.844+0000 7f9cdf7fe700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9cd4077ab0 0x7f9cd4079f60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:34.848 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.844+0000 7f9cdf7fe700 1 -- 192.168.123.105:0/3882841103 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(46..46 src has 1..46) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f9cd009ab80 con 0x7f9ce80686f0 2026-03-09T15:04:34.848 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.845+0000 7f9ce5d9b700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9cd4077ab0 0x7f9cd4079f60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload 
supported=3 required=0 2026-03-09T15:04:34.848 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.846+0000 7f9ce5d9b700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9cd4077ab0 0x7f9cd4079f60 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f9cd800b5c0 tx=0x7f9cd80058e0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:34.848 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.847+0000 7f9cdf7fe700 1 -- 192.168.123.105:0/3882841103 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9cd0063210 con 0x7f9ce80686f0 2026-03-09T15:04:34.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:34 vm09.local ceph-mon[98742]: Upgrade: Updating osd.1 2026-03-09T15:04:34.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:34 vm09.local ceph-mon[98742]: Deploying daemon osd.1 on vm05 2026-03-09T15:04:34.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:34 vm09.local ceph-mon[98742]: pgmap v34: 65 pgs: 65 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 716 B/s rd, 0 op/s; 119 B/s, 0 objects/s recovering 2026-03-09T15:04:34.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:34 vm09.local ceph-mon[98742]: osd.1 marked itself down and dead 2026-03-09T15:04:34.977 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.976+0000 7f9cecf70700 1 -- 192.168.123.105:0/3882841103 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f9ce8197640 con 0x7f9cd4077ab0 2026-03-09T15:04:34.977 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.977+0000 7f9cdf7fe700 1 -- 192.168.123.105:0/3882841103 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== 
mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f9ce8197640 con 0x7f9cd4077ab0 2026-03-09T15:04:34.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.980+0000 7f9cecf70700 1 -- 192.168.123.105:0/3882841103 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9cd4077ab0 msgr2=0x7f9cd4079f60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:34.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.980+0000 7f9cecf70700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9cd4077ab0 0x7f9cd4079f60 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f9cd800b5c0 tx=0x7f9cd80058e0 comp rx=0 tx=0).stop 2026-03-09T15:04:34.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.980+0000 7f9cecf70700 1 -- 192.168.123.105:0/3882841103 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9ce80686f0 msgr2=0x7f9ce81962e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:34.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.980+0000 7f9cecf70700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9ce80686f0 0x7f9ce81962e0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f9cd000d8d0 tx=0x7f9cd000dc90 comp rx=0 tx=0).stop 2026-03-09T15:04:34.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.981+0000 7f9cecf70700 1 -- 192.168.123.105:0/3882841103 shutdown_connections 2026-03-09T15:04:34.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.981+0000 7f9cecf70700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9cd4077ab0 0x7f9cd4079f60 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:34.981 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.981+0000 7f9cecf70700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9ce80686f0 0x7f9ce81962e0 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:34.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.981+0000 7f9cecf70700 1 --2- 192.168.123.105:0/3882841103 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9ce8069000 0x7f9ce8196820 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:34.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.981+0000 7f9cecf70700 1 -- 192.168.123.105:0/3882841103 >> 192.168.123.105:0/3882841103 conn(0x7f9ce80754a0 msgr2=0x7f9ce80fea20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:34.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.981+0000 7f9cecf70700 1 -- 192.168.123.105:0/3882841103 shutdown_connections 2026-03-09T15:04:34.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:34.981+0000 7f9cecf70700 1 -- 192.168.123.105:0/3882841103 wait complete. 
2026-03-09T15:04:35.057 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.057+0000 7f079a5b2700 1 -- 192.168.123.105:0/1336332107 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0794102780 msgr2=0x7f0794102bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.057+0000 7f079a5b2700 1 --2- 192.168.123.105:0/1336332107 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0794102780 0x7f0794102bf0 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f0784009b00 tx=0x7f0784009e10 comp rx=0 tx=0).stop 2026-03-09T15:04:35.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.057+0000 7f079a5b2700 1 -- 192.168.123.105:0/1336332107 shutdown_connections 2026-03-09T15:04:35.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.057+0000 7f079a5b2700 1 --2- 192.168.123.105:0/1336332107 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0794102780 0x7f0794102bf0 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.057+0000 7f079a5b2700 1 --2- 192.168.123.105:0/1336332107 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0794108780 0x7f0794108b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.057+0000 7f079a5b2700 1 -- 192.168.123.105:0/1336332107 >> 192.168.123.105:0/1336332107 conn(0x7f07940fe280 msgr2=0x7f0794100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:35.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.058+0000 7f079a5b2700 1 -- 192.168.123.105:0/1336332107 shutdown_connections 2026-03-09T15:04:35.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.058+0000 7f079a5b2700 1 -- 192.168.123.105:0/1336332107 
wait complete. 2026-03-09T15:04:35.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.058+0000 7f079a5b2700 1 Processor -- start 2026-03-09T15:04:35.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.058+0000 7f079a5b2700 1 -- start start 2026-03-09T15:04:35.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.059+0000 7f079a5b2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0794102780 0x7f0794075260 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:35.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.059+0000 7f079a5b2700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0794108780 0x7f07940757a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:35.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.059+0000 7f079a5b2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f07940793f0 con 0x7f0794102780 2026-03-09T15:04:35.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.059+0000 7f079a5b2700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0794075ce0 con 0x7f0794108780 2026-03-09T15:04:35.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.059+0000 7f0793fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0794102780 0x7f0794075260 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:35.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.059+0000 7f07937fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0794108780 0x7f07940757a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T15:04:35.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.059+0000 7f07937fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0794108780 0x7f07940757a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:55138/0 (socket says 192.168.123.105:55138) 2026-03-09T15:04:35.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.059+0000 7f07937fe700 1 -- 192.168.123.105:0/958708501 learned_addr learned my addr 192.168.123.105:0/958708501 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:04:35.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.059+0000 7f07937fe700 1 -- 192.168.123.105:0/958708501 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0794102780 msgr2=0x7f0794075260 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.059+0000 7f07937fe700 1 --2- 192.168.123.105:0/958708501 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0794102780 0x7f0794075260 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.059+0000 7f07937fe700 1 -- 192.168.123.105:0/958708501 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f07840097e0 con 0x7f0794108780 2026-03-09T15:04:35.060 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.060+0000 7f07937fe700 1 --2- 192.168.123.105:0/958708501 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0794108780 0x7f07940757a0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f0784004930 tx=0x7f0784004a10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:35.060 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.060+0000 7f07917fa700 1 -- 192.168.123.105:0/958708501 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f078401d070 con 0x7f0794108780 2026-03-09T15:04:35.060 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.060+0000 7f079a5b2700 1 -- 192.168.123.105:0/958708501 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0794075f60 con 0x7f0794108780 2026-03-09T15:04:35.060 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.060+0000 7f079a5b2700 1 -- 192.168.123.105:0/958708501 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f07941a6ad0 con 0x7f0794108780 2026-03-09T15:04:35.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.060+0000 7f07917fa700 1 -- 192.168.123.105:0/958708501 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f078400bc50 con 0x7f0794108780 2026-03-09T15:04:35.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.060+0000 7f07917fa700 1 -- 192.168.123.105:0/958708501 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f078400f790 con 0x7f0794108780 2026-03-09T15:04:35.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.061+0000 7f079a5b2700 1 -- 192.168.123.105:0/958708501 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0774005320 con 0x7f0794108780 2026-03-09T15:04:35.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.062+0000 7f07917fa700 1 -- 192.168.123.105:0/958708501 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f0784022470 con 0x7f0794108780 2026-03-09T15:04:35.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.062+0000 7f07917fa700 1 --2- 
192.168.123.105:0/958708501 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07800779a0 0x7f0780079e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:35.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.062+0000 7f07917fa700 1 -- 192.168.123.105:0/958708501 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(46..46 src has 1..46) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f078409b290 con 0x7f0794108780 2026-03-09T15:04:35.063 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.063+0000 7f0793fff700 1 --2- 192.168.123.105:0/958708501 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07800779a0 0x7f0780079e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:35.063 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.063+0000 7f0793fff700 1 --2- 192.168.123.105:0/958708501 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07800779a0 0x7f0780079e50 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f077c005fd0 tx=0x7f077c005dd0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:35.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.065+0000 7f07917fa700 1 -- 192.168.123.105:0/958708501 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f0784063950 con 0x7f0794108780 2026-03-09T15:04:35.193 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.193+0000 7f079a5b2700 1 -- 192.168.123.105:0/958708501 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f0774000bf0 con 0x7f07800779a0 2026-03-09T15:04:35.198 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.198+0000 7f07917fa700 1 -- 192.168.123.105:0/958708501 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f0774000bf0 con 0x7f07800779a0 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (101s) 18s ago 8m 20.1M - 0.25.0 c8568f914cd2 7635cece310c 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (8m) 18s ago 8m 8841k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (7m) 27s ago 7m 11.2M - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (30s) 18s ago 8m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 35d8c0ae5a58 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (28s) 27s ago 7m 8308k - 19.2.3-678-ge911bdeb 654f31e6858e 82bdad36caf9 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (86s) 18s ago 8m 84.6M - 10.4.0 c8b91775d855 eb6431f63d88 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (6m) 18s ago 6m 182M - 18.2.0 dc2bc1663786 ea3dca51957f 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (6m) 18s ago 6m 17.4M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (6m) 27s ago 6m 17.0M - 18.2.0 dc2bc1663786 6c77fb591d5a 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (6m) 27s ago 6m 133M - 18.2.0 dc2bc1663786 
b5ad1c71089a 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (2m) 18s ago 9m 614M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (2m) 27s ago 7m 495M - 19.2.3-678-ge911bdeb 654f31e6858e acf5a6f3f804 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (61s) 18s ago 9m 57.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1e11655f7d87 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (44s) 27s ago 7m 50.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e d1f0309f4d58 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (2m) 18s ago 8m 9.88M - 1.7.0 72c9c2088986 888d071c50d9 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (2m) 27s ago 7m 9445k - 1.7.0 72c9c2088986 22c96a576a60 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (20s) 18s ago 7m 33.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f2883abca2d2 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (7m) 18s ago 7m 336M 4096M 18.2.0 dc2bc1663786 23e35bdafe50 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (7m) 18s ago 7m 293M 4096M 18.2.0 dc2bc1663786 75097dc12979 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (6m) 27s ago 6m 436M 4096M 18.2.0 dc2bc1663786 e79644a0564f 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (6m) 27s ago 6m 382M 4096M 18.2.0 dc2bc1663786 4239752204df 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (6m) 27s ago 6m 334M 4096M 18.2.0 dc2bc1663786 85fde149396e 2026-03-09T15:04:35.198 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 
*:9095 running (108s) 18s ago 8m 53.3M - 2.51.0 1d3b7f56885b e6f470b0ba11 2026-03-09T15:04:35.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.201+0000 7f079a5b2700 1 -- 192.168.123.105:0/958708501 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07800779a0 msgr2=0x7f0780079e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.201+0000 7f079a5b2700 1 --2- 192.168.123.105:0/958708501 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07800779a0 0x7f0780079e50 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f077c005fd0 tx=0x7f077c005dd0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.201+0000 7f079a5b2700 1 -- 192.168.123.105:0/958708501 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0794108780 msgr2=0x7f07940757a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.201+0000 7f079a5b2700 1 --2- 192.168.123.105:0/958708501 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0794108780 0x7f07940757a0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f0784004930 tx=0x7f0784004a10 comp rx=0 tx=0).stop 2026-03-09T15:04:35.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.201+0000 7f079a5b2700 1 -- 192.168.123.105:0/958708501 shutdown_connections 2026-03-09T15:04:35.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.201+0000 7f079a5b2700 1 --2- 192.168.123.105:0/958708501 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07800779a0 0x7f0780079e50 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.201+0000 7f079a5b2700 1 --2- 
192.168.123.105:0/958708501 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0794102780 0x7f0794075260 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.201+0000 7f079a5b2700 1 --2- 192.168.123.105:0/958708501 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0794108780 0x7f07940757a0 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.201+0000 7f079a5b2700 1 -- 192.168.123.105:0/958708501 >> 192.168.123.105:0/958708501 conn(0x7f07940fe280 msgr2=0x7f07940ffa40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:35.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.201+0000 7f079a5b2700 1 -- 192.168.123.105:0/958708501 shutdown_connections 2026-03-09T15:04:35.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.201+0000 7f079a5b2700 1 -- 192.168.123.105:0/958708501 wait complete. 
2026-03-09T15:04:35.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.272+0000 7fba517d2700 1 -- 192.168.123.105:0/768773937 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba4c1013a0 msgr2=0x7fba4c101770 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.272+0000 7fba517d2700 1 --2- 192.168.123.105:0/768773937 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba4c1013a0 0x7fba4c101770 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7fba40009b50 tx=0x7fba40009e60 comp rx=0 tx=0).stop 2026-03-09T15:04:35.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.273+0000 7fba517d2700 1 -- 192.168.123.105:0/768773937 shutdown_connections 2026-03-09T15:04:35.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.273+0000 7fba517d2700 1 --2- 192.168.123.105:0/768773937 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba4c068490 0x7fba4c068900 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.273+0000 7fba517d2700 1 --2- 192.168.123.105:0/768773937 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba4c1013a0 0x7fba4c101770 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.273+0000 7fba517d2700 1 -- 192.168.123.105:0/768773937 >> 192.168.123.105:0/768773937 conn(0x7fba4c0754a0 msgr2=0x7fba4c0758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:35.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.273+0000 7fba517d2700 1 -- 192.168.123.105:0/768773937 shutdown_connections 2026-03-09T15:04:35.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.273+0000 7fba517d2700 1 -- 192.168.123.105:0/768773937 wait 
complete. 2026-03-09T15:04:35.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.274+0000 7fba517d2700 1 Processor -- start 2026-03-09T15:04:35.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.274+0000 7fba517d2700 1 -- start start 2026-03-09T15:04:35.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.274+0000 7fba517d2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba4c068490 0x7fba4c105780 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:35.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.274+0000 7fba517d2700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba4c1013a0 0x7fba4c105cc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:35.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.274+0000 7fba517d2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fba4c106290 con 0x7fba4c068490 2026-03-09T15:04:35.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.274+0000 7fba517d2700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fba4c1063d0 con 0x7fba4c1013a0 2026-03-09T15:04:35.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.274+0000 7fba4affd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba4c068490 0x7fba4c105780 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:35.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.274+0000 7fba4a7fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba4c1013a0 0x7fba4c105cc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T15:04:35.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.274+0000 7fba4a7fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba4c1013a0 0x7fba4c105cc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:55162/0 (socket says 192.168.123.105:55162) 2026-03-09T15:04:35.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.274+0000 7fba4a7fc700 1 -- 192.168.123.105:0/1778969542 learned_addr learned my addr 192.168.123.105:0/1778969542 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:04:35.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.275+0000 7fba4affd700 1 -- 192.168.123.105:0/1778969542 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba4c1013a0 msgr2=0x7fba4c105cc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.275+0000 7fba4affd700 1 --2- 192.168.123.105:0/1778969542 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba4c1013a0 0x7fba4c105cc0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.275+0000 7fba4affd700 1 -- 192.168.123.105:0/1778969542 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fba400097e0 con 0x7fba4c068490 2026-03-09T15:04:35.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.275+0000 7fba4a7fc700 1 --2- 192.168.123.105:0/1778969542 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba4c1013a0 0x7fba4c105cc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T15:04:35.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.275+0000 7fba4affd700 1 --2- 192.168.123.105:0/1778969542 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba4c068490 0x7fba4c105780 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fba40009b50 tx=0x7fba40004c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:35.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.276+0000 7fba33fff700 1 -- 192.168.123.105:0/1778969542 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fba4001d070 con 0x7fba4c068490 2026-03-09T15:04:35.276 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.276+0000 7fba33fff700 1 -- 192.168.123.105:0/1778969542 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fba40022470 con 0x7fba4c068490 2026-03-09T15:04:35.276 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.276+0000 7fba33fff700 1 -- 192.168.123.105:0/1778969542 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fba4000f650 con 0x7fba4c068490 2026-03-09T15:04:35.276 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.276+0000 7fba517d2700 1 -- 192.168.123.105:0/1778969542 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fba4c1aedc0 con 0x7fba4c068490 2026-03-09T15:04:35.276 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.276+0000 7fba517d2700 1 -- 192.168.123.105:0/1778969542 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fba4c1af2b0 con 0x7fba4c068490 2026-03-09T15:04:35.278 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.278+0000 7fba33fff700 1 -- 192.168.123.105:0/1778969542 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fba4000f7b0 con 
0x7fba4c068490 2026-03-09T15:04:35.278 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.278+0000 7fba517d2700 1 -- 192.168.123.105:0/1778969542 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fba4c112f60 con 0x7fba4c068490 2026-03-09T15:04:35.283 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.283+0000 7fba33fff700 1 --2- 192.168.123.105:0/1778969542 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fba34077a00 0x7fba34079eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:35.283 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.283+0000 7fba4a7fc700 1 --2- 192.168.123.105:0/1778969542 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fba34077a00 0x7fba34079eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:35.283 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.284+0000 7fba33fff700 1 -- 192.168.123.105:0/1778969542 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(46..46 src has 1..46) v4 ==== 6136+0+0 (secure 0 0 0) 0x7fba4009c200 con 0x7fba4c068490 2026-03-09T15:04:35.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.284+0000 7fba33fff700 1 -- 192.168.123.105:0/1778969542 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fba400648c0 con 0x7fba4c068490 2026-03-09T15:04:35.284 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.284+0000 7fba4a7fc700 1 --2- 192.168.123.105:0/1778969542 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fba34077a00 0x7fba34079eb0 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7fba4c106da0 tx=0x7fba3c00b480 comp rx=0 
tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:35.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.492+0000 7fba517d2700 1 -- 192.168.123.105:0/1778969542 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fba4c04ea50 con 0x7fba4c068490 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.493+0000 7fba33fff700 1 -- 192.168.123.105:0/1778969542 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+799 (secure 0 0 0) 0x7fba40064010 con 0x7fba4c068490 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4, 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 
(5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 8, 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:04:35.493 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:04:35.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.499+0000 7fba517d2700 1 -- 192.168.123.105:0/1778969542 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fba34077a00 msgr2=0x7fba34079eb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.499+0000 7fba517d2700 1 --2- 192.168.123.105:0/1778969542 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fba34077a00 0x7fba34079eb0 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7fba4c106da0 tx=0x7fba3c00b480 comp rx=0 tx=0).stop 2026-03-09T15:04:35.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.499+0000 7fba517d2700 1 -- 192.168.123.105:0/1778969542 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba4c068490 msgr2=0x7fba4c105780 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.499+0000 7fba517d2700 1 --2- 192.168.123.105:0/1778969542 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba4c068490 0x7fba4c105780 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fba40009b50 tx=0x7fba40004c80 comp rx=0 tx=0).stop 2026-03-09T15:04:35.499 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.499+0000 7fba517d2700 1 -- 192.168.123.105:0/1778969542 shutdown_connections 2026-03-09T15:04:35.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.499+0000 7fba517d2700 1 --2- 192.168.123.105:0/1778969542 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fba34077a00 0x7fba34079eb0 unknown :-1 s=CLOSED pgs=31 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.499+0000 7fba517d2700 1 --2- 192.168.123.105:0/1778969542 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba4c068490 0x7fba4c105780 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.499+0000 7fba517d2700 1 --2- 192.168.123.105:0/1778969542 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba4c1013a0 0x7fba4c105cc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.499+0000 7fba517d2700 1 -- 192.168.123.105:0/1778969542 >> 192.168.123.105:0/1778969542 conn(0x7fba4c0754a0 msgr2=0x7fba4c0fdeb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:35.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.500+0000 7fba517d2700 1 -- 192.168.123.105:0/1778969542 shutdown_connections 2026-03-09T15:04:35.499 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.500+0000 7fba517d2700 1 -- 192.168.123.105:0/1778969542 wait complete. 
2026-03-09T15:04:35.577 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:35 vm05.local ceph-mon[116516]: from='client.34152 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:04:35.577 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:35 vm05.local ceph-mon[116516]: from='client.34156 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:04:35.577 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:35 vm05.local ceph-mon[116516]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T15:04:35.577 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:35 vm05.local ceph-mon[116516]: osdmap e47: 6 total, 5 up, 6 in 2026-03-09T15:04:35.577 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:35 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/1778969542' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:35.577 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local podman[129690]: 2026-03-09 15:04:35.423653447 +0000 UTC m=+0.930714173 container died 23e35bdafe5055c7dd67f66446150e1e7adf89ba727481131f6c20cba70e25a6 (image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1, org.label-schema.build-date=20231212, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.vendor=CentOS, GIT_BRANCH=HEAD, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.29.1, maintainer=Guillaume Abrioux , org.label-schema.license=GPLv2, CEPH_POINT_RELEASE=-18.2.0, RELEASE=HEAD, org.label-schema.schema-version=1.0, GIT_CLEAN=True, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308) 2026-03-09T15:04:35.577 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local podman[129690]: 2026-03-09 15:04:35.466762565 +0000 UTC m=+0.973823280 container remove 
23e35bdafe5055c7dd67f66446150e1e7adf89ba727481131f6c20cba70e25a6 (image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1, GIT_REPO=https://github.com/ceph/ceph-container.git, org.label-schema.build-date=20231212, org.label-schema.license=GPLv2, CEPH_POINT_RELEASE=-18.2.0, GIT_BRANCH=HEAD, RELEASE=HEAD, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux , org.label-schema.vendor=CentOS, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, io.buildah.version=1.29.1, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.schema-version=1.0) 2026-03-09T15:04:35.577 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local bash[129690]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1 2026-03-09T15:04:35.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.608+0000 7f1fb37ce700 1 -- 192.168.123.105:0/4216181881 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1fac071e40 msgr2=0x7f1fac0722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.608+0000 7f1fb37ce700 1 --2- 192.168.123.105:0/4216181881 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1fac071e40 0x7f1fac0722b0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f1fa400a390 tx=0x7f1fa400a6a0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.608+0000 7f1fb37ce700 1 -- 192.168.123.105:0/4216181881 shutdown_connections 2026-03-09T15:04:35.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.608+0000 7f1fb37ce700 1 --2- 192.168.123.105:0/4216181881 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1fac071e40 0x7f1fac0722b0 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.609 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.608+0000 7f1fb37ce700 1 --2- 192.168.123.105:0/4216181881 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1fac10c8b0 0x7f1fac10cc80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.608+0000 7f1fb37ce700 1 -- 192.168.123.105:0/4216181881 >> 192.168.123.105:0/4216181881 conn(0x7f1fac06c6c0 msgr2=0x7f1fac06cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:35.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.608+0000 7f1fb37ce700 1 -- 192.168.123.105:0/4216181881 shutdown_connections 2026-03-09T15:04:35.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.608+0000 7f1fb37ce700 1 -- 192.168.123.105:0/4216181881 wait complete. 2026-03-09T15:04:35.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.608+0000 7f1fb37ce700 1 Processor -- start 2026-03-09T15:04:35.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.608+0000 7f1fb37ce700 1 -- start start 2026-03-09T15:04:35.609 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.609+0000 7f1fb37ce700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1fac10c8b0 0x7f1fac07cf20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:35.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.609+0000 7f1fb37ce700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1fac07d460 0x7f1fac07d8d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:35.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.609+0000 7f1fb37ce700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1fac081aa0 con 0x7f1fac10c8b0 2026-03-09T15:04:35.610 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.609+0000 7f1fb37ce700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1fac081c10 con 0x7f1fac07d460 2026-03-09T15:04:35.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.609+0000 7f1fb156a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1fac10c8b0 0x7f1fac07cf20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:35.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.609+0000 7f1fb156a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1fac10c8b0 0x7f1fac07cf20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:60552/0 (socket says 192.168.123.105:60552) 2026-03-09T15:04:35.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.609+0000 7f1fb156a700 1 -- 192.168.123.105:0/2774090193 learned_addr learned my addr 192.168.123.105:0/2774090193 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:04:35.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.609+0000 7f1fb0d69700 1 --2- 192.168.123.105:0/2774090193 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1fac07d460 0x7f1fac07d8d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:35.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.609+0000 7f1fb0d69700 1 -- 192.168.123.105:0/2774090193 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1fac10c8b0 msgr2=0x7f1fac07cf20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.609+0000 7f1fb0d69700 1 --2- 
192.168.123.105:0/2774090193 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1fac10c8b0 0x7f1fac07cf20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.610 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.609+0000 7f1fb0d69700 1 -- 192.168.123.105:0/2774090193 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1fa400a040 con 0x7f1fac07d460 2026-03-09T15:04:35.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.610+0000 7f1fb0d69700 1 --2- 192.168.123.105:0/2774090193 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1fac07d460 0x7f1fac07d8d0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f1fa400a750 tx=0x7f1fa400a880 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:35.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.610+0000 7f1fa27fc700 1 -- 192.168.123.105:0/2774090193 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1fa400b680 con 0x7f1fac07d460 2026-03-09T15:04:35.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.610+0000 7f1fb37ce700 1 -- 192.168.123.105:0/2774090193 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1fac081e90 con 0x7f1fac07d460 2026-03-09T15:04:35.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.610+0000 7f1fb37ce700 1 -- 192.168.123.105:0/2774090193 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1fac0823e0 con 0x7f1fac07d460 2026-03-09T15:04:35.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.611+0000 7f1fa27fc700 1 -- 192.168.123.105:0/2774090193 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f1fa40075c0 con 0x7f1fac07d460 2026-03-09T15:04:35.611 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.611+0000 7f1fa27fc700 1 -- 192.168.123.105:0/2774090193 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1fa4004050 con 0x7f1fac07d460 2026-03-09T15:04:35.611 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.611+0000 7f1f97fff700 1 -- 192.168.123.105:0/2774090193 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1f90005320 con 0x7f1fac07d460 2026-03-09T15:04:35.613 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.612+0000 7f1fa27fc700 1 -- 192.168.123.105:0/2774090193 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f1fa401a070 con 0x7f1fac07d460 2026-03-09T15:04:35.613 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.613+0000 7f1fa27fc700 1 --2- 192.168.123.105:0/2774090193 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f1f98077a00 0x7f1f98079eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:35.613 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.613+0000 7f1fa27fc700 1 -- 192.168.123.105:0/2774090193 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(47..47 src has 1..47) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f1fa409b3d0 con 0x7f1fac07d460 2026-03-09T15:04:35.613 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.613+0000 7f1fb156a700 1 --2- 192.168.123.105:0/2774090193 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f1f98077a00 0x7f1f98079eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:35.614 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.614+0000 7f1fb156a700 1 --2- 192.168.123.105:0/2774090193 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f1f98077a00 0x7f1f98079eb0 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f1fa800b3c0 tx=0x7f1fa800d040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:35.615 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.615+0000 7f1fa27fc700 1 -- 192.168.123.105:0/2774090193 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f1fa4063b70 con 0x7f1fac07d460 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.768+0000 7f1f97fff700 1 -- 192.168.123.105:0/2774090193 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f1f90005cc0 con 0x7f1fac07d460 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.774+0000 7f1fa27fc700 1 -- 192.168.123.105:0/2774090193 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1944 (secure 0 0 0) 0x7f1fa40632c0 con 0x7f1fac07d460 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stdout:e11 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stdout:legacy client fscid: 1 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:04:35.782 
INFO:teuthology.orchestra.run.vm05.stdout:Filesystem 'cephfs' (1) 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stdout:fs_name cephfs 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stdout:epoch 9 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stdout:created 2026-03-09T14:58:23.182447+0000 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stdout:modified 2026-03-09T14:58:30.215642+0000 2026-03-09T15:04:35.782 INFO:teuthology.orchestra.run.vm05.stdout:tableserver 0 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:root 0 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:session_timeout 60 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:session_autoclose 300 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:max_file_size 1099511627776 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:max_xattr_size 65536 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:required_client_features {} 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:last_failure 0 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:last_failure_osd_epoch 0 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:max_mds 1 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:in 0 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:up {0=14502} 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:failed 
2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:damaged 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:stopped 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:data_pools [3] 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:metadata_pool 2 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:inline_data enabled 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:balancer 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:bal_rank_mask -1 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:standby_count_wanted 1 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:qdb_cluster leader: 0 members: 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.nrocqt{0:14502} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.ohmitn{0:14510} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:Standby daemons: 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.rrcyql{-1:14518} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:04:35.783 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.jrhwzz{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] compat 
{c=[1],r=[1],i=[7ff]}] 2026-03-09T15:04:35.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.787+0000 7f1fb37ce700 1 -- 192.168.123.105:0/2774090193 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f1f98077a00 msgr2=0x7f1f98079eb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.787+0000 7f1fb37ce700 1 --2- 192.168.123.105:0/2774090193 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f1f98077a00 0x7f1f98079eb0 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f1fa800b3c0 tx=0x7f1fa800d040 comp rx=0 tx=0).stop 2026-03-09T15:04:35.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.787+0000 7f1fb37ce700 1 -- 192.168.123.105:0/2774090193 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1fac07d460 msgr2=0x7f1fac07d8d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.787+0000 7f1fb37ce700 1 --2- 192.168.123.105:0/2774090193 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1fac07d460 0x7f1fac07d8d0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f1fa400a750 tx=0x7f1fa400a880 comp rx=0 tx=0).stop 2026-03-09T15:04:35.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.787+0000 7f1fb37ce700 1 -- 192.168.123.105:0/2774090193 shutdown_connections 2026-03-09T15:04:35.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.787+0000 7f1fb37ce700 1 --2- 192.168.123.105:0/2774090193 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f1f98077a00 0x7f1f98079eb0 unknown :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.787+0000 7f1fb37ce700 1 --2- 192.168.123.105:0/2774090193 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1fac10c8b0 0x7f1fac07cf20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.787+0000 7f1fb37ce700 1 --2- 192.168.123.105:0/2774090193 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1fac07d460 0x7f1fac07d8d0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.787+0000 7f1fb37ce700 1 -- 192.168.123.105:0/2774090193 >> 192.168.123.105:0/2774090193 conn(0x7f1fac06c6c0 msgr2=0x7f1fac070970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:35.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.787+0000 7f1fb37ce700 1 -- 192.168.123.105:0/2774090193 shutdown_connections 2026-03-09T15:04:35.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.787+0000 7f1fb37ce700 1 -- 192.168.123.105:0/2774090193 wait complete. 
2026-03-09T15:04:35.789 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 11 2026-03-09T15:04:35.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.864+0000 7f07df26e700 1 -- 192.168.123.105:0/1324281757 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07d810e9e0 msgr2=0x7f07d810edb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.864+0000 7f07df26e700 1 --2- 192.168.123.105:0/1324281757 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07d810e9e0 0x7f07d810edb0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f07d4009b00 tx=0x7f07d4009e10 comp rx=0 tx=0).stop 2026-03-09T15:04:35.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.864+0000 7f07df26e700 1 -- 192.168.123.105:0/1324281757 shutdown_connections 2026-03-09T15:04:35.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.864+0000 7f07df26e700 1 --2- 192.168.123.105:0/1324281757 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f07d8071b60 0x7f07d8071fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.864+0000 7f07df26e700 1 --2- 192.168.123.105:0/1324281757 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07d810e9e0 0x7f07d810edb0 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.864+0000 7f07df26e700 1 -- 192.168.123.105:0/1324281757 >> 192.168.123.105:0/1324281757 conn(0x7f07d806c6c0 msgr2=0x7f07d806cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:35.865 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local podman[129874]: 2026-03-09 15:04:35.733342887 +0000 UTC m=+0.028880323 container create 
bf98fea2039e2373f18de14af3af2a0f654f65a6dafd8f9aa38c988315071e25 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-deactivate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T15:04:35.865 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local podman[129874]: 2026-03-09 15:04:35.774142482 +0000 UTC m=+0.069679918 container init bf98fea2039e2373f18de14af3af2a0f654f65a6dafd8f9aa38c988315071e25 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True) 2026-03-09T15:04:35.865 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 
vm05.local podman[129874]: 2026-03-09 15:04:35.784867309 +0000 UTC m=+0.080404745 container start bf98fea2039e2373f18de14af3af2a0f654f65a6dafd8f9aa38c988315071e25 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-deactivate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS) 2026-03-09T15:04:35.865 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local podman[129874]: 2026-03-09 15:04:35.787475691 +0000 UTC m=+0.083013127 container attach bf98fea2039e2373f18de14af3af2a0f654f65a6dafd8f9aa38c988315071e25 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-deactivate, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
org.label-schema.license=GPLv2) 2026-03-09T15:04:35.865 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local podman[129874]: 2026-03-09 15:04:35.717076669 +0000 UTC m=+0.012614115 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:04:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:35 vm09.local ceph-mon[98742]: from='client.34152 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:04:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:35 vm09.local ceph-mon[98742]: from='client.34156 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:04:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:35 vm09.local ceph-mon[98742]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T15:04:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:35 vm09.local ceph-mon[98742]: osdmap e47: 6 total, 5 up, 6 in 2026-03-09T15:04:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:35 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/1778969542' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:35.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.869+0000 7f07df26e700 1 -- 192.168.123.105:0/1324281757 shutdown_connections 2026-03-09T15:04:35.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.869+0000 7f07df26e700 1 -- 192.168.123.105:0/1324281757 wait complete. 
2026-03-09T15:04:35.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.870+0000 7f07df26e700 1 Processor -- start 2026-03-09T15:04:35.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.870+0000 7f07df26e700 1 -- start start 2026-03-09T15:04:35.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.870+0000 7f07df26e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07d8071b60 0x7f07d8119590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:35.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.870+0000 7f07df26e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f07d810e9e0 0x7f07d8114590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:35.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.870+0000 7f07df26e700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f07d8114ad0 con 0x7f07d8071b60 2026-03-09T15:04:35.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.870+0000 7f07df26e700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f07d8114c10 con 0x7f07d810e9e0 2026-03-09T15:04:35.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.871+0000 7f07dda6b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f07d810e9e0 0x7f07d8114590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:35.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.871+0000 7f07de26c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07d8071b60 0x7f07d8119590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T15:04:35.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.871+0000 7f07dda6b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f07d810e9e0 0x7f07d8114590 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:55196/0 (socket says 192.168.123.105:55196) 2026-03-09T15:04:35.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.871+0000 7f07dda6b700 1 -- 192.168.123.105:0/780247726 learned_addr learned my addr 192.168.123.105:0/780247726 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:04:35.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.871+0000 7f07dda6b700 1 -- 192.168.123.105:0/780247726 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07d8071b60 msgr2=0x7f07d8119590 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:35.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.871+0000 7f07dda6b700 1 --2- 192.168.123.105:0/780247726 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07d8071b60 0x7f07d8119590 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:35.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.871+0000 7f07dda6b700 1 -- 192.168.123.105:0/780247726 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f07d40097e0 con 0x7f07d810e9e0 2026-03-09T15:04:35.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.872+0000 7f07dda6b700 1 --2- 192.168.123.105:0/780247726 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f07d810e9e0 0x7f07d8114590 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f07d000d350 tx=0x7f07d000d660 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:35.876 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.876+0000 7f07cf7fe700 1 -- 192.168.123.105:0/780247726 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f07d0015400 con 0x7f07d810e9e0 2026-03-09T15:04:35.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.876+0000 7f07df26e700 1 -- 192.168.123.105:0/780247726 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f07d8114ef0 con 0x7f07d810e9e0 2026-03-09T15:04:35.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.876+0000 7f07df26e700 1 -- 192.168.123.105:0/780247726 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f07d81b7a80 con 0x7f07d810e9e0 2026-03-09T15:04:35.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.877+0000 7f07cf7fe700 1 -- 192.168.123.105:0/780247726 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f07d000f040 con 0x7f07d810e9e0 2026-03-09T15:04:35.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.877+0000 7f07cf7fe700 1 -- 192.168.123.105:0/780247726 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f07d0014910 con 0x7f07d810e9e0 2026-03-09T15:04:35.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.877+0000 7f07df26e700 1 -- 192.168.123.105:0/780247726 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f07bc005320 con 0x7f07d810e9e0 2026-03-09T15:04:35.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.881+0000 7f07cf7fe700 1 -- 192.168.123.105:0/780247726 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f07d0014b70 con 0x7f07d810e9e0 2026-03-09T15:04:35.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.881+0000 7f07cf7fe700 1 --2- 
192.168.123.105:0/780247726 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07c40778f0 0x7f07c4079da0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:35.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.881+0000 7f07de26c700 1 --2- 192.168.123.105:0/780247726 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07c40778f0 0x7f07c4079da0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:35.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.882+0000 7f07de26c700 1 --2- 192.168.123.105:0/780247726 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07c40778f0 0x7f07c4079da0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f07d400b5c0 tx=0x7f07d4005fb0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:35.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.882+0000 7f07cf7fe700 1 -- 192.168.123.105:0/780247726 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(47..47 src has 1..47) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f07d0099c80 con 0x7f07d810e9e0 2026-03-09T15:04:35.883 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:35.883+0000 7f07cf7fe700 1 -- 192.168.123.105:0/780247726 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f07d00623c0 con 0x7f07d810e9e0 2026-03-09T15:04:36.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.028+0000 7f07df26e700 1 -- 192.168.123.105:0/780247726 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f07bc000bf0 con 0x7f07c40778f0 2026-03-09T15:04:36.030 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.029+0000 7f07cf7fe700 1 -- 192.168.123.105:0/780247726 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f07bc000bf0 con 0x7f07c40778f0 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [ 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout: "mgr", 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout: "mon", 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout: "crash" 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout: ], 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "7/23 daemons upgraded", 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:04:36.030 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:04:36.034 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.034+0000 7f07cd7fa700 1 -- 192.168.123.105:0/780247726 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07c40778f0 msgr2=0x7f07c4079da0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:36.034 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.034+0000 7f07cd7fa700 1 --2- 192.168.123.105:0/780247726 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07c40778f0 0x7f07c4079da0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f07d400b5c0 tx=0x7f07d4005fb0 comp rx=0 tx=0).stop 2026-03-09T15:04:36.034 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.034+0000 7f07cd7fa700 1 -- 192.168.123.105:0/780247726 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f07d810e9e0 msgr2=0x7f07d8114590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:36.034 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.034+0000 7f07cd7fa700 1 --2- 192.168.123.105:0/780247726 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f07d810e9e0 0x7f07d8114590 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f07d000d350 tx=0x7f07d000d660 comp rx=0 tx=0).stop 2026-03-09T15:04:36.034 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.034+0000 7f07cd7fa700 1 -- 192.168.123.105:0/780247726 shutdown_connections 2026-03-09T15:04:36.034 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.034+0000 7f07cd7fa700 1 --2- 192.168.123.105:0/780247726 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07c40778f0 0x7f07c4079da0 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:36.034 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.034+0000 7f07cd7fa700 1 --2- 192.168.123.105:0/780247726 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f07d8071b60 0x7f07d8119590 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:36.034 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.034+0000 7f07cd7fa700 1 --2- 192.168.123.105:0/780247726 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f07d810e9e0 0x7f07d8114590 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:04:36.035 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.034+0000 7f07cd7fa700 1 -- 192.168.123.105:0/780247726 >> 192.168.123.105:0/780247726 conn(0x7f07d806c6c0 msgr2=0x7f07d806cfb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:36.035 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.035+0000 7f07cd7fa700 1 -- 192.168.123.105:0/780247726 shutdown_connections 2026-03-09T15:04:36.035 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.035+0000 7f07cd7fa700 1 -- 192.168.123.105:0/780247726 wait complete. 2026-03-09T15:04:36.133 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local conmon[129884]: conmon bf98fea2039e2373f18d : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf98fea2039e2373f18de14af3af2a0f654f65a6dafd8f9aa38c988315071e25.scope/container/memory.events 2026-03-09T15:04:36.133 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local podman[129874]: 2026-03-09 15:04:35.940153225 +0000 UTC m=+0.235690661 container died bf98fea2039e2373f18de14af3af2a0f654f65a6dafd8f9aa38c988315071e25 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-deactivate, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) 2026-03-09T15:04:36.133 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 
vm05.local podman[129874]: 2026-03-09 15:04:35.961171389 +0000 UTC m=+0.256708825 container remove bf98fea2039e2373f18de14af3af2a0f654f65a6dafd8f9aa38c988315071e25 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-deactivate, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T15:04:36.133 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.1.service: Deactivated successfully. 2026-03-09T15:04:36.133 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.1.service: Unit process 129884 (conmon) remains running after unit stopped. 2026-03-09T15:04:36.133 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.1.service: Unit process 129916 (podman) remains running after unit stopped. 2026-03-09T15:04:36.133 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local systemd[1]: Stopped Ceph osd.1 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T15:04:36.133 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:35 vm05.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.1.service: Consumed 38.779s CPU time, 473.3M memory peak. 
2026-03-09T15:04:36.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.132+0000 7f59552ab700 1 -- 192.168.123.105:0/1755185802 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5950108290 msgr2=0x7f5950108700 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:36.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.132+0000 7f59552ab700 1 --2- 192.168.123.105:0/1755185802 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5950108290 0x7f5950108700 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f5940009b00 tx=0x7f5940009e10 comp rx=0 tx=0).stop 2026-03-09T15:04:36.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.133+0000 7f59552ab700 1 -- 192.168.123.105:0/1755185802 shutdown_connections 2026-03-09T15:04:36.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.133+0000 7f59552ab700 1 --2- 192.168.123.105:0/1755185802 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5950108290 0x7f5950108700 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:36.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.133+0000 7f59552ab700 1 --2- 192.168.123.105:0/1755185802 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5950072aa0 0x7f5950107d50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:36.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.133+0000 7f59552ab700 1 -- 192.168.123.105:0/1755185802 >> 192.168.123.105:0/1755185802 conn(0x7f595006c6c0 msgr2=0x7f595006cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:36.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.138+0000 7f59552ab700 1 -- 192.168.123.105:0/1755185802 shutdown_connections 2026-03-09T15:04:36.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.139+0000 7f59552ab700 1 -- 192.168.123.105:0/1755185802 
wait complete. 2026-03-09T15:04:36.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.140+0000 7f59552ab700 1 Processor -- start 2026-03-09T15:04:36.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.140+0000 7f59552ab700 1 -- start start 2026-03-09T15:04:36.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.140+0000 7f59552ab700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5950072aa0 0x7f5950108f30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:36.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.140+0000 7f59552ab700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5950108290 0x7f5950109470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:36.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.140+0000 7f59552ab700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f595010ace0 con 0x7f5950108290 2026-03-09T15:04:36.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.140+0000 7f59552ab700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f595010ae50 con 0x7f5950072aa0 2026-03-09T15:04:36.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.141+0000 7f594f7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5950108290 0x7f5950109470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:36.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.141+0000 7f594f7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5950108290 0x7f5950109470 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.105:60578/0 (socket says 192.168.123.105:60578) 2026-03-09T15:04:36.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.141+0000 7f594f7fe700 1 -- 192.168.123.105:0/2406683321 learned_addr learned my addr 192.168.123.105:0/2406683321 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:04:36.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.141+0000 7f594ffff700 1 --2- 192.168.123.105:0/2406683321 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5950072aa0 0x7f5950108f30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:36.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.141+0000 7f594f7fe700 1 -- 192.168.123.105:0/2406683321 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5950072aa0 msgr2=0x7f5950108f30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:36.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.141+0000 7f594f7fe700 1 --2- 192.168.123.105:0/2406683321 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5950072aa0 0x7f5950108f30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:36.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.141+0000 7f594f7fe700 1 -- 192.168.123.105:0/2406683321 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f59400097e0 con 0x7f5950108290 2026-03-09T15:04:36.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.142+0000 7f594f7fe700 1 --2- 192.168.123.105:0/2406683321 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5950108290 0x7f5950109470 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f5940000c00 tx=0x7f5940004900 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:04:36.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.142+0000 7f594d7fa700 1 -- 192.168.123.105:0/2406683321 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f594001d070 con 0x7f5950108290 2026-03-09T15:04:36.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.142+0000 7f59552ab700 1 -- 192.168.123.105:0/2406683321 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f5950109a10 con 0x7f5950108290 2026-03-09T15:04:36.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.142+0000 7f59552ab700 1 -- 192.168.123.105:0/2406683321 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f59501a2840 con 0x7f5950108290 2026-03-09T15:04:36.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.142+0000 7f594d7fa700 1 -- 192.168.123.105:0/2406683321 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f594000b890 con 0x7f5950108290 2026-03-09T15:04:36.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.142+0000 7f594d7fa700 1 -- 192.168.123.105:0/2406683321 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f5940003d10 con 0x7f5950108290 2026-03-09T15:04:36.143 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.144+0000 7f594d7fa700 1 -- 192.168.123.105:0/2406683321 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f5940022470 con 0x7f5950108290 2026-03-09T15:04:36.144 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.144+0000 7f594d7fa700 1 --2- 192.168.123.105:0/2406683321 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f5938077b10 0x7f5938079fc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:04:36.144 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.145+0000 7f594ffff700 1 --2- 192.168.123.105:0/2406683321 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f5938077b10 0x7f5938079fc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:04:36.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.145+0000 7f594d7fa700 1 -- 192.168.123.105:0/2406683321 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(47..47 src has 1..47) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f594009c6e0 con 0x7f5950108290 2026-03-09T15:04:36.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.145+0000 7f594ffff700 1 --2- 192.168.123.105:0/2406683321 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f5938077b10 0x7f5938079fc0 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f5948009510 tx=0x7f59480093a0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:04:36.145 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.146+0000 7f59552ab700 1 -- 192.168.123.105:0/2406683321 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f593c005320 con 0x7f5950108290 2026-03-09T15:04:36.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.150+0000 7f594d7fa700 1 -- 192.168.123.105:0/2406683321 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f5940064d70 con 0x7f5950108290 2026-03-09T15:04:36.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.339+0000 7f59552ab700 1 -- 192.168.123.105:0/2406683321 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f593c005190 con 0x7f5950108290 
2026-03-09T15:04:36.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.344+0000 7f594d7fa700 1 -- 192.168.123.105:0/2406683321 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+285 (secure 0 0 0) 0x7f59400644c0 con 0x7f5950108290 2026-03-09T15:04:36.344 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_WARN 1 filesystem with deprecated feature inline_data; 1 osds down 2026-03-09T15:04:36.344 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data 2026-03-09T15:04:36.344 INFO:teuthology.orchestra.run.vm05.stdout: fs cephfs has deprecated feature inline_data enabled. 2026-03-09T15:04:36.344 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] OSD_DOWN: 1 osds down 2026-03-09T15:04:36.344 INFO:teuthology.orchestra.run.vm05.stdout: osd.1 (root=default,host=vm05) is down 2026-03-09T15:04:36.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.354+0000 7f5936ffd700 1 -- 192.168.123.105:0/2406683321 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f5938077b10 msgr2=0x7f5938079fc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:36.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.354+0000 7f5936ffd700 1 --2- 192.168.123.105:0/2406683321 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f5938077b10 0x7f5938079fc0 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f5948009510 tx=0x7f59480093a0 comp rx=0 tx=0).stop 2026-03-09T15:04:36.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.354+0000 7f5936ffd700 1 -- 192.168.123.105:0/2406683321 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5950108290 msgr2=0x7f5950109470 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:04:36.355 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.354+0000 
7f5936ffd700 1 --2- 192.168.123.105:0/2406683321 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5950108290 0x7f5950109470 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f5940000c00 tx=0x7f5940004900 comp rx=0 tx=0).stop 2026-03-09T15:04:36.355 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.354+0000 7f5936ffd700 1 -- 192.168.123.105:0/2406683321 shutdown_connections 2026-03-09T15:04:36.355 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.354+0000 7f5936ffd700 1 --2- 192.168.123.105:0/2406683321 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f5938077b10 0x7f5938079fc0 unknown :-1 s=CLOSED pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:36.355 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.354+0000 7f5936ffd700 1 --2- 192.168.123.105:0/2406683321 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5950072aa0 0x7f5950108f30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:04:36.355 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.354+0000 7f5936ffd700 1 --2- 192.168.123.105:0/2406683321 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5950108290 0x7f5950109470 secure :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f5940000c00 tx=0x7f5940004900 comp rx=0 tx=0).stop 2026-03-09T15:04:36.355 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.354+0000 7f5936ffd700 1 -- 192.168.123.105:0/2406683321 >> 192.168.123.105:0/2406683321 conn(0x7f595006c6c0 msgr2=0x7f5950111420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:04:36.355 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.355+0000 7f5936ffd700 1 -- 192.168.123.105:0/2406683321 shutdown_connections 2026-03-09T15:04:36.355 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:04:36.355+0000 7f5936ffd700 1 -- 192.168.123.105:0/2406683321 wait complete. 
2026-03-09T15:04:36.390 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local systemd[1]: Starting Ceph osd.1 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 2026-03-09T15:04:36.390 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local podman[130019]: 2026-03-09 15:04:36.344923401 +0000 UTC m=+0.030156473 container create 08525993140b81de3bbf0dc5c3d4d94b10e58ccdfb5ac859397876baf6e3439f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate, org.label-schema.build-date=20260223, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3) 2026-03-09T15:04:36.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:36 vm05.local ceph-mon[116516]: from='client.44135 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:04:36.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:36 vm05.local ceph-mon[116516]: pgmap v36: 65 pgs: 12 stale+active+clean, 53 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 921 B/s rd, 1 op/s 2026-03-09T15:04:36.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:36 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/2774090193' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T15:04:36.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:36 vm05.local ceph-mon[116516]: from='client.44147 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:04:36.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:36 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/2406683321' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:04:36.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:36 vm05.local ceph-mon[116516]: osdmap e48: 6 total, 5 up, 6 in 2026-03-09T15:04:36.805 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local podman[130019]: 2026-03-09 15:04:36.410564987 +0000 UTC m=+0.095798059 container init 08525993140b81de3bbf0dc5c3d4d94b10e58ccdfb5ac859397876baf6e3439f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS) 2026-03-09T15:04:36.805 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local podman[130019]: 2026-03-09 15:04:36.414763586 +0000 UTC m=+0.099996658 container start 08525993140b81de3bbf0dc5c3d4d94b10e58ccdfb5ac859397876baf6e3439f 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) 2026-03-09T15:04:36.805 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local podman[130019]: 2026-03-09 15:04:36.416308979 +0000 UTC m=+0.101542051 container attach 08525993140b81de3bbf0dc5c3d4d94b10e58ccdfb5ac859397876baf6e3439f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3) 2026-03-09T15:04:36.805 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local podman[130019]: 2026-03-09 15:04:36.334101251 +0000 UTC 
m=+0.019334323 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:04:36.805 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate[130033]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:36.805 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local bash[130019]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:36.805 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate[130033]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:36.805 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local bash[130019]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:36.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:36 vm09.local ceph-mon[98742]: from='client.44135 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:04:36.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:36 vm09.local ceph-mon[98742]: pgmap v36: 65 pgs: 12 stale+active+clean, 53 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 921 B/s rd, 1 op/s 2026-03-09T15:04:36.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:36 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/2774090193' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T15:04:36.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:36 vm09.local ceph-mon[98742]: from='client.44147 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:04:36.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:36 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/2406683321' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:04:36.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:36 vm09.local ceph-mon[98742]: osdmap e48: 6 total, 5 up, 6 in 2026-03-09T15:04:37.305 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate[130033]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T15:04:37.305 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate[130033]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:37.305 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local bash[130019]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T15:04:37.305 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local bash[130019]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:37.305 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate[130033]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:37.305 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:36 vm05.local bash[130019]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:37.305 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate[130033]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-09T15:04:37.305 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local bash[130019]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-09T15:04:37.305 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate[130033]: Running command: 
/usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-ab9234b7-b78b-43a0-94cd-0a16399ff736/osd-block-73421280-9a73-4092-8e8f-854babd94f13 --path /var/lib/ceph/osd/ceph-1 --no-mon-config 2026-03-09T15:04:37.305 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local bash[130019]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-ab9234b7-b78b-43a0-94cd-0a16399ff736/osd-block-73421280-9a73-4092-8e8f-854babd94f13 --path /var/lib/ceph/osd/ceph-1 --no-mon-config 2026-03-09T15:04:37.586 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate[130033]: Running command: /usr/bin/ln -snf /dev/ceph-ab9234b7-b78b-43a0-94cd-0a16399ff736/osd-block-73421280-9a73-4092-8e8f-854babd94f13 /var/lib/ceph/osd/ceph-1/block 2026-03-09T15:04:37.586 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local bash[130019]: Running command: /usr/bin/ln -snf /dev/ceph-ab9234b7-b78b-43a0-94cd-0a16399ff736/osd-block-73421280-9a73-4092-8e8f-854babd94f13 /var/lib/ceph/osd/ceph-1/block 2026-03-09T15:04:37.586 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate[130033]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block 2026-03-09T15:04:37.586 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local bash[130019]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block 2026-03-09T15:04:37.586 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate[130033]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-09T15:04:37.586 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local bash[130019]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-09T15:04:37.586 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local 
ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate[130033]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-09T15:04:37.587 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local bash[130019]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-09T15:04:37.587 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate[130033]: --> ceph-volume lvm activate successful for osd ID: 1 2026-03-09T15:04:37.587 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local bash[130019]: --> ceph-volume lvm activate successful for osd ID: 1 2026-03-09T15:04:37.587 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local podman[130019]: 2026-03-09 15:04:37.383728201 +0000 UTC m=+1.068961263 container died 08525993140b81de3bbf0dc5c3d4d94b10e58ccdfb5ac859397876baf6e3439f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T15:04:37.587 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local podman[130019]: 2026-03-09 15:04:37.417706065 +0000 UTC m=+1.102939126 container remove 08525993140b81de3bbf0dc5c3d4d94b10e58ccdfb5ac859397876baf6e3439f 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2) 2026-03-09T15:04:37.587 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local podman[130278]: 2026-03-09 15:04:37.532001169 +0000 UTC m=+0.021487666 container create b830d7f764983fbff5a713df464dd8eade826327ce723fc348710c49b5cf2735 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2) 2026-03-09T15:04:38.000 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local podman[130278]: 2026-03-09 15:04:37.586086603 +0000 UTC m=+0.075573110 
container init b830d7f764983fbff5a713df464dd8eade826327ce723fc348710c49b5cf2735 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default) 2026-03-09T15:04:38.000 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local podman[130278]: 2026-03-09 15:04:37.590980104 +0000 UTC m=+0.080466601 container start b830d7f764983fbff5a713df464dd8eade826327ce723fc348710c49b5cf2735 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1, ceph=True, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0) 2026-03-09T15:04:38.000 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local 
bash[130278]: b830d7f764983fbff5a713df464dd8eade826327ce723fc348710c49b5cf2735 2026-03-09T15:04:38.000 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local podman[130278]: 2026-03-09 15:04:37.521789813 +0000 UTC m=+0.011276301 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:04:38.000 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:37 vm05.local systemd[1]: Started Ceph osd.1 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T15:04:38.000 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:37 vm05.local ceph-mon[116516]: Health check failed: Degraded data redundancy: 8/264 objects degraded (3.030%), 4 pgs degraded (PG_DEGRADED) 2026-03-09T15:04:38.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:37 vm09.local ceph-mon[98742]: Health check failed: Degraded data redundancy: 8/264 objects degraded (3.030%), 4 pgs degraded (PG_DEGRADED) 2026-03-09T15:04:38.554 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:38 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[130289]: 2026-03-09T15:04:38.440+0000 7efffc6bf740 -1 Falling back to public interface 2026-03-09T15:04:39.018 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:38 vm09.local ceph-mon[98742]: pgmap v38: 65 pgs: 10 active+undersized, 8 stale+active+clean, 4 active+undersized+degraded, 43 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 8/264 objects degraded (3.030%) 2026-03-09T15:04:39.018 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:38 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:39.018 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:38 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:39.018 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 
15:04:38 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:04:39.024 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:38 vm05.local ceph-mon[116516]: pgmap v38: 65 pgs: 10 active+undersized, 8 stale+active+clean, 4 active+undersized+degraded, 43 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 8/264 objects degraded (3.030%) 2026-03-09T15:04:39.024 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:38 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:39.024 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:38 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:39.024 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:38 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:04:40.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:40 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:40.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:40 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:40.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:40 vm05.local ceph-mon[116516]: pgmap v39: 65 pgs: 18 active+undersized, 3 stale+active+clean, 11 active+undersized+degraded, 33 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 32/264 objects degraded (12.121%) 2026-03-09T15:04:40.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:40 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:40.555 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:40 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:40.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:40 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:40.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:40 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:40.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:40 vm09.local ceph-mon[98742]: pgmap v39: 65 pgs: 18 active+undersized, 3 stale+active+clean, 11 active+undersized+degraded, 33 active+clean; 257 MiB data, 3.1 GiB used, 117 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 32/264 objects degraded (12.121%) 2026-03-09T15:04:40.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:40 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:40.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:40 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:42.127 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[130289]: 2026-03-09T15:04:41.949+0000 7efffc6bf740 -1 osd.1 0 read_superblock omap replica is missing. 
2026-03-09T15:04:42.127 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:42.127 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:42.127 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:42.127 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:04:42.127 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:42.128 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:04:42.128 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:42.128 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:42.128 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:42.128 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T15:04:42.128 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T15:04:42.128 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: Upgrade: unsafe to stop osd(s) at this time (12 PGs are or would become offline) 2026-03-09T15:04:42.128 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:42.128 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local 
ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: Upgrade: unsafe to stop osd(s) at this time (12 PGs are or would become offline) 2026-03-09T15:04:42.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:42.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:04:42.554 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:42 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[130289]: 2026-03-09T15:04:42.127+0000 7efffc6bf740 -1 osd.1 46 log_to_monitors true 2026-03-09T15:04:43.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:42 vm05.local ceph-mon[116516]: pgmap v40: 65 pgs: 21 active+undersized, 13 active+undersized+degraded, 31 active+clean; 257 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 34/264 objects degraded (12.879%) 2026-03-09T15:04:43.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:42 vm05.local ceph-mon[116516]: from='osd.1 [v2:192.168.123.105:6810/369429769,v1:192.168.123.105:6811/369429769]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:04:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:42 vm09.local ceph-mon[98742]: pgmap v40: 65 pgs: 21 active+undersized, 13 active+undersized+degraded, 31 active+clean; 257 MiB data, 2.7 GiB used, 117 GiB / 120 GiB avail; 34/264 objects degraded (12.879%) 2026-03-09T15:04:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:42 vm09.local ceph-mon[98742]: from='osd.1 [v2:192.168.123.105:6810/369429769,v1:192.168.123.105:6811/369429769]' entity='osd.1' 
cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T15:04:44.304 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:04:44 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[130289]: 2026-03-09T15:04:44.142+0000 7efff3c58640 -1 osd.1 46 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T15:04:44.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:43 vm05.local ceph-mon[116516]: from='osd.1 [v2:192.168.123.105:6810/369429769,v1:192.168.123.105:6811/369429769]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T15:04:44.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:43 vm05.local ceph-mon[116516]: osdmap e49: 6 total, 5 up, 6 in 2026-03-09T15:04:44.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:43 vm05.local ceph-mon[116516]: from='osd.1 [v2:192.168.123.105:6810/369429769,v1:192.168.123.105:6811/369429769]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T15:04:44.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:43 vm09.local ceph-mon[98742]: from='osd.1 [v2:192.168.123.105:6810/369429769,v1:192.168.123.105:6811/369429769]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T15:04:44.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:43 vm09.local ceph-mon[98742]: osdmap e49: 6 total, 5 up, 6 in 2026-03-09T15:04:44.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:43 vm09.local ceph-mon[98742]: from='osd.1 [v2:192.168.123.105:6810/369429769,v1:192.168.123.105:6811/369429769]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T15:04:45.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:04:44 vm05.local ceph-mon[116516]: pgmap v42: 65 pgs: 21 active+undersized, 13 active+undersized+degraded, 31 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 34/264 objects degraded (12.879%) 2026-03-09T15:04:45.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:44 vm05.local ceph-mon[116516]: Health check update: Degraded data redundancy: 34/264 objects degraded (12.879%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T15:04:45.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:44 vm05.local ceph-mon[116516]: from='osd.1 [v2:192.168.123.105:6810/369429769,v1:192.168.123.105:6811/369429769]' entity='osd.1' 2026-03-09T15:04:45.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:44 vm09.local ceph-mon[98742]: pgmap v42: 65 pgs: 21 active+undersized, 13 active+undersized+degraded, 31 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 34/264 objects degraded (12.879%) 2026-03-09T15:04:45.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:44 vm09.local ceph-mon[98742]: Health check update: Degraded data redundancy: 34/264 objects degraded (12.879%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T15:04:45.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:44 vm09.local ceph-mon[98742]: from='osd.1 [v2:192.168.123.105:6810/369429769,v1:192.168.123.105:6811/369429769]' entity='osd.1' 2026-03-09T15:04:46.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:45 vm09.local ceph-mon[98742]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T15:04:46.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:45 vm09.local ceph-mon[98742]: osd.1 [v2:192.168.123.105:6810/369429769,v1:192.168.123.105:6811/369429769] boot 2026-03-09T15:04:46.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:45 vm09.local ceph-mon[98742]: osdmap e50: 6 total, 6 up, 6 in 2026-03-09T15:04:46.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:45 vm09.local ceph-mon[98742]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:04:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:45 vm05.local ceph-mon[116516]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T15:04:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:45 vm05.local ceph-mon[116516]: osd.1 [v2:192.168.123.105:6810/369429769,v1:192.168.123.105:6811/369429769] boot 2026-03-09T15:04:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:45 vm05.local ceph-mon[116516]: osdmap e50: 6 total, 6 up, 6 in 2026-03-09T15:04:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T15:04:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:47 vm05.local ceph-mon[116516]: pgmap v44: 65 pgs: 21 active+undersized, 13 active+undersized+degraded, 31 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 34/264 objects degraded (12.879%) 2026-03-09T15:04:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:47 vm05.local ceph-mon[116516]: osdmap e51: 6 total, 6 up, 6 in 2026-03-09T15:04:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:47 vm09.local ceph-mon[98742]: pgmap v44: 65 pgs: 21 active+undersized, 13 active+undersized+degraded, 31 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 34/264 objects degraded (12.879%) 2026-03-09T15:04:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:47 vm09.local ceph-mon[98742]: osdmap e51: 6 total, 6 up, 6 in 2026-03-09T15:04:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:49 vm05.local ceph-mon[116516]: pgmap v46: 65 pgs: 9 active+undersized, 9 active+undersized+degraded, 47 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 853 B/s rd, 1 
op/s; 21/264 objects degraded (7.955%) 2026-03-09T15:04:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:49 vm05.local ceph-mon[116516]: Health check update: Degraded data redundancy: 21/264 objects degraded (7.955%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T15:04:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:49 vm09.local ceph-mon[98742]: pgmap v46: 65 pgs: 9 active+undersized, 9 active+undersized+degraded, 47 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 21/264 objects degraded (7.955%) 2026-03-09T15:04:49.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:49 vm09.local ceph-mon[98742]: Health check update: Degraded data redundancy: 21/264 objects degraded (7.955%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T15:04:50.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:50 vm05.local ceph-mon[116516]: pgmap v47: 65 pgs: 2 active+undersized, 63 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 793 B/s rd, 1 op/s 2026-03-09T15:04:50.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:50 vm05.local ceph-mon[116516]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 21/264 objects degraded (7.955%), 9 pgs degraded) 2026-03-09T15:04:50.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:50 vm09.local ceph-mon[98742]: pgmap v47: 65 pgs: 2 active+undersized, 63 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 793 B/s rd, 1 op/s 2026-03-09T15:04:50.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:50 vm09.local ceph-mon[98742]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 21/264 objects degraded (7.955%), 9 pgs degraded) 2026-03-09T15:04:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:52 vm05.local ceph-mon[116516]: pgmap v48: 65 pgs: 65 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 639 B/s rd, 1 op/s 2026-03-09T15:04:52.866 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:52 vm09.local ceph-mon[98742]: pgmap v48: 65 pgs: 65 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 639 B/s rd, 1 op/s 2026-03-09T15:04:54.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:54 vm05.local ceph-mon[116516]: pgmap v49: 65 pgs: 65 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:04:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:54 vm09.local ceph-mon[98742]: pgmap v49: 65 pgs: 65 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:04:56.441 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:56 vm05.local ceph-mon[116516]: pgmap v50: 65 pgs: 65 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T15:04:56.441 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T15:04:56.441 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:56 vm05.local ceph-mon[116516]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T15:04:56.441 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:56 vm05.local ceph-mon[116516]: Upgrade: osd.2 is safe to restart 2026-03-09T15:04:56.441 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:56.441 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T15:04:56.441 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:56 vm09.local ceph-mon[98742]: pgmap v50: 65 pgs: 65 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T15:04:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T15:04:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:56 vm09.local ceph-mon[98742]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T15:04:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:56 vm09.local ceph-mon[98742]: Upgrade: osd.2 is safe to restart 2026-03-09T15:04:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T15:04:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:04:57.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local systemd[1]: Stopping Ceph osd.2 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 
2026-03-09T15:04:57.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[81044]: 2026-03-09T15:04:57.164+0000 7fed11d7a700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T15:04:57.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[81044]: 2026-03-09T15:04:57.164+0000 7fed11d7a700 -1 osd.2 51 *** Got signal Terminated *** 2026-03-09T15:04:57.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[81044]: 2026-03-09T15:04:57.164+0000 7fed11d7a700 -1 osd.2 51 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T15:04:57.697 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local podman[134536]: 2026-03-09 15:04:57.529880474 +0000 UTC m=+0.377425691 container died 75097dc129797588ce66b971ca3f66223446d1f18ca84e5aff8b3a2239e7eacc (image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=-18.2.0, org.label-schema.build-date=20231212, org.label-schema.license=GPLv2, GIT_CLEAN=True, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, RELEASE=HEAD, ceph=True, io.buildah.version=1.29.1, org.label-schema.name=CentOS Stream 8 Base Image, GIT_BRANCH=HEAD) 2026-03-09T15:04:57.697 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local podman[134536]: 2026-03-09 15:04:57.548792185 +0000 UTC m=+0.396337401 container remove 75097dc129797588ce66b971ca3f66223446d1f18ca84e5aff8b3a2239e7eacc 
(image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2, org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GIT_BRANCH=HEAD, GIT_CLEAN=True, RELEASE=HEAD, maintainer=Guillaume Abrioux , GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, org.label-schema.build-date=20231212, CEPH_POINT_RELEASE=-18.2.0, ceph=True, io.buildah.version=1.29.1, org.label-schema.license=GPLv2, GIT_REPO=https://github.com/ceph/ceph-container.git) 2026-03-09T15:04:57.698 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local bash[134536]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2 2026-03-09T15:04:57.698 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:57 vm05.local ceph-mon[116516]: Upgrade: Updating osd.2 2026-03-09T15:04:57.698 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:57 vm05.local ceph-mon[116516]: Deploying daemon osd.2 on vm05 2026-03-09T15:04:57.698 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:57.698 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:04:57.698 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:57.698 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:57 vm05.local ceph-mon[116516]: osd.2 marked itself down and dead 2026-03-09T15:04:57.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:57 vm09.local ceph-mon[98742]: Upgrade: Updating osd.2 2026-03-09T15:04:57.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:57 
vm09.local ceph-mon[98742]: Deploying daemon osd.2 on vm05 2026-03-09T15:04:57.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:57.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:04:57.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:57.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:57 vm09.local ceph-mon[98742]: osd.2 marked itself down and dead 2026-03-09T15:04:57.991 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local podman[134603]: 2026-03-09 15:04:57.697814837 +0000 UTC m=+0.019641829 container create 14c4379199d52e1464f05b1d388aaa16ca77eabf7a39acf27b8513329c3052bc (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T15:04:57.992 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local podman[134603]: 2026-03-09 15:04:57.741523478 +0000 
UTC m=+0.063350470 container init 14c4379199d52e1464f05b1d388aaa16ca77eabf7a39acf27b8513329c3052bc (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default) 2026-03-09T15:04:57.992 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local podman[134603]: 2026-03-09 15:04:57.744380605 +0000 UTC m=+0.066207597 container start 14c4379199d52e1464f05b1d388aaa16ca77eabf7a39acf27b8513329c3052bc (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-deactivate, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T15:04:57.992 
INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local podman[134603]: 2026-03-09 15:04:57.750288514 +0000 UTC m=+0.072115516 container attach 14c4379199d52e1464f05b1d388aaa16ca77eabf7a39acf27b8513329c3052bc (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T15:04:57.992 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local podman[134603]: 2026-03-09 15:04:57.689479454 +0000 UTC m=+0.011306456 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:04:57.992 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local podman[134603]: 2026-03-09 15:04:57.874475445 +0000 UTC m=+0.196302437 container died 14c4379199d52e1464f05b1d388aaa16ca77eabf7a39acf27b8513329c3052bc (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True) 2026-03-09T15:04:57.992 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local podman[134603]: 2026-03-09 15:04:57.891985923 +0000 UTC m=+0.213812905 container remove 14c4379199d52e1464f05b1d388aaa16ca77eabf7a39acf27b8513329c3052bc (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-deactivate, ceph=True, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid) 2026-03-09T15:04:57.992 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.2.service: Deactivated successfully. 2026-03-09T15:04:57.992 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local systemd[1]: Stopped Ceph osd.2 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 
2026-03-09T15:04:57.992 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:57 vm05.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.2.service: Consumed 33.245s CPU time. 2026-03-09T15:04:58.455 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local systemd[1]: Starting Ceph osd.2 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 2026-03-09T15:04:58.455 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local podman[134706]: 2026-03-09 15:04:58.192187233 +0000 UTC m=+0.019031497 container create 3fb9a47b184be5c5d8a0ef16974ea50af60a5a6a472704ccf552ccda22230bd2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260223) 2026-03-09T15:04:58.455 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local podman[134706]: 2026-03-09 15:04:58.238110419 +0000 UTC m=+0.064954692 container init 3fb9a47b184be5c5d8a0ef16974ea50af60a5a6a472704ccf552ccda22230bd2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.build-date=20260223) 2026-03-09T15:04:58.455 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local podman[134706]: 2026-03-09 15:04:58.241296131 +0000 UTC m=+0.068140395 container start 3fb9a47b184be5c5d8a0ef16974ea50af60a5a6a472704ccf552ccda22230bd2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid) 2026-03-09T15:04:58.455 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local podman[134706]: 2026-03-09 15:04:58.242147304 +0000 UTC m=+0.068991568 container attach 3fb9a47b184be5c5d8a0ef16974ea50af60a5a6a472704ccf552ccda22230bd2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, 
io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default) 2026-03-09T15:04:58.455 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local podman[134706]: 2026-03-09 15:04:58.184598236 +0000 UTC m=+0.011442500 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:04:58.455 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate[134717]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:58.456 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local bash[134706]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:58.456 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate[134717]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:58.456 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local bash[134706]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:58.456 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:58 vm05.local ceph-mon[116516]: pgmap v51: 65 pgs: 65 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 912 B/s rd, 1 op/s 2026-03-09T15:04:58.456 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:58 vm05.local ceph-mon[116516]: Health check failed: 1 
osds down (OSD_DOWN) 2026-03-09T15:04:58.456 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:58 vm05.local ceph-mon[116516]: osdmap e52: 6 total, 5 up, 6 in 2026-03-09T15:04:58.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:58 vm09.local ceph-mon[98742]: pgmap v51: 65 pgs: 65 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 912 B/s rd, 1 op/s 2026-03-09T15:04:58.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:58 vm09.local ceph-mon[98742]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T15:04:58.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:58 vm09.local ceph-mon[98742]: osdmap e52: 6 total, 5 up, 6 in 2026-03-09T15:04:59.054 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate[134717]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T15:04:59.054 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local bash[134706]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T15:04:59.054 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local bash[134706]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:59.054 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate[134717]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:59.054 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate[134717]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:59.054 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local bash[134706]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:04:59.054 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate[134717]: Running 
command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T15:04:59.054 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local bash[134706]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T15:04:59.054 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate[134717]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-a04d2e4d-ed53-4f2f-8ab6-8d1e2dbfa634/osd-block-14d84b3a-06be-48f8-89a7-0e9c83f76e3c --path /var/lib/ceph/osd/ceph-2 --no-mon-config 2026-03-09T15:04:59.054 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:58 vm05.local bash[134706]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-a04d2e4d-ed53-4f2f-8ab6-8d1e2dbfa634/osd-block-14d84b3a-06be-48f8-89a7-0e9c83f76e3c --path /var/lib/ceph/osd/ceph-2 --no-mon-config 2026-03-09T15:04:59.479 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:59 vm05.local ceph-mon[116516]: osdmap e53: 6 total, 5 up, 6 in 2026-03-09T15:04:59.479 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:59.479 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:59.479 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:04:59 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:04:59.479 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate[134717]: Running command: /usr/bin/ln -snf /dev/ceph-a04d2e4d-ed53-4f2f-8ab6-8d1e2dbfa634/osd-block-14d84b3a-06be-48f8-89a7-0e9c83f76e3c 
/var/lib/ceph/osd/ceph-2/block 2026-03-09T15:04:59.479 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local bash[134706]: Running command: /usr/bin/ln -snf /dev/ceph-a04d2e4d-ed53-4f2f-8ab6-8d1e2dbfa634/osd-block-14d84b3a-06be-48f8-89a7-0e9c83f76e3c /var/lib/ceph/osd/ceph-2/block 2026-03-09T15:04:59.479 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate[134717]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local bash[134706]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate[134717]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local bash[134706]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate[134717]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local bash[134706]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate[134717]: --> ceph-volume lvm activate successful for osd ID: 2 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local bash[134706]: --> ceph-volume lvm activate successful for osd ID: 2 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local conmon[134717]: conmon 3fb9a47b184be5c5d8a0 : Failed to open 
cgroups file: /sys/fs/cgroup/machine.slice/libpod-3fb9a47b184be5c5d8a0ef16974ea50af60a5a6a472704ccf552ccda22230bd2.scope/container/memory.events 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local podman[134706]: 2026-03-09 15:04:59.20046614 +0000 UTC m=+1.027310405 container died 3fb9a47b184be5c5d8a0ef16974ea50af60a5a6a472704ccf552ccda22230bd2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, ceph=True) 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local podman[134706]: 2026-03-09 15:04:59.218034527 +0000 UTC m=+1.044878781 container remove 3fb9a47b184be5c5d8a0ef16974ea50af60a5a6a472704ccf552ccda22230bd2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-activate, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local podman[134978]: 2026-03-09 15:04:59.318553813 +0000 UTC m=+0.018038588 container create 01cf87b8bc05621c4b373948476c7352d7858eabed22c53c7a4ff62e6d8bd9eb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid) 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local podman[134978]: 2026-03-09 15:04:59.354519289 +0000 UTC m=+0.054004073 container init 01cf87b8bc05621c4b373948476c7352d7858eabed22c53c7a4ff62e6d8bd9eb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid) 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local podman[134978]: 2026-03-09 15:04:59.359151621 +0000 UTC m=+0.058636395 container start 01cf87b8bc05621c4b373948476c7352d7858eabed22c53c7a4ff62e6d8bd9eb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local bash[134978]: 01cf87b8bc05621c4b373948476c7352d7858eabed22c53c7a4ff62e6d8bd9eb 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local podman[134978]: 2026-03-09 15:04:59.311349066 +0000 UTC m=+0.010833851 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:04:59.480 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:04:59 vm05.local systemd[1]: Started Ceph osd.2 for 
d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T15:04:59.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:59 vm09.local ceph-mon[98742]: osdmap e53: 6 total, 5 up, 6 in 2026-03-09T15:04:59.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:59.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:04:59.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:04:59 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:05:00.476 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:00 vm05.local ceph-mon[116516]: pgmap v54: 65 pgs: 7 peering, 5 stale+active+clean, 53 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:05:00.476 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:05:00 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[134989]: 2026-03-09T15:05:00.450+0000 7fc333847740 -1 Falling back to public interface 2026-03-09T15:05:00.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:00 vm09.local ceph-mon[98742]: pgmap v54: 65 pgs: 7 peering, 5 stale+active+clean, 53 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:05:01.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:01 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:01.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:01 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:01.896 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:01 vm05.local ceph-mon[116516]: 
from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:02.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:01 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:02.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:02.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:02.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:02.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:02 vm05.local ceph-mon[116516]: pgmap v55: 65 pgs: 4 active+undersized, 7 peering, 2 stale+active+clean, 4 active+undersized+degraded, 48 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 10/264 objects degraded (3.788%) 2026-03-09T15:05:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:02 vm05.local ceph-mon[116516]: Health check failed: Degraded data redundancy: 10/264 objects degraded (3.788%), 4 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:02 vm05.local 
ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:05:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:05:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:05:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:02 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T15:05:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:02 vm09.local ceph-mon[98742]: pgmap v55: 65 pgs: 4 active+undersized, 7 peering, 2 stale+active+clean, 4 active+undersized+degraded, 48 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB 
/ 120 GiB avail; 0 B/s rd, 0 op/s; 10/264 objects degraded (3.788%) 2026-03-09T15:05:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:02 vm09.local ceph-mon[98742]: Health check failed: Degraded data redundancy: 10/264 objects degraded (3.788%), 4 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:05:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:05:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:05:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-09T15:05:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:02 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T15:05:04.278 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:03 vm05.local ceph-mon[116516]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T15:05:04.278 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:03 vm05.local ceph-mon[116516]: Upgrade: unsafe to stop osd(s) at this time (4 PGs are or would become offline) 2026-03-09T15:05:04.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:03 vm09.local ceph-mon[98742]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T15:05:04.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:03 vm09.local ceph-mon[98742]: Upgrade: unsafe to stop osd(s) at this time (4 PGs are or would become offline) 2026-03-09T15:05:04.554 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:05:04 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[134989]: 2026-03-09T15:05:04.277+0000 7fc333847740 -1 osd.2 0 read_superblock omap replica is missing. 
2026-03-09T15:05:04.554 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:05:04 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[134989]: 2026-03-09T15:05:04.413+0000 7fc333847740 -1 osd.2 51 log_to_monitors true 2026-03-09T15:05:05.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:05:05 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[134989]: 2026-03-09T15:05:05.012+0000 7fc32b5e1640 -1 osd.2 51 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T15:05:05.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:04 vm05.local ceph-mon[116516]: pgmap v56: 65 pgs: 15 active+undersized, 12 active+undersized+degraded, 38 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 34/264 objects degraded (12.879%) 2026-03-09T15:05:05.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:04 vm05.local ceph-mon[116516]: from='osd.2 [v2:192.168.123.105:6818/3455638482,v1:192.168.123.105:6819/3455638482]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:05:05.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:04 vm09.local ceph-mon[98742]: pgmap v56: 65 pgs: 15 active+undersized, 12 active+undersized+degraded, 38 active+clean; 257 MiB data, 2.4 GiB used, 118 GiB / 120 GiB avail; 34/264 objects degraded (12.879%) 2026-03-09T15:05:05.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:04 vm09.local ceph-mon[98742]: from='osd.2 [v2:192.168.123.105:6818/3455638482,v1:192.168.123.105:6819/3455638482]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T15:05:06.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:05 vm05.local ceph-mon[116516]: from='osd.2 [v2:192.168.123.105:6818/3455638482,v1:192.168.123.105:6819/3455638482]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 
2026-03-09T15:05:06.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:05 vm05.local ceph-mon[116516]: osdmap e54: 6 total, 5 up, 6 in 2026-03-09T15:05:06.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:05 vm05.local ceph-mon[116516]: from='osd.2 [v2:192.168.123.105:6818/3455638482,v1:192.168.123.105:6819/3455638482]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T15:05:06.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:05 vm09.local ceph-mon[98742]: from='osd.2 [v2:192.168.123.105:6818/3455638482,v1:192.168.123.105:6819/3455638482]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T15:05:06.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:05 vm09.local ceph-mon[98742]: osdmap e54: 6 total, 5 up, 6 in 2026-03-09T15:05:06.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:05 vm09.local ceph-mon[98742]: from='osd.2 [v2:192.168.123.105:6818/3455638482,v1:192.168.123.105:6819/3455638482]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-09T15:05:06.433 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.432+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1676311596 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67bc101100 msgr2=0x7f67bc101570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:06.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.432+0000 7f67c2cd8700 1 --2- 192.168.123.105:0/1676311596 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67bc101100 0x7f67bc101570 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f67b0009b00 tx=0x7f67b0009e10 comp rx=0 tx=0).stop 2026-03-09T15:05:06.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.433+0000 7f67c2cd8700 1 -- 
192.168.123.105:0/1676311596 shutdown_connections 2026-03-09T15:05:06.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.433+0000 7f67c2cd8700 1 --2- 192.168.123.105:0/1676311596 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67bc101100 0x7f67bc101570 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.433+0000 7f67c2cd8700 1 --2- 192.168.123.105:0/1676311596 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f67bc0ff480 0x7f67bc100bc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.433+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1676311596 >> 192.168.123.105:0/1676311596 conn(0x7f67bc0747e0 msgr2=0x7f67bc074be0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:06.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.433+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1676311596 shutdown_connections 2026-03-09T15:05:06.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.433+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1676311596 wait complete. 
2026-03-09T15:05:06.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.434+0000 7f67c2cd8700 1 Processor -- start 2026-03-09T15:05:06.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.434+0000 7f67c2cd8700 1 -- start start 2026-03-09T15:05:06.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.434+0000 7f67c2cd8700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f67bc0ff480 0x7f67bc198250 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:06.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.434+0000 7f67c2cd8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67bc101100 0x7f67bc198790 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:06.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.434+0000 7f67c2cd8700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f67bc198e70 con 0x7f67bc101100 2026-03-09T15:05:06.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.434+0000 7f67c2cd8700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f67bc19cc00 con 0x7f67bc0ff480 2026-03-09T15:05:06.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.434+0000 7f67bbfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67bc101100 0x7f67bc198790 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:06.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.434+0000 7f67bbfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67bc101100 0x7f67bc198790 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:34584/0 (socket says 192.168.123.105:34584) 2026-03-09T15:05:06.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.434+0000 7f67bbfff700 1 -- 192.168.123.105:0/1705966325 learned_addr learned my addr 192.168.123.105:0/1705966325 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:06.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.435+0000 7f67bbfff700 1 -- 192.168.123.105:0/1705966325 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f67bc0ff480 msgr2=0x7f67bc198250 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:05:06.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.435+0000 7f67bbfff700 1 --2- 192.168.123.105:0/1705966325 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f67bc0ff480 0x7f67bc198250 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.435+0000 7f67bbfff700 1 -- 192.168.123.105:0/1705966325 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f67b00097e0 con 0x7f67bc101100 2026-03-09T15:05:06.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.435+0000 7f67bbfff700 1 --2- 192.168.123.105:0/1705966325 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67bc101100 0x7f67bc198790 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f67b00094d0 tx=0x7f67b00049e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:06.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.435+0000 7f67b9ffb700 1 -- 192.168.123.105:0/1705966325 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f67b001d070 con 0x7f67bc101100 2026-03-09T15:05:06.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.435+0000 7f67b9ffb700 1 -- 
192.168.123.105:0/1705966325 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f67b000bc50 con 0x7f67bc101100 2026-03-09T15:05:06.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.435+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1705966325 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f67bc19cee0 con 0x7f67bc101100 2026-03-09T15:05:06.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.435+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1705966325 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f67bc19d430 con 0x7f67bc101100 2026-03-09T15:05:06.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.436+0000 7f67b9ffb700 1 -- 192.168.123.105:0/1705966325 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f67b0022620 con 0x7f67bc101100 2026-03-09T15:05:06.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.437+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1705966325 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f67bc04ea50 con 0x7f67bc101100 2026-03-09T15:05:06.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.437+0000 7f67b9ffb700 1 -- 192.168.123.105:0/1705966325 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f67b000f660 con 0x7f67bc101100 2026-03-09T15:05:06.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.438+0000 7f67b9ffb700 1 --2- 192.168.123.105:0/1705966325 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f67a40779f0 0x7f67a4079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:06.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.438+0000 7f67b9ffb700 1 -- 
192.168.123.105:0/1705966325 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(55..55 src has 1..55) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f67b009b600 con 0x7f67bc101100 2026-03-09T15:05:06.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.440+0000 7f67b9ffb700 1 -- 192.168.123.105:0/1705966325 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f67b0063b90 con 0x7f67bc101100 2026-03-09T15:05:06.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.440+0000 7f67c0a74700 1 --2- 192.168.123.105:0/1705966325 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f67a40779f0 0x7f67a4079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:06.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.441+0000 7f67c0a74700 1 --2- 192.168.123.105:0/1705966325 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f67a40779f0 0x7f67a4079ea0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f67ac00a960 tx=0x7f67ac005c10 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:06.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.572+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1705966325 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f67bc19d710 con 0x7f67a40779f0 2026-03-09T15:05:06.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.573+0000 7f67b9ffb700 1 -- 192.168.123.105:0/1705966325 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f67bc19d710 con 0x7f67a40779f0 2026-03-09T15:05:06.576 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.575+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1705966325 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f67a40779f0 msgr2=0x7f67a4079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:06.576 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.575+0000 7f67c2cd8700 1 --2- 192.168.123.105:0/1705966325 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f67a40779f0 0x7f67a4079ea0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f67ac00a960 tx=0x7f67ac005c10 comp rx=0 tx=0).stop 2026-03-09T15:05:06.576 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.576+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1705966325 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67bc101100 msgr2=0x7f67bc198790 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:06.576 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.576+0000 7f67c2cd8700 1 --2- 192.168.123.105:0/1705966325 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67bc101100 0x7f67bc198790 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f67b00094d0 tx=0x7f67b00049e0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.576+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1705966325 shutdown_connections 2026-03-09T15:05:06.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.576+0000 7f67c2cd8700 1 --2- 192.168.123.105:0/1705966325 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f67a40779f0 0x7f67a4079ea0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.576+0000 7f67c2cd8700 1 --2- 192.168.123.105:0/1705966325 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 
conn(0x7f67bc0ff480 0x7f67bc198250 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.576+0000 7f67c2cd8700 1 --2- 192.168.123.105:0/1705966325 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f67bc101100 0x7f67bc198790 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.576+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1705966325 >> 192.168.123.105:0/1705966325 conn(0x7f67bc0747e0 msgr2=0x7f67bc0fe050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:06.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.576+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1705966325 shutdown_connections 2026-03-09T15:05:06.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.576+0000 7f67c2cd8700 1 -- 192.168.123.105:0/1705966325 wait complete. 
2026-03-09T15:05:06.586 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:05:06.648 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.647+0000 7ff446520700 1 -- 192.168.123.105:0/1224234441 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff440102810 msgr2=0x7ff440102c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:06.648 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.647+0000 7ff446520700 1 --2- 192.168.123.105:0/1224234441 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff440102810 0x7ff440102c80 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7ff430009b00 tx=0x7ff430009e10 comp rx=0 tx=0).stop 2026-03-09T15:05:06.648 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.647+0000 7ff446520700 1 -- 192.168.123.105:0/1224234441 shutdown_connections 2026-03-09T15:05:06.648 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.647+0000 7ff446520700 1 --2- 192.168.123.105:0/1224234441 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff440102810 0x7ff440102c80 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.648 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.647+0000 7ff446520700 1 --2- 192.168.123.105:0/1224234441 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff440108810 0x7ff440108be0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.648 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.647+0000 7ff446520700 1 -- 192.168.123.105:0/1224234441 >> 192.168.123.105:0/1224234441 conn(0x7ff4400fe330 msgr2=0x7ff440100740 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:06.648 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.647+0000 7ff446520700 1 -- 192.168.123.105:0/1224234441 shutdown_connections 2026-03-09T15:05:06.648 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.647+0000 7ff446520700 1 -- 192.168.123.105:0/1224234441 wait complete. 2026-03-09T15:05:06.649 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.648+0000 7ff446520700 1 Processor -- start 2026-03-09T15:05:06.649 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.648+0000 7ff446520700 1 -- start start 2026-03-09T15:05:06.649 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.648+0000 7ff446520700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff440102810 0x7ff440198400 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:06.649 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.648+0000 7ff446520700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff440108810 0x7ff440198940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:06.649 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.648+0000 7ff446520700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff440199020 con 0x7ff440108810 2026-03-09T15:05:06.649 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.648+0000 7ff43f7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff440108810 0x7ff440198940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:06.649 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.648+0000 7ff43f7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff440108810 0x7ff440198940 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:34596/0 (socket says 192.168.123.105:34596) 2026-03-09T15:05:06.649 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.648+0000 7ff43f7fe700 1 -- 192.168.123.105:0/2077143757 learned_addr learned my addr 192.168.123.105:0/2077143757 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:06.649 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.649+0000 7ff446520700 1 -- 192.168.123.105:0/2077143757 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff44019cdb0 con 0x7ff440102810 2026-03-09T15:05:06.650 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.649+0000 7ff43f7fe700 1 -- 192.168.123.105:0/2077143757 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff440102810 msgr2=0x7ff440198400 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:05:06.650 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.649+0000 7ff43f7fe700 1 --2- 192.168.123.105:0/2077143757 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff440102810 0x7ff440198400 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.650 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.649+0000 7ff43f7fe700 1 -- 192.168.123.105:0/2077143757 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff4300097e0 con 0x7ff440108810 2026-03-09T15:05:06.650 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.649+0000 7ff43f7fe700 1 --2- 192.168.123.105:0/2077143757 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff440108810 0x7ff440198940 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7ff4300094d0 tx=0x7ff4300049e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:06.650 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.649+0000 7ff43d7fa700 1 -- 192.168.123.105:0/2077143757 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 
(secure 0 0 0) 0x7ff43001d070 con 0x7ff440108810 2026-03-09T15:05:06.651 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.650+0000 7ff43d7fa700 1 -- 192.168.123.105:0/2077143757 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7ff43000bc50 con 0x7ff440108810 2026-03-09T15:05:06.651 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.650+0000 7ff446520700 1 -- 192.168.123.105:0/2077143757 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff44019d030 con 0x7ff440108810 2026-03-09T15:05:06.651 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.650+0000 7ff446520700 1 -- 192.168.123.105:0/2077143757 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff44019d520 con 0x7ff440108810 2026-03-09T15:05:06.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.651+0000 7ff446520700 1 -- 192.168.123.105:0/2077143757 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff44004ea50 con 0x7ff440108810 2026-03-09T15:05:06.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.650+0000 7ff43d7fa700 1 -- 192.168.123.105:0/2077143757 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff430022620 con 0x7ff440108810 2026-03-09T15:05:06.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.652+0000 7ff43d7fa700 1 -- 192.168.123.105:0/2077143757 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7ff43000f620 con 0x7ff440108810 2026-03-09T15:05:06.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.655+0000 7ff43d7fa700 1 --2- 192.168.123.105:0/2077143757 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff42c0779a0 0x7ff42c079e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 
comp rx=0 tx=0).connect 2026-03-09T15:05:06.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.655+0000 7ff43ffff700 1 --2- 192.168.123.105:0/2077143757 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff42c0779a0 0x7ff42c079e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:06.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.656+0000 7ff43d7fa700 1 -- 192.168.123.105:0/2077143757 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(55..55 src has 1..55) v4 ==== 6136+0+0 (secure 0 0 0) 0x7ff43009b5c0 con 0x7ff440108810 2026-03-09T15:05:06.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.656+0000 7ff43ffff700 1 --2- 192.168.123.105:0/2077143757 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff42c0779a0 0x7ff42c079e50 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7ff440103950 tx=0x7ff428005c30 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:06.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.656+0000 7ff43d7fa700 1 -- 192.168.123.105:0/2077143757 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ff4300a11f0 con 0x7ff440108810 2026-03-09T15:05:06.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.782+0000 7ff446520700 1 -- 192.168.123.105:0/2077143757 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7ff44019d800 con 0x7ff42c0779a0 2026-03-09T15:05:06.784 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.783+0000 7ff43d7fa700 1 -- 192.168.123.105:0/2077143757 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== 
mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7ff44019d800 con 0x7ff42c0779a0 2026-03-09T15:05:06.788 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.787+0000 7ff446520700 1 -- 192.168.123.105:0/2077143757 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff42c0779a0 msgr2=0x7ff42c079e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:06.788 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.787+0000 7ff446520700 1 --2- 192.168.123.105:0/2077143757 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff42c0779a0 0x7ff42c079e50 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7ff440103950 tx=0x7ff428005c30 comp rx=0 tx=0).stop 2026-03-09T15:05:06.788 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.787+0000 7ff446520700 1 -- 192.168.123.105:0/2077143757 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff440108810 msgr2=0x7ff440198940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:06.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.787+0000 7ff446520700 1 --2- 192.168.123.105:0/2077143757 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff440108810 0x7ff440198940 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7ff4300094d0 tx=0x7ff4300049e0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.787+0000 7ff446520700 1 -- 192.168.123.105:0/2077143757 shutdown_connections 2026-03-09T15:05:06.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.787+0000 7ff446520700 1 --2- 192.168.123.105:0/2077143757 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff42c0779a0 0x7ff42c079e50 secure :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7ff440103950 tx=0x7ff428005c30 comp rx=0 tx=0).stop 2026-03-09T15:05:06.789 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.787+0000 7ff446520700 1 --2- 192.168.123.105:0/2077143757 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff440102810 0x7ff440198400 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.787+0000 7ff446520700 1 --2- 192.168.123.105:0/2077143757 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff440108810 0x7ff440198940 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.787+0000 7ff446520700 1 -- 192.168.123.105:0/2077143757 >> 192.168.123.105:0/2077143757 conn(0x7ff4400fe330 msgr2=0x7ff4400ff9f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:06.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.788+0000 7ff446520700 1 -- 192.168.123.105:0/2077143757 shutdown_connections 2026-03-09T15:05:06.789 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.788+0000 7ff446520700 1 -- 192.168.123.105:0/2077143757 wait complete. 
2026-03-09T15:05:06.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.865+0000 7f6610ac5700 1 -- 192.168.123.105:0/2123002187 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f660c068490 msgr2=0x7f660c068900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:06.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.865+0000 7f6610ac5700 1 --2- 192.168.123.105:0/2123002187 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f660c068490 0x7f660c068900 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f65fc009b00 tx=0x7f65fc009e10 comp rx=0 tx=0).stop 2026-03-09T15:05:06.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.866+0000 7f6610ac5700 1 -- 192.168.123.105:0/2123002187 shutdown_connections 2026-03-09T15:05:06.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.866+0000 7f6610ac5700 1 --2- 192.168.123.105:0/2123002187 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f660c068490 0x7f660c068900 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.866+0000 7f6610ac5700 1 --2- 192.168.123.105:0/2123002187 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f660c1066c0 0x7f660c106a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.866+0000 7f6610ac5700 1 -- 192.168.123.105:0/2123002187 >> 192.168.123.105:0/2123002187 conn(0x7f660c0754a0 msgr2=0x7f660c0758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:06.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.866+0000 7f6610ac5700 1 -- 192.168.123.105:0/2123002187 shutdown_connections 2026-03-09T15:05:06.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.866+0000 7f6610ac5700 1 -- 192.168.123.105:0/2123002187 
wait complete. 2026-03-09T15:05:06.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.867+0000 7f6610ac5700 1 Processor -- start 2026-03-09T15:05:06.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.867+0000 7f6610ac5700 1 -- start start 2026-03-09T15:05:06.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.867+0000 7f6610ac5700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f660c068490 0x7f660c1960b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:06.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.867+0000 7f6610ac5700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f660c1066c0 0x7f660c1965f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:06.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.867+0000 7f6610ac5700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f660c196cd0 con 0x7f660c1066c0 2026-03-09T15:05:06.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.867+0000 7f6610ac5700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f660c19aa60 con 0x7f660c068490 2026-03-09T15:05:06.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.868+0000 7f6609d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f660c1066c0 0x7f660c1965f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:06.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.868+0000 7f6609d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f660c1066c0 0x7f660c1965f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.105:34608/0 (socket says 192.168.123.105:34608) 2026-03-09T15:05:06.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.868+0000 7f6609d9b700 1 -- 192.168.123.105:0/4188496499 learned_addr learned my addr 192.168.123.105:0/4188496499 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:06.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.868+0000 7f6609d9b700 1 -- 192.168.123.105:0/4188496499 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f660c068490 msgr2=0x7f660c1960b0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:05:06.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.868+0000 7f660a59c700 1 --2- 192.168.123.105:0/4188496499 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f660c068490 0x7f660c1960b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:06.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.868+0000 7f6609d9b700 1 --2- 192.168.123.105:0/4188496499 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f660c068490 0x7f660c1960b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:06.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.868+0000 7f6609d9b700 1 -- 192.168.123.105:0/4188496499 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f65fc0097e0 con 0x7f660c1066c0 2026-03-09T15:05:06.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.868+0000 7f660a59c700 1 --2- 192.168.123.105:0/4188496499 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f660c068490 0x7f660c1960b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:05:06.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.868+0000 7f6609d9b700 1 --2- 192.168.123.105:0/4188496499 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f660c1066c0 0x7f660c1965f0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f65fc009fd0 tx=0x7f65fc0049e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:06.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.869+0000 7f66037fe700 1 -- 192.168.123.105:0/4188496499 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f65fc01d070 con 0x7f660c1066c0 2026-03-09T15:05:06.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.869+0000 7f66037fe700 1 -- 192.168.123.105:0/4188496499 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f65fc00bc50 con 0x7f660c1066c0 2026-03-09T15:05:06.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.869+0000 7f6610ac5700 1 -- 192.168.123.105:0/4188496499 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f660c19ace0 con 0x7f660c1066c0 2026-03-09T15:05:06.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.869+0000 7f6610ac5700 1 -- 192.168.123.105:0/4188496499 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f660c19b1d0 con 0x7f660c1066c0 2026-03-09T15:05:06.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.870+0000 7f66037fe700 1 -- 192.168.123.105:0/4188496499 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f65fc022620 con 0x7f660c1066c0 2026-03-09T15:05:06.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.871+0000 7f6610ac5700 1 -- 192.168.123.105:0/4188496499 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f660c04ea50 con 0x7f660c1066c0 2026-03-09T15:05:06.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.871+0000 7f66037fe700 1 -- 192.168.123.105:0/4188496499 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f65fc00f620 con 0x7f660c1066c0 2026-03-09T15:05:06.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.872+0000 7f66037fe700 1 --2- 192.168.123.105:0/4188496499 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f65f80779f0 0x7f65f8079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:06.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.872+0000 7f66037fe700 1 -- 192.168.123.105:0/4188496499 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(55..55 src has 1..55) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f65fc09c050 con 0x7f660c1066c0 2026-03-09T15:05:06.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.873+0000 7f660a59c700 1 --2- 192.168.123.105:0/4188496499 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f65f80779f0 0x7f65f8079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:06.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.873+0000 7f660a59c700 1 --2- 192.168.123.105:0/4188496499 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f65f80779f0 0x7f65f8079ea0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f65f4005fd0 tx=0x7f65f4005dc0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:06.876 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.875+0000 7f66037fe700 1 -- 192.168.123.105:0/4188496499 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f65fc064660 con 0x7f660c1066c0 2026-03-09T15:05:06.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:06.994+0000 7f6610ac5700 1 -- 192.168.123.105:0/4188496499 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f660c19b4b0 con 0x7f65f80779f0 2026-03-09T15:05:07.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.000+0000 7f66037fe700 1 -- 192.168.123.105:0/4188496499 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f660c19b4b0 con 0x7f65f80779f0 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (2m) 6s ago 9m 24.0M - 0.25.0 c8568f914cd2 7635cece310c 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (9m) 6s ago 9m 9332k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (8m) 58s ago 8m 11.2M - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (62s) 6s ago 9m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 35d8c0ae5a58 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (60s) 58s ago 8m 8308k - 19.2.3-678-ge911bdeb 654f31e6858e 82bdad36caf9 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (117s) 6s ago 8m 73.9M - 10.4.0 c8b91775d855 eb6431f63d88 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (6m) 6s ago 6m 181M - 18.2.0 dc2bc1663786 ea3dca51957f 
2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (6m) 6s ago 6m 17.7M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (6m) 58s ago 6m 17.0M - 18.2.0 dc2bc1663786 6c77fb591d5a 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (6m) 58s ago 6m 133M - 18.2.0 dc2bc1663786 b5ad1c71089a 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (3m) 6s ago 9m 617M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (2m) 58s ago 8m 495M - 19.2.3-678-ge911bdeb 654f31e6858e acf5a6f3f804 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (93s) 6s ago 9m 59.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1e11655f7d87 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (76s) 58s ago 8m 50.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e d1f0309f4d58 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (2m) 6s ago 9m 10.1M - 1.7.0 72c9c2088986 888d071c50d9 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (2m) 58s ago 8m 9445k - 1.7.0 72c9c2088986 22c96a576a60 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (52s) 6s ago 7m 144M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f2883abca2d2 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (29s) 6s ago 7m 109M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b830d7f76498 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (7s) 6s ago 7m 12.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 01cf87b8bc05 2026-03-09T15:05:07.001 
INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (7m) 58s ago 7m 436M 4096M 18.2.0 dc2bc1663786 e79644a0564f 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (7m) 58s ago 7m 382M 4096M 18.2.0 dc2bc1663786 4239752204df 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (7m) 58s ago 7m 334M 4096M 18.2.0 dc2bc1663786 85fde149396e 2026-03-09T15:05:07.001 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (2m) 6s ago 8m 54.9M - 2.51.0 1d3b7f56885b e6f470b0ba11 2026-03-09T15:05:07.003 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.002+0000 7f6610ac5700 1 -- 192.168.123.105:0/4188496499 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f65f80779f0 msgr2=0x7f65f8079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.003 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.002+0000 7f6610ac5700 1 --2- 192.168.123.105:0/4188496499 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f65f80779f0 0x7f65f8079ea0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f65f4005fd0 tx=0x7f65f4005dc0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.003 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.002+0000 7f6610ac5700 1 -- 192.168.123.105:0/4188496499 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f660c1066c0 msgr2=0x7f660c1965f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.003 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.002+0000 7f6610ac5700 1 --2- 192.168.123.105:0/4188496499 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f660c1066c0 0x7f660c1965f0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f65fc009fd0 tx=0x7f65fc0049e0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.003+0000 7f6610ac5700 1 -- 
192.168.123.105:0/4188496499 shutdown_connections 2026-03-09T15:05:07.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.003+0000 7f6610ac5700 1 --2- 192.168.123.105:0/4188496499 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f65f80779f0 0x7f65f8079ea0 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.003+0000 7f6610ac5700 1 --2- 192.168.123.105:0/4188496499 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f660c068490 0x7f660c1960b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.003+0000 7f6610ac5700 1 --2- 192.168.123.105:0/4188496499 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f660c1066c0 0x7f660c1965f0 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.003+0000 7f6610ac5700 1 -- 192.168.123.105:0/4188496499 >> 192.168.123.105:0/4188496499 conn(0x7f660c0754a0 msgr2=0x7f660c0feb10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:07.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.003+0000 7f6610ac5700 1 -- 192.168.123.105:0/4188496499 shutdown_connections 2026-03-09T15:05:07.004 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.003+0000 7f6610ac5700 1 -- 192.168.123.105:0/4188496499 wait complete. 
2026-03-09T15:05:07.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:07 vm05.local ceph-mon[116516]: pgmap v58: 65 pgs: 15 active+undersized, 12 active+undersized+degraded, 38 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 34/264 objects degraded (12.879%) 2026-03-09T15:05:07.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:07 vm05.local ceph-mon[116516]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T15:05:07.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:07 vm05.local ceph-mon[116516]: osd.2 [v2:192.168.123.105:6818/3455638482,v1:192.168.123.105:6819/3455638482] boot 2026-03-09T15:05:07.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:07 vm05.local ceph-mon[116516]: osdmap e55: 6 total, 6 up, 6 in 2026-03-09T15:05:07.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:07 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:05:07.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.086+0000 7fdc7b125700 1 -- 192.168.123.105:0/1773009741 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc74108790 msgr2=0x7fdc74108b60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.086+0000 7fdc7b125700 1 --2- 192.168.123.105:0/1773009741 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc74108790 0x7fdc74108b60 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fdc68009b00 tx=0x7fdc68009e10 comp rx=0 tx=0).stop 2026-03-09T15:05:07.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.086+0000 7fdc7b125700 1 -- 192.168.123.105:0/1773009741 shutdown_connections 2026-03-09T15:05:07.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.086+0000 7fdc7b125700 1 --2- 192.168.123.105:0/1773009741 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc74102790 0x7fdc74102c00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.086+0000 7fdc7b125700 1 --2- 192.168.123.105:0/1773009741 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc74108790 0x7fdc74108b60 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.086+0000 7fdc7b125700 1 -- 192.168.123.105:0/1773009741 >> 192.168.123.105:0/1773009741 conn(0x7fdc740fe2b0 msgr2=0x7fdc741006c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:07.088 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.087+0000 7fdc7b125700 1 -- 192.168.123.105:0/1773009741 shutdown_connections 2026-03-09T15:05:07.088 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.087+0000 7fdc7b125700 1 -- 192.168.123.105:0/1773009741 wait complete. 
2026-03-09T15:05:07.088 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.087+0000 7fdc7b125700 1 Processor -- start 2026-03-09T15:05:07.088 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.087+0000 7fdc7b125700 1 -- start start 2026-03-09T15:05:07.089 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.088+0000 7fdc7b125700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc74102790 0x7fdc74198360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:07.089 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.088+0000 7fdc7b125700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc74108790 0x7fdc741988a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:07.089 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.088+0000 7fdc7b125700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdc74198f80 con 0x7fdc74102790 2026-03-09T15:05:07.089 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.088+0000 7fdc7b125700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fdc7419cd10 con 0x7fdc74108790 2026-03-09T15:05:07.089 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.088+0000 7fdc78ec1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc74102790 0x7fdc74198360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:07.089 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.088+0000 7fdc78ec1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc74102790 0x7fdc74198360 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:34620/0 (socket says 192.168.123.105:34620) 2026-03-09T15:05:07.089 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.088+0000 7fdc78ec1700 1 -- 192.168.123.105:0/2725495152 learned_addr learned my addr 192.168.123.105:0/2725495152 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:07.089 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.088+0000 7fdc78ec1700 1 -- 192.168.123.105:0/2725495152 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc74108790 msgr2=0x7fdc741988a0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:05:07.089 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.088+0000 7fdc78ec1700 1 --2- 192.168.123.105:0/2725495152 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc74108790 0x7fdc741988a0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.089 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.088+0000 7fdc78ec1700 1 -- 192.168.123.105:0/2725495152 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fdc64009710 con 0x7fdc74102790 2026-03-09T15:05:07.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.089+0000 7fdc78ec1700 1 --2- 192.168.123.105:0/2725495152 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc74102790 0x7fdc74198360 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fdc6800ba30 tx=0x7fdc6800bb10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:07.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.089+0000 7fdc71ffb700 1 -- 192.168.123.105:0/2725495152 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdc6801d070 con 0x7fdc74102790 2026-03-09T15:05:07.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.089+0000 7fdc7b125700 1 -- 
192.168.123.105:0/2725495152 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fdc680097e0 con 0x7fdc74102790 2026-03-09T15:05:07.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.090+0000 7fdc7b125700 1 -- 192.168.123.105:0/2725495152 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fdc7419d2f0 con 0x7fdc74102790 2026-03-09T15:05:07.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.090+0000 7fdc71ffb700 1 -- 192.168.123.105:0/2725495152 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fdc6800f460 con 0x7fdc74102790 2026-03-09T15:05:07.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.090+0000 7fdc71ffb700 1 -- 192.168.123.105:0/2725495152 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fdc68021600 con 0x7fdc74102790 2026-03-09T15:05:07.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.092+0000 7fdc7b125700 1 -- 192.168.123.105:0/2725495152 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fdc7404ea50 con 0x7fdc74102790 2026-03-09T15:05:07.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.092+0000 7fdc71ffb700 1 -- 192.168.123.105:0/2725495152 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7fdc68017400 con 0x7fdc74102790 2026-03-09T15:05:07.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.095+0000 7fdc71ffb700 1 --2- 192.168.123.105:0/2725495152 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fdc5c0779e0 0x7fdc5c079e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:07.096 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.096+0000 7fdc71ffb700 1 -- 
192.168.123.105:0/2725495152 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(56..56 src has 1..56) v4 ==== 6136+0+0 (secure 0 0 0) 0x7fdc68003a40 con 0x7fdc74102790 2026-03-09T15:05:07.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.096+0000 7fdc73fff700 1 --2- 192.168.123.105:0/2725495152 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fdc5c0779e0 0x7fdc5c079e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:07.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.096+0000 7fdc71ffb700 1 -- 192.168.123.105:0/2725495152 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fdc680a0050 con 0x7fdc74102790 2026-03-09T15:05:07.097 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.096+0000 7fdc73fff700 1 --2- 192.168.123.105:0/2725495152 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fdc5c0779e0 0x7fdc5c079e90 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fdc74199980 tx=0x7fdc64009450 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.264+0000 7fdc7b125700 1 -- 192.168.123.105:0/2725495152 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fdc7419d570 con 0x7fdc74102790 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.265+0000 7fdc71ffb700 1 -- 192.168.123.105:0/2725495152 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+799 (secure 0 0 0) 0x7fdc68063760 con 0x7fdc74102790 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:05:07.266 
INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 3, 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 7, 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 7 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:05:07.266 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:05:07.268 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.267+0000 7fdc7b125700 1 -- 
192.168.123.105:0/2725495152 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fdc5c0779e0 msgr2=0x7fdc5c079e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.268 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.267+0000 7fdc7b125700 1 --2- 192.168.123.105:0/2725495152 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fdc5c0779e0 0x7fdc5c079e90 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7fdc74199980 tx=0x7fdc64009450 comp rx=0 tx=0).stop 2026-03-09T15:05:07.268 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.267+0000 7fdc7b125700 1 -- 192.168.123.105:0/2725495152 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc74102790 msgr2=0x7fdc74198360 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.268 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.267+0000 7fdc7b125700 1 --2- 192.168.123.105:0/2725495152 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc74102790 0x7fdc74198360 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fdc6800ba30 tx=0x7fdc6800bb10 comp rx=0 tx=0).stop 2026-03-09T15:05:07.269 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.268+0000 7fdc7b125700 1 -- 192.168.123.105:0/2725495152 shutdown_connections 2026-03-09T15:05:07.269 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.268+0000 7fdc7b125700 1 --2- 192.168.123.105:0/2725495152 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fdc5c0779e0 0x7fdc5c079e90 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.269 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.268+0000 7fdc7b125700 1 --2- 192.168.123.105:0/2725495152 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fdc74102790 0x7fdc74198360 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-09T15:05:07.269 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.268+0000 7fdc7b125700 1 --2- 192.168.123.105:0/2725495152 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fdc74108790 0x7fdc741988a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.269 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.268+0000 7fdc7b125700 1 -- 192.168.123.105:0/2725495152 >> 192.168.123.105:0/2725495152 conn(0x7fdc740fe2b0 msgr2=0x7fdc740ffa10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:07.269 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.268+0000 7fdc7b125700 1 -- 192.168.123.105:0/2725495152 shutdown_connections 2026-03-09T15:05:07.269 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.268+0000 7fdc7b125700 1 -- 192.168.123.105:0/2725495152 wait complete. 2026-03-09T15:05:07.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.339+0000 7f3772932700 1 -- 192.168.123.105:0/3933626514 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f376c102780 msgr2=0x7f376c102bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.339+0000 7f3772932700 1 --2- 192.168.123.105:0/3933626514 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f376c102780 0x7f376c102bf0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f375c009b00 tx=0x7f375c009e10 comp rx=0 tx=0).stop 2026-03-09T15:05:07.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.341+0000 7f3772932700 1 -- 192.168.123.105:0/3933626514 shutdown_connections 2026-03-09T15:05:07.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.341+0000 7f3772932700 1 --2- 192.168.123.105:0/3933626514 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f376c102780 0x7f376c102bf0 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.341+0000 7f3772932700 1 --2- 192.168.123.105:0/3933626514 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f376c108780 0x7f376c108b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.341+0000 7f3772932700 1 -- 192.168.123.105:0/3933626514 >> 192.168.123.105:0/3933626514 conn(0x7f376c0fe280 msgr2=0x7f376c100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:07.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.342+0000 7f3772932700 1 -- 192.168.123.105:0/3933626514 shutdown_connections 2026-03-09T15:05:07.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.342+0000 7f3772932700 1 -- 192.168.123.105:0/3933626514 wait complete. 2026-03-09T15:05:07.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.342+0000 7f3772932700 1 Processor -- start 2026-03-09T15:05:07.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.342+0000 7f3772932700 1 -- start start 2026-03-09T15:05:07.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.343+0000 7f3772932700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f376c102780 0x7f376c075260 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:07.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.343+0000 7f3772932700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f376c108780 0x7f376c0757a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:07.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.343+0000 7f3772932700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f376c079360 con 0x7f376c108780 
2026-03-09T15:05:07.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.343+0000 7f3772932700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f376c075ce0 con 0x7f376c102780 2026-03-09T15:05:07.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.343+0000 7f376b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f376c108780 0x7f376c0757a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:07.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.343+0000 7f376b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f376c108780 0x7f376c0757a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:34636/0 (socket says 192.168.123.105:34636) 2026-03-09T15:05:07.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.343+0000 7f376b7fe700 1 -- 192.168.123.105:0/3185736359 learned_addr learned my addr 192.168.123.105:0/3185736359 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:07.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.344+0000 7f376b7fe700 1 -- 192.168.123.105:0/3185736359 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f376c102780 msgr2=0x7f376c075260 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:05:07.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.344+0000 7f376b7fe700 1 --2- 192.168.123.105:0/3185736359 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f376c102780 0x7f376c075260 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.345 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.344+0000 7f376b7fe700 1 -- 192.168.123.105:0/3185736359 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f375c0097e0 con 0x7f376c108780 2026-03-09T15:05:07.345 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.344+0000 7f376b7fe700 1 --2- 192.168.123.105:0/3185736359 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f376c108780 0x7f376c0757a0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f375c006010 tx=0x7f375c004c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:07.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.345+0000 7f37697fa700 1 -- 192.168.123.105:0/3185736359 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f375c01d070 con 0x7f376c108780 2026-03-09T15:05:07.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.345+0000 7f37697fa700 1 -- 192.168.123.105:0/3185736359 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f375c00bbf0 con 0x7f376c108780 2026-03-09T15:05:07.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.345+0000 7f37697fa700 1 -- 192.168.123.105:0/3185736359 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f375c00f8a0 con 0x7f376c108780 2026-03-09T15:05:07.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.345+0000 7f3772932700 1 -- 192.168.123.105:0/3185736359 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f376c075f60 con 0x7f376c108780 2026-03-09T15:05:07.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.345+0000 7f3772932700 1 -- 192.168.123.105:0/3185736359 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f376c1a6ae0 con 0x7f376c108780 2026-03-09T15:05:07.350 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.347+0000 7f37697fa700 1 -- 
192.168.123.105:0/3185736359 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f375c00fa00 con 0x7f376c108780 2026-03-09T15:05:07.350 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.347+0000 7f3772932700 1 -- 192.168.123.105:0/3185736359 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f376c04ea50 con 0x7f376c108780 2026-03-09T15:05:07.350 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.347+0000 7f37697fa700 1 --2- 192.168.123.105:0/3185736359 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f37540779f0 0x7f3754079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:07.350 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.347+0000 7f37697fa700 1 -- 192.168.123.105:0/3185736359 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(56..56 src has 1..56) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f375c09b360 con 0x7f376c108780 2026-03-09T15:05:07.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.350+0000 7f376bfff700 1 --2- 192.168.123.105:0/3185736359 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f37540779f0 0x7f3754079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:07.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.350+0000 7f37697fa700 1 -- 192.168.123.105:0/3185736359 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f375c063a20 con 0x7f376c108780 2026-03-09T15:05:07.351 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.350+0000 7f376bfff700 1 --2- 192.168.123.105:0/3185736359 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] 
conn(0x7f37540779f0 0x7f3754079ea0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f376c1038c0 tx=0x7f3760005d20 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:07.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:07 vm09.local ceph-mon[98742]: pgmap v58: 65 pgs: 15 active+undersized, 12 active+undersized+degraded, 38 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 34/264 objects degraded (12.879%) 2026-03-09T15:05:07.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:07 vm09.local ceph-mon[98742]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T15:05:07.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:07 vm09.local ceph-mon[98742]: osd.2 [v2:192.168.123.105:6818/3455638482,v1:192.168.123.105:6819/3455638482] boot 2026-03-09T15:05:07.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:07 vm09.local ceph-mon[98742]: osdmap e55: 6 total, 6 up, 6 in 2026-03-09T15:05:07.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:07 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T15:05:07.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.492+0000 7f3772932700 1 -- 192.168.123.105:0/3185736359 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f376c1a6cd0 con 0x7f376c108780 2026-03-09T15:05:07.492 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.492+0000 7f37697fa700 1 -- 192.168.123.105:0/3185736359 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1944 (secure 0 0 0) 0x7f375c063170 con 0x7f376c108780 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:e11 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:btime 
1970-01-01T00:00:00:000000+0000 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:legacy client fscid: 1 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:Filesystem 'cephfs' (1) 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:fs_name cephfs 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:epoch 9 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:created 2026-03-09T14:58:23.182447+0000 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:modified 2026-03-09T14:58:30.215642+0000 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:tableserver 0 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:root 0 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:session_timeout 60 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:session_autoclose 300 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:max_file_size 1099511627776 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:max_xattr_size 65536 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:required_client_features {} 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:last_failure 0 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:last_failure_osd_epoch 0 
2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:max_mds 1 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:in 0 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:up {0=14502} 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:failed 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:damaged 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:stopped 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:data_pools [3] 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:metadata_pool 2 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:inline_data enabled 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:balancer 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:bal_rank_mask -1 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:standby_count_wanted 1 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:qdb_cluster leader: 0 members: 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.nrocqt{0:14502} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.ohmitn{0:14510} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:05:07.493 
INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout:Standby daemons: 2026-03-09T15:05:07.493 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:05:07.494 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.rrcyql{-1:14518} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:05:07.494 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.jrhwzz{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:05:07.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.494+0000 7f3772932700 1 -- 192.168.123.105:0/3185736359 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f37540779f0 msgr2=0x7f3754079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.495 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.494+0000 7f3772932700 1 --2- 192.168.123.105:0/3185736359 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f37540779f0 0x7f3754079ea0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f376c1038c0 tx=0x7f3760005d20 comp rx=0 tx=0).stop 2026-03-09T15:05:07.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.495+0000 7f3772932700 1 -- 192.168.123.105:0/3185736359 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f376c108780 msgr2=0x7f376c0757a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.495+0000 7f3772932700 1 --2- 192.168.123.105:0/3185736359 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f376c108780 0x7f376c0757a0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f375c006010 tx=0x7f375c004c80 comp rx=0 tx=0).stop 
2026-03-09T15:05:07.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.495+0000 7f3772932700 1 -- 192.168.123.105:0/3185736359 shutdown_connections 2026-03-09T15:05:07.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.495+0000 7f3772932700 1 --2- 192.168.123.105:0/3185736359 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f37540779f0 0x7f3754079ea0 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.495+0000 7f3772932700 1 --2- 192.168.123.105:0/3185736359 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f376c102780 0x7f376c075260 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.495+0000 7f3772932700 1 --2- 192.168.123.105:0/3185736359 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f376c108780 0x7f376c0757a0 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.495+0000 7f3772932700 1 -- 192.168.123.105:0/3185736359 >> 192.168.123.105:0/3185736359 conn(0x7f376c0fe280 msgr2=0x7f376c0ffaf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:07.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.495+0000 7f3772932700 1 -- 192.168.123.105:0/3185736359 shutdown_connections 2026-03-09T15:05:07.496 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.495+0000 7f3772932700 1 -- 192.168.123.105:0/3185736359 wait complete. 
2026-03-09T15:05:07.497 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 11 2026-03-09T15:05:07.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.564+0000 7f3a910ac700 1 -- 192.168.123.105:0/1158595114 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a8c073a00 msgr2=0x7f3a8c110ef0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.564+0000 7f3a910ac700 1 --2- 192.168.123.105:0/1158595114 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a8c073a00 0x7f3a8c110ef0 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f3a7c009b50 tx=0x7f3a7c009e60 comp rx=0 tx=0).stop 2026-03-09T15:05:07.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.564+0000 7f3a910ac700 1 -- 192.168.123.105:0/1158595114 shutdown_connections 2026-03-09T15:05:07.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.564+0000 7f3a910ac700 1 --2- 192.168.123.105:0/1158595114 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a8c073a00 0x7f3a8c110ef0 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.564+0000 7f3a910ac700 1 --2- 192.168.123.105:0/1158595114 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a8c0730f0 0x7f3a8c0734c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.564+0000 7f3a910ac700 1 -- 192.168.123.105:0/1158595114 >> 192.168.123.105:0/1158595114 conn(0x7f3a8c0fbf20 msgr2=0x7f3a8c0fe330 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:07.566 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.565+0000 7f3a910ac700 1 -- 192.168.123.105:0/1158595114 shutdown_connections 2026-03-09T15:05:07.566 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.565+0000 7f3a910ac700 1 -- 192.168.123.105:0/1158595114 wait complete. 2026-03-09T15:05:07.566 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.566+0000 7f3a910ac700 1 Processor -- start 2026-03-09T15:05:07.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.566+0000 7f3a910ac700 1 -- start start 2026-03-09T15:05:07.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.566+0000 7f3a910ac700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a8c0730f0 0x7f3a8c06dd10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:07.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.566+0000 7f3a910ac700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a8c06e250 0x7f3a8c06e6c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:07.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.566+0000 7f3a910ac700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3a8c072da0 con 0x7f3a8c06e250 2026-03-09T15:05:07.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.566+0000 7f3a910ac700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3a8c072f10 con 0x7f3a8c0730f0 2026-03-09T15:05:07.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.566+0000 7f3a8a59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a8c06e250 0x7f3a8c06e6c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:07.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.566+0000 7f3a8a59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a8c06e250 0x7f3a8c06e6c0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:34650/0 (socket says 192.168.123.105:34650) 2026-03-09T15:05:07.567 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.566+0000 7f3a8a59c700 1 -- 192.168.123.105:0/1303765998 learned_addr learned my addr 192.168.123.105:0/1303765998 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:07.568 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.567+0000 7f3a8ad9d700 1 --2- 192.168.123.105:0/1303765998 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a8c0730f0 0x7f3a8c06dd10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:07.568 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.567+0000 7f3a8ad9d700 1 -- 192.168.123.105:0/1303765998 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a8c06e250 msgr2=0x7f3a8c06e6c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.568 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.567+0000 7f3a8ad9d700 1 --2- 192.168.123.105:0/1303765998 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a8c06e250 0x7f3a8c06e6c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.568 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.567+0000 7f3a8ad9d700 1 -- 192.168.123.105:0/1303765998 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3a7c0097e0 con 0x7f3a8c0730f0 2026-03-09T15:05:07.568 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.567+0000 7f3a8ad9d700 1 --2- 192.168.123.105:0/1303765998 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a8c0730f0 0x7f3a8c06dd10 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto 
rx=0x7f3a8000b6d0 tx=0x7f3a8000b9e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:07.568 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.567+0000 7f3a8a59c700 1 --2- 192.168.123.105:0/1303765998 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a8c06e250 0x7f3a8c06e6c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T15:05:07.568 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.567+0000 7f3a73fff700 1 -- 192.168.123.105:0/1303765998 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3a80011630 con 0x7f3a8c0730f0 2026-03-09T15:05:07.568 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.568+0000 7f3a73fff700 1 -- 192.168.123.105:0/1303765998 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f3a80011c70 con 0x7f3a8c0730f0 2026-03-09T15:05:07.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.568+0000 7f3a910ac700 1 -- 192.168.123.105:0/1303765998 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3a8c104720 con 0x7f3a8c0730f0 2026-03-09T15:05:07.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.568+0000 7f3a73fff700 1 -- 192.168.123.105:0/1303765998 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3a80011630 con 0x7f3a8c0730f0 2026-03-09T15:05:07.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.568+0000 7f3a910ac700 1 -- 192.168.123.105:0/1303765998 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3a8c104c40 con 0x7f3a8c0730f0 2026-03-09T15:05:07.570 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.569+0000 7f3a910ac700 1 -- 192.168.123.105:0/1303765998 --> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3a8c04f310 con 0x7f3a8c0730f0 2026-03-09T15:05:07.571 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.570+0000 7f3a73fff700 1 -- 192.168.123.105:0/1303765998 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f3a80011790 con 0x7f3a8c0730f0 2026-03-09T15:05:07.571 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.570+0000 7f3a73fff700 1 --2- 192.168.123.105:0/1303765998 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3a74077a00 0x7f3a74079eb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:07.571 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.570+0000 7f3a8a59c700 1 --2- 192.168.123.105:0/1303765998 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3a74077a00 0x7f3a74079eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:07.571 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.570+0000 7f3a73fff700 1 -- 192.168.123.105:0/1303765998 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(56..56 src has 1..56) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f3a80099a30 con 0x7f3a8c0730f0 2026-03-09T15:05:07.572 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.571+0000 7f3a8a59c700 1 --2- 192.168.123.105:0/1303765998 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3a74077a00 0x7f3a74079eb0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f3a7c009b20 tx=0x7f3a7c00b580 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:07.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.572+0000 7f3a73fff700 1 -- 192.168.123.105:0/1303765998 
<== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3a80062040 con 0x7f3a8c0730f0 2026-03-09T15:05:07.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.698+0000 7f3a910ac700 1 -- 192.168.123.105:0/1303765998 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f3a8c06f380 con 0x7f3a74077a00 2026-03-09T15:05:07.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.699+0000 7f3a73fff700 1 -- 192.168.123.105:0/1303765998 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+400 (secure 0 0 0) 0x7f3a8c06f380 con 0x7f3a74077a00 2026-03-09T15:05:07.701 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:05:07.701 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T15:05:07.701 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-09T15:05:07.701 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T15:05:07.701 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [ 2026-03-09T15:05:07.701 INFO:teuthology.orchestra.run.vm05.stdout: "mgr", 2026-03-09T15:05:07.701 INFO:teuthology.orchestra.run.vm05.stdout: "mon", 2026-03-09T15:05:07.701 INFO:teuthology.orchestra.run.vm05.stdout: "crash" 2026-03-09T15:05:07.701 INFO:teuthology.orchestra.run.vm05.stdout: ], 2026-03-09T15:05:07.701 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "9/23 daemons upgraded", 2026-03-09T15:05:07.701 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T15:05:07.701 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:05:07.701 
INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:05:07.703 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.702+0000 7f3a910ac700 1 -- 192.168.123.105:0/1303765998 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3a74077a00 msgr2=0x7f3a74079eb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.703 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.703+0000 7f3a910ac700 1 --2- 192.168.123.105:0/1303765998 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3a74077a00 0x7f3a74079eb0 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f3a7c009b20 tx=0x7f3a7c00b580 comp rx=0 tx=0).stop 2026-03-09T15:05:07.704 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.703+0000 7f3a910ac700 1 -- 192.168.123.105:0/1303765998 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a8c0730f0 msgr2=0x7f3a8c06dd10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.704 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.703+0000 7f3a910ac700 1 --2- 192.168.123.105:0/1303765998 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a8c0730f0 0x7f3a8c06dd10 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f3a8000b6d0 tx=0x7f3a8000b9e0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.704 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.703+0000 7f3a910ac700 1 -- 192.168.123.105:0/1303765998 shutdown_connections 2026-03-09T15:05:07.704 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.703+0000 7f3a910ac700 1 --2- 192.168.123.105:0/1303765998 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3a74077a00 0x7f3a74079eb0 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.704 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.703+0000 7f3a910ac700 1 --2- 
192.168.123.105:0/1303765998 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3a8c0730f0 0x7f3a8c06dd10 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.704 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.704+0000 7f3a910ac700 1 --2- 192.168.123.105:0/1303765998 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3a8c06e250 0x7f3a8c06e6c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.704 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.704+0000 7f3a910ac700 1 -- 192.168.123.105:0/1303765998 >> 192.168.123.105:0/1303765998 conn(0x7f3a8c0fbf20 msgr2=0x7f3a8c102ba0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:07.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.704+0000 7f3a910ac700 1 -- 192.168.123.105:0/1303765998 shutdown_connections 2026-03-09T15:05:07.705 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.704+0000 7f3a910ac700 1 -- 192.168.123.105:0/1303765998 wait complete. 
2026-03-09T15:05:07.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.774+0000 7f13a7401700 1 -- 192.168.123.105:0/1403478632 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13a0068490 msgr2=0x7f13a0068900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.774+0000 7f13a7401700 1 --2- 192.168.123.105:0/1403478632 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13a0068490 0x7f13a0068900 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f139c009b00 tx=0x7f139c009e10 comp rx=0 tx=0).stop 2026-03-09T15:05:07.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.774+0000 7f13a7401700 1 -- 192.168.123.105:0/1403478632 shutdown_connections 2026-03-09T15:05:07.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.774+0000 7f13a7401700 1 --2- 192.168.123.105:0/1403478632 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13a0068490 0x7f13a0068900 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.774+0000 7f13a7401700 1 --2- 192.168.123.105:0/1403478632 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f13a01066c0 0x7f13a0106a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.775+0000 7f13a7401700 1 -- 192.168.123.105:0/1403478632 >> 192.168.123.105:0/1403478632 conn(0x7f13a00754a0 msgr2=0x7f13a00758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:07.776 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.775+0000 7f13a7401700 1 -- 192.168.123.105:0/1403478632 shutdown_connections 2026-03-09T15:05:07.776 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.775+0000 7f13a7401700 1 -- 192.168.123.105:0/1403478632 
wait complete. 2026-03-09T15:05:07.776 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.775+0000 7f13a7401700 1 Processor -- start 2026-03-09T15:05:07.776 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.776+0000 7f13a7401700 1 -- start start 2026-03-09T15:05:07.777 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.776+0000 7f13a7401700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13a0068490 0x7f13a00fffd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:07.777 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.776+0000 7f13a7401700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f13a01066c0 0x7f13a0100510 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:07.777 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.776+0000 7f13a7401700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f13a0100a50 con 0x7f13a0068490 2026-03-09T15:05:07.777 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.776+0000 7f13a7401700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f13a0100b90 con 0x7f13a01066c0 2026-03-09T15:05:07.777 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.776+0000 7f13a499c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f13a01066c0 0x7f13a0100510 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:07.777 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.776+0000 7f13a499c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f13a01066c0 0x7f13a0100510 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 
says I am v2:192.168.123.105:48114/0 (socket says 192.168.123.105:48114) 2026-03-09T15:05:07.777 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.776+0000 7f13a499c700 1 -- 192.168.123.105:0/10486680 learned_addr learned my addr 192.168.123.105:0/10486680 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:07.777 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.776+0000 7f13a499c700 1 -- 192.168.123.105:0/10486680 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13a0068490 msgr2=0x7f13a00fffd0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:05:07.777 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.777+0000 7f13a519d700 1 --2- 192.168.123.105:0/10486680 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13a0068490 0x7f13a00fffd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:07.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.777+0000 7f13a499c700 1 --2- 192.168.123.105:0/10486680 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13a0068490 0x7f13a00fffd0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.777+0000 7f13a499c700 1 -- 192.168.123.105:0/10486680 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f139c0097e0 con 0x7f13a01066c0 2026-03-09T15:05:07.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.777+0000 7f13a519d700 1 --2- 192.168.123.105:0/10486680 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13a0068490 0x7f13a00fffd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:05:07.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.777+0000 7f13a499c700 1 --2- 192.168.123.105:0/10486680 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f13a01066c0 0x7f13a0100510 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f139c0094d0 tx=0x7f139c0049e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:07.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.777+0000 7f13967fc700 1 -- 192.168.123.105:0/10486680 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f139c01d070 con 0x7f13a01066c0 2026-03-09T15:05:07.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.777+0000 7f13967fc700 1 -- 192.168.123.105:0/10486680 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f139c00bc50 con 0x7f13a01066c0 2026-03-09T15:05:07.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.777+0000 7f13a7401700 1 -- 192.168.123.105:0/10486680 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f13a0100e10 con 0x7f13a01066c0 2026-03-09T15:05:07.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.777+0000 7f13a7401700 1 -- 192.168.123.105:0/10486680 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f13a01a26d0 con 0x7f13a01066c0 2026-03-09T15:05:07.779 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.778+0000 7f13967fc700 1 -- 192.168.123.105:0/10486680 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f139c022620 con 0x7f13a01066c0 2026-03-09T15:05:07.779 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.778+0000 7f13a7401700 1 -- 192.168.123.105:0/10486680 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 
0x7f13a004ea50 con 0x7f13a01066c0 2026-03-09T15:05:07.780 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.779+0000 7f13967fc700 1 -- 192.168.123.105:0/10486680 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 37) v1 ==== 100217+0+0 (secure 0 0 0) 0x7f139c00f660 con 0x7f13a01066c0 2026-03-09T15:05:07.780 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.779+0000 7f13967fc700 1 --2- 192.168.123.105:0/10486680 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f138c0779f0 0x7f138c079ea0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:07.780 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.779+0000 7f13967fc700 1 -- 192.168.123.105:0/10486680 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(56..56 src has 1..56) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f139c09bfb0 con 0x7f13a01066c0 2026-03-09T15:05:07.780 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.780+0000 7f13a519d700 1 --2- 192.168.123.105:0/10486680 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f138c0779f0 0x7f138c079ea0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:07.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.780+0000 7f13a519d700 1 --2- 192.168.123.105:0/10486680 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f138c0779f0 0x7f138c079ea0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f1390005fd0 tx=0x7f1390005e20 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:07.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.781+0000 7f13967fc700 1 -- 192.168.123.105:0/10486680 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 
(secure 0 0 0) 0x7f139c0645c0 con 0x7f13a01066c0 2026-03-09T15:05:07.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.941+0000 7f13a7401700 1 -- 192.168.123.105:0/10486680 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f13a01015a0 con 0x7f13a01066c0 2026-03-09T15:05:07.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.942+0000 7f13967fc700 1 -- 192.168.123.105:0/10486680 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+1041 (secure 0 0 0) 0x7f139c063d10 con 0x7f13a01066c0 2026-03-09T15:05:07.943 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_WARN 1 filesystem with deprecated feature inline_data; Degraded data redundancy: 34/264 objects degraded (12.879%), 12 pgs degraded 2026-03-09T15:05:07.943 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data 2026-03-09T15:05:07.943 INFO:teuthology.orchestra.run.vm05.stdout: fs cephfs has deprecated feature inline_data enabled. 
2026-03-09T15:05:07.943 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 34/264 objects degraded (12.879%), 12 pgs degraded 2026-03-09T15:05:07.943 INFO:teuthology.orchestra.run.vm05.stdout: pg 2.1 is active+undersized+degraded, acting [1,0] 2026-03-09T15:05:07.943 INFO:teuthology.orchestra.run.vm05.stdout: pg 2.3 is active+undersized+degraded, acting [5,1] 2026-03-09T15:05:07.943 INFO:teuthology.orchestra.run.vm05.stdout: pg 2.7 is active+undersized+degraded, acting [3,4] 2026-03-09T15:05:07.943 INFO:teuthology.orchestra.run.vm05.stdout: pg 2.c is active+undersized+degraded, acting [3,0] 2026-03-09T15:05:07.943 INFO:teuthology.orchestra.run.vm05.stdout: pg 2.d is active+undersized+degraded, acting [1,3] 2026-03-09T15:05:07.944 INFO:teuthology.orchestra.run.vm05.stdout: pg 2.e is active+undersized+degraded, acting [0,3] 2026-03-09T15:05:07.944 INFO:teuthology.orchestra.run.vm05.stdout: pg 2.10 is active+undersized+degraded, acting [1,0] 2026-03-09T15:05:07.944 INFO:teuthology.orchestra.run.vm05.stdout: pg 2.13 is active+undersized+degraded, acting [0,4] 2026-03-09T15:05:07.944 INFO:teuthology.orchestra.run.vm05.stdout: pg 2.16 is active+undersized+degraded, acting [5,3] 2026-03-09T15:05:07.944 INFO:teuthology.orchestra.run.vm05.stdout: pg 2.19 is active+undersized+degraded, acting [0,4] 2026-03-09T15:05:07.944 INFO:teuthology.orchestra.run.vm05.stdout: pg 2.1c is active+undersized+degraded, acting [4,5] 2026-03-09T15:05:07.944 INFO:teuthology.orchestra.run.vm05.stdout: pg 2.1e is active+undersized+degraded, acting [0,5] 2026-03-09T15:05:07.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.944+0000 7f13a7401700 1 -- 192.168.123.105:0/10486680 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f138c0779f0 msgr2=0x7f138c079ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.944+0000 
7f13a7401700 1 --2- 192.168.123.105:0/10486680 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f138c0779f0 0x7f138c079ea0 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f1390005fd0 tx=0x7f1390005e20 comp rx=0 tx=0).stop 2026-03-09T15:05:07.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.945+0000 7f13a7401700 1 -- 192.168.123.105:0/10486680 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f13a01066c0 msgr2=0x7f13a0100510 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:07.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.945+0000 7f13a7401700 1 --2- 192.168.123.105:0/10486680 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f13a01066c0 0x7f13a0100510 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7f139c0094d0 tx=0x7f139c0049e0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.945+0000 7f13a7401700 1 -- 192.168.123.105:0/10486680 shutdown_connections 2026-03-09T15:05:07.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.945+0000 7f13a7401700 1 --2- 192.168.123.105:0/10486680 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f138c0779f0 0x7f138c079ea0 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.945+0000 7f13a7401700 1 --2- 192.168.123.105:0/10486680 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f13a0068490 0x7f13a00fffd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.945+0000 7f13a7401700 1 --2- 192.168.123.105:0/10486680 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f13a01066c0 0x7f13a0100510 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 
tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:07.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.945+0000 7f13a7401700 1 -- 192.168.123.105:0/10486680 >> 192.168.123.105:0/10486680 conn(0x7f13a00754a0 msgr2=0x7f13a00fec80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:07.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.945+0000 7f13a7401700 1 -- 192.168.123.105:0/10486680 shutdown_connections 2026-03-09T15:05:07.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:07.945+0000 7f13a7401700 1 -- 192.168.123.105:0/10486680 wait complete. 2026-03-09T15:05:08.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:08 vm05.local ceph-mon[116516]: from='client.34186 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:08.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:08 vm05.local ceph-mon[116516]: from='client.34190 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:08.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:08 vm05.local ceph-mon[116516]: from='client.34194 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:08.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:08 vm05.local ceph-mon[116516]: osdmap e56: 6 total, 6 up, 6 in 2026-03-09T15:05:08.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:08 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/2725495152' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:08.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:08 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3185736359' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T15:05:08.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:08 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/10486680' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:05:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:08 vm09.local ceph-mon[98742]: from='client.34186 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:08 vm09.local ceph-mon[98742]: from='client.34190 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:08 vm09.local ceph-mon[98742]: from='client.34194 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:08 vm09.local ceph-mon[98742]: osdmap e56: 6 total, 6 up, 6 in 2026-03-09T15:05:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:08 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/2725495152' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:08 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3185736359' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T15:05:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:08 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/10486680' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:05:09.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:09 vm05.local ceph-mon[116516]: pgmap v61: 65 pgs: 11 active+undersized, 9 active+undersized+degraded, 45 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 1 op/s; 27/264 objects degraded (10.227%) 2026-03-09T15:05:09.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:09 vm05.local ceph-mon[116516]: from='client.44159 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:09.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:09 vm05.local ceph-mon[116516]: Health check update: Degraded data redundancy: 27/264 objects degraded (10.227%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:09.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:09 vm09.local ceph-mon[98742]: pgmap v61: 65 pgs: 11 active+undersized, 9 active+undersized+degraded, 45 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 1 op/s; 27/264 objects degraded (10.227%) 2026-03-09T15:05:09.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:09 vm09.local ceph-mon[98742]: from='client.44159 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:09.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:09 vm09.local ceph-mon[98742]: Health check update: Degraded data redundancy: 27/264 objects degraded (10.227%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:11.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:11 vm05.local ceph-mon[116516]: pgmap v62: 65 pgs: 7 active+undersized, 7 active+undersized+degraded, 51 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 21/264 objects degraded (7.955%) 2026-03-09T15:05:11.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 
15:05:11 vm09.local ceph-mon[98742]: pgmap v62: 65 pgs: 7 active+undersized, 7 active+undersized+degraded, 51 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 21/264 objects degraded (7.955%) 2026-03-09T15:05:12.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:12 vm05.local ceph-mon[116516]: pgmap v63: 65 pgs: 65 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 801 B/s rd, 1 op/s 2026-03-09T15:05:12.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:12 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:12.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:12 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:05:12.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:12 vm05.local ceph-mon[116516]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 21/264 objects degraded (7.955%), 7 pgs degraded) 2026-03-09T15:05:12.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:12 vm09.local ceph-mon[98742]: pgmap v63: 65 pgs: 65 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 801 B/s rd, 1 op/s 2026-03-09T15:05:12.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:12 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:12.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:12 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:05:12.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:12 vm09.local ceph-mon[98742]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 21/264 objects degraded (7.955%), 7 pgs degraded) 2026-03-09T15:05:14.804 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:14 vm05.local ceph-mon[116516]: pgmap v64: 65 pgs: 65 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 1.2 KiB/s rd, 127 B/s wr, 1 op/s 2026-03-09T15:05:14.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:14 vm05.local ceph-mon[116516]: mgrmap e38: vm05.lhsexd(active, since 92s), standbys: vm09.cfuwdz 2026-03-09T15:05:14.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:14 vm09.local ceph-mon[98742]: pgmap v64: 65 pgs: 65 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 1.2 KiB/s rd, 127 B/s wr, 1 op/s 2026-03-09T15:05:14.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:14 vm09.local ceph-mon[98742]: mgrmap e38: vm05.lhsexd(active, since 92s), standbys: vm09.cfuwdz 2026-03-09T15:05:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:16 vm05.local ceph-mon[116516]: pgmap v65: 65 pgs: 65 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 1.2 KiB/s rd, 327 B/s wr, 2 op/s 2026-03-09T15:05:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:16.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:16 vm09.local ceph-mon[98742]: pgmap v65: 65 pgs: 65 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 1.2 KiB/s rd, 327 B/s wr, 2 op/s 2026-03-09T15:05:16.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T15:05:17.827 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:17 vm09.local ceph-mon[98742]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T15:05:19.004 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:18 vm09.local systemd[1]: Stopping Ceph osd.3 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 2026-03-09T15:05:19.004 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:18 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[65598]: 2026-03-09T15:05:18.790+0000 7f29f05c3700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T15:05:19.004 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:18 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[65598]: 2026-03-09T15:05:18.790+0000 7f29f05c3700 -1 osd.3 56 *** Got signal Terminated *** 2026-03-09T15:05:19.004 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:18 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[65598]: 2026-03-09T15:05:18.790+0000 7f29f05c3700 -1 osd.3 56 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T15:05:19.004 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:18 vm09.local ceph-mon[98742]: pgmap v66: 65 pgs: 65 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 1.1 KiB/s rd, 296 B/s wr, 2 op/s 2026-03-09T15:05:19.004 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:18 vm09.local ceph-mon[98742]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T15:05:19.004 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:18 vm09.local ceph-mon[98742]: Upgrade: osd.3 is safe to restart 2026-03-09T15:05:19.004 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:18 vm09.local ceph-mon[98742]: Upgrade: Updating osd.3 2026-03-09T15:05:19.004 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:18 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:19.004 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:18 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T15:05:19.004 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:18 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:05:19.004 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:18 vm09.local ceph-mon[98742]: Deploying daemon osd.3 on vm09 2026-03-09T15:05:19.004 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:18 vm09.local ceph-mon[98742]: osd.3 marked itself down and dead 2026-03-09T15:05:19.274 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local podman[105999]: 2026-03-09 15:05:19.00433173 +0000 UTC m=+0.228935310 container died e79644a0564f35b6d1c2374f23749ad21d4d218f192bf11935eef2f1036cc96c (image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3, GIT_REPO=https://github.com/ceph/ceph-container.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20231212, GIT_CLEAN=True, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, RELEASE=HEAD, io.buildah.version=1.29.1, maintainer=Guillaume Abrioux , org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 8 Base Image, org.label-schema.schema-version=1.0, CEPH_POINT_RELEASE=-18.2.0, GIT_BRANCH=HEAD) 2026-03-09T15:05:19.274 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local podman[105999]: 2026-03-09 15:05:19.026405529 +0000 UTC m=+0.251009109 container remove e79644a0564f35b6d1c2374f23749ad21d4d218f192bf11935eef2f1036cc96c (image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3, maintainer=Guillaume Abrioux , org.label-schema.vendor=CentOS, CEPH_POINT_RELEASE=-18.2.0, GIT_BRANCH=HEAD, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, org.label-schema.name=CentOS Stream 8 Base Image, GIT_CLEAN=True, org.label-schema.build-date=20231212, org.label-schema.schema-version=1.0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=HEAD, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.29.1) 2026-03-09T15:05:19.274 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local bash[105999]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3 2026-03-09T15:05:19.274 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local podman[106066]: 2026-03-09 15:05:19.183077526 +0000 UTC m=+0.019624366 container create e0ef82d70bd96c2f04a85224339ba49b676355149ea2e3ced9d36061854414da (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-deactivate, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T15:05:19.274 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local podman[106066]: 2026-03-09 15:05:19.222012429 +0000 UTC m=+0.058559269 container init e0ef82d70bd96c2f04a85224339ba49b676355149ea2e3ced9d36061854414da (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) 2026-03-09T15:05:19.274 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local podman[106066]: 2026-03-09 15:05:19.225413733 +0000 UTC m=+0.061960574 container start e0ef82d70bd96c2f04a85224339ba49b676355149ea2e3ced9d36061854414da (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-deactivate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T15:05:19.274 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local podman[106066]: 2026-03-09 15:05:19.228588916 +0000 UTC m=+0.065135766 container attach e0ef82d70bd96c2f04a85224339ba49b676355149ea2e3ced9d36061854414da (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T15:05:19.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:18 vm05.local ceph-mon[116516]: pgmap v66: 65 pgs: 65 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 1.1 KiB/s rd, 296 B/s wr, 2 op/s 2026-03-09T15:05:19.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:18 vm05.local ceph-mon[116516]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T15:05:19.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:18 vm05.local ceph-mon[116516]: Upgrade: osd.3 is safe to restart 2026-03-09T15:05:19.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:18 vm05.local ceph-mon[116516]: Upgrade: Updating osd.3 2026-03-09T15:05:19.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:18 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:19.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:18 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T15:05:19.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:18 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:05:19.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:18 vm05.local ceph-mon[116516]: Deploying daemon osd.3 on vm09 2026-03-09T15:05:19.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:18 vm05.local ceph-mon[116516]: osd.3 marked itself down and dead 2026-03-09T15:05:19.582 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local podman[106066]: 2026-03-09 15:05:19.175835152 +0000 UTC m=+0.012381992 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:05:19.583 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local conmon[106078]: conmon e0ef82d70bd96c2f04a8 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0ef82d70bd96c2f04a85224339ba49b676355149ea2e3ced9d36061854414da.scope/container/memory.events 2026-03-09T15:05:19.583 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 
09 15:05:19 vm09.local podman[106066]: 2026-03-09 15:05:19.36943823 +0000 UTC m=+0.205985071 container died e0ef82d70bd96c2f04a85224339ba49b676355149ea2e3ced9d36061854414da (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3) 2026-03-09T15:05:19.583 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local podman[106066]: 2026-03-09 15:05:19.394047214 +0000 UTC m=+0.230594054 container remove e0ef82d70bd96c2f04a85224339ba49b676355149ea2e3ced9d36061854414da (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, 
org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T15:05:19.583 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.3.service: Deactivated successfully. 2026-03-09T15:05:19.583 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local systemd[1]: Stopped Ceph osd.3 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T15:05:19.583 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.3.service: Consumed 46.234s CPU time. 2026-03-09T15:05:19.583 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local systemd[1]: Starting Ceph osd.3 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 2026-03-09T15:05:19.867 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local podman[106171]: 2026-03-09 15:05:19.682520413 +0000 UTC m=+0.016680687 container create 5369601b5f30d8bda309df53a0b2d41ddd632bf975d9c440bc3b9b20d8d263c1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T15:05:19.867 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local podman[106171]: 2026-03-09 15:05:19.719872113 +0000 UTC m=+0.054032387 container init 
5369601b5f30d8bda309df53a0b2d41ddd632bf975d9c440bc3b9b20d8d263c1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3) 2026-03-09T15:05:19.867 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local podman[106171]: 2026-03-09 15:05:19.722886033 +0000 UTC m=+0.057046307 container start 5369601b5f30d8bda309df53a0b2d41ddd632bf975d9c440bc3b9b20d8d263c1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T15:05:19.867 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 
vm09.local podman[106171]: 2026-03-09 15:05:19.723905872 +0000 UTC m=+0.058066146 container attach 5369601b5f30d8bda309df53a0b2d41ddd632bf975d9c440bc3b9b20d8d263c1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid) 2026-03-09T15:05:19.867 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local podman[106171]: 2026-03-09 15:05:19.676067888 +0000 UTC m=+0.010228172 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:05:19.867 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate[106181]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:19.867 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local bash[106171]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:19.867 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate[106181]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:19.867 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:19 vm09.local bash[106171]: Running 
command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:20.296 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:19 vm09.local ceph-mon[98742]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T15:05:20.297 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:19 vm09.local ceph-mon[98742]: osdmap e57: 6 total, 5 up, 6 in 2026-03-09T15:05:20.297 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate[106181]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T15:05:20.297 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate[106181]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:20.297 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local bash[106171]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T15:05:20.297 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local bash[106171]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:20.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:19 vm05.local ceph-mon[116516]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T15:05:20.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:19 vm05.local ceph-mon[116516]: osdmap e57: 6 total, 5 up, 6 in 2026-03-09T15:05:20.616 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate[106181]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:20.616 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local bash[106171]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:20.616 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate[106181]: Running command: /usr/bin/chown -R ceph:ceph 
/var/lib/ceph/osd/ceph-3 2026-03-09T15:05:20.617 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local bash[106171]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 2026-03-09T15:05:20.617 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate[106181]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-99d800b9-e44e-4477-a66f-867f717a6987/osd-block-24dfa5ad-72df-4f80-bf60-0507508104f2 --path /var/lib/ceph/osd/ceph-3 --no-mon-config 2026-03-09T15:05:20.617 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local bash[106171]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-99d800b9-e44e-4477-a66f-867f717a6987/osd-block-24dfa5ad-72df-4f80-bf60-0507508104f2 --path /var/lib/ceph/osd/ceph-3 --no-mon-config 2026-03-09T15:05:20.931 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-mon[98742]: pgmap v68: 65 pgs: 16 stale+active+clean, 49 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 1.1 KiB/s rd, 307 B/s wr, 2 op/s 2026-03-09T15:05:20.931 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-mon[98742]: osdmap e58: 6 total, 5 up, 6 in 2026-03-09T15:05:20.931 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:20.931 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:20.931 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:05:20.931 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local 
ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate[106181]: Running command: /usr/bin/ln -snf /dev/ceph-99d800b9-e44e-4477-a66f-867f717a6987/osd-block-24dfa5ad-72df-4f80-bf60-0507508104f2 /var/lib/ceph/osd/ceph-3/block 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local bash[106171]: Running command: /usr/bin/ln -snf /dev/ceph-99d800b9-e44e-4477-a66f-867f717a6987/osd-block-24dfa5ad-72df-4f80-bf60-0507508104f2 /var/lib/ceph/osd/ceph-3/block 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate[106181]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local bash[106171]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate[106181]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local bash[106171]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate[106181]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local bash[106171]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate[106181]: --> ceph-volume lvm activate successful for osd ID: 3 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local bash[106171]: --> ceph-volume 
lvm activate successful for osd ID: 3 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local conmon[106181]: conmon 5369601b5f30d8bda309 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5369601b5f30d8bda309df53a0b2d41ddd632bf975d9c440bc3b9b20d8d263c1.scope/container/memory.events 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local podman[106171]: 2026-03-09 15:05:20.673497839 +0000 UTC m=+1.007658103 container died 5369601b5f30d8bda309df53a0b2d41ddd632bf975d9c440bc3b9b20d8d263c1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local podman[106171]: 2026-03-09 15:05:20.697828622 +0000 UTC m=+1.031988896 container remove 5369601b5f30d8bda309df53a0b2d41ddd632bf975d9c440bc3b9b20d8d263c1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-activate, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.license=GPLv2, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local podman[106424]: 2026-03-09 15:05:20.810221714 +0000 UTC m=+0.016877035 container create 9359c3ced4d3cc7e6d01ec4dfa16c7490d477e6917eb557a8e1c78a1995686f6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local podman[106424]: 2026-03-09 15:05:20.850815451 +0000 UTC m=+0.057470772 container init 9359c3ced4d3cc7e6d01ec4dfa16c7490d477e6917eb557a8e1c78a1995686f6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.41.3) 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local podman[106424]: 2026-03-09 15:05:20.858509339 +0000 UTC m=+0.065164660 container start 9359c3ced4d3cc7e6d01ec4dfa16c7490d477e6917eb557a8e1c78a1995686f6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local bash[106424]: 9359c3ced4d3cc7e6d01ec4dfa16c7490d477e6917eb557a8e1c78a1995686f6 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local podman[106424]: 2026-03-09 15:05:20.802196185 +0000 UTC m=+0.008851515 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:05:20.932 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:20 vm09.local systemd[1]: Started Ceph osd.3 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T15:05:21.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:20 vm05.local ceph-mon[116516]: pgmap v68: 65 pgs: 16 stale+active+clean, 49 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 1.1 KiB/s rd, 307 B/s wr, 2 op/s 2026-03-09T15:05:21.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:20 vm05.local ceph-mon[116516]: osdmap e58: 6 total, 5 up, 6 in 2026-03-09T15:05:21.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:20 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:21.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:20 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:21.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:20 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:05:22.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:21 vm05.local ceph-mon[116516]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-09T15:05:22.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:21 vm05.local ceph-mon[116516]: Health check failed: Degraded data redundancy: 4/264 objects degraded (1.515%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:22.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:21 vm09.local ceph-mon[98742]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-09T15:05:22.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:21 vm09.local ceph-mon[98742]: Health check failed: 
Degraded data redundancy: 4/264 objects degraded (1.515%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:22.366 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:21 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[106435]: 2026-03-09T15:05:21.983+0000 7faec0d6b740 -1 Falling back to public interface 2026-03-09T15:05:23.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:23 vm05.local ceph-mon[116516]: pgmap v70: 65 pgs: 2 active+undersized, 7 peering, 12 stale+active+clean, 2 active+undersized+degraded, 42 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s; 4/264 objects degraded (1.515%) 2026-03-09T15:05:23.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:23 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:23.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:23 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:23.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:23 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:23.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:23 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:23.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:23 vm09.local ceph-mon[98742]: pgmap v70: 65 pgs: 2 active+undersized, 7 peering, 12 stale+active+clean, 2 active+undersized+degraded, 42 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s; 4/264 objects degraded (1.515%) 2026-03-09T15:05:23.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:23 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:23.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:23 
vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:23.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:23 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:23.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:23 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:24.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: pgmap v71: 65 pgs: 16 active+undersized, 7 peering, 15 active+undersized+degraded, 27 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 50/264 objects degraded (18.939%) 2026-03-09T15:05:24.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:24.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:24.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:05:24.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:05:24.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:24.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config 
dump", "format": "json"}]: dispatch 2026-03-09T15:05:24.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:24.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:24.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:24.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T15:05:24.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T15:05:24.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:24 vm05.local ceph-mon[116516]: Upgrade: unsafe to stop osd(s) at this time (18 PGs are or would become offline) 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: pgmap v71: 65 pgs: 16 active+undersized, 7 peering, 15 active+undersized+degraded, 27 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 50/264 objects degraded (18.939%) 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' 
entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T15:05:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:24 vm09.local ceph-mon[98742]: Upgrade: unsafe to stop osd(s) at this time (18 PGs are or would become offline) 2026-03-09T15:05:25.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:25 vm09.local ceph-mon[98742]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering) 2026-03-09T15:05:25.866 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:25 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[106435]: 2026-03-09T15:05:25.491+0000 7faec0d6b740 -1 osd.3 0 read_superblock omap replica is missing. 
2026-03-09T15:05:25.866 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:25 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[106435]: 2026-03-09T15:05:25.694+0000 7faec0d6b740 -1 osd.3 56 log_to_monitors true 2026-03-09T15:05:25.908 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:25 vm05.local ceph-mon[116516]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering) 2026-03-09T15:05:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:26 vm09.local ceph-mon[98742]: pgmap v72: 65 pgs: 20 active+undersized, 18 active+undersized+degraded, 27 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 59/264 objects degraded (22.348%) 2026-03-09T15:05:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:26 vm09.local ceph-mon[98742]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T15:05:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:26 vm09.local ceph-mon[98742]: from='osd.3 [v2:192.168.123.109:6800/2047887177,v1:192.168.123.109:6801/2047887177]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T15:05:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:26 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:26 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:05:26.866 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:05:26 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[106435]: 2026-03-09T15:05:26.623+0000 7faeb8b05640 -1 osd.3 56 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T15:05:26.949 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:26 vm05.local ceph-mon[116516]: pgmap v72: 65 pgs: 20 active+undersized, 18 active+undersized+degraded, 27 active+clean; 257 MiB data, 2.1 GiB used, 118 GiB / 120 GiB avail; 59/264 objects degraded (22.348%) 2026-03-09T15:05:26.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:26 vm05.local ceph-mon[116516]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T15:05:26.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:26 vm05.local ceph-mon[116516]: from='osd.3 [v2:192.168.123.109:6800/2047887177,v1:192.168.123.109:6801/2047887177]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T15:05:26.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:26 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:26.949 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:26 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:05:28.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:27 vm05.local ceph-mon[116516]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T15:05:28.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:27 vm05.local ceph-mon[116516]: osdmap e59: 6 total, 5 up, 6 in 2026-03-09T15:05:28.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:27 vm05.local ceph-mon[116516]: from='osd.3 [v2:192.168.123.109:6800/2047887177,v1:192.168.123.109:6801/2047887177]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:05:28.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:05:27 vm05.local ceph-mon[116516]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:05:28.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:27 vm09.local ceph-mon[98742]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T15:05:28.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:27 vm09.local ceph-mon[98742]: osdmap e59: 6 total, 5 up, 6 in 2026-03-09T15:05:28.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:27 vm09.local ceph-mon[98742]: from='osd.3 [v2:192.168.123.109:6800/2047887177,v1:192.168.123.109:6801/2047887177]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:05:28.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:27 vm09.local ceph-mon[98742]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:05:29.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:28 vm05.local ceph-mon[116516]: pgmap v74: 65 pgs: 20 active+undersized, 18 active+undersized+degraded, 27 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 59/264 objects degraded (22.348%) 2026-03-09T15:05:29.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:28 vm05.local ceph-mon[116516]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T15:05:29.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:28 vm05.local ceph-mon[116516]: osd.3 [v2:192.168.123.109:6800/2047887177,v1:192.168.123.109:6801/2047887177] boot 2026-03-09T15:05:29.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:28 vm05.local ceph-mon[116516]: osdmap e60: 6 total, 6 up, 6 in 2026-03-09T15:05:29.054 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:28 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:05:29.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:28 vm09.local ceph-mon[98742]: pgmap v74: 65 pgs: 20 active+undersized, 18 active+undersized+degraded, 27 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 59/264 objects degraded (22.348%) 2026-03-09T15:05:29.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:28 vm09.local ceph-mon[98742]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T15:05:29.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:28 vm09.local ceph-mon[98742]: osd.3 [v2:192.168.123.109:6800/2047887177,v1:192.168.123.109:6801/2047887177] boot 2026-03-09T15:05:29.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:28 vm09.local ceph-mon[98742]: osdmap e60: 6 total, 6 up, 6 in 2026-03-09T15:05:29.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:28 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T15:05:30.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:29 vm05.local ceph-mon[116516]: osdmap e61: 6 total, 6 up, 6 in 2026-03-09T15:05:30.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:29 vm05.local ceph-mon[116516]: Health check update: Degraded data redundancy: 59/264 objects degraded (22.348%), 18 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:30.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:29 vm09.local ceph-mon[98742]: osdmap e61: 6 total, 6 up, 6 in 2026-03-09T15:05:30.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:29 vm09.local ceph-mon[98742]: Health check update: Degraded data redundancy: 59/264 objects degraded (22.348%), 18 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:30.941 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:30 vm05.local ceph-mon[116516]: pgmap v77: 65 pgs: 7 peering, 17 active+undersized, 14 active+undersized+degraded, 27 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 45/264 objects degraded (17.045%) 2026-03-09T15:05:31.075 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:30 vm09.local ceph-mon[98742]: pgmap v77: 65 pgs: 7 peering, 17 active+undersized, 14 active+undersized+degraded, 27 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 45/264 objects degraded (17.045%) 2026-03-09T15:05:33.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:32 vm05.local ceph-mon[116516]: pgmap v78: 65 pgs: 7 peering, 13 active+undersized, 11 active+undersized+degraded, 34 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 38/264 objects degraded (14.394%) 2026-03-09T15:05:33.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:32 vm09.local ceph-mon[98742]: pgmap v78: 65 pgs: 7 peering, 13 active+undersized, 11 active+undersized+degraded, 34 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 38/264 objects degraded (14.394%) 2026-03-09T15:05:34.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:33 vm05.local ceph-mon[116516]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 38/264 objects degraded (14.394%), 11 pgs degraded) 2026-03-09T15:05:34.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:33 vm09.local ceph-mon[98742]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 38/264 objects degraded (14.394%), 11 pgs degraded) 2026-03-09T15:05:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:34 vm05.local ceph-mon[116516]: pgmap v79: 65 pgs: 65 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T15:05:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 
15:05:34 vm09.local ceph-mon[98742]: pgmap v79: 65 pgs: 65 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T15:05:37.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:36 vm05.local ceph-mon[116516]: pgmap v80: 65 pgs: 65 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 255 B/s rd, 1 op/s 2026-03-09T15:05:37.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:36 vm09.local ceph-mon[98742]: pgmap v80: 65 pgs: 65 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 255 B/s rd, 1 op/s 2026-03-09T15:05:38.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.016+0000 7f43bccfc700 1 -- 192.168.123.105:0/1739759618 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f43b8102760 msgr2=0x7f43b8102bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.016+0000 7f43bccfc700 1 --2- 192.168.123.105:0/1739759618 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f43b8102760 0x7f43b8102bd0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f43a8009b00 tx=0x7f43a8009e10 comp rx=0 tx=0).stop 2026-03-09T15:05:38.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.016+0000 7f43bccfc700 1 -- 192.168.123.105:0/1739759618 shutdown_connections 2026-03-09T15:05:38.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.016+0000 7f43bccfc700 1 --2- 192.168.123.105:0/1739759618 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f43b8102760 0x7f43b8102bd0 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.016+0000 7f43bccfc700 1 --2- 192.168.123.105:0/1739759618 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f43b8108760 0x7f43b8108b30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-09T15:05:38.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.016+0000 7f43bccfc700 1 -- 192.168.123.105:0/1739759618 >> 192.168.123.105:0/1739759618 conn(0x7f43b80fe280 msgr2=0x7f43b8100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:38.017 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.017+0000 7f43bccfc700 1 -- 192.168.123.105:0/1739759618 shutdown_connections 2026-03-09T15:05:38.018 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.017+0000 7f43bccfc700 1 -- 192.168.123.105:0/1739759618 wait complete. 2026-03-09T15:05:38.018 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.017+0000 7f43bccfc700 1 Processor -- start 2026-03-09T15:05:38.018 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.017+0000 7f43bccfc700 1 -- start start 2026-03-09T15:05:38.018 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.017+0000 7f43bccfc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f43b8102760 0x7f43b81983d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.018 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.018+0000 7f43b659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f43b8102760 0x7f43b81983d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.018+0000 7f43b659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f43b8102760 0x7f43b81983d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:38004/0 (socket says 192.168.123.105:38004) 2026-03-09T15:05:38.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.017+0000 
7f43bccfc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f43b8108760 0x7f43b8198910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.018+0000 7f43b659c700 1 -- 192.168.123.105:0/262687546 learned_addr learned my addr 192.168.123.105:0/262687546 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:38.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.018+0000 7f43bccfc700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f43b8198ff0 con 0x7f43b8102760 2026-03-09T15:05:38.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.018+0000 7f43bccfc700 1 -- 192.168.123.105:0/262687546 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f43b819cd80 con 0x7f43b8108760 2026-03-09T15:05:38.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.018+0000 7f43b5d9b700 1 --2- 192.168.123.105:0/262687546 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f43b8108760 0x7f43b8198910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.018+0000 7f43b659c700 1 -- 192.168.123.105:0/262687546 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f43b8108760 msgr2=0x7f43b8198910 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.018+0000 7f43b659c700 1 --2- 192.168.123.105:0/262687546 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f43b8108760 0x7f43b8198910 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.019 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.018+0000 7f43b659c700 1 -- 192.168.123.105:0/262687546 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f43a80097e0 con 0x7f43b8102760 2026-03-09T15:05:38.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.019+0000 7f43b659c700 1 --2- 192.168.123.105:0/262687546 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f43b8102760 0x7f43b81983d0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f43a000ba70 tx=0x7f43a000bd80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:38.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.019+0000 7f43af7fe700 1 -- 192.168.123.105:0/262687546 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f43a000c700 con 0x7f43b8102760 2026-03-09T15:05:38.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.019+0000 7f43af7fe700 1 -- 192.168.123.105:0/262687546 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f43a000cd40 con 0x7f43b8102760 2026-03-09T15:05:38.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.019+0000 7f43af7fe700 1 -- 192.168.123.105:0/262687546 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f43a0012340 con 0x7f43b8102760 2026-03-09T15:05:38.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.019+0000 7f43bccfc700 1 -- 192.168.123.105:0/262687546 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f43b819d060 con 0x7f43b8102760 2026-03-09T15:05:38.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.019+0000 7f43bccfc700 1 -- 192.168.123.105:0/262687546 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f43b819d5b0 con 0x7f43b8102760 
2026-03-09T15:05:38.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.020+0000 7f43af7fe700 1 -- 192.168.123.105:0/262687546 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f43a0014440 con 0x7f43b8102760 2026-03-09T15:05:38.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.020+0000 7f43bccfc700 1 -- 192.168.123.105:0/262687546 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f43b804ea50 con 0x7f43b8102760 2026-03-09T15:05:38.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.021+0000 7f43af7fe700 1 --2- 192.168.123.105:0/262687546 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f43a40778c0 0x7f43a4079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.021+0000 7f43af7fe700 1 -- 192.168.123.105:0/262687546 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(61..61 src has 1..61) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f43a0098840 con 0x7f43b8102760 2026-03-09T15:05:38.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.021+0000 7f43b5d9b700 1 --2- 192.168.123.105:0/262687546 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f43a40778c0 0x7f43a4079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.025 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.024+0000 7f43af7fe700 1 -- 192.168.123.105:0/262687546 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f43a0061230 con 0x7f43b8102760 2026-03-09T15:05:38.025 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.024+0000 7f43b5d9b700 1 --2- 
192.168.123.105:0/262687546 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f43a40778c0 0x7f43a4079d70 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f43b81999f0 tx=0x7f43a8005fb0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:38.150 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.149+0000 7f43bccfc700 1 -- 192.168.123.105:0/262687546 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f43b819d890 con 0x7f43a40778c0 2026-03-09T15:05:38.151 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.151+0000 7f43af7fe700 1 -- 192.168.123.105:0/262687546 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+401 (secure 0 0 0) 0x7f43b819d890 con 0x7f43a40778c0 2026-03-09T15:05:38.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.153+0000 7f43bccfc700 1 -- 192.168.123.105:0/262687546 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f43a40778c0 msgr2=0x7f43a4079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.153+0000 7f43bccfc700 1 --2- 192.168.123.105:0/262687546 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f43a40778c0 0x7f43a4079d70 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f43b81999f0 tx=0x7f43a8005fb0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.153+0000 7f43bccfc700 1 -- 192.168.123.105:0/262687546 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f43b8102760 msgr2=0x7f43b81983d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.154 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.153+0000 7f43bccfc700 1 --2- 192.168.123.105:0/262687546 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f43b8102760 0x7f43b81983d0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f43a000ba70 tx=0x7f43a000bd80 comp rx=0 tx=0).stop 2026-03-09T15:05:38.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.153+0000 7f43bccfc700 1 -- 192.168.123.105:0/262687546 shutdown_connections 2026-03-09T15:05:38.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.153+0000 7f43bccfc700 1 --2- 192.168.123.105:0/262687546 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f43a40778c0 0x7f43a4079d70 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.153+0000 7f43bccfc700 1 --2- 192.168.123.105:0/262687546 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f43b8102760 0x7f43b81983d0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.153+0000 7f43bccfc700 1 --2- 192.168.123.105:0/262687546 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f43b8108760 0x7f43b8198910 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.154+0000 7f43bccfc700 1 -- 192.168.123.105:0/262687546 >> 192.168.123.105:0/262687546 conn(0x7f43b80fe280 msgr2=0x7f43b80ff990 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:38.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.154+0000 7f43bccfc700 1 -- 192.168.123.105:0/262687546 shutdown_connections 2026-03-09T15:05:38.154 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.154+0000 7f43bccfc700 1 -- 
192.168.123.105:0/262687546 wait complete. 2026-03-09T15:05:38.163 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:05:38.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.222+0000 7faeda4c6700 1 -- 192.168.123.105:0/4205667908 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faed4108780 msgr2=0x7faed4108b50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.222+0000 7faeda4c6700 1 --2- 192.168.123.105:0/4205667908 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faed4108780 0x7faed4108b50 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7faebc009b50 tx=0x7faebc009e60 comp rx=0 tx=0).stop 2026-03-09T15:05:38.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.222+0000 7faeda4c6700 1 -- 192.168.123.105:0/4205667908 shutdown_connections 2026-03-09T15:05:38.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.222+0000 7faeda4c6700 1 --2- 192.168.123.105:0/4205667908 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7faed4102780 0x7faed4102bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.222+0000 7faeda4c6700 1 --2- 192.168.123.105:0/4205667908 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faed4108780 0x7faed4108b50 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.222+0000 7faeda4c6700 1 -- 192.168.123.105:0/4205667908 >> 192.168.123.105:0/4205667908 conn(0x7faed40fe280 msgr2=0x7faed4100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:38.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.222+0000 7faeda4c6700 1 -- 192.168.123.105:0/4205667908 shutdown_connections 2026-03-09T15:05:38.223 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.222+0000 7faeda4c6700 1 -- 192.168.123.105:0/4205667908 wait complete. 2026-03-09T15:05:38.224 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.223+0000 7faeda4c6700 1 Processor -- start 2026-03-09T15:05:38.224 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.223+0000 7faeda4c6700 1 -- start start 2026-03-09T15:05:38.224 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.223+0000 7faeda4c6700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faed4102780 0x7faed4198690 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.224 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.224+0000 7faed3fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faed4102780 0x7faed4198690 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.224 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.224+0000 7faed3fff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faed4102780 0x7faed4198690 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:38028/0 (socket says 192.168.123.105:38028) 2026-03-09T15:05:38.224 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.224+0000 7faeda4c6700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7faed4198bd0 0x7faed419cff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.224+0000 7faeda4c6700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7faed4199150 con 0x7faed4102780 2026-03-09T15:05:38.225 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.224+0000 7faeda4c6700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7faed41992c0 con 0x7faed4198bd0 2026-03-09T15:05:38.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.224+0000 7faed3fff700 1 -- 192.168.123.105:0/3441100951 learned_addr learned my addr 192.168.123.105:0/3441100951 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:38.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.224+0000 7faed37fe700 1 --2- 192.168.123.105:0/3441100951 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7faed4198bd0 0x7faed419cff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.224+0000 7faed3fff700 1 -- 192.168.123.105:0/3441100951 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7faed4198bd0 msgr2=0x7faed419cff0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.224+0000 7faed3fff700 1 --2- 192.168.123.105:0/3441100951 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7faed4198bd0 0x7faed419cff0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.224+0000 7faed3fff700 1 -- 192.168.123.105:0/3441100951 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7faebc0097e0 con 0x7faed4102780 2026-03-09T15:05:38.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.225+0000 7faed3fff700 1 --2- 192.168.123.105:0/3441100951 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faed4102780 0x7faed4198690 secure :-1 s=READY pgs=70 cs=0 
l=1 rev1=1 crypto rx=0x7faed41038c0 tx=0x7faebc004c30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:38.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.225+0000 7faed17fa700 1 -- 192.168.123.105:0/3441100951 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7faebc01d070 con 0x7faed4102780 2026-03-09T15:05:38.227 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.225+0000 7faed17fa700 1 -- 192.168.123.105:0/3441100951 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7faebc022470 con 0x7faed4102780 2026-03-09T15:05:38.227 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.225+0000 7faed17fa700 1 -- 192.168.123.105:0/3441100951 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7faebc00f630 con 0x7faed4102780 2026-03-09T15:05:38.227 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.225+0000 7faeda4c6700 1 -- 192.168.123.105:0/3441100951 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7faed419d590 con 0x7faed4102780 2026-03-09T15:05:38.227 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.225+0000 7faeda4c6700 1 -- 192.168.123.105:0/3441100951 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7faed419da90 con 0x7faed4102780 2026-03-09T15:05:38.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.227+0000 7faed17fa700 1 -- 192.168.123.105:0/3441100951 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7faebc0225e0 con 0x7faed4102780 2026-03-09T15:05:38.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.227+0000 7faeda4c6700 1 -- 192.168.123.105:0/3441100951 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} 
v 0) v1 -- 0x7faed410acf0 con 0x7faed4102780 2026-03-09T15:05:38.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.227+0000 7faed17fa700 1 --2- 192.168.123.105:0/3441100951 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7faec0077870 0x7faec0079d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.227+0000 7faed17fa700 1 -- 192.168.123.105:0/3441100951 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(61..61 src has 1..61) v4 ==== 6136+0+0 (secure 0 0 0) 0x7faebc09af70 con 0x7faed4102780 2026-03-09T15:05:38.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.227+0000 7faed37fe700 1 --2- 192.168.123.105:0/3441100951 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7faec0077870 0x7faec0079d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.229+0000 7faed37fe700 1 --2- 192.168.123.105:0/3441100951 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7faec0077870 0x7faec0079d20 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7faec400ac50 tx=0x7faec400a380 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:38.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.231+0000 7faed17fa700 1 -- 192.168.123.105:0/3441100951 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7faebc063850 con 0x7faed4102780 2026-03-09T15:05:38.360 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.359+0000 7faeda4c6700 1 -- 192.168.123.105:0/3441100951 --> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7faed419dd70 con 0x7faec0077870 2026-03-09T15:05:38.361 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.361+0000 7faed17fa700 1 -- 192.168.123.105:0/3441100951 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+401 (secure 0 0 0) 0x7faed419dd70 con 0x7faec0077870 2026-03-09T15:05:38.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.363+0000 7faeda4c6700 1 -- 192.168.123.105:0/3441100951 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7faec0077870 msgr2=0x7faec0079d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.363+0000 7faeda4c6700 1 --2- 192.168.123.105:0/3441100951 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7faec0077870 0x7faec0079d20 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7faec400ac50 tx=0x7faec400a380 comp rx=0 tx=0).stop 2026-03-09T15:05:38.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.363+0000 7faeda4c6700 1 -- 192.168.123.105:0/3441100951 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faed4102780 msgr2=0x7faed4198690 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.363+0000 7faeda4c6700 1 --2- 192.168.123.105:0/3441100951 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faed4102780 0x7faed4198690 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7faed41038c0 tx=0x7faebc004c30 comp rx=0 tx=0).stop 2026-03-09T15:05:38.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.363+0000 7faeda4c6700 1 -- 192.168.123.105:0/3441100951 shutdown_connections 2026-03-09T15:05:38.364 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.363+0000 7faeda4c6700 1 --2- 192.168.123.105:0/3441100951 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7faec0077870 0x7faec0079d20 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.363+0000 7faeda4c6700 1 --2- 192.168.123.105:0/3441100951 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7faed4102780 0x7faed4198690 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.363+0000 7faeda4c6700 1 --2- 192.168.123.105:0/3441100951 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7faed4198bd0 0x7faed419cff0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.363+0000 7faeda4c6700 1 -- 192.168.123.105:0/3441100951 >> 192.168.123.105:0/3441100951 conn(0x7faed40fe280 msgr2=0x7faed40ffe70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:38.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.364+0000 7faeda4c6700 1 -- 192.168.123.105:0/3441100951 shutdown_connections 2026-03-09T15:05:38.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.364+0000 7faeda4c6700 1 -- 192.168.123.105:0/3441100951 wait complete. 
2026-03-09T15:05:38.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.435+0000 7fd63b117700 1 -- 192.168.123.105:0/50473610 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd6340730f0 msgr2=0x7fd6340734c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.435+0000 7fd63b117700 1 --2- 192.168.123.105:0/50473610 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd6340730f0 0x7fd6340734c0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fd624009b50 tx=0x7fd624009e60 comp rx=0 tx=0).stop 2026-03-09T15:05:38.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.436+0000 7fd63b117700 1 -- 192.168.123.105:0/50473610 shutdown_connections 2026-03-09T15:05:38.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.436+0000 7fd63b117700 1 --2- 192.168.123.105:0/50473610 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd634073a00 0x7fd634111040 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.436+0000 7fd63b117700 1 --2- 192.168.123.105:0/50473610 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd6340730f0 0x7fd6340734c0 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.436+0000 7fd63b117700 1 -- 192.168.123.105:0/50473610 >> 192.168.123.105:0/50473610 conn(0x7fd6340fc090 msgr2=0x7fd6340fe4a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:38.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.436+0000 7fd63b117700 1 -- 192.168.123.105:0/50473610 shutdown_connections 2026-03-09T15:05:38.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.436+0000 7fd63b117700 1 -- 192.168.123.105:0/50473610 wait complete. 
2026-03-09T15:05:38.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.436+0000 7fd63b117700 1 Processor -- start 2026-03-09T15:05:38.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.436+0000 7fd63b117700 1 -- start start 2026-03-09T15:05:38.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.437+0000 7fd63b117700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd6340730f0 0x7fd6341a2570 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.437+0000 7fd63b117700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd634073a00 0x7fd6341a2ab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.437+0000 7fd63b117700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd6341a3140 con 0x7fd6340730f0 2026-03-09T15:05:38.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.437+0000 7fd63b117700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd63419c5f0 con 0x7fd634073a00 2026-03-09T15:05:38.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.437+0000 7fd633fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd634073a00 0x7fd6341a2ab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.437+0000 7fd633fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd634073a00 0x7fd6341a2ab0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:34388/0 (socket says 192.168.123.105:34388) 2026-03-09T15:05:38.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.437+0000 7fd633fff700 1 -- 192.168.123.105:0/1745398513 learned_addr learned my addr 192.168.123.105:0/1745398513 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:38.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.437+0000 7fd638eb3700 1 --2- 192.168.123.105:0/1745398513 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd6340730f0 0x7fd6341a2570 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.437+0000 7fd638eb3700 1 -- 192.168.123.105:0/1745398513 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd634073a00 msgr2=0x7fd6341a2ab0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.437+0000 7fd638eb3700 1 --2- 192.168.123.105:0/1745398513 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd634073a00 0x7fd6341a2ab0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.437+0000 7fd638eb3700 1 -- 192.168.123.105:0/1745398513 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd6240097e0 con 0x7fd6340730f0 2026-03-09T15:05:38.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.438+0000 7fd638eb3700 1 --2- 192.168.123.105:0/1745398513 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd6340730f0 0x7fd6341a2570 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fd624000c00 tx=0x7fd624004c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:05:38.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.438+0000 7fd631ffb700 1 -- 192.168.123.105:0/1745398513 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd62401d070 con 0x7fd6340730f0 2026-03-09T15:05:38.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.438+0000 7fd631ffb700 1 -- 192.168.123.105:0/1745398513 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fd624022470 con 0x7fd6340730f0 2026-03-09T15:05:38.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.438+0000 7fd631ffb700 1 -- 192.168.123.105:0/1745398513 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd62400f650 con 0x7fd6340730f0 2026-03-09T15:05:38.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.438+0000 7fd63b117700 1 -- 192.168.123.105:0/1745398513 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd63419c8d0 con 0x7fd6340730f0 2026-03-09T15:05:38.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.438+0000 7fd63b117700 1 -- 192.168.123.105:0/1745398513 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd63419ce20 con 0x7fd6340730f0 2026-03-09T15:05:38.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.440+0000 7fd631ffb700 1 -- 192.168.123.105:0/1745398513 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fd62400f7b0 con 0x7fd6340730f0 2026-03-09T15:05:38.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.440+0000 7fd63b117700 1 -- 192.168.123.105:0/1745398513 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd63410e7c0 con 0x7fd6340730f0 2026-03-09T15:05:38.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.440+0000 
7fd631ffb700 1 --2- 192.168.123.105:0/1745398513 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd61c077870 0x7fd61c079d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.440+0000 7fd631ffb700 1 -- 192.168.123.105:0/1745398513 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(61..61 src has 1..61) v4 ==== 6136+0+0 (secure 0 0 0) 0x7fd62409aec0 con 0x7fd6340730f0 2026-03-09T15:05:38.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.443+0000 7fd631ffb700 1 -- 192.168.123.105:0/1745398513 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fd6240637a0 con 0x7fd6340730f0 2026-03-09T15:05:38.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.443+0000 7fd633fff700 1 --2- 192.168.123.105:0/1745398513 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd61c077870 0x7fd61c079d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.444 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.443+0000 7fd633fff700 1 --2- 192.168.123.105:0/1745398513 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd61c077870 0x7fd61c079d20 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fd6340fd830 tx=0x7fd628007480 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:38.575 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.574+0000 7fd63b117700 1 -- 192.168.123.105:0/1745398513 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fd63419db00 con 0x7fd61c077870 
2026-03-09T15:05:38.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.580+0000 7fd631ffb700 1 -- 192.168.123.105:0/1745398513 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7fd63419db00 con 0x7fd61c077870 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (2m) 37s ago 9m 24.0M - 0.25.0 c8568f914cd2 7635cece310c 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (9m) 37s ago 9m 9332k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (8m) 16s ago 8m 11.4M - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (93s) 37s ago 9m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 35d8c0ae5a58 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (91s) 16s ago 8m 8308k - 19.2.3-678-ge911bdeb 654f31e6858e 82bdad36caf9 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (2m) 37s ago 9m 73.9M - 10.4.0 c8b91775d855 eb6431f63d88 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (7m) 37s ago 7m 181M - 18.2.0 dc2bc1663786 ea3dca51957f 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (7m) 37s ago 7m 17.7M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (7m) 16s ago 7m 17.5M - 18.2.0 dc2bc1663786 6c77fb591d5a 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (7m) 16s ago 7m 94.9M - 
18.2.0 dc2bc1663786 b5ad1c71089a 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (3m) 37s ago 10m 617M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (3m) 16s ago 8m 496M - 19.2.3-678-ge911bdeb 654f31e6858e acf5a6f3f804 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (2m) 37s ago 10m 59.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1e11655f7d87 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (108s) 16s ago 8m 52.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e d1f0309f4d58 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (3m) 37s ago 9m 10.1M - 1.7.0 72c9c2088986 888d071c50d9 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (3m) 16s ago 8m 9533k - 1.7.0 72c9c2088986 22c96a576a60 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (83s) 37s ago 8m 144M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f2883abca2d2 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (61s) 37s ago 8m 109M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b830d7f76498 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (39s) 37s ago 8m 12.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 01cf87b8bc05 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (17s) 16s ago 7m 34.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9359c3ced4d3 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (7m) 16s ago 7m 385M 4096M 18.2.0 dc2bc1663786 4239752204df 2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (7m) 16s ago 7m 336M 4096M 18.2.0 dc2bc1663786 85fde149396e 
2026-03-09T15:05:38.581 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (2m) 37s ago 9m 54.9M - 2.51.0 1d3b7f56885b e6f470b0ba11 2026-03-09T15:05:38.583 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.583+0000 7fd63b117700 1 -- 192.168.123.105:0/1745398513 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd61c077870 msgr2=0x7fd61c079d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.583 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.583+0000 7fd63b117700 1 --2- 192.168.123.105:0/1745398513 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd61c077870 0x7fd61c079d20 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7fd6340fd830 tx=0x7fd628007480 comp rx=0 tx=0).stop 2026-03-09T15:05:38.584 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.583+0000 7fd63b117700 1 -- 192.168.123.105:0/1745398513 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd6340730f0 msgr2=0x7fd6341a2570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.584 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.583+0000 7fd63b117700 1 --2- 192.168.123.105:0/1745398513 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd6340730f0 0x7fd6341a2570 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fd624000c00 tx=0x7fd624004c80 comp rx=0 tx=0).stop 2026-03-09T15:05:38.584 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.583+0000 7fd63b117700 1 -- 192.168.123.105:0/1745398513 shutdown_connections 2026-03-09T15:05:38.584 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.583+0000 7fd63b117700 1 --2- 192.168.123.105:0/1745398513 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd61c077870 0x7fd61c079d20 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.584 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.583+0000 7fd63b117700 1 --2- 192.168.123.105:0/1745398513 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd6340730f0 0x7fd6341a2570 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.584 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.583+0000 7fd63b117700 1 --2- 192.168.123.105:0/1745398513 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd634073a00 0x7fd6341a2ab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.584 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.583+0000 7fd63b117700 1 -- 192.168.123.105:0/1745398513 >> 192.168.123.105:0/1745398513 conn(0x7fd6340fc090 msgr2=0x7fd634102b50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:38.584 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.583+0000 7fd63b117700 1 -- 192.168.123.105:0/1745398513 shutdown_connections 2026-03-09T15:05:38.584 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.583+0000 7fd63b117700 1 -- 192.168.123.105:0/1745398513 wait complete. 
2026-03-09T15:05:38.658 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.657+0000 7f9c66c3c700 1 -- 192.168.123.105:0/2051273654 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9c60068490 msgr2=0x7f9c60068860 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.658 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.657+0000 7f9c66c3c700 1 --2- 192.168.123.105:0/2051273654 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9c60068490 0x7f9c60068860 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f9c4c009b30 tx=0x7f9c4c009e40 comp rx=0 tx=0).stop 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.660+0000 7f9c66c3c700 1 -- 192.168.123.105:0/2051273654 shutdown_connections 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.660+0000 7f9c66c3c700 1 --2- 192.168.123.105:0/2051273654 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9c60068e30 0x7f9c60110d40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.660+0000 7f9c66c3c700 1 --2- 192.168.123.105:0/2051273654 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9c60068490 0x7f9c60068860 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.660+0000 7f9c66c3c700 1 -- 192.168.123.105:0/2051273654 >> 192.168.123.105:0/2051273654 conn(0x7f9c600754a0 msgr2=0x7f9c600758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.660+0000 7f9c66c3c700 1 -- 192.168.123.105:0/2051273654 shutdown_connections 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.660+0000 7f9c66c3c700 1 -- 192.168.123.105:0/2051273654 
wait complete. 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.660+0000 7f9c66c3c700 1 Processor -- start 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.660+0000 7f9c66c3c700 1 -- start start 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.660+0000 7f9c66c3c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9c60068490 0x7f9c6010e7e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.660+0000 7f9c66c3c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9c60068e30 0x7f9c601097e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.660+0000 7f9c66c3c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9c60109ec0 con 0x7f9c60068490 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.660+0000 7f9c66c3c700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9c6010a030 con 0x7f9c60068e30 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.661+0000 7f9c649d8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9c60068490 0x7f9c6010e7e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.661+0000 7f9c649d8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9c60068490 0x7f9c6010e7e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.105:56968/0 (socket says 192.168.123.105:56968) 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.661+0000 7f9c649d8700 1 -- 192.168.123.105:0/1334322729 learned_addr learned my addr 192.168.123.105:0/1334322729 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.661+0000 7f9c649d8700 1 -- 192.168.123.105:0/1334322729 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9c60068e30 msgr2=0x7f9c601097e0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.661+0000 7f9c649d8700 1 --2- 192.168.123.105:0/1334322729 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9c60068e30 0x7f9c601097e0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.661+0000 7f9c649d8700 1 -- 192.168.123.105:0/1334322729 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9c4c0097e0 con 0x7f9c60068490 2026-03-09T15:05:38.662 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.661+0000 7f9c649d8700 1 --2- 192.168.123.105:0/1334322729 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9c60068490 0x7f9c6010e7e0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f9c4c00b580 tx=0x7f9c4c00faf0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:38.663 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.662+0000 7f9c5dffb700 1 -- 192.168.123.105:0/1334322729 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9c4c01d070 con 0x7f9c60068490 2026-03-09T15:05:38.663 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.662+0000 7f9c66c3c700 1 -- 
192.168.123.105:0/1334322729 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9c6010a2b0 con 0x7f9c60068490 2026-03-09T15:05:38.663 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.662+0000 7f9c66c3c700 1 -- 192.168.123.105:0/1334322729 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9c6006f970 con 0x7f9c60068490 2026-03-09T15:05:38.664 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.663+0000 7f9c5dffb700 1 -- 192.168.123.105:0/1334322729 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f9c4c022950 con 0x7f9c60068490 2026-03-09T15:05:38.664 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.663+0000 7f9c5dffb700 1 -- 192.168.123.105:0/1334322729 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9c4c017ac0 con 0x7f9c60068490 2026-03-09T15:05:38.664 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.663+0000 7f9c5dffb700 1 -- 192.168.123.105:0/1334322729 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f9c4c017c60 con 0x7f9c60068490 2026-03-09T15:05:38.665 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.664+0000 7f9c5dffb700 1 --2- 192.168.123.105:0/1334322729 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9c50077ab0 0x7f9c50079f60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.665 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.664+0000 7f9c5ffff700 1 --2- 192.168.123.105:0/1334322729 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9c50077ab0 0x7f9c50079f60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.665 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.664+0000 7f9c5dffb700 1 -- 192.168.123.105:0/1334322729 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(61..61 src has 1..61) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f9c4c09c020 con 0x7f9c60068490 2026-03-09T15:05:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.665+0000 7f9c5ffff700 1 --2- 192.168.123.105:0/1334322729 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9c50077ab0 0x7f9c50079f60 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f9c6010afd0 tx=0x7f9c54006d20 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:38.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.665+0000 7f9c66c3c700 1 -- 192.168.123.105:0/1334322729 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9c44005320 con 0x7f9c60068490 2026-03-09T15:05:38.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.668+0000 7f9c5dffb700 1 -- 192.168.123.105:0/1334322729 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9c4c064980 con 0x7f9c60068490 2026-03-09T15:05:38.835 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.834+0000 7f9c66c3c700 1 -- 192.168.123.105:0/1334322729 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f9c44006200 con 0x7f9c60068490 2026-03-09T15:05:38.835 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.835+0000 7f9c5dffb700 1 -- 192.168.123.105:0/1334322729 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+799 (secure 0 0 0) 0x7f9c4c027070 con 0x7f9c60068490 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:05:38.836 
INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2, 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 6, 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:05:38.836 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:05:38.838 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.837+0000 7f9c66c3c700 1 -- 
192.168.123.105:0/1334322729 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9c50077ab0 msgr2=0x7f9c50079f60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.838 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.837+0000 7f9c66c3c700 1 --2- 192.168.123.105:0/1334322729 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9c50077ab0 0x7f9c50079f60 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f9c6010afd0 tx=0x7f9c54006d20 comp rx=0 tx=0).stop 2026-03-09T15:05:38.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.839+0000 7f9c66c3c700 1 -- 192.168.123.105:0/1334322729 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9c60068490 msgr2=0x7f9c6010e7e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.839+0000 7f9c66c3c700 1 --2- 192.168.123.105:0/1334322729 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9c60068490 0x7f9c6010e7e0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f9c4c00b580 tx=0x7f9c4c00faf0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.839+0000 7f9c66c3c700 1 -- 192.168.123.105:0/1334322729 shutdown_connections 2026-03-09T15:05:38.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.839+0000 7f9c66c3c700 1 --2- 192.168.123.105:0/1334322729 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9c50077ab0 0x7f9c50079f60 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.839+0000 7f9c66c3c700 1 --2- 192.168.123.105:0/1334322729 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9c60068490 0x7f9c6010e7e0 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-09T15:05:38.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.839+0000 7f9c66c3c700 1 --2- 192.168.123.105:0/1334322729 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9c60068e30 0x7f9c601097e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.839+0000 7f9c66c3c700 1 -- 192.168.123.105:0/1334322729 >> 192.168.123.105:0/1334322729 conn(0x7f9c600754a0 msgr2=0x7f9c6010f180 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:38.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.839+0000 7f9c66c3c700 1 -- 192.168.123.105:0/1334322729 shutdown_connections 2026-03-09T15:05:38.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.839+0000 7f9c66c3c700 1 -- 192.168.123.105:0/1334322729 wait complete. 2026-03-09T15:05:38.913 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:38 vm05.local ceph-mon[116516]: pgmap v81: 65 pgs: 65 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 733 B/s rd, 1 op/s 2026-03-09T15:05:38.913 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:38 vm05.local ceph-mon[116516]: from='client.34212 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:38.913 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:38 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T15:05:38.915 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.912+0000 7fbd4242c700 1 -- 192.168.123.105:0/831546097 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd3c06d7a0 msgr2=0x7fbd3c06dc10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.915 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.912+0000 7fbd4242c700 1 --2- 192.168.123.105:0/831546097 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd3c06d7a0 0x7fbd3c06dc10 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fbd30009b00 tx=0x7fbd30009e10 comp rx=0 tx=0).stop 2026-03-09T15:05:38.915 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.912+0000 7fbd4242c700 1 -- 192.168.123.105:0/831546097 shutdown_connections 2026-03-09T15:05:38.915 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.912+0000 7fbd4242c700 1 --2- 192.168.123.105:0/831546097 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd3c06d7a0 0x7fbd3c06dc10 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.915 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.912+0000 7fbd4242c700 1 --2- 192.168.123.105:0/831546097 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbd3c10ed80 0x7fbd3c06d260 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.915 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.912+0000 7fbd4242c700 1 -- 192.168.123.105:0/831546097 >> 192.168.123.105:0/831546097 conn(0x7fbd3c06c830 msgr2=0x7fbd3c071830 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:38.915 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.912+0000 7fbd4242c700 1 -- 192.168.123.105:0/831546097 shutdown_connections 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.914+0000 7fbd4242c700 1 -- 192.168.123.105:0/831546097 wait complete. 
2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.915+0000 7fbd4242c700 1 Processor -- start 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.915+0000 7fbd4242c700 1 -- start start 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.915+0000 7fbd4242c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd3c06d7a0 0x7fbd3c1194e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.915+0000 7fbd4242c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbd3c10ed80 0x7fbd3c1144e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.915+0000 7fbd4242c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbd3c114a20 con 0x7fbd3c06d7a0 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.915+0000 7fbd4242c700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbd3c114b90 con 0x7fbd3c10ed80 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.915+0000 7fbd3bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd3c06d7a0 0x7fbd3c1194e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.915+0000 7fbd3bfff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd3c06d7a0 0x7fbd3c1194e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:56990/0 (socket says 192.168.123.105:56990) 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.915+0000 7fbd3bfff700 1 -- 192.168.123.105:0/2217269804 learned_addr learned my addr 192.168.123.105:0/2217269804 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.915+0000 7fbd3b7fe700 1 --2- 192.168.123.105:0/2217269804 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbd3c10ed80 0x7fbd3c1144e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.916+0000 7fbd3bfff700 1 -- 192.168.123.105:0/2217269804 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbd3c10ed80 msgr2=0x7fbd3c1144e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.916+0000 7fbd3bfff700 1 --2- 192.168.123.105:0/2217269804 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbd3c10ed80 0x7fbd3c1144e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.916+0000 7fbd3bfff700 1 -- 192.168.123.105:0/2217269804 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbd300097e0 con 0x7fbd3c06d7a0 2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.916+0000 7fbd3bfff700 1 --2- 192.168.123.105:0/2217269804 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd3c06d7a0 0x7fbd3c1194e0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7fbd34009f50 tx=0x7fbd34009f80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:05:38.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.917+0000 7fbd397fa700 1 -- 192.168.123.105:0/2217269804 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbd3400cb30 con 0x7fbd3c06d7a0 2026-03-09T15:05:38.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.917+0000 7fbd4242c700 1 -- 192.168.123.105:0/2217269804 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbd3c114e70 con 0x7fbd3c06d7a0 2026-03-09T15:05:38.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.917+0000 7fbd4242c700 1 -- 192.168.123.105:0/2217269804 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbd3c076fa0 con 0x7fbd3c06d7a0 2026-03-09T15:05:38.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.918+0000 7fbd397fa700 1 -- 192.168.123.105:0/2217269804 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fbd3400cc90 con 0x7fbd3c06d7a0 2026-03-09T15:05:38.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.918+0000 7fbd397fa700 1 -- 192.168.123.105:0/2217269804 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbd34007710 con 0x7fbd3c06d7a0 2026-03-09T15:05:38.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.918+0000 7fbd397fa700 1 -- 192.168.123.105:0/2217269804 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fbd340079b0 con 0x7fbd3c06d7a0 2026-03-09T15:05:38.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.919+0000 7fbd397fa700 1 --2- 192.168.123.105:0/2217269804 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbd24077a60 0x7fbd24079f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:38.920 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.919+0000 7fbd3b7fe700 1 --2- 192.168.123.105:0/2217269804 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbd24077a60 0x7fbd24079f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:38.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.919+0000 7fbd397fa700 1 -- 192.168.123.105:0/2217269804 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(61..61 src has 1..61) v4 ==== 6136+0+0 (secure 0 0 0) 0x7fbd34006e40 con 0x7fbd3c06d7a0 2026-03-09T15:05:38.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.919+0000 7fbd3b7fe700 1 --2- 192.168.123.105:0/2217269804 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbd24077a60 0x7fbd24079f10 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7fbd3c115cd0 tx=0x7fbd3000b540 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:38.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.920+0000 7fbd4242c700 1 -- 192.168.123.105:0/2217269804 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbd28005320 con 0x7fbd3c06d7a0 2026-03-09T15:05:38.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:38.923+0000 7fbd397fa700 1 -- 192.168.123.105:0/2217269804 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fbd34062bf0 con 0x7fbd3c06d7a0 2026-03-09T15:05:39.078 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.077+0000 7fbd4242c700 1 -- 192.168.123.105:0/2217269804 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fbd28006200 con 0x7fbd3c06d7a0 2026-03-09T15:05:39.079 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.078+0000 7fbd397fa700 1 -- 192.168.123.105:0/2217269804 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1944 (secure 0 0 0) 0x7fbd34019020 con 0x7fbd3c06d7a0 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:e11 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:legacy client fscid: 1 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:Filesystem 'cephfs' (1) 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:fs_name cephfs 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:epoch 9 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:created 2026-03-09T14:58:23.182447+0000 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:modified 2026-03-09T14:58:30.215642+0000 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:tableserver 0 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:root 0 2026-03-09T15:05:39.079 INFO:teuthology.orchestra.run.vm05.stdout:session_timeout 60 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:session_autoclose 300 
2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:max_file_size 1099511627776 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:max_xattr_size 65536 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:required_client_features {} 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:last_failure 0 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:last_failure_osd_epoch 0 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:max_mds 1 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:in 0 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:up {0=14502} 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:failed 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:damaged 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:stopped 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:data_pools [3] 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:metadata_pool 2 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:inline_data enabled 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:balancer 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:bal_rank_mask -1 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:standby_count_wanted 1 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:qdb_cluster leader: 0 members: 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.nrocqt{0:14502} state up:active seq 3 join_fscid=1 addr 
[v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.ohmitn{0:14510} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:Standby daemons: 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.rrcyql{-1:14518} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:05:39.080 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.jrhwzz{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:05:39.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.081+0000 7fbd4242c700 1 -- 192.168.123.105:0/2217269804 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbd24077a60 msgr2=0x7fbd24079f10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:39.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.081+0000 7fbd4242c700 1 --2- 192.168.123.105:0/2217269804 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbd24077a60 0x7fbd24079f10 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7fbd3c115cd0 tx=0x7fbd3000b540 comp rx=0 tx=0).stop 2026-03-09T15:05:39.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.081+0000 7fbd4242c700 1 -- 192.168.123.105:0/2217269804 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7fbd3c06d7a0 msgr2=0x7fbd3c1194e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:39.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.081+0000 7fbd4242c700 1 --2- 192.168.123.105:0/2217269804 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd3c06d7a0 0x7fbd3c1194e0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7fbd34009f50 tx=0x7fbd34009f80 comp rx=0 tx=0).stop 2026-03-09T15:05:39.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.081+0000 7fbd4242c700 1 -- 192.168.123.105:0/2217269804 shutdown_connections 2026-03-09T15:05:39.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.081+0000 7fbd4242c700 1 --2- 192.168.123.105:0/2217269804 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbd24077a60 0x7fbd24079f10 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.081+0000 7fbd4242c700 1 --2- 192.168.123.105:0/2217269804 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbd3c06d7a0 0x7fbd3c1194e0 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.081+0000 7fbd4242c700 1 --2- 192.168.123.105:0/2217269804 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbd3c10ed80 0x7fbd3c1144e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.081+0000 7fbd4242c700 1 -- 192.168.123.105:0/2217269804 >> 192.168.123.105:0/2217269804 conn(0x7fbd3c06c830 msgr2=0x7fbd3c071190 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:39.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.081+0000 7fbd4242c700 1 -- 192.168.123.105:0/2217269804 shutdown_connections 
2026-03-09T15:05:39.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.082+0000 7fbd4242c700 1 -- 192.168.123.105:0/2217269804 wait complete. 2026-03-09T15:05:39.084 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 11 2026-03-09T15:05:39.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:38 vm09.local ceph-mon[98742]: pgmap v81: 65 pgs: 65 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 733 B/s rd, 1 op/s 2026-03-09T15:05:39.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:38 vm09.local ceph-mon[98742]: from='client.34212 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:39.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:38 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T15:05:39.158 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.157+0000 7f0a6943a700 1 -- 192.168.123.105:0/3033739405 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0a64102810 msgr2=0x7f0a64102c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:39.158 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.157+0000 7f0a6943a700 1 --2- 192.168.123.105:0/3033739405 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0a64102810 0x7f0a64102c80 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f0a54009b00 tx=0x7f0a54009e10 comp rx=0 tx=0).stop 2026-03-09T15:05:39.158 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.157+0000 7f0a6943a700 1 -- 192.168.123.105:0/3033739405 shutdown_connections 2026-03-09T15:05:39.158 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.157+0000 7f0a6943a700 1 --2- 192.168.123.105:0/3033739405 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0a64102810 0x7f0a64102c80 unknown :-1 
s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.158 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.157+0000 7f0a6943a700 1 --2- 192.168.123.105:0/3033739405 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0a64108810 0x7f0a64108be0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.158 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.157+0000 7f0a6943a700 1 -- 192.168.123.105:0/3033739405 >> 192.168.123.105:0/3033739405 conn(0x7f0a640fe330 msgr2=0x7f0a64100740 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:39.158 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.157+0000 7f0a6943a700 1 -- 192.168.123.105:0/3033739405 shutdown_connections 2026-03-09T15:05:39.158 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.158+0000 7f0a6943a700 1 -- 192.168.123.105:0/3033739405 wait complete. 2026-03-09T15:05:39.159 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.158+0000 7f0a6943a700 1 Processor -- start 2026-03-09T15:05:39.159 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.158+0000 7f0a6943a700 1 -- start start 2026-03-09T15:05:39.159 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.158+0000 7f0a6943a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0a64102810 0x7f0a64198450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:39.159 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.158+0000 7f0a6943a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0a64108810 0x7f0a64198990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:39.159 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.158+0000 7f0a6943a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 
0x7f0a64199070 con 0x7f0a64108810 2026-03-09T15:05:39.159 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.158+0000 7f0a6943a700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0a6419ce00 con 0x7f0a64102810 2026-03-09T15:05:39.159 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.159+0000 7f0a627fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0a64108810 0x7f0a64198990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:39.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.159+0000 7f0a627fc700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0a64108810 0x7f0a64198990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:57000/0 (socket says 192.168.123.105:57000) 2026-03-09T15:05:39.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.159+0000 7f0a627fc700 1 -- 192.168.123.105:0/4278472238 learned_addr learned my addr 192.168.123.105:0/4278472238 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:39.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.159+0000 7f0a627fc700 1 -- 192.168.123.105:0/4278472238 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0a64102810 msgr2=0x7f0a64198450 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:39.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.159+0000 7f0a627fc700 1 --2- 192.168.123.105:0/4278472238 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0a64102810 0x7f0a64198450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.159+0000 7f0a627fc700 
1 -- 192.168.123.105:0/4278472238 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f0a540097e0 con 0x7f0a64108810 2026-03-09T15:05:39.160 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.159+0000 7f0a627fc700 1 --2- 192.168.123.105:0/4278472238 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0a64108810 0x7f0a64198990 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f0a5400b5c0 tx=0x7f0a54004970 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:39.161 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.160+0000 7f0a5bfff700 1 -- 192.168.123.105:0/4278472238 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0a5401d070 con 0x7f0a64108810 2026-03-09T15:05:39.161 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.160+0000 7f0a5bfff700 1 -- 192.168.123.105:0/4278472238 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f0a54004b90 con 0x7f0a64108810 2026-03-09T15:05:39.161 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.160+0000 7f0a6943a700 1 -- 192.168.123.105:0/4278472238 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f0a6419d080 con 0x7f0a64108810 2026-03-09T15:05:39.161 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.160+0000 7f0a6943a700 1 -- 192.168.123.105:0/4278472238 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f0a6419d570 con 0x7f0a64108810 2026-03-09T15:05:39.162 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.161+0000 7f0a5bfff700 1 -- 192.168.123.105:0/4278472238 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0a54022620 con 0x7f0a64108810 2026-03-09T15:05:39.165 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.162+0000 7f0a5bfff700 1 -- 192.168.123.105:0/4278472238 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f0a54022ae0 con 0x7f0a64108810 2026-03-09T15:05:39.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.162+0000 7f0a5bfff700 1 --2- 192.168.123.105:0/4278472238 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f0a50077910 0x7f0a50079dc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:39.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.163+0000 7f0a62ffd700 1 --2- 192.168.123.105:0/4278472238 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f0a50077910 0x7f0a50079dc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:39.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.164+0000 7f0a5bfff700 1 -- 192.168.123.105:0/4278472238 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(61..61 src has 1..61) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f0a5409b220 con 0x7f0a64108810 2026-03-09T15:05:39.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.165+0000 7f0a62ffd700 1 --2- 192.168.123.105:0/4278472238 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f0a50077910 0x7f0a50079dc0 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f0a64103950 tx=0x7f0a4c009470 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:39.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.165+0000 7f0a6943a700 1 -- 192.168.123.105:0/4278472238 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f0a6410ad10 con 0x7f0a64108810 
2026-03-09T15:05:39.171 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.170+0000 7f0a5bfff700 1 -- 192.168.123.105:0/4278472238 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f0a54063a50 con 0x7f0a64108810 2026-03-09T15:05:39.298 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.297+0000 7f0a6943a700 1 -- 192.168.123.105:0/4278472238 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f0a64066eb0 con 0x7f0a50077910 2026-03-09T15:05:39.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.298+0000 7f0a5bfff700 1 -- 192.168.123.105:0/4278472238 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+401 (secure 0 0 0) 0x7f0a64066eb0 con 0x7f0a50077910 2026-03-09T15:05:39.299 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:05:39.299 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T15:05:39.299 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-09T15:05:39.299 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T15:05:39.299 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [ 2026-03-09T15:05:39.299 INFO:teuthology.orchestra.run.vm05.stdout: "mgr", 2026-03-09T15:05:39.299 INFO:teuthology.orchestra.run.vm05.stdout: "mon", 2026-03-09T15:05:39.299 INFO:teuthology.orchestra.run.vm05.stdout: "crash" 2026-03-09T15:05:39.300 INFO:teuthology.orchestra.run.vm05.stdout: ], 2026-03-09T15:05:39.300 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "10/23 daemons upgraded", 2026-03-09T15:05:39.300 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Currently upgrading 
osd daemons", 2026-03-09T15:05:39.300 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:05:39.300 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:05:39.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.301+0000 7f0a6943a700 1 -- 192.168.123.105:0/4278472238 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f0a50077910 msgr2=0x7f0a50079dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:39.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.301+0000 7f0a6943a700 1 --2- 192.168.123.105:0/4278472238 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f0a50077910 0x7f0a50079dc0 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7f0a64103950 tx=0x7f0a4c009470 comp rx=0 tx=0).stop 2026-03-09T15:05:39.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.301+0000 7f0a6943a700 1 -- 192.168.123.105:0/4278472238 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0a64108810 msgr2=0x7f0a64198990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:39.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.301+0000 7f0a6943a700 1 --2- 192.168.123.105:0/4278472238 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0a64108810 0x7f0a64198990 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f0a5400b5c0 tx=0x7f0a54004970 comp rx=0 tx=0).stop 2026-03-09T15:05:39.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.302+0000 7f0a6943a700 1 -- 192.168.123.105:0/4278472238 shutdown_connections 2026-03-09T15:05:39.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.302+0000 7f0a6943a700 1 --2- 192.168.123.105:0/4278472238 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f0a50077910 0x7f0a50079dc0 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:05:39.303 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.302+0000 7f0a6943a700 1 --2- 192.168.123.105:0/4278472238 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0a64102810 0x7f0a64198450 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.303 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.302+0000 7f0a6943a700 1 --2- 192.168.123.105:0/4278472238 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0a64108810 0x7f0a64198990 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.303 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.302+0000 7f0a6943a700 1 -- 192.168.123.105:0/4278472238 >> 192.168.123.105:0/4278472238 conn(0x7f0a640fe330 msgr2=0x7f0a640ffc40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:39.303 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.302+0000 7f0a6943a700 1 -- 192.168.123.105:0/4278472238 shutdown_connections 2026-03-09T15:05:39.303 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.302+0000 7f0a6943a700 1 -- 192.168.123.105:0/4278472238 wait complete. 
2026-03-09T15:05:39.371 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.370+0000 7f3dac007700 1 -- 192.168.123.105:0/2863589462 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3da4102780 msgr2=0x7f3da4102bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:39.371 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.370+0000 7f3dac007700 1 --2- 192.168.123.105:0/2863589462 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3da4102780 0x7f3da4102bf0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f3da0009b00 tx=0x7f3da0009e10 comp rx=0 tx=0).stop 2026-03-09T15:05:39.371 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.370+0000 7f3dac007700 1 -- 192.168.123.105:0/2863589462 shutdown_connections 2026-03-09T15:05:39.371 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.370+0000 7f3dac007700 1 --2- 192.168.123.105:0/2863589462 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3da4102780 0x7f3da4102bf0 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.371 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.370+0000 7f3dac007700 1 --2- 192.168.123.105:0/2863589462 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3da4108780 0x7f3da4108b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.371 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.370+0000 7f3dac007700 1 -- 192.168.123.105:0/2863589462 >> 192.168.123.105:0/2863589462 conn(0x7f3da40fe280 msgr2=0x7f3da4100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:39.371 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.370+0000 7f3dac007700 1 -- 192.168.123.105:0/2863589462 shutdown_connections 2026-03-09T15:05:39.371 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.370+0000 7f3dac007700 1 -- 192.168.123.105:0/2863589462 
wait complete. 2026-03-09T15:05:39.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.371+0000 7f3dac007700 1 Processor -- start 2026-03-09T15:05:39.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.371+0000 7f3dac007700 1 -- start start 2026-03-09T15:05:39.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.371+0000 7f3dac007700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3da4102780 0x7f3da4198400 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.371+0000 7f3dac007700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3da4108780 0x7f3da4198940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.371+0000 7f3dac007700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3da4199020 con 0x7f3da4108780 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.371+0000 7f3dac007700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3da419cdb0 con 0x7f3da4102780 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.372+0000 7f3da95a2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3da4108780 0x7f3da4198940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.372+0000 7f3da95a2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3da4108780 0x7f3da4198940 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.105:57012/0 (socket says 192.168.123.105:57012) 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.372+0000 7f3da95a2700 1 -- 192.168.123.105:0/3773458023 learned_addr learned my addr 192.168.123.105:0/3773458023 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.372+0000 7f3da95a2700 1 -- 192.168.123.105:0/3773458023 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3da4102780 msgr2=0x7f3da4198400 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.372+0000 7f3da95a2700 1 --2- 192.168.123.105:0/3773458023 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3da4102780 0x7f3da4198400 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.372+0000 7f3da95a2700 1 -- 192.168.123.105:0/3773458023 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3da00097e0 con 0x7f3da4108780 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.372+0000 7f3da95a2700 1 --2- 192.168.123.105:0/3773458023 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3da4108780 0x7f3da4198940 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f3da000b5c0 tx=0x7f3da0004990 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.372+0000 7f3d9affd700 1 -- 192.168.123.105:0/3773458023 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3da001d070 con 0x7f3da4108780 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.372+0000 7f3d9affd700 1 -- 
192.168.123.105:0/3773458023 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f3da000bc50 con 0x7f3da4108780 2026-03-09T15:05:39.373 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.373+0000 7f3d9affd700 1 -- 192.168.123.105:0/3773458023 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3da000f860 con 0x7f3da4108780 2026-03-09T15:05:39.374 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.373+0000 7f3dac007700 1 -- 192.168.123.105:0/3773458023 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3da419d030 con 0x7f3da4108780 2026-03-09T15:05:39.375 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.374+0000 7f3dac007700 1 -- 192.168.123.105:0/3773458023 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3da419d4a0 con 0x7f3da4108780 2026-03-09T15:05:39.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.375+0000 7f3d9affd700 1 -- 192.168.123.105:0/3773458023 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f3da000f9c0 con 0x7f3da4108780 2026-03-09T15:05:39.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.375+0000 7f3d9affd700 1 --2- 192.168.123.105:0/3773458023 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3d90077870 0x7f3d90079d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:05:39.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.375+0000 7f3d9affd700 1 -- 192.168.123.105:0/3773458023 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(61..61 src has 1..61) v4 ==== 6136+0+0 (secure 0 0 0) 0x7f3da000bdc0 con 0x7f3da4108780 2026-03-09T15:05:39.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.375+0000 7f3da9da3700 1 --2- 192.168.123.105:0/3773458023 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3d90077870 0x7f3d90079d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:05:39.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.376+0000 7f3da9da3700 1 --2- 192.168.123.105:0/3773458023 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3d90077870 0x7f3d90079d20 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f3da41038c0 tx=0x7f3d94006cb0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:05:39.379 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.376+0000 7f3dac007700 1 -- 192.168.123.105:0/3773458023 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3da404ea50 con 0x7f3da4108780 2026-03-09T15:05:39.380 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.379+0000 7f3d9affd700 1 -- 192.168.123.105:0/3773458023 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3da00a0050 con 0x7f3da4108780 2026-03-09T15:05:39.541 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.539+0000 7f3dac007700 1 -- 192.168.123.105:0/3773458023 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f3da4066e40 con 0x7f3da4108780 2026-03-09T15:05:39.541 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.540+0000 7f3d9affd700 1 -- 192.168.123.105:0/3773458023 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+201 (secure 0 0 0) 0x7f3da0027020 con 0x7f3da4108780 2026-03-09T15:05:39.542 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_WARN 1 filesystem 
with deprecated feature inline_data 2026-03-09T15:05:39.542 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data 2026-03-09T15:05:39.542 INFO:teuthology.orchestra.run.vm05.stdout: fs cephfs has deprecated feature inline_data enabled. 2026-03-09T15:05:39.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.544+0000 7f3dac007700 1 -- 192.168.123.105:0/3773458023 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3d90077870 msgr2=0x7f3d90079d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:39.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.544+0000 7f3dac007700 1 --2- 192.168.123.105:0/3773458023 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3d90077870 0x7f3d90079d20 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f3da41038c0 tx=0x7f3d94006cb0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.544+0000 7f3dac007700 1 -- 192.168.123.105:0/3773458023 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3da4108780 msgr2=0x7f3da4198940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:05:39.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.544+0000 7f3dac007700 1 --2- 192.168.123.105:0/3773458023 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3da4108780 0x7f3da4198940 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f3da000b5c0 tx=0x7f3da0004990 comp rx=0 tx=0).stop 2026-03-09T15:05:39.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.544+0000 7f3dac007700 1 -- 192.168.123.105:0/3773458023 shutdown_connections 2026-03-09T15:05:39.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.544+0000 7f3dac007700 1 --2- 192.168.123.105:0/3773458023 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3d90077870 0x7f3d90079d20 secure :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f3da41038c0 tx=0x7f3d94006cb0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.544+0000 7f3dac007700 1 --2- 192.168.123.105:0/3773458023 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3da4102780 0x7f3da4198400 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:05:39.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.544+0000 7f3dac007700 1 --2- 192.168.123.105:0/3773458023 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3da4108780 0x7f3da4198940 secure :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f3da000b5c0 tx=0x7f3da0004990 comp rx=0 tx=0).stop 2026-03-09T15:05:39.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.544+0000 7f3dac007700 1 -- 192.168.123.105:0/3773458023 >> 192.168.123.105:0/3773458023 conn(0x7f3da40fe280 msgr2=0x7f3da40ffb20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:05:39.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.545+0000 7f3dac007700 1 -- 192.168.123.105:0/3773458023 shutdown_connections 2026-03-09T15:05:39.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:05:39.546+0000 7f3dac007700 1 -- 192.168.123.105:0/3773458023 wait complete. 
2026-03-09T15:05:39.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:39 vm05.local ceph-mon[116516]: from='client.34216 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:39.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:39 vm05.local ceph-mon[116516]: from='client.34220 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:39.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:39 vm05.local ceph-mon[116516]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T15:05:39.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:39 vm05.local ceph-mon[116516]: Upgrade: osd.4 is safe to restart 2026-03-09T15:05:39.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:39 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/1334322729' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:39.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:39 vm05.local ceph-mon[116516]: Upgrade: Updating osd.4 2026-03-09T15:05:39.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:39 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:39.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:39 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T15:05:39.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:39 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:05:39.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:39 vm05.local ceph-mon[116516]: Deploying daemon osd.4 on vm09 2026-03-09T15:05:39.804 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:39 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/2217269804' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T15:05:39.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:39 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3773458023' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:05:40.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-mon[98742]: from='client.34216 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:40.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-mon[98742]: from='client.34220 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:40.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-mon[98742]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T15:05:40.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-mon[98742]: Upgrade: osd.4 is safe to restart 2026-03-09T15:05:40.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/1334322729' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:40.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-mon[98742]: Upgrade: Updating osd.4 2026-03-09T15:05:40.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:40.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T15:05:40.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:05:40.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-mon[98742]: Deploying daemon osd.4 on vm09 2026-03-09T15:05:40.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/2217269804' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T15:05:40.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3773458023' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:05:40.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:39 vm09.local systemd[1]: Stopping Ceph osd.4 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 
2026-03-09T15:05:40.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[71018]: 2026-03-09T15:05:39.791+0000 7fb44f5eb700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T15:05:40.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[71018]: 2026-03-09T15:05:39.791+0000 7fb44f5eb700 -1 osd.4 61 *** Got signal Terminated *** 2026-03-09T15:05:40.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:39 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[71018]: 2026-03-09T15:05:39.791+0000 7fb44f5eb700 -1 osd.4 61 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T15:05:41.006 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:40 vm09.local podman[110201]: 2026-03-09 15:05:40.870880969 +0000 UTC m=+1.093050406 container died 4239752204dfec530371e0ebe8cf71c6c2eeb1a2f9bef28aad378c545d885a2c (image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4, org.label-schema.build-date=20231212, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.29.1, org.label-schema.license=GPLv2, GIT_BRANCH=HEAD, ceph=True, org.label-schema.name=CentOS Stream 8 Base Image, CEPH_POINT_RELEASE=-18.2.0, GIT_CLEAN=True, RELEASE=HEAD, maintainer=Guillaume Abrioux ) 2026-03-09T15:05:41.006 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:40 vm09.local podman[110201]: 2026-03-09 15:05:40.902685227 +0000 UTC m=+1.124854674 container remove 4239752204dfec530371e0ebe8cf71c6c2eeb1a2f9bef28aad378c545d885a2c 
(image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4, GIT_BRANCH=HEAD, GIT_REPO=https://github.com/ceph/ceph-container.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, RELEASE=HEAD, maintainer=Guillaume Abrioux , io.buildah.version=1.29.1, org.label-schema.schema-version=1.0, CEPH_POINT_RELEASE=-18.2.0, GIT_CLEAN=True, ceph=True, org.label-schema.build-date=20231212, org.label-schema.name=CentOS Stream 8 Base Image) 2026-03-09T15:05:41.006 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:40 vm09.local bash[110201]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4 2026-03-09T15:05:41.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:40 vm09.local ceph-mon[98742]: from='client.34230 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:41.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:40 vm09.local ceph-mon[98742]: pgmap v82: 65 pgs: 65 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 667 B/s rd, 1 op/s 2026-03-09T15:05:41.007 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:40 vm09.local ceph-mon[98742]: osd.4 marked itself down and dead 2026-03-09T15:05:41.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:40 vm05.local ceph-mon[116516]: from='client.34230 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:05:41.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:40 vm05.local ceph-mon[116516]: pgmap v82: 65 pgs: 65 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 667 B/s rd, 1 op/s 2026-03-09T15:05:41.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:40 vm05.local ceph-mon[116516]: osd.4 marked itself down and dead 2026-03-09T15:05:41.274 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 
09 15:05:41 vm09.local podman[110276]: 2026-03-09 15:05:41.18925859 +0000 UTC m=+0.054193579 container create 08b9989f7bba16329087d1abf3e2c1c150529ccfdc0ea834180e74b5708bec85 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-deactivate, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0) 2026-03-09T15:05:41.274 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local podman[110276]: 2026-03-09 15:05:41.232037607 +0000 UTC m=+0.096972596 container init 08b9989f7bba16329087d1abf3e2c1c150529ccfdc0ea834180e74b5708bec85 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-deactivate, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, 
org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T15:05:41.274 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local podman[110276]: 2026-03-09 15:05:41.241739895 +0000 UTC m=+0.106674873 container start 08b9989f7bba16329087d1abf3e2c1c150529ccfdc0ea834180e74b5708bec85 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-deactivate, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default) 2026-03-09T15:05:41.274 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local podman[110276]: 2026-03-09 15:05:41.242740135 +0000 UTC m=+0.107675124 container attach 08b9989f7bba16329087d1abf3e2c1c150529ccfdc0ea834180e74b5708bec85 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-deactivate, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , ceph=True, 
org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) 2026-03-09T15:05:41.541 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local podman[110276]: 2026-03-09 15:05:41.175985902 +0000 UTC m=+0.040920900 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:05:41.541 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local conmon[110289]: conmon 08b9989f7bba16329087 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08b9989f7bba16329087d1abf3e2c1c150529ccfdc0ea834180e74b5708bec85.scope/container/memory.events 2026-03-09T15:05:41.541 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local podman[110297]: 2026-03-09 15:05:41.419323765 +0000 UTC m=+0.013247493 container died 08b9989f7bba16329087d1abf3e2c1c150529ccfdc0ea834180e74b5708bec85 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-deactivate, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , ceph=True) 2026-03-09T15:05:41.541 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local podman[110297]: 2026-03-09 
15:05:41.444053885 +0000 UTC m=+0.037977623 container remove 08b9989f7bba16329087d1abf3e2c1c150529ccfdc0ea834180e74b5708bec85 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T15:05:41.541 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.4.service: Deactivated successfully. 2026-03-09T15:05:41.541 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local systemd[1]: Stopped Ceph osd.4 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T15:05:41.541 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.4.service: Consumed 38.030s CPU time. 
2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: osdmap e62: 6 total, 5 up, 6 in 2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.5", "id": [3, 2]}]: dispatch 2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [0, 4]}]: dispatch 2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.15", "id": [3, 4]}]: dispatch 2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [0, 4]}]: dispatch 2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1f", "id": [0, 2, 3, 5]}]: dispatch 2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [1, 4]}]: dispatch 2026-03-09T15:05:41.896 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [3, 5]}]: dispatch 2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.14", "id": [3, 0]}]: dispatch 2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.16", "id": [1, 2]}]: dispatch 2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:05:41.896 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:41.896 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local systemd[1]: Starting Ceph osd.4 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 
2026-03-09T15:05:41.896 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local podman[110386]: 2026-03-09 15:05:41.743753243 +0000 UTC m=+0.017994858 container create 703625ea7fb2b84605339552bf32ea2fba0eeadf3b52208ed5ea9cf8ce92c45e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True) 2026-03-09T15:05:41.896 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local podman[110386]: 2026-03-09 15:05:41.798336759 +0000 UTC m=+0.072578374 container init 703625ea7fb2b84605339552bf32ea2fba0eeadf3b52208ed5ea9cf8ce92c45e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0) 2026-03-09T15:05:41.896 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local podman[110386]: 2026-03-09 15:05:41.802797077 +0000 UTC m=+0.077038692 container start 703625ea7fb2b84605339552bf32ea2fba0eeadf3b52208ed5ea9cf8ce92c45e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) 2026-03-09T15:05:41.896 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local podman[110386]: 2026-03-09 15:05:41.803926139 +0000 UTC m=+0.078167754 container attach 703625ea7fb2b84605339552bf32ea2fba0eeadf3b52208ed5ea9cf8ce92c45e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20260223, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, 
io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0) 2026-03-09T15:05:41.896 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local podman[110386]: 2026-03-09 15:05:41.736710402 +0000 UTC m=+0.010952018 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:05:41.896 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate[110398]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:41.896 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local bash[110386]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:41.897 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate[110398]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:41.897 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:41 vm09.local bash[110386]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: osdmap e62: 6 total, 5 up, 6 in 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.5", "id": [3, 2]}]: dispatch 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: 
from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [0, 4]}]: dispatch 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.15", "id": [3, 4]}]: dispatch 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [0, 4]}]: dispatch 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1f", "id": [0, 2, 3, 5]}]: dispatch 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [1, 4]}]: dispatch 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [3, 5]}]: dispatch 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.14", "id": [3, 0]}]: dispatch 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.16", "id": [1, 2]}]: dispatch 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:05:42.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:42.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate[110398]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T15:05:42.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate[110398]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:42.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local bash[110386]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T15:05:42.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local bash[110386]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:42.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate[110398]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:42.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local bash[110386]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:05:42.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 
vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate[110398]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-09T15:05:42.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local bash[110386]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-09T15:05:42.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate[110398]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-0bcbd0b1-43fc-4d6d-997b-961af5995209/osd-block-c4ddfd7f-8055-4c53-a70a-131428da743a --path /var/lib/ceph/osd/ceph-4 --no-mon-config 2026-03-09T15:05:42.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local bash[110386]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-0bcbd0b1-43fc-4d6d-997b-961af5995209/osd-block-c4ddfd7f-8055-4c53-a70a-131428da743a --path /var/lib/ceph/osd/ceph-4 --no-mon-config 2026-03-09T15:05:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:42 vm05.local ceph-mon[116516]: pgmap v84: 65 pgs: 6 stale+active+clean, 59 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 716 B/s rd, 1 op/s 2026-03-09T15:05:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.5", "id": [3, 2]}]': finished 2026-03-09T15:05:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [0, 4]}]': finished 2026-03-09T15:05:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' 
entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.15", "id": [3, 4]}]': finished 2026-03-09T15:05:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [0, 4]}]': finished 2026-03-09T15:05:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1f", "id": [0, 2, 3, 5]}]': finished 2026-03-09T15:05:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [1, 4]}]': finished 2026-03-09T15:05:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [3, 5]}]': finished 2026-03-09T15:05:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.14", "id": [3, 0]}]': finished 2026-03-09T15:05:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.16", "id": [1, 2]}]': finished 2026-03-09T15:05:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:42 vm05.local ceph-mon[116516]: osdmap e63: 6 total, 5 up, 6 in 2026-03-09T15:05:43.117 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-mon[98742]: pgmap v84: 65 pgs: 6 stale+active+clean, 59 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 716 B/s rd, 1 op/s 2026-03-09T15:05:43.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.5", "id": [3, 2]}]': finished 2026-03-09T15:05:43.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [0, 4]}]': finished 2026-03-09T15:05:43.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.15", "id": [3, 4]}]': finished 2026-03-09T15:05:43.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [0, 4]}]': finished 2026-03-09T15:05:43.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1f", "id": [0, 2, 3, 5]}]': finished 2026-03-09T15:05:43.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [1, 4]}]': finished 2026-03-09T15:05:43.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:42 vm09.local 
ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [3, 5]}]': finished 2026-03-09T15:05:43.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.14", "id": [3, 0]}]': finished 2026-03-09T15:05:43.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.16", "id": [1, 2]}]': finished 2026-03-09T15:05:43.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-mon[98742]: osdmap e63: 6 total, 5 up, 6 in 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate[110398]: Running command: /usr/bin/ln -snf /dev/ceph-0bcbd0b1-43fc-4d6d-997b-961af5995209/osd-block-c4ddfd7f-8055-4c53-a70a-131428da743a /var/lib/ceph/osd/ceph-4/block 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local bash[110386]: Running command: /usr/bin/ln -snf /dev/ceph-0bcbd0b1-43fc-4d6d-997b-961af5995209/osd-block-c4ddfd7f-8055-4c53-a70a-131428da743a /var/lib/ceph/osd/ceph-4/block 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate[110398]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local bash[110386]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local 
ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate[110398]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local bash[110386]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate[110398]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local bash[110386]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate[110398]: --> ceph-volume lvm activate successful for osd ID: 4 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local bash[110386]: --> ceph-volume lvm activate successful for osd ID: 4 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local podman[110616]: 2026-03-09 15:05:42.712367236 +0000 UTC m=+0.010378523 container died 703625ea7fb2b84605339552bf32ea2fba0eeadf3b52208ed5ea9cf8ce92c45e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, 
org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local podman[110616]: 2026-03-09 15:05:42.727896548 +0000 UTC m=+0.025907835 container remove 703625ea7fb2b84605339552bf32ea2fba0eeadf3b52208ed5ea9cf8ce92c45e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223) 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local podman[110655]: 2026-03-09 15:05:42.837862849 +0000 UTC m=+0.016532500 container create 985038f550f842efe94371992de8ca39429755f3866dc7cc801e057986c2e207 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local podman[110655]: 2026-03-09 15:05:42.875013703 +0000 UTC m=+0.053683344 container init 985038f550f842efe94371992de8ca39429755f3866dc7cc801e057986c2e207 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local podman[110655]: 2026-03-09 15:05:42.877631311 +0000 UTC m=+0.056300953 container start 985038f550f842efe94371992de8ca39429755f3866dc7cc801e057986c2e207 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, 
org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local bash[110655]: 985038f550f842efe94371992de8ca39429755f3866dc7cc801e057986c2e207 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local podman[110655]: 2026-03-09 15:05:42.831070898 +0000 UTC m=+0.009740560 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:05:43.117 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:42 vm09.local systemd[1]: Started Ceph osd.4 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T15:05:43.563 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:43 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[110666]: 2026-03-09T15:05:43.462+0000 7f8f52e02740 -1 Falling back to public interface 2026-03-09T15:05:43.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:43 vm09.local ceph-mon[98742]: osdmap e64: 6 total, 5 up, 6 in 2026-03-09T15:05:43.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:43 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:43.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:43 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:43.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:43 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:05:43.866 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:43 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:43.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:43 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:43 vm05.local ceph-mon[116516]: osdmap e64: 6 total, 5 up, 6 in 2026-03-09T15:05:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:43 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:43 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:43 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:05:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:43 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:44.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:43 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:44 vm05.local ceph-mon[116516]: pgmap v87: 65 pgs: 7 active+undersized, 2 remapped+peering, 2 unknown, 10 peering, 3 stale+active+clean, 6 active+undersized+degraded, 35 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 14/264 objects degraded (5.303%) 2026-03-09T15:05:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:44 vm05.local ceph-mon[116516]: osdmap e65: 6 total, 5 up, 6 in 2026-03-09T15:05:45.054 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:44 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:44 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:44 vm05.local ceph-mon[116516]: osdmap e66: 6 total, 5 up, 6 in 2026-03-09T15:05:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:44 vm05.local ceph-mon[116516]: Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T15:05:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:44 vm05.local ceph-mon[116516]: Health check failed: Degraded data redundancy: 14/264 objects degraded (5.303%), 6 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:44 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:44 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:45.120 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:44 vm09.local ceph-mon[98742]: pgmap v87: 65 pgs: 7 active+undersized, 2 remapped+peering, 2 unknown, 10 peering, 3 stale+active+clean, 6 active+undersized+degraded, 35 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 14/264 objects degraded (5.303%) 2026-03-09T15:05:45.120 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:44 vm09.local ceph-mon[98742]: osdmap e65: 6 total, 5 up, 6 in 2026-03-09T15:05:45.120 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:44 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:45.120 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:44 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:45.120 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:44 vm09.local ceph-mon[98742]: osdmap e66: 6 total, 5 up, 6 in 2026-03-09T15:05:45.120 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:44 vm09.local ceph-mon[98742]: Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-09T15:05:45.120 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:44 vm09.local ceph-mon[98742]: Health check failed: Degraded data redundancy: 14/264 objects degraded (5.303%), 6 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:45.120 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:44 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:45.120 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:44 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:45 vm05.local ceph-mon[116516]: osdmap e67: 6 total, 5 up, 6 in 2026-03-09T15:05:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:05:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' 
entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:05:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:05:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T15:05:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:45 vm09.local ceph-mon[98742]: osdmap e67: 6 total, 5 up, 6 in 2026-03-09T15:05:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:46.366 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:05:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:05:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:05:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:05:46.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T15:05:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:46 vm05.local ceph-mon[116516]: pgmap v91: 65 pgs: 1 active+recovery_wait+undersized+remapped, 4 active+clean+remapped, 13 
active+undersized, 2 remapped+peering, 1 unknown, 9 peering, 9 active+undersized+degraded, 26 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 21/264 objects degraded (7.955%); 228/264 objects misplaced (86.364%); 4.4 MiB/s, 1 objects/s recovering 2026-03-09T15:05:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:46 vm05.local ceph-mon[116516]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T15:05:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:46 vm05.local ceph-mon[116516]: Upgrade: 1 pgs have unknown state; cannot draw any conclusions 2026-03-09T15:05:47.319 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:47 vm09.local ceph-mon[98742]: pgmap v91: 65 pgs: 1 active+recovery_wait+undersized+remapped, 4 active+clean+remapped, 13 active+undersized, 2 remapped+peering, 1 unknown, 9 peering, 9 active+undersized+degraded, 26 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 21/264 objects degraded (7.955%); 228/264 objects misplaced (86.364%); 4.4 MiB/s, 1 objects/s recovering 2026-03-09T15:05:47.319 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:47 vm09.local ceph-mon[98742]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T15:05:47.319 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:47 vm09.local ceph-mon[98742]: Upgrade: 1 pgs have unknown state; cannot draw any conclusions 2026-03-09T15:05:47.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:47 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[110666]: 2026-03-09T15:05:47.319+0000 7f8f52e02740 -1 osd.4 0 read_superblock omap replica is missing. 
2026-03-09T15:05:47.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:47 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[110666]: 2026-03-09T15:05:47.486+0000 7f8f52e02740 -1 osd.4 61 log_to_monitors true 2026-03-09T15:05:48.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:48 vm05.local ceph-mon[116516]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T15:05:48.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:48 vm05.local ceph-mon[116516]: from='osd.4 [v2:192.168.123.109:6808/2714701650,v1:192.168.123.109:6809/2714701650]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T15:05:48.366 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:05:48 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[110666]: 2026-03-09T15:05:48.024+0000 7f8f4ab9c640 -1 osd.4 61 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T15:05:48.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:48 vm09.local ceph-mon[98742]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T15:05:48.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:48 vm09.local ceph-mon[98742]: from='osd.4 [v2:192.168.123.109:6808/2714701650,v1:192.168.123.109:6809/2714701650]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T15:05:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:49 vm05.local ceph-mon[116516]: pgmap v92: 65 pgs: 2 active+recovering+undersized+remapped, 1 active+recovery_wait+undersized+remapped, 4 active+clean+remapped, 17 active+undersized, 2 remapped+peering, 13 active+undersized+degraded, 26 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 20 MiB/s rd, 58 MiB/s wr, 193 op/s; 39/264 
objects degraded (14.773%); 228/264 objects misplaced (86.364%); 3.5 MiB/s, 8 objects/s recovering 2026-03-09T15:05:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:49 vm05.local ceph-mon[116516]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-09T15:05:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:49 vm05.local ceph-mon[116516]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T15:05:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:49 vm05.local ceph-mon[116516]: osdmap e68: 6 total, 5 up, 6 in 2026-03-09T15:05:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:49 vm05.local ceph-mon[116516]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:05:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:49 vm05.local ceph-mon[116516]: from='osd.4 [v2:192.168.123.109:6808/2714701650,v1:192.168.123.109:6809/2714701650]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:05:49.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:49 vm09.local ceph-mon[98742]: pgmap v92: 65 pgs: 2 active+recovering+undersized+remapped, 1 active+recovery_wait+undersized+remapped, 4 active+clean+remapped, 17 active+undersized, 2 remapped+peering, 13 active+undersized+degraded, 26 active+clean; 257 MiB data, 1.7 GiB used, 118 GiB / 120 GiB avail; 20 MiB/s rd, 58 MiB/s wr, 193 op/s; 39/264 objects degraded (14.773%); 228/264 objects misplaced (86.364%); 3.5 MiB/s, 8 objects/s recovering 2026-03-09T15:05:49.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:49 vm09.local ceph-mon[98742]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg 
peering) 2026-03-09T15:05:49.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:49 vm09.local ceph-mon[98742]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T15:05:49.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:49 vm09.local ceph-mon[98742]: osdmap e68: 6 total, 5 up, 6 in 2026-03-09T15:05:49.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:49 vm09.local ceph-mon[98742]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:05:49.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:49 vm09.local ceph-mon[98742]: from='osd.4 [v2:192.168.123.109:6808/2714701650,v1:192.168.123.109:6809/2714701650]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:05:50.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:50 vm05.local ceph-mon[116516]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T15:05:50.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:50 vm05.local ceph-mon[116516]: osd.4 [v2:192.168.123.109:6808/2714701650,v1:192.168.123.109:6809/2714701650] boot 2026-03-09T15:05:50.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:50 vm05.local ceph-mon[116516]: osdmap e69: 6 total, 6 up, 6 in 2026-03-09T15:05:50.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:50 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:05:50.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:50 vm09.local ceph-mon[98742]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T15:05:50.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:50 vm09.local ceph-mon[98742]: osd.4 
[v2:192.168.123.109:6808/2714701650,v1:192.168.123.109:6809/2714701650] boot 2026-03-09T15:05:50.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:50 vm09.local ceph-mon[98742]: osdmap e69: 6 total, 6 up, 6 in 2026-03-09T15:05:50.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:50 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T15:05:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:51 vm09.local ceph-mon[98742]: pgmap v95: 65 pgs: 2 active+recovering+undersized+remapped, 1 active+recovery_wait+undersized+remapped, 6 active+clean+remapped, 17 active+undersized, 13 active+undersized+degraded, 26 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 17 MiB/s rd, 49 MiB/s wr, 164 op/s; 39/264 objects degraded (14.773%); 235/264 objects misplaced (89.015%); 3.0 MiB/s, 7 objects/s recovering 2026-03-09T15:05:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:51 vm09.local ceph-mon[98742]: osdmap e70: 6 total, 6 up, 6 in 2026-03-09T15:05:51.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:51 vm05.local ceph-mon[116516]: pgmap v95: 65 pgs: 2 active+recovering+undersized+remapped, 1 active+recovery_wait+undersized+remapped, 6 active+clean+remapped, 17 active+undersized, 13 active+undersized+degraded, 26 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 17 MiB/s rd, 49 MiB/s wr, 164 op/s; 39/264 objects degraded (14.773%); 235/264 objects misplaced (89.015%); 3.0 MiB/s, 7 objects/s recovering 2026-03-09T15:05:51.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:51 vm05.local ceph-mon[116516]: osdmap e70: 6 total, 6 up, 6 in 2026-03-09T15:05:52.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:52 vm09.local ceph-mon[98742]: osdmap e71: 6 total, 6 up, 6 in 2026-03-09T15:05:52.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:52 vm05.local ceph-mon[116516]: 
osdmap e71: 6 total, 6 up, 6 in 2026-03-09T15:05:53.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:53 vm09.local ceph-mon[98742]: pgmap v98: 65 pgs: 6 peering, 5 remapped+peering, 2 active+recovering+undersized+remapped, 2 active+clean+remapped, 13 active+undersized, 11 active+undersized+degraded, 26 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 35/264 objects degraded (13.258%); 7/264 objects misplaced (2.652%) 2026-03-09T15:05:53.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:53 vm09.local ceph-mon[98742]: Health check update: Degraded data redundancy: 35/264 objects degraded (13.258%), 11 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:53.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:53 vm09.local ceph-mon[98742]: osdmap e72: 6 total, 6 up, 6 in 2026-03-09T15:05:53.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:53 vm05.local ceph-mon[116516]: pgmap v98: 65 pgs: 6 peering, 5 remapped+peering, 2 active+recovering+undersized+remapped, 2 active+clean+remapped, 13 active+undersized, 11 active+undersized+degraded, 26 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 0 B/s rd, 0 op/s; 35/264 objects degraded (13.258%); 7/264 objects misplaced (2.652%) 2026-03-09T15:05:53.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:53 vm05.local ceph-mon[116516]: Health check update: Degraded data redundancy: 35/264 objects degraded (13.258%), 11 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:53.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:53 vm05.local ceph-mon[116516]: osdmap e72: 6 total, 6 up, 6 in 2026-03-09T15:05:54.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:54 vm09.local ceph-mon[98742]: osdmap e73: 6 total, 6 up, 6 in 2026-03-09T15:05:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:54 vm05.local ceph-mon[116516]: osdmap e73: 6 total, 6 up, 6 in 2026-03-09T15:05:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:55 
vm09.local ceph-mon[98742]: pgmap v101: 65 pgs: 1 active+recovering+undersized+remapped, 8 peering, 5 remapped+peering, 1 active+recovering+remapped, 50 active+clean; 257 MiB data, 1.4 GiB used, 119 GiB / 120 GiB avail; 767 B/s rd, 1 op/s; 5.0 MiB/s, 15 objects/s recovering 2026-03-09T15:05:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:55 vm09.local ceph-mon[98742]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 35/264 objects degraded (13.258%), 11 pgs degraded) 2026-03-09T15:05:55.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:55 vm05.local ceph-mon[116516]: pgmap v101: 65 pgs: 1 active+recovering+undersized+remapped, 8 peering, 5 remapped+peering, 1 active+recovering+remapped, 50 active+clean; 257 MiB data, 1.4 GiB used, 119 GiB / 120 GiB avail; 767 B/s rd, 1 op/s; 5.0 MiB/s, 15 objects/s recovering 2026-03-09T15:05:55.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:55 vm05.local ceph-mon[116516]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 35/264 objects degraded (13.258%), 11 pgs degraded) 2026-03-09T15:05:56.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:56 vm05.local ceph-mon[116516]: Health check failed: Degraded data redundancy: 452/264 objects degraded (171.212%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:56.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:56 vm09.local ceph-mon[98742]: Health check failed: Degraded data redundancy: 452/264 objects degraded (171.212%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T15:05:57.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:57 vm05.local ceph-mon[116516]: pgmap v102: 65 pgs: 2 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 2 peering, 1 active+recovering+remapped, 59 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 575 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 5.2 MiB/s, 12 objects/s recovering 2026-03-09T15:05:57.304 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:57.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:05:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:57 vm09.local ceph-mon[98742]: pgmap v102: 65 pgs: 2 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 2 peering, 1 active+recovering+remapped, 59 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 575 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 5.2 MiB/s, 12 objects/s recovering 2026-03-09T15:05:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:05:57.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:05:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:05:59 vm05.local ceph-mon[116516]: pgmap v103: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 2 peering, 59 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.1 KiB/s rd, 1 op/s; 452/264 objects degraded (171.212%); 4.4 MiB/s, 25 objects/s recovering 2026-03-09T15:05:59.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:05:59 vm09.local ceph-mon[98742]: pgmap v103: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 2 peering, 59 active+clean; 257 MiB data, 1.3 GiB used, 
119 GiB / 120 GiB avail; 1.1 KiB/s rd, 1 op/s; 452/264 objects degraded (171.212%); 4.4 MiB/s, 25 objects/s recovering 2026-03-09T15:06:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:01 vm05.local ceph-mon[116516]: pgmap v104: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 895 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 3.5 MiB/s, 20 objects/s recovering 2026-03-09T15:06:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:01 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T15:06:01.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:01 vm09.local ceph-mon[98742]: pgmap v104: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 895 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 3.5 MiB/s, 20 objects/s recovering 2026-03-09T15:06:01.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:01 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T15:06:02.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:02 vm05.local ceph-mon[116516]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T15:06:02.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:02 vm05.local ceph-mon[116516]: Upgrade: unsafe to stop osd(s) at this time (1 PGs are or would become offline) 2026-03-09T15:06:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:02 vm09.local ceph-mon[98742]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T15:06:02.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:02 vm09.local ceph-mon[98742]: Upgrade: unsafe to stop osd(s) at this time (1 PGs are or would become offline) 2026-03-09T15:06:03.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:03 vm05.local ceph-mon[116516]: pgmap v105: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 439 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 3.0 MiB/s, 15 objects/s recovering 2026-03-09T15:06:03.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:03 vm09.local ceph-mon[98742]: pgmap v105: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 439 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 3.0 MiB/s, 15 objects/s recovering 2026-03-09T15:06:05.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:05 vm05.local ceph-mon[116516]: pgmap v106: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 893 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 794 KiB/s, 19 objects/s recovering 2026-03-09T15:06:05.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:05 vm09.local ceph-mon[98742]: pgmap v106: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 893 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 794 KiB/s, 19 objects/s recovering 2026-03-09T15:06:07.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:06:07 vm05.local ceph-mon[116516]: pgmap v107: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 767 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 683 KiB/s, 16 objects/s recovering 2026-03-09T15:06:07.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:07 vm09.local ceph-mon[98742]: pgmap v107: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 767 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 683 KiB/s, 16 objects/s recovering 2026-03-09T15:06:09.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:09 vm05.local ceph-mon[116516]: pgmap v108: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s; 452/264 objects degraded (171.212%); 0 B/s, 24 objects/s recovering 2026-03-09T15:06:09.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:09 vm05.local ceph-mon[116516]: osdmap e74: 6 total, 6 up, 6 in 2026-03-09T15:06:09.570 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:09 vm09.local ceph-mon[98742]: pgmap v108: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s; 452/264 objects degraded (171.212%); 0 B/s, 24 objects/s recovering 2026-03-09T15:06:09.570 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:09 vm09.local ceph-mon[98742]: osdmap e74: 6 total, 6 up, 6 in 2026-03-09T15:06:09.625 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.623+0000 7fba945ee700 1 -- 
192.168.123.105:0/3849218624 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba8c102530 msgr2=0x7fba8c1029a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:09.625 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.623+0000 7fba945ee700 1 --2- 192.168.123.105:0/3849218624 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba8c102530 0x7fba8c1029a0 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7fba88009b50 tx=0x7fba88009e60 comp rx=0 tx=0).stop 2026-03-09T15:06:09.625 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.624+0000 7fba945ee700 1 -- 192.168.123.105:0/3849218624 shutdown_connections 2026-03-09T15:06:09.625 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.624+0000 7fba945ee700 1 --2- 192.168.123.105:0/3849218624 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba8c102530 0x7fba8c1029a0 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.625 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.624+0000 7fba945ee700 1 --2- 192.168.123.105:0/3849218624 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba8c108550 0x7fba8c108920 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.625 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.624+0000 7fba945ee700 1 -- 192.168.123.105:0/3849218624 >> 192.168.123.105:0/3849218624 conn(0x7fba8c0fe010 msgr2=0x7fba8c100420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:09.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.627+0000 7fba945ee700 1 -- 192.168.123.105:0/3849218624 shutdown_connections 2026-03-09T15:06:09.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.627+0000 7fba945ee700 1 -- 192.168.123.105:0/3849218624 wait complete. 
2026-03-09T15:06:09.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.627+0000 7fba945ee700 1 Processor -- start 2026-03-09T15:06:09.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.627+0000 7fba945ee700 1 -- start start 2026-03-09T15:06:09.628 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.627+0000 7fba945ee700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba8c102530 0x7fba8c10edc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.627+0000 7fba945ee700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba8c108550 0x7fba8c10f300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.627+0000 7fba945ee700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fba8c10f9e0 con 0x7fba8c102530 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.627+0000 7fba945ee700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fba8c10fb50 con 0x7fba8c108550 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.627+0000 7fba91b89700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba8c108550 0x7fba8c10f300 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.627+0000 7fba91b89700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba8c108550 0x7fba8c10f300 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:60842/0 (socket says 192.168.123.105:60842) 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.627+0000 7fba91b89700 1 -- 192.168.123.105:0/2738007122 learned_addr learned my addr 192.168.123.105:0/2738007122 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.628+0000 7fba91b89700 1 -- 192.168.123.105:0/2738007122 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba8c102530 msgr2=0x7fba8c10edc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.628+0000 7fba91b89700 1 --2- 192.168.123.105:0/2738007122 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fba8c102530 0x7fba8c10edc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.628+0000 7fba91b89700 1 -- 192.168.123.105:0/2738007122 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fba880097e0 con 0x7fba8c108550 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.628+0000 7fba91b89700 1 --2- 192.168.123.105:0/2738007122 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba8c108550 0x7fba8c10f300 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fba88005950 tx=0x7fba880049e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.628+0000 7fba837fe700 1 -- 192.168.123.105:0/2738007122 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fba8801d070 con 0x7fba8c108550 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.628+0000 7fba945ee700 1 -- 
192.168.123.105:0/2738007122 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fba8c118df0 con 0x7fba8c108550 2026-03-09T15:06:09.629 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.628+0000 7fba945ee700 1 -- 192.168.123.105:0/2738007122 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fba8c1192e0 con 0x7fba8c108550 2026-03-09T15:06:09.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.629+0000 7fba837fe700 1 -- 192.168.123.105:0/2738007122 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fba88004b80 con 0x7fba8c108550 2026-03-09T15:06:09.630 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.629+0000 7fba837fe700 1 -- 192.168.123.105:0/2738007122 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fba8800f670 con 0x7fba8c108550 2026-03-09T15:06:09.631 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.630+0000 7fba837fe700 1 -- 192.168.123.105:0/2738007122 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fba8800f7d0 con 0x7fba8c108550 2026-03-09T15:06:09.631 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.630+0000 7fba837fe700 1 --2- 192.168.123.105:0/2738007122 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fba780779e0 0x7fba78079e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:09.631 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.630+0000 7fba837fe700 1 -- 192.168.123.105:0/2738007122 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(75..75 src has 1..75) v4 ==== 6496+0+0 (secure 0 0 0) 0x7fba8809c1e0 con 0x7fba8c108550 2026-03-09T15:06:09.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.631+0000 7fba9238a700 1 --2- 192.168.123.105:0/2738007122 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fba780779e0 0x7fba78079e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:09.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.632+0000 7fba9238a700 1 --2- 192.168.123.105:0/2738007122 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fba780779e0 0x7fba78079e90 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7fba8400c4d0 tx=0x7fba8400b040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:09.635 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.632+0000 7fba945ee700 1 -- 192.168.123.105:0/2738007122 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fba70005320 con 0x7fba8c108550 2026-03-09T15:06:09.636 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.635+0000 7fba837fe700 1 -- 192.168.123.105:0/2738007122 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fba88060070 con 0x7fba8c108550 2026-03-09T15:06:09.765 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.763+0000 7fba945ee700 1 -- 192.168.123.105:0/2738007122 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fba70000bf0 con 0x7fba780779e0 2026-03-09T15:06:09.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.765+0000 7fba837fe700 1 -- 192.168.123.105:0/2738007122 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+401 (secure 0 0 0) 0x7fba70000bf0 con 0x7fba780779e0 2026-03-09T15:06:09.768 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.767+0000 7fba817fa700 1 -- 192.168.123.105:0/2738007122 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fba780779e0 msgr2=0x7fba78079e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:09.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.768+0000 7fba817fa700 1 --2- 192.168.123.105:0/2738007122 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fba780779e0 0x7fba78079e90 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7fba8400c4d0 tx=0x7fba8400b040 comp rx=0 tx=0).stop 2026-03-09T15:06:09.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.768+0000 7fba817fa700 1 -- 192.168.123.105:0/2738007122 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba8c108550 msgr2=0x7fba8c10f300 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:09.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.768+0000 7fba817fa700 1 --2- 192.168.123.105:0/2738007122 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba8c108550 0x7fba8c10f300 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7fba88005950 tx=0x7fba880049e0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.768+0000 7fba817fa700 1 -- 192.168.123.105:0/2738007122 shutdown_connections 2026-03-09T15:06:09.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.768+0000 7fba817fa700 1 --2- 192.168.123.105:0/2738007122 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fba780779e0 0x7fba78079e90 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.768+0000 7fba817fa700 1 --2- 192.168.123.105:0/2738007122 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7fba8c102530 0x7fba8c10edc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.768+0000 7fba817fa700 1 --2- 192.168.123.105:0/2738007122 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fba8c108550 0x7fba8c10f300 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.768+0000 7fba817fa700 1 -- 192.168.123.105:0/2738007122 >> 192.168.123.105:0/2738007122 conn(0x7fba8c0fe010 msgr2=0x7fba8c0ff820 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:09.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.771+0000 7fba817fa700 1 -- 192.168.123.105:0/2738007122 shutdown_connections 2026-03-09T15:06:09.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.771+0000 7fba817fa700 1 -- 192.168.123.105:0/2738007122 wait complete. 
2026-03-09T15:06:09.783 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:06:09.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.851+0000 7f8f99832700 1 -- 192.168.123.105:0/3056579957 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8f9410e960 msgr2=0x7f8f9410ed30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:09.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.851+0000 7f8f99832700 1 --2- 192.168.123.105:0/3056579957 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8f9410e960 0x7f8f9410ed30 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f8f84009b00 tx=0x7f8f84009e10 comp rx=0 tx=0).stop 2026-03-09T15:06:09.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.851+0000 7f8f99832700 1 -- 192.168.123.105:0/3056579957 shutdown_connections 2026-03-09T15:06:09.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.851+0000 7f8f99832700 1 --2- 192.168.123.105:0/3056579957 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8f9406ce40 0x7f8f9406d2b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.851+0000 7f8f99832700 1 --2- 192.168.123.105:0/3056579957 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8f9410e960 0x7f8f9410ed30 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.852 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.851+0000 7f8f99832700 1 -- 192.168.123.105:0/3056579957 >> 192.168.123.105:0/3056579957 conn(0x7f8f9406c370 msgr2=0x7f8f94071540 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:09.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.852+0000 7f8f99832700 1 -- 192.168.123.105:0/3056579957 shutdown_connections 2026-03-09T15:06:09.853 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.852+0000 7f8f99832700 1 -- 192.168.123.105:0/3056579957 wait complete. 2026-03-09T15:06:09.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.852+0000 7f8f99832700 1 Processor -- start 2026-03-09T15:06:09.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.852+0000 7f8f99832700 1 -- start start 2026-03-09T15:06:09.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.852+0000 7f8f99832700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8f9406ce40 0x7f8f941196a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:09.853 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.852+0000 7f8f99832700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8f94114650 0x7f8f94114ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:09.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.852+0000 7f8f99832700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8f94115090 con 0x7f8f9406ce40 2026-03-09T15:06:09.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.852+0000 7f8f99832700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f8f94115200 con 0x7f8f94114650 2026-03-09T15:06:09.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.853+0000 7f8f93fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8f94114650 0x7f8f94114ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:09.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.853+0000 7f8f93fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8f94114650 0x7f8f94114ac0 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:60850/0 (socket says 192.168.123.105:60850) 2026-03-09T15:06:09.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.853+0000 7f8f93fff700 1 -- 192.168.123.105:0/4154583190 learned_addr learned my addr 192.168.123.105:0/4154583190 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:09.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.853+0000 7f8f93fff700 1 -- 192.168.123.105:0/4154583190 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8f9406ce40 msgr2=0x7f8f941196a0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:06:09.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.853+0000 7f8f98830700 1 --2- 192.168.123.105:0/4154583190 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8f9406ce40 0x7f8f941196a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:09.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.853+0000 7f8f93fff700 1 --2- 192.168.123.105:0/4154583190 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8f9406ce40 0x7f8f941196a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.853+0000 7f8f93fff700 1 -- 192.168.123.105:0/4154583190 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f8f840097e0 con 0x7f8f94114650 2026-03-09T15:06:09.854 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.854+0000 7f8f98830700 1 --2- 192.168.123.105:0/4154583190 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8f9406ce40 0x7f8f941196a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:06:09.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.854+0000 7f8f93fff700 1 --2- 192.168.123.105:0/4154583190 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8f94114650 0x7f8f94114ac0 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f8f8800d350 tx=0x7f8f8800d710 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:09.855 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.855+0000 7f8f91ffb700 1 -- 192.168.123.105:0/4154583190 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8f880155b0 con 0x7f8f94114650 2026-03-09T15:06:09.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.855+0000 7f8f91ffb700 1 -- 192.168.123.105:0/4154583190 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f8f8800f040 con 0x7f8f94114650 2026-03-09T15:06:09.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.855+0000 7f8f91ffb700 1 -- 192.168.123.105:0/4154583190 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8f880149c0 con 0x7f8f94114650 2026-03-09T15:06:09.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.855+0000 7f8f99832700 1 -- 192.168.123.105:0/4154583190 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8f94115490 con 0x7f8f94114650 2026-03-09T15:06:09.856 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.855+0000 7f8f99832700 1 -- 192.168.123.105:0/4154583190 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8f941b7ce0 con 0x7f8f94114650 2026-03-09T15:06:09.858 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.857+0000 7f8f91ffb700 1 -- 192.168.123.105:0/4154583190 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 
100115+0+0 (secure 0 0 0) 0x7f8f88014b20 con 0x7f8f94114650 2026-03-09T15:06:09.858 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.857+0000 7f8f91ffb700 1 --2- 192.168.123.105:0/4154583190 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f8f7c0779e0 0x7f8f7c079e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:09.858 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.857+0000 7f8f91ffb700 1 -- 192.168.123.105:0/4154583190 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(75..75 src has 1..75) v4 ==== 6496+0+0 (secure 0 0 0) 0x7f8f88099730 con 0x7f8f94114650 2026-03-09T15:06:09.858 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.857+0000 7f8f99832700 1 -- 192.168.123.105:0/4154583190 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8f9404f2a0 con 0x7f8f94114650 2026-03-09T15:06:09.858 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.858+0000 7f8f98830700 1 --2- 192.168.123.105:0/4154583190 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f8f7c0779e0 0x7f8f7c079e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:09.859 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.858+0000 7f8f98830700 1 --2- 192.168.123.105:0/4154583190 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f8f7c0779e0 0x7f8f7c079e90 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f8f84000c00 tx=0x7f8f84011040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:09.863 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.862+0000 7f8f91ffb700 1 -- 192.168.123.105:0/4154583190 <== mon.1 v2:192.168.123.109:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f8f88061ea0 con 0x7f8f94114650 2026-03-09T15:06:09.994 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.993+0000 7f8f99832700 1 -- 192.168.123.105:0/4154583190 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f8f94115bf0 con 0x7f8f7c0779e0 2026-03-09T15:06:09.995 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.994+0000 7f8f91ffb700 1 -- 192.168.123.105:0/4154583190 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+401 (secure 0 0 0) 0x7f8f94115bf0 con 0x7f8f7c0779e0 2026-03-09T15:06:09.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.998+0000 7f8f7b7fe700 1 -- 192.168.123.105:0/4154583190 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f8f7c0779e0 msgr2=0x7f8f7c079e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:09.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.998+0000 7f8f7b7fe700 1 --2- 192.168.123.105:0/4154583190 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f8f7c0779e0 0x7f8f7c079e90 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f8f84000c00 tx=0x7f8f84011040 comp rx=0 tx=0).stop 2026-03-09T15:06:09.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.998+0000 7f8f7b7fe700 1 -- 192.168.123.105:0/4154583190 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8f94114650 msgr2=0x7f8f94114ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:09.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.998+0000 7f8f7b7fe700 1 --2- 192.168.123.105:0/4154583190 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8f94114650 0x7f8f94114ac0 secure :-1 s=READY pgs=26 
cs=0 l=1 rev1=1 crypto rx=0x7f8f8800d350 tx=0x7f8f8800d710 comp rx=0 tx=0).stop 2026-03-09T15:06:09.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.998+0000 7f8f7b7fe700 1 -- 192.168.123.105:0/4154583190 shutdown_connections 2026-03-09T15:06:09.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.998+0000 7f8f7b7fe700 1 --2- 192.168.123.105:0/4154583190 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f8f7c0779e0 0x7f8f7c079e90 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.998+0000 7f8f7b7fe700 1 --2- 192.168.123.105:0/4154583190 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8f9406ce40 0x7f8f941196a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.998+0000 7f8f7b7fe700 1 --2- 192.168.123.105:0/4154583190 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f8f94114650 0x7f8f94114ac0 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:09.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.999+0000 7f8f7b7fe700 1 -- 192.168.123.105:0/4154583190 >> 192.168.123.105:0/4154583190 conn(0x7f8f9406c370 msgr2=0x7f8f94072700 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:09.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.999+0000 7f8f7b7fe700 1 -- 192.168.123.105:0/4154583190 shutdown_connections 2026-03-09T15:06:09.999 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:09.999+0000 7f8f7b7fe700 1 -- 192.168.123.105:0/4154583190 wait complete. 
2026-03-09T15:06:10.078 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.077+0000 7f22196a3700 1 -- 192.168.123.105:0/1840364840 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2214071b60 msgr2=0x7f2214071fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.078 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.077+0000 7f22196a3700 1 --2- 192.168.123.105:0/1840364840 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2214071b60 0x7f2214071fd0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f220c01c320 tx=0x7f220c01c630 comp rx=0 tx=0).stop 2026-03-09T15:06:10.078 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.077+0000 7f22196a3700 1 -- 192.168.123.105:0/1840364840 shutdown_connections 2026-03-09T15:06:10.078 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.077+0000 7f22196a3700 1 --2- 192.168.123.105:0/1840364840 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f2214071b60 0x7f2214071fd0 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.079 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.077+0000 7f22196a3700 1 --2- 192.168.123.105:0/1840364840 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f221410e9e0 0x7f221410edb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.079 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.077+0000 7f22196a3700 1 -- 192.168.123.105:0/1840364840 >> 192.168.123.105:0/1840364840 conn(0x7f221406c6c0 msgr2=0x7f221406cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:10.079 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.078+0000 7f22196a3700 1 -- 192.168.123.105:0/1840364840 shutdown_connections 2026-03-09T15:06:10.079 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.078+0000 7f22196a3700 1 -- 192.168.123.105:0/1840364840 
wait complete. 2026-03-09T15:06:10.079 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.078+0000 7f22196a3700 1 Processor -- start 2026-03-09T15:06:10.079 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.078+0000 7f22196a3700 1 -- start start 2026-03-09T15:06:10.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.079+0000 7f22196a3700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f221410e9e0 0x7f221419c570 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.079+0000 7f22196a3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f221419cab0 0x7f22141a0f20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.079+0000 7f22196a3700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f221419d0c0 con 0x7f221419cab0 2026-03-09T15:06:10.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.079+0000 7f22196a3700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f221419d230 con 0x7f221410e9e0 2026-03-09T15:06:10.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.079+0000 7f2213fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f221410e9e0 0x7f221419c570 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:10.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.079+0000 7f2213fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f221410e9e0 0x7f221419c570 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 
says I am v2:192.168.123.105:60866/0 (socket says 192.168.123.105:60866) 2026-03-09T15:06:10.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.079+0000 7f2213fff700 1 -- 192.168.123.105:0/3377889193 learned_addr learned my addr 192.168.123.105:0/3377889193 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:10.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.079+0000 7f22137fe700 1 --2- 192.168.123.105:0/3377889193 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f221419cab0 0x7f22141a0f20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:10.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.080+0000 7f2213fff700 1 -- 192.168.123.105:0/3377889193 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f221419cab0 msgr2=0x7f22141a0f20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.080+0000 7f2213fff700 1 --2- 192.168.123.105:0/3377889193 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f221419cab0 0x7f22141a0f20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.080+0000 7f2213fff700 1 -- 192.168.123.105:0/3377889193 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f220c01c060 con 0x7f221410e9e0 2026-03-09T15:06:10.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.080+0000 7f22137fe700 1 --2- 192.168.123.105:0/3377889193 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f221419cab0 0x7f22141a0f20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-09T15:06:10.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.080+0000 7f2213fff700 1 --2- 192.168.123.105:0/3377889193 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f221410e9e0 0x7f221419c570 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f220400b770 tx=0x7f220400bb30 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:10.083 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.082+0000 7f22117fa700 1 -- 192.168.123.105:0/3377889193 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f220400f820 con 0x7f221410e9e0 2026-03-09T15:06:10.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.082+0000 7f22117fa700 1 -- 192.168.123.105:0/3377889193 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f220400fe60 con 0x7f221410e9e0 2026-03-09T15:06:10.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.083+0000 7f22117fa700 1 -- 192.168.123.105:0/3377889193 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f220400d610 con 0x7f221410e9e0 2026-03-09T15:06:10.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.083+0000 7f22196a3700 1 -- 192.168.123.105:0/3377889193 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f22141a1520 con 0x7f221410e9e0 2026-03-09T15:06:10.085 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.083+0000 7f22196a3700 1 -- 192.168.123.105:0/3377889193 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f22141a1a20 con 0x7f221410e9e0 2026-03-09T15:06:10.086 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.085+0000 7f22117fa700 1 -- 192.168.123.105:0/3377889193 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f220400f980 con 
0x7f221410e9e0 2026-03-09T15:06:10.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.086+0000 7f22196a3700 1 -- 192.168.123.105:0/3377889193 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f22141a1d50 con 0x7f221410e9e0 2026-03-09T15:06:10.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.086+0000 7f22117fa700 1 --2- 192.168.123.105:0/3377889193 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f21fc077720 0x7f21fc079bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.086+0000 7f22117fa700 1 -- 192.168.123.105:0/3377889193 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(75..75 src has 1..75) v4 ==== 6496+0+0 (secure 0 0 0) 0x7f2204098900 con 0x7f221410e9e0 2026-03-09T15:06:10.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.086+0000 7f22137fe700 1 --2- 192.168.123.105:0/3377889193 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f21fc077720 0x7f21fc079bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:10.088 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.087+0000 7f22137fe700 1 --2- 192.168.123.105:0/3377889193 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f21fc077720 0x7f21fc079bd0 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f220c01cab0 tx=0x7f220c007e20 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:10.090 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.090+0000 7f22117fa700 1 -- 192.168.123.105:0/3377889193 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) 
v1 ==== 72+0+195034 (secure 0 0 0) 0x7f220409d050 con 0x7f221410e9e0 2026-03-09T15:06:10.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.219+0000 7f22196a3700 1 -- 192.168.123.105:0/3377889193 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f221404eab0 con 0x7f21fc077720 2026-03-09T15:06:10.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.224+0000 7f22117fa700 1 -- 192.168.123.105:0/3377889193 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f221404eab0 con 0x7f21fc077720 2026-03-09T15:06:10.225 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:06:10.225 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (3m) 69s ago 10m 24.0M - 0.25.0 c8568f914cd2 7635cece310c 2026-03-09T15:06:10.225 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (10m) 69s ago 10m 9332k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:06:10.225 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (9m) 26s ago 9m 11.5M - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (2m) 69s ago 10m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 35d8c0ae5a58 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (2m) 26s ago 9m 8308k - 19.2.3-678-ge911bdeb 654f31e6858e 82bdad36caf9 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (3m) 69s ago 9m 73.9M - 10.4.0 c8b91775d855 eb6431f63d88 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (7m) 69s ago 7m 181M - 18.2.0 dc2bc1663786 ea3dca51957f 2026-03-09T15:06:10.226 
INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (7m) 69s ago 7m 17.7M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (7m) 26s ago 7m 17.7M - 18.2.0 dc2bc1663786 6c77fb591d5a 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (7m) 26s ago 7m 94.9M - 18.2.0 dc2bc1663786 b5ad1c71089a 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (4m) 69s ago 10m 617M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (3m) 26s ago 9m 496M - 19.2.3-678-ge911bdeb 654f31e6858e acf5a6f3f804 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (2m) 69s ago 10m 59.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1e11655f7d87 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (2m) 26s ago 9m 53.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e d1f0309f4d58 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (3m) 69s ago 10m 10.1M - 1.7.0 72c9c2088986 888d071c50d9 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (3m) 26s ago 9m 9545k - 1.7.0 72c9c2088986 22c96a576a60 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (115s) 69s ago 9m 144M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f2883abca2d2 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (92s) 69s ago 8m 109M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b830d7f76498 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (70s) 69s ago 8m 12.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 01cf87b8bc05 2026-03-09T15:06:10.226 
INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (49s) 26s ago 8m 146M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9359c3ced4d3 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (27s) 26s ago 8m 12.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 985038f550f8 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (8m) 26s ago 8m 338M 4096M 18.2.0 dc2bc1663786 85fde149396e 2026-03-09T15:06:10.226 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (3m) 69s ago 9m 54.9M - 2.51.0 1d3b7f56885b e6f470b0ba11 2026-03-09T15:06:10.228 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.228+0000 7f22196a3700 1 -- 192.168.123.105:0/3377889193 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f21fc077720 msgr2=0x7f21fc079bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.228+0000 7f22196a3700 1 --2- 192.168.123.105:0/3377889193 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f21fc077720 0x7f21fc079bd0 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f220c01cab0 tx=0x7f220c007e20 comp rx=0 tx=0).stop 2026-03-09T15:06:10.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.228+0000 7f22196a3700 1 -- 192.168.123.105:0/3377889193 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f221410e9e0 msgr2=0x7f221419c570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.228+0000 7f22196a3700 1 --2- 192.168.123.105:0/3377889193 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f221410e9e0 0x7f221419c570 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f220400b770 tx=0x7f220400bb30 comp rx=0 tx=0).stop 2026-03-09T15:06:10.229 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.228+0000 7f22196a3700 1 -- 192.168.123.105:0/3377889193 shutdown_connections 2026-03-09T15:06:10.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.228+0000 7f22196a3700 1 --2- 192.168.123.105:0/3377889193 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f21fc077720 0x7f21fc079bd0 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.228+0000 7f22196a3700 1 --2- 192.168.123.105:0/3377889193 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f221410e9e0 0x7f221419c570 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.228+0000 7f22196a3700 1 --2- 192.168.123.105:0/3377889193 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f221419cab0 0x7f22141a0f20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.228+0000 7f22196a3700 1 -- 192.168.123.105:0/3377889193 >> 192.168.123.105:0/3377889193 conn(0x7f221406c6c0 msgr2=0x7f221406f540 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:10.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.229+0000 7f22196a3700 1 -- 192.168.123.105:0/3377889193 shutdown_connections 2026-03-09T15:06:10.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.229+0000 7f22196a3700 1 -- 192.168.123.105:0/3377889193 wait complete. 
2026-03-09T15:06:10.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.299+0000 7f514d113700 1 -- 192.168.123.105:0/2531783434 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5148068490 msgr2=0x7f5148068900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.299+0000 7f514d113700 1 --2- 192.168.123.105:0/2531783434 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5148068490 0x7f5148068900 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f5138009b00 tx=0x7f5138009e10 comp rx=0 tx=0).stop 2026-03-09T15:06:10.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.299+0000 7f514d113700 1 -- 192.168.123.105:0/2531783434 shutdown_connections 2026-03-09T15:06:10.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.299+0000 7f514d113700 1 --2- 192.168.123.105:0/2531783434 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f5148068490 0x7f5148068900 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.299+0000 7f514d113700 1 --2- 192.168.123.105:0/2531783434 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f51481066c0 0x7f5148106a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.299+0000 7f514d113700 1 -- 192.168.123.105:0/2531783434 >> 192.168.123.105:0/2531783434 conn(0x7f51480754a0 msgr2=0x7f51480758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:10.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.299+0000 7f514d113700 1 -- 192.168.123.105:0/2531783434 shutdown_connections 2026-03-09T15:06:10.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.299+0000 7f514d113700 1 -- 192.168.123.105:0/2531783434 
wait complete. 2026-03-09T15:06:10.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.300+0000 7f514d113700 1 Processor -- start 2026-03-09T15:06:10.301 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.300+0000 7f514d113700 1 -- start start 2026-03-09T15:06:10.301 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.300+0000 7f514d113700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5148068490 0x7f51481984b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.301 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.300+0000 7f514d113700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51481066c0 0x7f51481989f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.301 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.300+0000 7f514d113700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f51481990d0 con 0x7f51481066c0 2026-03-09T15:06:10.301 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.300+0000 7f514d113700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f514819ce60 con 0x7f5148068490 2026-03-09T15:06:10.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.300+0000 7f514659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51481066c0 0x7f51481989f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:10.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.300+0000 7f514659c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51481066c0 0x7f51481989f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.105:33856/0 (socket says 192.168.123.105:33856) 2026-03-09T15:06:10.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.300+0000 7f514659c700 1 -- 192.168.123.105:0/1856299902 learned_addr learned my addr 192.168.123.105:0/1856299902 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:10.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.301+0000 7f514659c700 1 -- 192.168.123.105:0/1856299902 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5148068490 msgr2=0x7f51481984b0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:06:10.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.301+0000 7f514659c700 1 --2- 192.168.123.105:0/1856299902 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5148068490 0x7f51481984b0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.301+0000 7f514659c700 1 -- 192.168.123.105:0/1856299902 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f51380097e0 con 0x7f51481066c0 2026-03-09T15:06:10.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.301+0000 7f514659c700 1 --2- 192.168.123.105:0/1856299902 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51481066c0 0x7f51481989f0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f51380048c0 tx=0x7f51380049a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:10.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.301+0000 7f513ffff700 1 -- 192.168.123.105:0/1856299902 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f513801d070 con 0x7f51481066c0 2026-03-09T15:06:10.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.301+0000 7f513ffff700 1 -- 
192.168.123.105:0/1856299902 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f513800bc50 con 0x7f51481066c0 2026-03-09T15:06:10.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.301+0000 7f513ffff700 1 -- 192.168.123.105:0/1856299902 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f513800f780 con 0x7f51481066c0 2026-03-09T15:06:10.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.301+0000 7f514d113700 1 -- 192.168.123.105:0/1856299902 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f514819d0e0 con 0x7f51481066c0 2026-03-09T15:06:10.303 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.302+0000 7f514d113700 1 -- 192.168.123.105:0/1856299902 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f514819d550 con 0x7f51481066c0 2026-03-09T15:06:10.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.303+0000 7f514d113700 1 -- 192.168.123.105:0/1856299902 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f514804ea50 con 0x7f51481066c0 2026-03-09T15:06:10.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.303+0000 7f513ffff700 1 -- 192.168.123.105:0/1856299902 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f5138022470 con 0x7f51481066c0 2026-03-09T15:06:10.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.304+0000 7f513ffff700 1 --2- 192.168.123.105:0/1856299902 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f5134077700 0x7f5134079bb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.304+0000 7f513ffff700 1 -- 
192.168.123.105:0/1856299902 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(75..75 src has 1..75) v4 ==== 6496+0+0 (secure 0 0 0) 0x7f513809b3c0 con 0x7f51481066c0 2026-03-09T15:06:10.307 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.306+0000 7f513ffff700 1 -- 192.168.123.105:0/1856299902 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f5138063b30 con 0x7f51481066c0 2026-03-09T15:06:10.307 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.306+0000 7f5146d9d700 1 --2- 192.168.123.105:0/1856299902 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f5134077700 0x7f5134079bb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:10.308 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.307+0000 7f5146d9d700 1 --2- 192.168.123.105:0/1856299902 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f5134077700 0x7f5134079bb0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f5130009730 tx=0x7f5130006cb0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:10.470 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.469+0000 7f514d113700 1 -- 192.168.123.105:0/1856299902 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f51481998d0 con 0x7f51481066c0 2026-03-09T15:06:10.473 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:10 vm05.local ceph-mon[116516]: osdmap e75: 6 total, 6 up, 6 in 2026-03-09T15:06:10.473 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:10 vm05.local ceph-mon[116516]: pgmap v111: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 
active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s; 452/264 objects degraded (171.212%); 0 B/s, 24 objects/s recovering 2026-03-09T15:06:10.473 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:10 vm05.local ceph-mon[116516]: from='client.44187 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:06:10.473 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:10 vm05.local ceph-mon[116516]: from='client.44191 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:06:10.474 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.474+0000 7f513ffff700 1 -- 192.168.123.105:0/1856299902 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+799 (secure 0 0 0) 0x7f5138063280 con 0x7f51481066c0 2026-03-09T15:06:10.474 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:06:10.474 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:06:10.474 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 1, 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5 
2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 4 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 5, 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 9 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:06:10.475 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:06:10.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.476+0000 7f514d113700 1 -- 192.168.123.105:0/1856299902 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f5134077700 msgr2=0x7f5134079bb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.476+0000 7f514d113700 1 --2- 192.168.123.105:0/1856299902 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f5134077700 0x7f5134079bb0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f5130009730 tx=0x7f5130006cb0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.476+0000 7f514d113700 1 -- 192.168.123.105:0/1856299902 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51481066c0 msgr2=0x7f51481989f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.476+0000 7f514d113700 1 --2- 
192.168.123.105:0/1856299902 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51481066c0 0x7f51481989f0 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f51380048c0 tx=0x7f51380049a0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.476+0000 7f514d113700 1 -- 192.168.123.105:0/1856299902 shutdown_connections 2026-03-09T15:06:10.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.476+0000 7f514d113700 1 --2- 192.168.123.105:0/1856299902 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f5134077700 0x7f5134079bb0 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.476+0000 7f514d113700 1 --2- 192.168.123.105:0/1856299902 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f5148068490 0x7f51481984b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.476+0000 7f514d113700 1 --2- 192.168.123.105:0/1856299902 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f51481066c0 0x7f51481989f0 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.477 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.477+0000 7f514d113700 1 -- 192.168.123.105:0/1856299902 >> 192.168.123.105:0/1856299902 conn(0x7f51480754a0 msgr2=0x7f51481002e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:10.478 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.477+0000 7f514d113700 1 -- 192.168.123.105:0/1856299902 shutdown_connections 2026-03-09T15:06:10.478 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.477+0000 7f514d113700 1 -- 192.168.123.105:0/1856299902 wait complete. 
2026-03-09T15:06:10.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.543+0000 7fd7c200f700 1 -- 192.168.123.105:0/3448098023 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc1015b0 msgr2=0x7fd7bc101980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.543+0000 7fd7c200f700 1 --2- 192.168.123.105:0/3448098023 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc1015b0 0x7fd7bc101980 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7fd7a4009b50 tx=0x7fd7a4009e60 comp rx=0 tx=0).stop 2026-03-09T15:06:10.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.543+0000 7fd7c200f700 1 -- 192.168.123.105:0/3448098023 shutdown_connections 2026-03-09T15:06:10.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.543+0000 7fd7c200f700 1 --2- 192.168.123.105:0/3448098023 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd7bc101ec0 0x7fd7bc10a590 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.543+0000 7fd7c200f700 1 --2- 192.168.123.105:0/3448098023 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc1015b0 0x7fd7bc101980 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.543+0000 7fd7c200f700 1 -- 192.168.123.105:0/3448098023 >> 192.168.123.105:0/3448098023 conn(0x7fd7bc0faf00 msgr2=0x7fd7bc0fd310 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:10.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.543+0000 7fd7c200f700 1 -- 192.168.123.105:0/3448098023 shutdown_connections 2026-03-09T15:06:10.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.543+0000 7fd7c200f700 1 -- 192.168.123.105:0/3448098023 
wait complete. 2026-03-09T15:06:10.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.544+0000 7fd7c200f700 1 Processor -- start 2026-03-09T15:06:10.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.544+0000 7fd7c200f700 1 -- start start 2026-03-09T15:06:10.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.544+0000 7fd7c200f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc1015b0 0x7fd7bc1983c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.544+0000 7fd7c200f700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd7bc101ec0 0x7fd7bc198900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.544+0000 7fd7c200f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd7bc198fe0 con 0x7fd7bc1015b0 2026-03-09T15:06:10.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.544+0000 7fd7c200f700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd7bc19cd70 con 0x7fd7bc101ec0 2026-03-09T15:06:10.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.544+0000 7fd7bb7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc1015b0 0x7fd7bc1983c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:10.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.544+0000 7fd7bb7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc1015b0 0x7fd7bc1983c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 
says I am v2:192.168.123.105:33880/0 (socket says 192.168.123.105:33880) 2026-03-09T15:06:10.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.544+0000 7fd7bb7fe700 1 -- 192.168.123.105:0/3401892235 learned_addr learned my addr 192.168.123.105:0/3401892235 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:10.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.545+0000 7fd7baffd700 1 --2- 192.168.123.105:0/3401892235 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd7bc101ec0 0x7fd7bc198900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:10.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.545+0000 7fd7bb7fe700 1 -- 192.168.123.105:0/3401892235 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd7bc101ec0 msgr2=0x7fd7bc198900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.545+0000 7fd7bb7fe700 1 --2- 192.168.123.105:0/3401892235 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd7bc101ec0 0x7fd7bc198900 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.545+0000 7fd7bb7fe700 1 -- 192.168.123.105:0/3401892235 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd7a40097e0 con 0x7fd7bc1015b0 2026-03-09T15:06:10.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.545+0000 7fd7bb7fe700 1 --2- 192.168.123.105:0/3401892235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc1015b0 0x7fd7bc1983c0 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fd7a4006010 tx=0x7fd7a4005e70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:06:10.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.545+0000 7fd7b8ff9700 1 -- 192.168.123.105:0/3401892235 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd7a401d070 con 0x7fd7bc1015b0 2026-03-09T15:06:10.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.545+0000 7fd7c200f700 1 -- 192.168.123.105:0/3401892235 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd7bc19cff0 con 0x7fd7bc1015b0 2026-03-09T15:06:10.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.546+0000 7fd7c200f700 1 -- 192.168.123.105:0/3401892235 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd7bc19d4e0 con 0x7fd7bc1015b0 2026-03-09T15:06:10.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.547+0000 7fd7b8ff9700 1 -- 192.168.123.105:0/3401892235 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fd7a400bd20 con 0x7fd7bc1015b0 2026-03-09T15:06:10.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.547+0000 7fd7b8ff9700 1 -- 192.168.123.105:0/3401892235 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd7a400f650 con 0x7fd7bc1015b0 2026-03-09T15:06:10.551 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.547+0000 7fd7b8ff9700 1 -- 192.168.123.105:0/3401892235 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fd7a400f8f0 con 0x7fd7bc1015b0 2026-03-09T15:06:10.551 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.547+0000 7fd7b8ff9700 1 --2- 192.168.123.105:0/3401892235 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd7a8077a10 0x7fd7a8079ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.551 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.547+0000 7fd7b8ff9700 1 -- 192.168.123.105:0/3401892235 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(75..75 src has 1..75) v4 ==== 6496+0+0 (secure 0 0 0) 0x7fd7a409c2b0 con 0x7fd7bc1015b0 2026-03-09T15:06:10.551 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.547+0000 7fd7c200f700 1 -- 192.168.123.105:0/3401892235 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd79c005320 con 0x7fd7bc1015b0 2026-03-09T15:06:10.551 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.547+0000 7fd7baffd700 1 --2- 192.168.123.105:0/3401892235 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd7a8077a10 0x7fd7a8079ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:10.551 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.549+0000 7fd7baffd700 1 --2- 192.168.123.105:0/3401892235 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd7a8077a10 0x7fd7a8079ec0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7fd7bc1999e0 tx=0x7fd7ac009450 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:10.551 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.550+0000 7fd7b8ff9700 1 -- 192.168.123.105:0/3401892235 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fd7a4064ad0 con 0x7fd7bc1015b0 2026-03-09T15:06:10.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:10 vm09.local ceph-mon[98742]: osdmap e75: 6 total, 6 up, 6 in 2026-03-09T15:06:10.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:10 vm09.local ceph-mon[98742]: pgmap v111: 65 pgs: 1 
active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s; 452/264 objects degraded (171.212%); 0 B/s, 24 objects/s recovering 2026-03-09T15:06:10.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:10 vm09.local ceph-mon[98742]: from='client.44187 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:06:10.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:10 vm09.local ceph-mon[98742]: from='client.44191 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.688+0000 7fd7c200f700 1 -- 192.168.123.105:0/3401892235 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fd79c006200 con 0x7fd7bc1015b0 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.689+0000 7fd7b8ff9700 1 -- 192.168.123.105:0/3401892235 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 11 v11) v1 ==== 76+0+1944 (secure 0 0 0) 0x7fd7a4027030 con 0x7fd7bc1015b0 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stdout:e11 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stdout:btime 1970-01-01T00:00:00:000000+0000 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:06:10.690 
INFO:teuthology.orchestra.run.vm05.stdout:legacy client fscid: 1 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stdout:Filesystem 'cephfs' (1) 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stdout:fs_name cephfs 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stdout:epoch 9 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stdout:created 2026-03-09T14:58:23.182447+0000 2026-03-09T15:06:10.690 INFO:teuthology.orchestra.run.vm05.stdout:modified 2026-03-09T14:58:30.215642+0000 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:tableserver 0 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:root 0 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:session_timeout 60 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:session_autoclose 300 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:max_file_size 1099511627776 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:max_xattr_size 65536 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:required_client_features {} 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:last_failure 0 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:last_failure_osd_epoch 0 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:max_mds 1 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:in 0 
2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:up {0=14502} 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:failed 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:damaged 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:stopped 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:data_pools [3] 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:metadata_pool 2 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:inline_data enabled 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:balancer 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:bal_rank_mask -1 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:standby_count_wanted 1 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:qdb_cluster leader: 0 members: 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.nrocqt{0:14502} state up:active seq 3 join_fscid=1 addr [v2:192.168.123.105:6826/2659122886,v1:192.168.123.105:6827/2659122886] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.ohmitn{0:14510} state up:standby-replay seq 2 join_fscid=1 addr [v2:192.168.123.109:6824/1947130211,v1:192.168.123.109:6825/1947130211] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:Standby daemons: 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:06:10.691 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.rrcyql{-1:14518} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:06:10.691 
INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.jrhwzz{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:06:10.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.692+0000 7fd7c200f700 1 -- 192.168.123.105:0/3401892235 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd7a8077a10 msgr2=0x7fd7a8079ec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.692+0000 7fd7c200f700 1 --2- 192.168.123.105:0/3401892235 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd7a8077a10 0x7fd7a8079ec0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7fd7bc1999e0 tx=0x7fd7ac009450 comp rx=0 tx=0).stop 2026-03-09T15:06:10.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.692+0000 7fd7c200f700 1 -- 192.168.123.105:0/3401892235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc1015b0 msgr2=0x7fd7bc1983c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.692+0000 7fd7c200f700 1 --2- 192.168.123.105:0/3401892235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc1015b0 0x7fd7bc1983c0 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7fd7a4006010 tx=0x7fd7a4005e70 comp rx=0 tx=0).stop 2026-03-09T15:06:10.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.692+0000 7fd7c200f700 1 -- 192.168.123.105:0/3401892235 shutdown_connections 2026-03-09T15:06:10.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.692+0000 7fd7c200f700 1 --2- 192.168.123.105:0/3401892235 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd7a8077a10 0x7fd7a8079ec0 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.692+0000 7fd7c200f700 1 --2- 192.168.123.105:0/3401892235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd7bc1015b0 0x7fd7bc1983c0 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.692+0000 7fd7c200f700 1 --2- 192.168.123.105:0/3401892235 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd7bc101ec0 0x7fd7bc198900 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.692+0000 7fd7c200f700 1 -- 192.168.123.105:0/3401892235 >> 192.168.123.105:0/3401892235 conn(0x7fd7bc0faf00 msgr2=0x7fd7bc0ffba0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:10.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.692+0000 7fd7c200f700 1 -- 192.168.123.105:0/3401892235 shutdown_connections 2026-03-09T15:06:10.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.693+0000 7fd7c200f700 1 -- 192.168.123.105:0/3401892235 wait complete. 
2026-03-09T15:06:10.694 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 11 2026-03-09T15:06:10.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.757+0000 7f40005bb700 1 -- 192.168.123.105:0/2623949516 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ff80ff4d0 msgr2=0x7f3ff80ff8a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.757+0000 7f40005bb700 1 --2- 192.168.123.105:0/2623949516 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ff80ff4d0 0x7f3ff80ff8a0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f3fe8009b50 tx=0x7f3fe8009e60 comp rx=0 tx=0).stop 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f40005bb700 1 -- 192.168.123.105:0/2623949516 shutdown_connections 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f40005bb700 1 --2- 192.168.123.105:0/2623949516 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3ff80ffde0 0x7f3ff81042c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f40005bb700 1 --2- 192.168.123.105:0/2623949516 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ff80ff4d0 0x7f3ff80ff8a0 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f40005bb700 1 -- 192.168.123.105:0/2623949516 >> 192.168.123.105:0/2623949516 conn(0x7f3ff80faf00 msgr2=0x7f3ff80fd310 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f40005bb700 1 -- 192.168.123.105:0/2623949516 shutdown_connections 2026-03-09T15:06:10.762 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f40005bb700 1 -- 192.168.123.105:0/2623949516 wait complete. 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f40005bb700 1 Processor -- start 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f40005bb700 1 -- start start 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f40005bb700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3ff80ff4d0 0x7f3ff810da00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f40005bb700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ff80ffde0 0x7f3ff8108a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f40005bb700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3ff8108fd0 con 0x7f3ff80ffde0 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f40005bb700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3ff8109140 con 0x7f3ff80ff4d0 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f3ffdb56700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ff80ffde0 0x7f3ff8108a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:10.762 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f3ffdb56700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ff80ffde0 0x7f3ff8108a00 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:33908/0 (socket says 192.168.123.105:33908) 2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.758+0000 7f3ffdb56700 1 -- 192.168.123.105:0/4186456786 learned_addr learned my addr 192.168.123.105:0/4186456786 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.759+0000 7f3ffdb56700 1 -- 192.168.123.105:0/4186456786 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3ff80ff4d0 msgr2=0x7f3ff810da00 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.759+0000 7f3ffdb56700 1 --2- 192.168.123.105:0/4186456786 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3ff80ff4d0 0x7f3ff810da00 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.759+0000 7f3ffdb56700 1 -- 192.168.123.105:0/4186456786 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3fe80097e0 con 0x7f3ff80ffde0 2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.759+0000 7f3ffdb56700 1 --2- 192.168.123.105:0/4186456786 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ff80ffde0 0x7f3ff8108a00 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f3ff400ed70 tx=0x7f3ff400c5b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.759+0000 7f3fef7fe700 1 -- 192.168.123.105:0/4186456786 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3ff400cd70 con 0x7f3ff80ffde0 
2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.759+0000 7f40005bb700 1 -- 192.168.123.105:0/4186456786 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3ff81093d0 con 0x7f3ff80ffde0 2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.759+0000 7f40005bb700 1 -- 192.168.123.105:0/4186456786 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3ff819e480 con 0x7f3ff80ffde0 2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.760+0000 7f3fef7fe700 1 -- 192.168.123.105:0/4186456786 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f3ff4010910 con 0x7f3ff80ffde0 2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.760+0000 7f3fef7fe700 1 -- 192.168.123.105:0/4186456786 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3ff4018980 con 0x7f3ff80ffde0 2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.760+0000 7f3fef7fe700 1 -- 192.168.123.105:0/4186456786 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f3ff4018ba0 con 0x7f3ff80ffde0 2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.760+0000 7f3fef7fe700 1 --2- 192.168.123.105:0/4186456786 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3fe4077a60 0x7f3fe4079f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.763 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.761+0000 7f3fef7fe700 1 -- 192.168.123.105:0/4186456786 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(75..75 src has 1..75) v4 ==== 6496+0+0 (secure 0 0 0) 0x7f3ff4014070 con 0x7f3ff80ffde0 2026-03-09T15:06:10.763 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.763+0000 7f3ffe357700 1 --2- 192.168.123.105:0/4186456786 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3fe4077a60 0x7f3fe4079f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:10.764 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.763+0000 7f40005bb700 1 -- 192.168.123.105:0/4186456786 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3fdc005320 con 0x7f3ff80ffde0 2026-03-09T15:06:10.767 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.766+0000 7f3ffe357700 1 --2- 192.168.123.105:0/4186456786 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3fe4077a60 0x7f3fe4079f10 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f3fe8000c00 tx=0x7f3fe8005fb0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:10.767 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.766+0000 7f3fef7fe700 1 -- 192.168.123.105:0/4186456786 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3ff4063f40 con 0x7f3ff80ffde0 2026-03-09T15:06:10.892 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.891+0000 7f40005bb700 1 -- 192.168.123.105:0/4186456786 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f3fdc000bf0 con 0x7f3fe4077a60 2026-03-09T15:06:10.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.892+0000 7f3fef7fe700 1 -- 192.168.123.105:0/4186456786 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+401 
(secure 0 0 0) 0x7f3fdc000bf0 con 0x7f3fe4077a60 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [ 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout: "mgr", 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout: "mon", 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout: "crash" 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout: ], 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "11/23 daemons upgraded", 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:06:10.894 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:06:10.896 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.895+0000 7f40005bb700 1 -- 192.168.123.105:0/4186456786 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3fe4077a60 msgr2=0x7f3fe4079f10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.896 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.895+0000 7f40005bb700 1 --2- 192.168.123.105:0/4186456786 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3fe4077a60 0x7f3fe4079f10 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f3fe8000c00 tx=0x7f3fe8005fb0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.896 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.895+0000 7f40005bb700 1 -- 192.168.123.105:0/4186456786 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ff80ffde0 msgr2=0x7f3ff8108a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.896 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.895+0000 7f40005bb700 1 --2- 192.168.123.105:0/4186456786 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ff80ffde0 0x7f3ff8108a00 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f3ff400ed70 tx=0x7f3ff400c5b0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.896 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.895+0000 7f40005bb700 1 -- 192.168.123.105:0/4186456786 shutdown_connections 2026-03-09T15:06:10.896 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.895+0000 7f40005bb700 1 --2- 192.168.123.105:0/4186456786 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3fe4077a60 0x7f3fe4079f10 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.895+0000 7f40005bb700 1 --2- 192.168.123.105:0/4186456786 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3ff80ff4d0 0x7f3ff810da00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.895+0000 7f40005bb700 1 --2- 192.168.123.105:0/4186456786 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3ff80ffde0 0x7f3ff8108a00 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.895+0000 7f40005bb700 1 -- 192.168.123.105:0/4186456786 >> 192.168.123.105:0/4186456786 conn(0x7f3ff80faf00 msgr2=0x7f3ff8110940 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T15:06:10.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.895+0000 7f40005bb700 1 -- 192.168.123.105:0/4186456786 shutdown_connections 2026-03-09T15:06:10.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.895+0000 7f40005bb700 1 -- 192.168.123.105:0/4186456786 wait complete. 2026-03-09T15:06:10.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.960+0000 7eff87469700 1 -- 192.168.123.105:0/2220399995 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff80101ee0 msgr2=0x7eff8010a5b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:10.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.960+0000 7eff87469700 1 --2- 192.168.123.105:0/2220399995 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff80101ee0 0x7eff8010a5b0 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7eff7c009b00 tx=0x7eff7c009e10 comp rx=0 tx=0).stop 2026-03-09T15:06:10.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.960+0000 7eff87469700 1 -- 192.168.123.105:0/2220399995 shutdown_connections 2026-03-09T15:06:10.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.960+0000 7eff87469700 1 --2- 192.168.123.105:0/2220399995 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff80101ee0 0x7eff8010a5b0 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.960+0000 7eff87469700 1 --2- 192.168.123.105:0/2220399995 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff801015d0 0x7eff801019a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.960+0000 7eff87469700 1 -- 192.168.123.105:0/2220399995 >> 192.168.123.105:0/2220399995 conn(0x7eff800faf00 msgr2=0x7eff800fd310 unknown :-1 
s=STATE_NONE l=0).mark_down 2026-03-09T15:06:10.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.960+0000 7eff87469700 1 -- 192.168.123.105:0/2220399995 shutdown_connections 2026-03-09T15:06:10.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.961+0000 7eff87469700 1 -- 192.168.123.105:0/2220399995 wait complete. 2026-03-09T15:06:10.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.961+0000 7eff87469700 1 Processor -- start 2026-03-09T15:06:10.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.961+0000 7eff87469700 1 -- start start 2026-03-09T15:06:10.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.961+0000 7eff87469700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff801015d0 0x7eff801961e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.962+0000 7eff87469700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff80101ee0 0x7eff80196720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.962+0000 7eff87469700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7eff80196e00 con 0x7eff80101ee0 2026-03-09T15:06:10.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.962+0000 7eff87469700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7eff8019ab90 con 0x7eff801015d0 2026-03-09T15:06:10.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.962+0000 7eff84a04700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff80101ee0 0x7eff80196720 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-09T15:06:10.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.962+0000 7eff84a04700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff80101ee0 0x7eff80196720 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:33924/0 (socket says 192.168.123.105:33924) 2026-03-09T15:06:10.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.962+0000 7eff84a04700 1 -- 192.168.123.105:0/2769707028 learned_addr learned my addr 192.168.123.105:0/2769707028 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:10.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.962+0000 7eff84a04700 1 -- 192.168.123.105:0/2769707028 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff801015d0 msgr2=0x7eff801961e0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:06:10.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.962+0000 7eff85205700 1 --2- 192.168.123.105:0/2769707028 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff801015d0 0x7eff801961e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:10.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.962+0000 7eff84a04700 1 --2- 192.168.123.105:0/2769707028 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff801015d0 0x7eff801961e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:10.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.962+0000 7eff84a04700 1 -- 192.168.123.105:0/2769707028 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7eff7c0097e0 con 0x7eff80101ee0 2026-03-09T15:06:10.963 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.962+0000 7eff85205700 1 --2- 192.168.123.105:0/2769707028 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff801015d0 0x7eff801961e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:06:10.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.962+0000 7eff84a04700 1 --2- 192.168.123.105:0/2769707028 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff80101ee0 0x7eff80196720 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7eff7c005b40 tx=0x7eff7c00be30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:10.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.963+0000 7eff767fc700 1 -- 192.168.123.105:0/2769707028 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7eff7c01d070 con 0x7eff80101ee0 2026-03-09T15:06:10.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.963+0000 7eff767fc700 1 -- 192.168.123.105:0/2769707028 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7eff7c00f460 con 0x7eff80101ee0 2026-03-09T15:06:10.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.963+0000 7eff767fc700 1 -- 192.168.123.105:0/2769707028 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7eff7c021600 con 0x7eff80101ee0 2026-03-09T15:06:10.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.963+0000 7eff87469700 1 -- 192.168.123.105:0/2769707028 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7eff8019ae10 con 0x7eff80101ee0 2026-03-09T15:06:10.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.963+0000 7eff87469700 1 -- 192.168.123.105:0/2769707028 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
-- mon_subscribe({osdmap=0}) v3 -- 0x7eff8019b300 con 0x7eff80101ee0 2026-03-09T15:06:10.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.964+0000 7eff87469700 1 -- 192.168.123.105:0/2769707028 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7eff80107cb0 con 0x7eff80101ee0 2026-03-09T15:06:10.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.965+0000 7eff767fc700 1 -- 192.168.123.105:0/2769707028 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7eff7c00f5d0 con 0x7eff80101ee0 2026-03-09T15:06:10.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.965+0000 7eff767fc700 1 --2- 192.168.123.105:0/2769707028 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7eff6c0778c0 0x7eff6c079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:10.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.965+0000 7eff767fc700 1 -- 192.168.123.105:0/2769707028 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(75..75 src has 1..75) v4 ==== 6496+0+0 (secure 0 0 0) 0x7eff7c09aa30 con 0x7eff80101ee0 2026-03-09T15:06:10.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.967+0000 7eff85205700 1 --2- 192.168.123.105:0/2769707028 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7eff6c0778c0 0x7eff6c079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:10.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.967+0000 7eff767fc700 1 -- 192.168.123.105:0/2769707028 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7eff7c063120 con 0x7eff80101ee0 2026-03-09T15:06:10.969 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:10.968+0000 7eff85205700 1 --2- 192.168.123.105:0/2769707028 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7eff6c0778c0 0x7eff6c079d70 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7eff70009dd0 tx=0x7eff70009450 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:11.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.131+0000 7eff87469700 1 -- 192.168.123.105:0/2769707028 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7eff80197540 con 0x7eff80101ee0 2026-03-09T15:06:11.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.132+0000 7eff767fc700 1 -- 192.168.123.105:0/2769707028 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+533 (secure 0 0 0) 0x7eff7c00f880 con 0x7eff80101ee0 2026-03-09T15:06:11.133 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_WARN 1 filesystem with deprecated feature inline_data; Degraded data redundancy: 452/264 objects degraded (171.212%), 2 pgs degraded 2026-03-09T15:06:11.133 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data 2026-03-09T15:06:11.133 INFO:teuthology.orchestra.run.vm05.stdout: fs cephfs has deprecated feature inline_data enabled. 
2026-03-09T15:06:11.133 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 452/264 objects degraded (171.212%), 2 pgs degraded 2026-03-09T15:06:11.133 INFO:teuthology.orchestra.run.vm05.stdout: pg 3.4 is active+recovery_wait+undersized+degraded+remapped, acting [2,3] 2026-03-09T15:06:11.133 INFO:teuthology.orchestra.run.vm05.stdout: pg 3.d is active+recovery_wait+undersized+degraded+remapped, acting [2,4] 2026-03-09T15:06:11.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.134+0000 7eff87469700 1 -- 192.168.123.105:0/2769707028 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7eff6c0778c0 msgr2=0x7eff6c079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:11.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.134+0000 7eff87469700 1 --2- 192.168.123.105:0/2769707028 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7eff6c0778c0 0x7eff6c079d70 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7eff70009dd0 tx=0x7eff70009450 comp rx=0 tx=0).stop 2026-03-09T15:06:11.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.134+0000 7eff87469700 1 -- 192.168.123.105:0/2769707028 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff80101ee0 msgr2=0x7eff80196720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:11.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.134+0000 7eff87469700 1 --2- 192.168.123.105:0/2769707028 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff80101ee0 0x7eff80196720 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7eff7c005b40 tx=0x7eff7c00be30 comp rx=0 tx=0).stop 2026-03-09T15:06:11.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.134+0000 7eff87469700 1 -- 192.168.123.105:0/2769707028 shutdown_connections 2026-03-09T15:06:11.135 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.134+0000 7eff87469700 1 --2- 192.168.123.105:0/2769707028 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7eff6c0778c0 0x7eff6c079d70 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:11.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.134+0000 7eff87469700 1 --2- 192.168.123.105:0/2769707028 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7eff801015d0 0x7eff801961e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:11.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.134+0000 7eff87469700 1 --2- 192.168.123.105:0/2769707028 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7eff80101ee0 0x7eff80196720 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:11.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.135+0000 7eff87469700 1 -- 192.168.123.105:0/2769707028 >> 192.168.123.105:0/2769707028 conn(0x7eff800faf00 msgr2=0x7eff80104df0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:11.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.135+0000 7eff87469700 1 -- 192.168.123.105:0/2769707028 shutdown_connections 2026-03-09T15:06:11.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:11.135+0000 7eff87469700 1 -- 192.168.123.105:0/2769707028 wait complete. 2026-03-09T15:06:11.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:11 vm05.local ceph-mon[116516]: from='client.44193 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:06:11.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:11 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/1856299902' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:11.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:11 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3401892235' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T15:06:11.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:11 vm05.local ceph-mon[116516]: from='client.34258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:06:11.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:11 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/2769707028' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:06:11.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:11 vm09.local ceph-mon[98742]: from='client.44193 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:06:11.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:11 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/1856299902' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:11.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:11 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3401892235' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T15:06:11.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:11 vm09.local ceph-mon[98742]: from='client.34258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:06:11.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:11 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/2769707028' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:06:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:12 vm05.local ceph-mon[116516]: osdmap e76: 6 total, 6 up, 6 in 2026-03-09T15:06:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:12 vm05.local ceph-mon[116516]: pgmap v113: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 0 B/s, 16 objects/s recovering 2026-03-09T15:06:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:12 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:06:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:12 vm09.local ceph-mon[98742]: osdmap e76: 6 total, 6 up, 6 in 2026-03-09T15:06:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:12 vm09.local ceph-mon[98742]: pgmap v113: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 61 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 0 B/s, 16 objects/s recovering 2026-03-09T15:06:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:12 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:06:13.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:13 vm05.local ceph-mon[116516]: osdmap e77: 6 total, 6 up, 6 in 2026-03-09T15:06:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:13 vm09.local ceph-mon[98742]: osdmap e77: 6 total, 6 up, 6 in 
2026-03-09T15:06:14.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:14 vm05.local ceph-mon[116516]: pgmap v115: 65 pgs: 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 62 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 986 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 0 B/s, 0 objects/s recovering 2026-03-09T15:06:14.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:14 vm09.local ceph-mon[98742]: pgmap v115: 65 pgs: 1 active+recovering+remapped, 2 active+recovery_wait+undersized+degraded+remapped, 62 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 986 B/s rd, 1 op/s; 452/264 objects degraded (171.212%); 0 B/s, 0 objects/s recovering 2026-03-09T15:06:15.754 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:15 vm09.local ceph-mon[98742]: Health check update: Degraded data redundancy: 234/264 objects degraded (88.636%), 1 pg degraded (PG_DEGRADED) 2026-03-09T15:06:15.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:15 vm05.local ceph-mon[116516]: Health check update: Degraded data redundancy: 234/264 objects degraded (88.636%), 1 pg degraded (PG_DEGRADED) 2026-03-09T15:06:16.558 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:16 vm09.local systemd[1]: Stopping Ceph osd.5 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 
2026-03-09T15:06:16.559 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:16 vm09.local ceph-mon[98742]: pgmap v116: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovery_wait+undersized+degraded+remapped, 63 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 827 B/s rd, 2 op/s; 234/264 objects degraded (88.636%); 0 B/s, 13 objects/s recovering 2026-03-09T15:06:16.559 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T15:06:16.559 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:16 vm09.local ceph-mon[98742]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T15:06:16.559 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:16 vm09.local ceph-mon[98742]: Upgrade: osd.5 is safe to restart 2026-03-09T15:06:16.559 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:16 vm09.local ceph-mon[98742]: Upgrade: Updating osd.5 2026-03-09T15:06:16.559 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:16.559 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T15:06:16.559 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:16.559 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:16 vm09.local ceph-mon[98742]: Deploying daemon osd.5 on vm09 2026-03-09T15:06:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:06:16 vm05.local ceph-mon[116516]: pgmap v116: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovery_wait+undersized+degraded+remapped, 63 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 827 B/s rd, 2 op/s; 234/264 objects degraded (88.636%); 0 B/s, 13 objects/s recovering 2026-03-09T15:06:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T15:06:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:16 vm05.local ceph-mon[116516]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T15:06:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:16 vm05.local ceph-mon[116516]: Upgrade: osd.5 is safe to restart 2026-03-09T15:06:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:16 vm05.local ceph-mon[116516]: Upgrade: Updating osd.5 2026-03-09T15:06:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T15:06:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:16 vm05.local ceph-mon[116516]: Deploying daemon osd.5 on vm09 2026-03-09T15:06:16.866 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:16 vm09.local 
ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[76422]: 2026-03-09T15:06:16.623+0000 7f8bf1f25700 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T15:06:16.866 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:16 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[76422]: 2026-03-09T15:06:16.623+0000 7f8bf1f25700 -1 osd.5 77 *** Got signal Terminated *** 2026-03-09T15:06:16.866 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:16 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[76422]: 2026-03-09T15:06:16.623+0000 7f8bf1f25700 -1 osd.5 77 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T15:06:17.752 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local podman[114782]: 2026-03-09 15:06:17.581442304 +0000 UTC m=+0.971490925 container died 85fde149396e8e30059952aad804b4b594064de6813f8dcdd8abe7fece488cb6 (image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5, GIT_CLEAN=True, RELEASE=HEAD, ceph=True, maintainer=Guillaume Abrioux , org.label-schema.build-date=20231212, org.label-schema.name=CentOS Stream 8 Base Image, CEPH_POINT_RELEASE=-18.2.0, GIT_BRANCH=HEAD, io.buildah.version=1.29.1, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, GIT_REPO=https://github.com/ceph/ceph-container.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) 2026-03-09T15:06:17.752 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local podman[114782]: 2026-03-09 15:06:17.600630615 +0000 UTC m=+0.990679226 container remove 85fde149396e8e30059952aad804b4b594064de6813f8dcdd8abe7fece488cb6 (image=quay.io/ceph/ceph@sha256:1793ff3af6ae74527c86e1a0b22401e9c42dc08d0ebb8379653be07db17d0007, 
name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5, GIT_REPO=https://github.com/ceph/ceph-container.git, org.label-schema.schema-version=1.0, GIT_BRANCH=HEAD, GIT_CLEAN=True, RELEASE=HEAD, maintainer=Guillaume Abrioux , org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 8 Base Image, CEPH_POINT_RELEASE=-18.2.0, GIT_COMMIT=0396eef90bef641b676c164ec7a3876f45010308, ceph=True, io.buildah.version=1.29.1, org.label-schema.build-date=20231212, org.label-schema.vendor=CentOS) 2026-03-09T15:06:17.752 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local bash[114782]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5 2026-03-09T15:06:17.752 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local podman[114851]: 2026-03-09 15:06:17.724890927 +0000 UTC m=+0.015102114 container create 4d0e87206b8dd56d77096fc489dd8e1f321a5a3923eb87c6efd9702d88226694 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3) 2026-03-09T15:06:17.752 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:17 vm09.local ceph-mon[98742]: osd.5 marked itself down and dead 2026-03-09T15:06:17.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:17 vm05.local ceph-mon[116516]: osd.5 marked itself down and dead 
2026-03-09T15:06:18.060 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local podman[114851]: 2026-03-09 15:06:17.764633827 +0000 UTC m=+0.054845014 container init 4d0e87206b8dd56d77096fc489dd8e1f321a5a3923eb87c6efd9702d88226694 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-deactivate, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T15:06:18.060 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local podman[114851]: 2026-03-09 15:06:17.767076408 +0000 UTC m=+0.057287595 container start 4d0e87206b8dd56d77096fc489dd8e1f321a5a3923eb87c6efd9702d88226694 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-deactivate, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , ceph=True) 2026-03-09T15:06:18.060 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local podman[114851]: 2026-03-09 15:06:17.77159265 +0000 UTC m=+0.061803837 container attach 4d0e87206b8dd56d77096fc489dd8e1f321a5a3923eb87c6efd9702d88226694 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-deactivate, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True) 2026-03-09T15:06:18.060 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local podman[114851]: 2026-03-09 15:06:17.718620774 +0000 UTC m=+0.008831970 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:06:18.060 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local podman[114851]: 2026-03-09 15:06:17.893271922 +0000 UTC m=+0.183483109 container died 4d0e87206b8dd56d77096fc489dd8e1f321a5a3923eb87c6efd9702d88226694 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph 
Release Team , org.label-schema.build-date=20260223, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS) 2026-03-09T15:06:18.060 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local podman[114851]: 2026-03-09 15:06:17.910083765 +0000 UTC m=+0.200294952 container remove 4d0e87206b8dd56d77096fc489dd8e1f321a5a3923eb87c6efd9702d88226694 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-deactivate, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T15:06:18.060 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.5.service: Deactivated successfully. 2026-03-09T15:06:18.060 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.5.service: Unit process 114862 (conmon) remains running after unit stopped. 
2026-03-09T15:06:18.060 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local systemd[1]: Stopped Ceph osd.5 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T15:06:18.060 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:17 vm09.local systemd[1]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.5.service: Consumed 38.532s CPU time, 834.7M memory peak. 2026-03-09T15:06:18.367 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local systemd[1]: Starting Ceph osd.5 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 2026-03-09T15:06:18.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local podman[114955]: 2026-03-09 15:06:18.174160856 +0000 UTC m=+0.015781817 container create f9eb3f3da8754b8830a51f7a775597676db5a83641aa40156904d708b5aae374 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS) 2026-03-09T15:06:18.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local podman[114955]: 2026-03-09 15:06:18.212305453 +0000 UTC m=+0.053926405 container init f9eb3f3da8754b8830a51f7a775597676db5a83641aa40156904d708b5aae374 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate, 
CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True) 2026-03-09T15:06:18.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local podman[114955]: 2026-03-09 15:06:18.215312501 +0000 UTC m=+0.056933462 container start f9eb3f3da8754b8830a51f7a775597676db5a83641aa40156904d708b5aae374 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid) 2026-03-09T15:06:18.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local podman[114955]: 2026-03-09 15:06:18.218774911 +0000 UTC m=+0.060395872 container attach f9eb3f3da8754b8830a51f7a775597676db5a83641aa40156904d708b5aae374 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20260223, ceph=True, OSD_FLAVOR=default) 2026-03-09T15:06:18.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local podman[114955]: 2026-03-09 15:06:18.167928773 +0000 UTC m=+0.009549743 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:06:18.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate[114967]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:06:18.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local bash[114955]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:06:18.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate[114967]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:06:18.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local bash[114955]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:06:18.774 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:18 vm09.local ceph-mon[98742]: pgmap 
v117: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovery_wait+undersized+degraded+remapped, 63 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s; 234/264 objects degraded (88.636%); 0 B/s, 10 objects/s recovering 2026-03-09T15:06:18.774 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:18 vm09.local ceph-mon[98742]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T15:06:18.774 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:18 vm09.local ceph-mon[98742]: Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED) 2026-03-09T15:06:18.774 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:18 vm09.local ceph-mon[98742]: osdmap e78: 6 total, 5 up, 6 in 2026-03-09T15:06:18.775 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate[114967]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T15:06:18.775 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local bash[114955]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T15:06:18.775 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate[114967]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:06:18.775 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local bash[114955]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:06:18.775 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate[114967]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:06:18.775 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local bash[114955]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T15:06:18.775 
INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate[114967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-09T15:06:18.775 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local bash[114955]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-09T15:06:18.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:18 vm05.local ceph-mon[116516]: pgmap v117: 65 pgs: 1 active+recovering+undersized+remapped, 1 active+recovery_wait+undersized+degraded+remapped, 63 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s; 234/264 objects degraded (88.636%); 0 B/s, 10 objects/s recovering 2026-03-09T15:06:18.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:18 vm05.local ceph-mon[116516]: Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T15:06:18.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:18 vm05.local ceph-mon[116516]: Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED) 2026-03-09T15:06:18.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:18 vm05.local ceph-mon[116516]: osdmap e78: 6 total, 5 up, 6 in 2026-03-09T15:06:19.058 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate[114967]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-e7882b97-817a-4353-ab08-884dd4b337e8/osd-block-0ff88b04-4b34-4df6-bf77-2132a823172e --path /var/lib/ceph/osd/ceph-5 --no-mon-config 2026-03-09T15:06:19.058 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:18 vm09.local bash[114955]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-e7882b97-817a-4353-ab08-884dd4b337e8/osd-block-0ff88b04-4b34-4df6-bf77-2132a823172e --path /var/lib/ceph/osd/ceph-5 --no-mon-config 
2026-03-09T15:06:19.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate[114967]: Running command: /usr/bin/ln -snf /dev/ceph-e7882b97-817a-4353-ab08-884dd4b337e8/osd-block-0ff88b04-4b34-4df6-bf77-2132a823172e /var/lib/ceph/osd/ceph-5/block 2026-03-09T15:06:19.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local bash[114955]: Running command: /usr/bin/ln -snf /dev/ceph-e7882b97-817a-4353-ab08-884dd4b337e8/osd-block-0ff88b04-4b34-4df6-bf77-2132a823172e /var/lib/ceph/osd/ceph-5/block 2026-03-09T15:06:19.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate[114967]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block 2026-03-09T15:06:19.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local bash[114955]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block 2026-03-09T15:06:19.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate[114967]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-09T15:06:19.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local bash[114955]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-09T15:06:19.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate[114967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-09T15:06:19.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local bash[114955]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-09T15:06:19.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate[114967]: --> ceph-volume lvm activate successful for osd ID: 5 2026-03-09T15:06:19.368 
INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local bash[114955]: --> ceph-volume lvm activate successful for osd ID: 5 2026-03-09T15:06:19.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local podman[114955]: 2026-03-09 15:06:19.08510633 +0000 UTC m=+0.926727291 container died f9eb3f3da8754b8830a51f7a775597676db5a83641aa40156904d708b5aae374 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T15:06:19.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local podman[114955]: 2026-03-09 15:06:19.102321709 +0000 UTC m=+0.943942659 container remove f9eb3f3da8754b8830a51f7a775597676db5a83641aa40156904d708b5aae374 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-activate, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, 
org.label-schema.build-date=20260223, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True) 2026-03-09T15:06:19.368 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local podman[115223]: 2026-03-09 15:06:19.190688907 +0000 UTC m=+0.017146290 container create 15ec92bc2880e82bec54f28b42ab92cf5fdd36cd3ff169a2978afa8045e00427 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20260223) 2026-03-09T15:06:19.369 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local podman[115223]: 2026-03-09 15:06:19.225625532 +0000 UTC m=+0.052082915 container init 15ec92bc2880e82bec54f28b42ab92cf5fdd36cd3ff169a2978afa8045e00427 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , 
OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T15:06:19.369 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local podman[115223]: 2026-03-09 15:06:19.231168465 +0000 UTC m=+0.057625848 container start 15ec92bc2880e82bec54f28b42ab92cf5fdd36cd3ff169a2978afa8045e00427 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20260223) 2026-03-09T15:06:19.369 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local bash[115223]: 15ec92bc2880e82bec54f28b42ab92cf5fdd36cd3ff169a2978afa8045e00427 2026-03-09T15:06:19.369 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local podman[115223]: 2026-03-09 15:06:19.184813902 +0000 UTC m=+0.011271276 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:06:19.369 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local systemd[1]: Started Ceph osd.5 for 
d952ca1a-1bc7-11f1-a184-f9dcb7ee7000. 2026-03-09T15:06:19.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:19 vm05.local ceph-mon[116516]: osdmap e79: 6 total, 5 up, 6 in 2026-03-09T15:06:19.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:19 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:19.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:19 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:19.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:19 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:19.812 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:19 vm09.local ceph-mon[98742]: osdmap e79: 6 total, 5 up, 6 in 2026-03-09T15:06:19.812 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:19 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:19.812 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:19 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:19.812 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:19 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:19.812 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:19 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:06:19.810+0000 7f1d29463740 -1 Falling back to public interface 2026-03-09T15:06:20.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:20 vm05.local ceph-mon[116516]: pgmap v120: 65 pgs: 10 peering, 8 stale+active+clean, 1 active+recovering+undersized+remapped, 1 
active+recovery_wait+undersized+degraded+remapped, 45 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 714 B/s rd, 1 op/s; 234/264 objects degraded (88.636%); 0 B/s, 11 objects/s recovering 2026-03-09T15:06:20.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:20 vm05.local ceph-mon[116516]: osdmap e80: 6 total, 5 up, 6 in 2026-03-09T15:06:20.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:20 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:20.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:20 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:20.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:20 vm05.local ceph-mon[116516]: Health check failed: Reduced data availability: 5 pgs peering (PG_AVAILABILITY) 2026-03-09T15:06:20.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:20 vm09.local ceph-mon[98742]: pgmap v120: 65 pgs: 10 peering, 8 stale+active+clean, 1 active+recovering+undersized+remapped, 1 active+recovery_wait+undersized+degraded+remapped, 45 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 714 B/s rd, 1 op/s; 234/264 objects degraded (88.636%); 0 B/s, 11 objects/s recovering 2026-03-09T15:06:20.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:20 vm09.local ceph-mon[98742]: osdmap e80: 6 total, 5 up, 6 in 2026-03-09T15:06:20.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:20 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:20.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:20 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:20.771 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:20 vm09.local ceph-mon[98742]: Health check failed: Reduced data availability: 5 pgs peering 
(PG_AVAILABILITY) 2026-03-09T15:06:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:06:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "osd.2"}]: dispatch 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T15:06:21.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:21 vm09.local 
ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T15:06:22.055 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' 
entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T15:06:22.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:21 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T15:06:22.825 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:22 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:06:22.683+0000 7f1d29463740 -1 osd.5 0 read_superblock omap replica is missing. 2026-03-09T15:06:22.825 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:22 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:06:22.825+0000 7f1d29463740 -1 osd.5 77 log_to_monitors true 2026-03-09T15:06:22.825 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:22 vm09.local ceph-mon[98742]: pgmap v122: 65 pgs: 1 active+clean+remapped, 7 active+undersized, 10 peering, 3 stale+active+clean, 1 active+recovering+undersized+remapped, 3 active+undersized+degraded, 40 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 7/264 objects degraded (2.652%); 0 B/s, 7 objects/s recovering 2026-03-09T15:06:22.826 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:22 vm09.local ceph-mon[98742]: Upgrade: Setting container_image for all osd 2026-03-09T15:06:22.826 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:22 vm09.local ceph-mon[98742]: Upgrade: Setting require_osd_release to 19 squid 2026-03-09T15:06:22.826 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:22 vm09.local ceph-mon[98742]: Health check update: Degraded data redundancy: 7/264 objects degraded (2.652%), 3 pgs degraded (PG_DEGRADED) 2026-03-09T15:06:22.826 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:22 vm09.local ceph-mon[98742]: Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but 
require_osd_release < squid) 2026-03-09T15:06:22.826 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:22 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T15:06:22.826 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:22 vm09.local ceph-mon[98742]: osdmap e81: 6 total, 5 up, 6 in 2026-03-09T15:06:22.826 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:22 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:22.826 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:22 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "0"}]: dispatch 2026-03-09T15:06:23.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:22 vm05.local ceph-mon[116516]: pgmap v122: 65 pgs: 1 active+clean+remapped, 7 active+undersized, 10 peering, 3 stale+active+clean, 1 active+recovering+undersized+remapped, 3 active+undersized+degraded, 40 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s; 7/264 objects degraded (2.652%); 0 B/s, 7 objects/s recovering 2026-03-09T15:06:23.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:22 vm05.local ceph-mon[116516]: Upgrade: Setting container_image for all osd 2026-03-09T15:06:23.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:22 vm05.local ceph-mon[116516]: Upgrade: Setting require_osd_release to 19 squid 2026-03-09T15:06:23.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:22 vm05.local ceph-mon[116516]: Health check update: Degraded data redundancy: 7/264 objects degraded (2.652%), 3 pgs degraded (PG_DEGRADED) 2026-03-09T15:06:23.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:22 vm05.local ceph-mon[116516]: Health check cleared: 
OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T15:06:23.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:22 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T15:06:23.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:22 vm05.local ceph-mon[116516]: osdmap e81: 6 total, 5 up, 6 in 2026-03-09T15:06:23.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:22 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:23.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:22 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "0"}]: dispatch 2026-03-09T15:06:23.825 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:23 vm05.local ceph-mon[116516]: Upgrade: Disabling standby-replay for filesystem cephfs 2026-03-09T15:06:23.825 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:23 vm05.local ceph-mon[116516]: osdmap e82: 6 total, 5 up, 6 in 2026-03-09T15:06:23.825 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:23 vm05.local ceph-mon[116516]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:06:23.825 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:23 vm05.local ceph-mon[116516]: from='osd.5 [v2:192.168.123.109:6816/3541735027,v1:192.168.123.109:6817/3541735027]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:06:23.825 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:23 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "fs 
set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "0"}]': finished 2026-03-09T15:06:23.825 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:23 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 2 up:standby 2026-03-09T15:06:23.866 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:06:23 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:06:23.601+0000 7f1d211fd640 -1 osd.5 77 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T15:06:23.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:23 vm09.local ceph-mon[98742]: Upgrade: Disabling standby-replay for filesystem cephfs 2026-03-09T15:06:23.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:23 vm09.local ceph-mon[98742]: osdmap e82: 6 total, 5 up, 6 in 2026-03-09T15:06:23.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:23 vm09.local ceph-mon[98742]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:06:23.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:23 vm09.local ceph-mon[98742]: from='osd.5 [v2:192.168.123.109:6816/3541735027,v1:192.168.123.109:6817/3541735027]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T15:06:23.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:23 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "0"}]': finished 2026-03-09T15:06:23.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:23 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 2 up:standby 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05[116512]: 
2026-03-09T15:06:24.404+0000 7fc0654ee640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: pgmap v125: 65 pgs: 1 active+clean+remapped, 15 active+undersized, 1 active+recovering+undersized+remapped, 14 active+undersized+degraded, 34 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 44/264 objects degraded (16.667%) 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 5 pgs peering) 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: osdmap e83: 6 total, 5 up, 6 in 2026-03-09T15:06:24.806 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='osd.5 [v2:192.168.123.109:6816/3541735027,v1:192.168.123.109:6817/3541735027]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: Upgrade: Updating mds.cephfs.vm05.nrocqt 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.nrocqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: Deploying daemon mds.cephfs.vm05.nrocqt on vm05 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: osd.5 [v2:192.168.123.109:6816/3541735027,v1:192.168.123.109:6817/3541735027] boot 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: osdmap e84: 6 total, 6 up, 6 in 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: mds.? 
[v2:192.168.123.109:6824/4257546649,v1:192.168.123.109:6825/4257546649] up:boot 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: Standby daemon mds.cephfs.vm05.rrcyql assigned to filesystem cephfs as rank 0 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:06:24.806 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:24 vm05.local ceph-mon[116516]: fsmap cephfs:1/1 {0=cephfs.vm05.rrcyql=up:replay} 2 up:standby 2026-03-09T15:06:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: pgmap v125: 65 pgs: 1 active+clean+remapped, 15 active+undersized, 1 active+recovering+undersized+remapped, 14 active+undersized+degraded, 34 active+clean; 257 MiB data, 1.3 GiB used, 119 GiB / 120 GiB avail; 44/264 objects degraded (16.667%) 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 5 pgs peering) 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: osdmap e83: 6 total, 5 up, 6 in 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='osd.5 [v2:192.168.123.109:6816/3541735027,v1:192.168.123.109:6817/3541735027]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 
15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: Upgrade: Updating mds.cephfs.vm05.nrocqt 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.nrocqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: Deploying daemon mds.cephfs.vm05.nrocqt on vm05 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: 
Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: osd.5 [v2:192.168.123.109:6816/3541735027,v1:192.168.123.109:6817/3541735027] boot 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: osdmap e84: 6 total, 6 up, 6 in 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: mds.? [v2:192.168.123.109:6824/4257546649,v1:192.168.123.109:6825/4257546649] up:boot 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: Standby daemon mds.cephfs.vm05.rrcyql assigned to filesystem cephfs as rank 0 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:24 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T15:06:24.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 
15:06:24 vm09.local ceph-mon[98742]: fsmap cephfs:1/1 {0=cephfs.vm05.rrcyql=up:replay} 2 up:standby 2026-03-09T15:06:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:26 vm05.local ceph-mon[116516]: pgmap v128: 65 pgs: 1 remapped+peering, 1 active+recovering+undersized+remapped, 5 peering, 12 active+undersized, 12 active+undersized+degraded, 34 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 38/264 objects degraded (14.394%) 2026-03-09T15:06:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:26 vm05.local ceph-mon[116516]: osdmap e85: 6 total, 6 up, 6 in 2026-03-09T15:06:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:26 vm09.local ceph-mon[98742]: pgmap v128: 65 pgs: 1 remapped+peering, 1 active+recovering+undersized+remapped, 5 peering, 12 active+undersized, 12 active+undersized+degraded, 34 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 38/264 objects degraded (14.394%) 2026-03-09T15:06:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:26 vm09.local ceph-mon[98742]: osdmap e85: 6 total, 6 up, 6 in 2026-03-09T15:06:27.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:27 vm05.local ceph-mon[116516]: osdmap e86: 6 total, 6 up, 6 in 2026-03-09T15:06:27.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:27 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:27.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:27 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:06:27.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:27 vm09.local ceph-mon[98742]: osdmap e86: 6 total, 6 up, 6 in 2026-03-09T15:06:27.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:27 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 
2026-03-09T15:06:27.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:27 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:06:28.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:28 vm05.local ceph-mon[116516]: pgmap v131: 65 pgs: 1 remapped+peering, 1 active+recovering+undersized+remapped, 5 peering, 11 active+undersized, 7 active+undersized+degraded, 40 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 10 MiB/s rd, 5 op/s; 18/264 objects degraded (6.818%); 0 B/s, 8 objects/s recovering 2026-03-09T15:06:28.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:28 vm05.local ceph-mon[116516]: Health check update: Degraded data redundancy: 18/264 objects degraded (6.818%), 7 pgs degraded (PG_DEGRADED) 2026-03-09T15:06:28.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:28 vm09.local ceph-mon[98742]: pgmap v131: 65 pgs: 1 remapped+peering, 1 active+recovering+undersized+remapped, 5 peering, 11 active+undersized, 7 active+undersized+degraded, 40 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 10 MiB/s rd, 5 op/s; 18/264 objects degraded (6.818%); 0 B/s, 8 objects/s recovering 2026-03-09T15:06:28.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:28 vm09.local ceph-mon[98742]: Health check update: Degraded data redundancy: 18/264 objects degraded (6.818%), 7 pgs degraded (PG_DEGRADED) 2026-03-09T15:06:30.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:30 vm09.local ceph-mon[98742]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 18/264 objects degraded (6.818%), 7 pgs degraded) 2026-03-09T15:06:30.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:30 vm05.local ceph-mon[116516]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 18/264 objects degraded (6.818%), 7 pgs degraded) 2026-03-09T15:06:31.366 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:31 vm09.local ceph-mon[98742]: pgmap v132: 65 pgs: 1 remapped+peering, 1 active+recovering+undersized+remapped, 5 peering, 58 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 22 MiB/s rd, 8 op/s; 0 B/s, 6 objects/s recovering 2026-03-09T15:06:31.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:31 vm09.local ceph-mon[98742]: mds.? [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] up:reconnect 2026-03-09T15:06:31.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:31 vm09.local ceph-mon[98742]: fsmap cephfs:1/1 {0=cephfs.vm05.rrcyql=up:reconnect} 2 up:standby 2026-03-09T15:06:31.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:31 vm09.local ceph-mon[98742]: reconnect by client.14546 192.168.144.1:0/3878864280 after 0 2026-03-09T15:06:31.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:31 vm09.local ceph-mon[98742]: reconnect by client.24345 192.168.144.1:0/70265336 after 0 2026-03-09T15:06:31.528 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:31 vm05.local ceph-mon[116516]: pgmap v132: 65 pgs: 1 remapped+peering, 1 active+recovering+undersized+remapped, 5 peering, 58 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 22 MiB/s rd, 8 op/s; 0 B/s, 6 objects/s recovering 2026-03-09T15:06:31.528 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:31 vm05.local ceph-mon[116516]: mds.? 
[v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] up:reconnect 2026-03-09T15:06:31.528 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:31 vm05.local ceph-mon[116516]: fsmap cephfs:1/1 {0=cephfs.vm05.rrcyql=up:reconnect} 2 up:standby 2026-03-09T15:06:31.528 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:31 vm05.local ceph-mon[116516]: reconnect by client.14546 192.168.144.1:0/3878864280 after 0 2026-03-09T15:06:31.528 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:31 vm05.local ceph-mon[116516]: reconnect by client.24345 192.168.144.1:0/70265336 after 0 2026-03-09T15:06:32.231 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:32 vm05.local ceph-mon[116516]: mds.? [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] up:rejoin 2026-03-09T15:06:32.231 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:32 vm05.local ceph-mon[116516]: fsmap cephfs:1/1 {0=cephfs.vm05.rrcyql=up:rejoin} 2 up:standby 2026-03-09T15:06:32.231 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:32 vm05.local ceph-mon[116516]: daemon mds.cephfs.vm05.rrcyql is now active in filesystem cephfs as rank 0 2026-03-09T15:06:32.231 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:32 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:32.231 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:32 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:32.231 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:32 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:32 vm09.local ceph-mon[98742]: mds.? 
[v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] up:rejoin 2026-03-09T15:06:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:32 vm09.local ceph-mon[98742]: fsmap cephfs:1/1 {0=cephfs.vm05.rrcyql=up:rejoin} 2 up:standby 2026-03-09T15:06:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:32 vm09.local ceph-mon[98742]: daemon mds.cephfs.vm05.rrcyql is now active in filesystem cephfs as rank 0 2026-03-09T15:06:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:32 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:32 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:32.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:32 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:33.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:33 vm05.local ceph-mon[116516]: pgmap v133: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 28 MiB/s rd, 10 op/s; 159/264 objects degraded (60.227%); 0 B/s, 11 objects/s recovering 2026-03-09T15:06:33.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:33 vm05.local ceph-mon[116516]: Health check failed: Degraded data redundancy: 159/264 objects degraded (60.227%), 1 pg degraded (PG_DEGRADED) 2026-03-09T15:06:33.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:33 vm05.local ceph-mon[116516]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-09T15:06:33.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:33 vm05.local ceph-mon[116516]: mds.? 
[v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] up:active 2026-03-09T15:06:33.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:33 vm05.local ceph-mon[116516]: mds.? [v2:192.168.123.105:6826/3005307080,v1:192.168.123.105:6827/3005307080] up:boot 2026-03-09T15:06:33.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:33 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm05.rrcyql=up:active} 3 up:standby 2026-03-09T15:06:33.210 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:33 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.nrocqt"}]: dispatch 2026-03-09T15:06:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:33 vm09.local ceph-mon[98742]: pgmap v133: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 28 MiB/s rd, 10 op/s; 159/264 objects degraded (60.227%); 0 B/s, 11 objects/s recovering 2026-03-09T15:06:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:33 vm09.local ceph-mon[98742]: Health check failed: Degraded data redundancy: 159/264 objects degraded (60.227%), 1 pg degraded (PG_DEGRADED) 2026-03-09T15:06:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:33 vm09.local ceph-mon[98742]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-09T15:06:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:33 vm09.local ceph-mon[98742]: mds.? [v2:192.168.123.105:6828/1321316558,v1:192.168.123.105:6829/1321316558] up:active 2026-03-09T15:06:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:33 vm09.local ceph-mon[98742]: mds.? 
[v2:192.168.123.105:6826/3005307080,v1:192.168.123.105:6827/3005307080] up:boot 2026-03-09T15:06:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:33 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm05.rrcyql=up:active} 3 up:standby 2026-03-09T15:06:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:33 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.nrocqt"}]: dispatch 2026-03-09T15:06:34.220 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:34 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:34.221 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:34 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:34.221 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:34 vm05.local ceph-mon[116516]: pgmap v134: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 31 MiB/s rd, 127 B/s wr, 10 op/s; 159/264 objects degraded (60.227%); 0 B/s, 5 objects/s recovering 2026-03-09T15:06:34.221 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:34 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:34.221 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:34 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:34.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:34 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:34.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:34 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 
2026-03-09T15:06:34.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:34 vm09.local ceph-mon[98742]: pgmap v134: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 31 MiB/s rd, 127 B/s wr, 10 op/s; 159/264 objects degraded (60.227%); 0 B/s, 5 objects/s recovering 2026-03-09T15:06:34.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:34 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:34.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:34 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05[116512]: 2026-03-09T15:06:35.378+0000 7fc0654ee640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local 
ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: Upgrade: Updating mds.cephfs.vm05.rrcyql 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.rrcyql", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local 
ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: Deploying daemon mds.cephfs.vm05.rrcyql on vm05 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: osdmap e87: 6 total, 6 up, 6 in 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: Standby daemon mds.cephfs.vm09.jrhwzz assigned to filesystem cephfs as rank 0 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-09T15:06:35.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:35 vm05.local ceph-mon[116516]: fsmap cephfs:1/1 {0=cephfs.vm09.jrhwzz=up:replay} 2 up:standby 2026-03-09T15:06:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:06:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:35.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: Upgrade: Updating mds.cephfs.vm05.rrcyql 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 
2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm05.rrcyql", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: Deploying daemon mds.cephfs.vm05.rrcyql on vm05 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: osdmap e87: 6 total, 6 up, 6 in 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: Standby daemon mds.cephfs.vm09.jrhwzz assigned to filesystem cephfs as rank 0 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-09T15:06:35.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:35 vm09.local ceph-mon[98742]: fsmap cephfs:1/1 {0=cephfs.vm09.jrhwzz=up:replay} 2 up:standby 2026-03-09T15:06:36.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:36 
vm05.local ceph-mon[116516]: pgmap v136: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 28 MiB/s rd, 228 B/s wr, 11 op/s; 159/264 objects degraded (60.227%); 0 B/s, 10 objects/s recovering 2026-03-09T15:06:36.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:36 vm09.local ceph-mon[98742]: pgmap v136: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 28 MiB/s rd, 228 B/s wr, 11 op/s; 159/264 objects degraded (60.227%); 0 B/s, 10 objects/s recovering 2026-03-09T15:06:37.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:37 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.rrcyql"}]: dispatch 2026-03-09T15:06:38.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:37 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.rrcyql"}]: dispatch 2026-03-09T15:06:38.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:38 vm09.local ceph-mon[98742]: pgmap v137: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 25 MiB/s rd, 204 B/s wr, 11 op/s; 159/264 objects degraded (60.227%); 0 B/s, 9 objects/s recovering 2026-03-09T15:06:39.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:38 vm05.local ceph-mon[116516]: pgmap v137: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 25 MiB/s rd, 204 B/s wr, 11 op/s; 159/264 objects degraded (60.227%); 0 B/s, 9 objects/s 
recovering 2026-03-09T15:06:41.208 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:40 vm05.local ceph-mon[116516]: pgmap v138: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 25 MiB/s rd, 204 B/s wr, 11 op/s; 159/264 objects degraded (60.227%); 0 B/s, 9 objects/s recovering 2026-03-09T15:06:41.208 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:40 vm05.local ceph-mon[116516]: mds.? [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] up:reconnect 2026-03-09T15:06:41.208 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:40 vm05.local ceph-mon[116516]: fsmap cephfs:1/1 {0=cephfs.vm09.jrhwzz=up:reconnect} 2 up:standby 2026-03-09T15:06:41.208 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:40 vm05.local ceph-mon[116516]: reconnect by client.24345 192.168.144.1:0/70265336 after 0.002 2026-03-09T15:06:41.208 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:40 vm05.local ceph-mon[116516]: reconnect by client.14546 192.168.144.1:0/3878864280 after 0.002 2026-03-09T15:06:41.208 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.206+0000 7fcc08b1b700 1 -- 192.168.123.105:0/833913857 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcc04101100 msgr2=0x7fcc04101570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.209 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.206+0000 7fcc08b1b700 1 --2- 192.168.123.105:0/833913857 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcc04101100 0x7fcc04101570 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7fcbf4009b00 tx=0x7fcbf4009e10 comp rx=0 tx=0).stop 2026-03-09T15:06:41.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.209+0000 7fcc08b1b700 1 -- 192.168.123.105:0/833913857 shutdown_connections 2026-03-09T15:06:41.214 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.209+0000 7fcc08b1b700 1 --2- 192.168.123.105:0/833913857 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcc04101100 0x7fcc04101570 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.209+0000 7fcc08b1b700 1 --2- 192.168.123.105:0/833913857 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcc040ff480 0x7fcc04100bc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.209+0000 7fcc08b1b700 1 -- 192.168.123.105:0/833913857 >> 192.168.123.105:0/833913857 conn(0x7fcc040747e0 msgr2=0x7fcc04074be0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:41.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.211+0000 7fcc08b1b700 1 -- 192.168.123.105:0/833913857 shutdown_connections 2026-03-09T15:06:41.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.211+0000 7fcc08b1b700 1 -- 192.168.123.105:0/833913857 wait complete. 
2026-03-09T15:06:41.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.211+0000 7fcc08b1b700 1 Processor -- start 2026-03-09T15:06:41.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.211+0000 7fcc08b1b700 1 -- start start 2026-03-09T15:06:41.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.211+0000 7fcc08b1b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcc040ff480 0x7fcc04198e20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:41.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.211+0000 7fcc08b1b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcc04101100 0x7fcc04199380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:41.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.211+0000 7fcc08b1b700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcc04193ed0 con 0x7fcc040ff480 2026-03-09T15:06:41.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.211+0000 7fcc08b1b700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcc04194040 con 0x7fcc04101100 2026-03-09T15:06:41.214 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.212+0000 7fcc01d9b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcc04101100 0x7fcc04199380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:41.215 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.212+0000 7fcc01d9b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcc04101100 0x7fcc04199380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:44906/0 (socket says 192.168.123.105:44906) 2026-03-09T15:06:41.215 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.212+0000 7fcc01d9b700 1 -- 192.168.123.105:0/3915638491 learned_addr learned my addr 192.168.123.105:0/3915638491 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:41.215 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.212+0000 7fcc01d9b700 1 -- 192.168.123.105:0/3915638491 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcc040ff480 msgr2=0x7fcc04198e20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.215 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.212+0000 7fcc01d9b700 1 --2- 192.168.123.105:0/3915638491 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcc040ff480 0x7fcc04198e20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.215 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.212+0000 7fcc01d9b700 1 -- 192.168.123.105:0/3915638491 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcbf40097e0 con 0x7fcc04101100 2026-03-09T15:06:41.215 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.212+0000 7fcc01d9b700 1 --2- 192.168.123.105:0/3915638491 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcc04101100 0x7fcc04199380 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7fcbf400bbd0 tx=0x7fcbf4005f00 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:41.215 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.212+0000 7fcbf37fe700 1 -- 192.168.123.105:0/3915638491 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcbf401d070 con 0x7fcc04101100 2026-03-09T15:06:41.215 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.212+0000 7fcc08b1b700 1 -- 
192.168.123.105:0/3915638491 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcc041942c0 con 0x7fcc04101100 2026-03-09T15:06:41.215 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.212+0000 7fcc08b1b700 1 -- 192.168.123.105:0/3915638491 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcc04194810 con 0x7fcc04101100 2026-03-09T15:06:41.215 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.212+0000 7fcbf37fe700 1 -- 192.168.123.105:0/3915638491 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fcbf4003ba0 con 0x7fcc04101100 2026-03-09T15:06:41.215 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.212+0000 7fcbf37fe700 1 -- 192.168.123.105:0/3915638491 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcbf4017610 con 0x7fcc04101100 2026-03-09T15:06:41.216 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.214+0000 7fcbf37fe700 1 -- 192.168.123.105:0/3915638491 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fcbf4017770 con 0x7fcc04101100 2026-03-09T15:06:41.217 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.214+0000 7fcbf37fe700 1 --2- 192.168.123.105:0/3915638491 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcbec0778c0 0x7fcbec079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:41.217 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.214+0000 7fcbf37fe700 1 -- 192.168.123.105:0/3915638491 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(87..87 src has 1..87) v4 ==== 6635+0+0 (secure 0 0 0) 0x7fcbf409af50 con 0x7fcc04101100 2026-03-09T15:06:41.217 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.215+0000 7fcc08b1b700 1 -- 192.168.123.105:0/3915638491 --> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcbe40052f0 con 0x7fcc04101100 2026-03-09T15:06:41.217 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.215+0000 7fcc0259c700 1 --2- 192.168.123.105:0/3915638491 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcbec0778c0 0x7fcbec079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:41.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.218+0000 7fcc0259c700 1 --2- 192.168.123.105:0/3915638491 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcbec0778c0 0x7fcbec079d70 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7fcbf80097b0 tx=0x7fcbf8006d20 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:41.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.218+0000 7fcbf37fe700 1 -- 192.168.123.105:0/3915638491 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fcbf406ae60 con 0x7fcc04101100 2026-03-09T15:06:41.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.354+0000 7fcc08b1b700 1 -- 192.168.123.105:0/3915638491 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fcbe4000bc0 con 0x7fcbec0778c0 2026-03-09T15:06:41.356 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.355+0000 7fcbf37fe700 1 -- 192.168.123.105:0/3915638491 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 0x7fcbe4000bc0 con 0x7fcbec0778c0 2026-03-09T15:06:41.358 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.357+0000 7fcc08b1b700 1 -- 192.168.123.105:0/3915638491 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcbec0778c0 msgr2=0x7fcbec079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.357+0000 7fcc08b1b700 1 --2- 192.168.123.105:0/3915638491 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcbec0778c0 0x7fcbec079d70 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7fcbf80097b0 tx=0x7fcbf8006d20 comp rx=0 tx=0).stop 2026-03-09T15:06:41.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.358+0000 7fcc08b1b700 1 -- 192.168.123.105:0/3915638491 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcc04101100 msgr2=0x7fcc04199380 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.358+0000 7fcc08b1b700 1 --2- 192.168.123.105:0/3915638491 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcc04101100 0x7fcc04199380 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7fcbf400bbd0 tx=0x7fcbf4005f00 comp rx=0 tx=0).stop 2026-03-09T15:06:41.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.358+0000 7fcc08b1b700 1 -- 192.168.123.105:0/3915638491 shutdown_connections 2026-03-09T15:06:41.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.358+0000 7fcc08b1b700 1 --2- 192.168.123.105:0/3915638491 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcbec0778c0 0x7fcbec079d70 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.358+0000 7fcc08b1b700 1 --2- 192.168.123.105:0/3915638491 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7fcc040ff480 0x7fcc04198e20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.358+0000 7fcc08b1b700 1 --2- 192.168.123.105:0/3915638491 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcc04101100 0x7fcc04199380 unknown :-1 s=CLOSED pgs=31 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.358+0000 7fcc08b1b700 1 -- 192.168.123.105:0/3915638491 >> 192.168.123.105:0/3915638491 conn(0x7fcc040747e0 msgr2=0x7fcc040fdf20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:41.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.358+0000 7fcc08b1b700 1 -- 192.168.123.105:0/3915638491 shutdown_connections 2026-03-09T15:06:41.359 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.358+0000 7fcc08b1b700 1 -- 192.168.123.105:0/3915638491 wait complete. 2026-03-09T15:06:41.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:40 vm09.local ceph-mon[98742]: pgmap v138: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 25 MiB/s rd, 204 B/s wr, 11 op/s; 159/264 objects degraded (60.227%); 0 B/s, 9 objects/s recovering 2026-03-09T15:06:41.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:40 vm09.local ceph-mon[98742]: mds.? 
[v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] up:reconnect 2026-03-09T15:06:41.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:40 vm09.local ceph-mon[98742]: fsmap cephfs:1/1 {0=cephfs.vm09.jrhwzz=up:reconnect} 2 up:standby 2026-03-09T15:06:41.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:40 vm09.local ceph-mon[98742]: reconnect by client.24345 192.168.144.1:0/70265336 after 0.002 2026-03-09T15:06:41.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:40 vm09.local ceph-mon[98742]: reconnect by client.14546 192.168.144.1:0/3878864280 after 0.002 2026-03-09T15:06:41.368 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:06:41.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.442+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1406512417 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa35810c8f0 msgr2=0x7fa35810ccc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.442+0000 7fa35e4f3700 1 --2- 192.168.123.105:0/1406512417 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa35810c8f0 0x7fa35810ccc0 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7fa354007780 tx=0x7fa35400c050 comp rx=0 tx=0).stop 2026-03-09T15:06:41.444 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.443+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1406512417 shutdown_connections 2026-03-09T15:06:41.444 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.443+0000 7fa35e4f3700 1 --2- 192.168.123.105:0/1406512417 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa358071e40 0x7fa3580722b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.444 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.443+0000 7fa35e4f3700 1 --2- 192.168.123.105:0/1406512417 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 
conn(0x7fa35810c8f0 0x7fa35810ccc0 unknown :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.444 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.443+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1406512417 >> 192.168.123.105:0/1406512417 conn(0x7fa35806c6c0 msgr2=0x7fa35806cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:41.445 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.444+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1406512417 shutdown_connections 2026-03-09T15:06:41.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.444+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1406512417 wait complete. 2026-03-09T15:06:41.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.444+0000 7fa35e4f3700 1 Processor -- start 2026-03-09T15:06:41.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.444+0000 7fa35e4f3700 1 -- start start 2026-03-09T15:06:41.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.444+0000 7fa35e4f3700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa358071e40 0x7fa358132810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:41.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.444+0000 7fa35e4f3700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa358132d50 0x7fa3581331c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:41.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.444+0000 7fa35e4f3700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa35807eef0 con 0x7fa358071e40 2026-03-09T15:06:41.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.444+0000 7fa35e4f3700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa35807f060 con 0x7fa358132d50 
2026-03-09T15:06:41.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.446+0000 7fa35ccf0700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa358132d50 0x7fa3581331c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:41.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.446+0000 7fa35ccf0700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa358132d50 0x7fa3581331c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:44920/0 (socket says 192.168.123.105:44920) 2026-03-09T15:06:41.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.446+0000 7fa35ccf0700 1 -- 192.168.123.105:0/1276463284 learned_addr learned my addr 192.168.123.105:0/1276463284 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:41.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.446+0000 7fa35d4f1700 1 --2- 192.168.123.105:0/1276463284 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa358071e40 0x7fa358132810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:41.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.446+0000 7fa35ccf0700 1 -- 192.168.123.105:0/1276463284 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa358071e40 msgr2=0x7fa358132810 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.446+0000 7fa35ccf0700 1 --2- 192.168.123.105:0/1276463284 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa358071e40 0x7fa358132810 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:06:41.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.446+0000 7fa35ccf0700 1 -- 192.168.123.105:0/1276463284 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa354007430 con 0x7fa358132d50 2026-03-09T15:06:41.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.446+0000 7fa35ccf0700 1 --2- 192.168.123.105:0/1276463284 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa358132d50 0x7fa3581331c0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7fa35000bf40 tx=0x7fa350009f80 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:41.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.448+0000 7fa34e7fc700 1 -- 192.168.123.105:0/1276463284 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa35000cad0 con 0x7fa358132d50 2026-03-09T15:06:41.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.448+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1276463284 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa35807f2e0 con 0x7fa358132d50 2026-03-09T15:06:41.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.448+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1276463284 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa35807f830 con 0x7fa358132d50 2026-03-09T15:06:41.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.449+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1276463284 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa35804f2a0 con 0x7fa358132d50 2026-03-09T15:06:41.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.450+0000 7fa34e7fc700 1 -- 192.168.123.105:0/1276463284 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 
==== 1147+0+0 (secure 0 0 0) 0x7fa35000cc30 con 0x7fa358132d50 2026-03-09T15:06:41.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.450+0000 7fa34e7fc700 1 -- 192.168.123.105:0/1276463284 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa350007740 con 0x7fa358132d50 2026-03-09T15:06:41.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.451+0000 7fa34e7fc700 1 -- 192.168.123.105:0/1276463284 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fa3500078a0 con 0x7fa358132d50 2026-03-09T15:06:41.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.452+0000 7fa34e7fc700 1 --2- 192.168.123.105:0/1276463284 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa3440779e0 0x7fa344079e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:41.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.452+0000 7fa34e7fc700 1 -- 192.168.123.105:0/1276463284 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(87..87 src has 1..87) v4 ==== 6635+0+0 (secure 0 0 0) 0x7fa35009ac20 con 0x7fa358132d50 2026-03-09T15:06:41.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.452+0000 7fa35d4f1700 1 --2- 192.168.123.105:0/1276463284 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa3440779e0 0x7fa344079e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:41.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.453+0000 7fa35d4f1700 1 --2- 192.168.123.105:0/1276463284 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa3440779e0 0x7fa344079e90 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7fa354007400 tx=0x7fa354015040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 
server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:41.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.454+0000 7fa34e7fc700 1 -- 192.168.123.105:0/1276463284 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fa350063390 con 0x7fa358132d50 2026-03-09T15:06:41.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.598+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1276463284 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fa3580707e0 con 0x7fa3440779e0 2026-03-09T15:06:41.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.599+0000 7fa34e7fc700 1 -- 192.168.123.105:0/1276463284 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 0x7fa3580707e0 con 0x7fa3440779e0 2026-03-09T15:06:41.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.603+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1276463284 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa3440779e0 msgr2=0x7fa344079e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.603+0000 7fa35e4f3700 1 --2- 192.168.123.105:0/1276463284 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa3440779e0 0x7fa344079e90 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7fa354007400 tx=0x7fa354015040 comp rx=0 tx=0).stop 2026-03-09T15:06:41.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.603+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1276463284 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa358132d50 msgr2=0x7fa3581331c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.604 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.603+0000 7fa35e4f3700 1 --2- 192.168.123.105:0/1276463284 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa358132d50 0x7fa3581331c0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7fa35000bf40 tx=0x7fa350009f80 comp rx=0 tx=0).stop 2026-03-09T15:06:41.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.603+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1276463284 shutdown_connections 2026-03-09T15:06:41.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.603+0000 7fa35e4f3700 1 --2- 192.168.123.105:0/1276463284 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa3440779e0 0x7fa344079e90 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.603+0000 7fa35e4f3700 1 --2- 192.168.123.105:0/1276463284 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa358071e40 0x7fa358132810 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.603+0000 7fa35e4f3700 1 --2- 192.168.123.105:0/1276463284 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa358132d50 0x7fa3581331c0 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.603+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1276463284 >> 192.168.123.105:0/1276463284 conn(0x7fa35806c6c0 msgr2=0x7fa3580700d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:41.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.603+0000 7fa35e4f3700 1 -- 192.168.123.105:0/1276463284 shutdown_connections 2026-03-09T15:06:41.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.603+0000 7fa35e4f3700 1 -- 
192.168.123.105:0/1276463284 wait complete. 2026-03-09T15:06:41.688 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.687+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/3370470296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3cdc1065c0 msgr2=0x7f3cdc106990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.687+0000 7f3ce1ecb700 1 --2- 192.168.123.105:0/3370470296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3cdc1065c0 0x7f3cdc106990 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f3cc4009b00 tx=0x7f3cc4009e10 comp rx=0 tx=0).stop 2026-03-09T15:06:41.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.687+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/3370470296 shutdown_connections 2026-03-09T15:06:41.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.687+0000 7f3ce1ecb700 1 --2- 192.168.123.105:0/3370470296 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3cdc1005a0 0x7f3cdc100a10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.687+0000 7f3ce1ecb700 1 --2- 192.168.123.105:0/3370470296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3cdc1065c0 0x7f3cdc106990 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.687+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/3370470296 >> 192.168.123.105:0/3370470296 conn(0x7f3cdc078550 msgr2=0x7f3cdc078950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:41.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.687+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/3370470296 shutdown_connections 2026-03-09T15:06:41.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.687+0000 
7f3ce1ecb700 1 -- 192.168.123.105:0/3370470296 wait complete. 2026-03-09T15:06:41.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3ce1ecb700 1 Processor -- start 2026-03-09T15:06:41.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3ce1ecb700 1 -- start start 2026-03-09T15:06:41.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3ce1ecb700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3cdc1005a0 0x7f3cdc196130 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:41.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3ce1ecb700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3cdc1065c0 0x7f3cdc196670 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:41.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3ce1ecb700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3cdc196d50 con 0x7f3cdc1065c0 2026-03-09T15:06:41.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3ce1ecb700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3cdc19aae0 con 0x7f3cdc1005a0 2026-03-09T15:06:41.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3cdaffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3cdc1065c0 0x7f3cdc196670 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:41.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3cdaffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3cdc1065c0 0x7f3cdc196670 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:36416/0 (socket says 192.168.123.105:36416) 2026-03-09T15:06:41.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3cdaffd700 1 -- 192.168.123.105:0/2008858689 learned_addr learned my addr 192.168.123.105:0/2008858689 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:41.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3cdb7fe700 1 --2- 192.168.123.105:0/2008858689 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3cdc1005a0 0x7f3cdc196130 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:41.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3cdaffd700 1 -- 192.168.123.105:0/2008858689 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3cdc1005a0 msgr2=0x7f3cdc196130 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3cdaffd700 1 --2- 192.168.123.105:0/2008858689 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3cdc1005a0 0x7f3cdc196130 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3cdaffd700 1 -- 192.168.123.105:0/2008858689 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3cc40097e0 con 0x7f3cdc1065c0 2026-03-09T15:06:41.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.689+0000 7f3cdaffd700 1 --2- 192.168.123.105:0/2008858689 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3cdc1065c0 0x7f3cdc196670 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f3ccc00eb10 tx=0x7f3ccc00ee20 comp rx=0 tx=0).ready entity=mon.0 
client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:41.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.691+0000 7f3cd8ff9700 1 -- 192.168.123.105:0/2008858689 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3ccc00cc40 con 0x7f3cdc1065c0 2026-03-09T15:06:41.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.691+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/2008858689 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3cdc19ad60 con 0x7f3cdc1065c0 2026-03-09T15:06:41.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.691+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/2008858689 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3cdc19b2b0 con 0x7f3cdc1065c0 2026-03-09T15:06:41.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.691+0000 7f3cd8ff9700 1 -- 192.168.123.105:0/2008858689 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f3ccc00cda0 con 0x7f3cdc1065c0 2026-03-09T15:06:41.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.691+0000 7f3cd8ff9700 1 -- 192.168.123.105:0/2008858689 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3ccc018810 con 0x7f3cdc1065c0 2026-03-09T15:06:41.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.691+0000 7f3cd8ff9700 1 -- 192.168.123.105:0/2008858689 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f3ccc018a50 con 0x7f3cdc1065c0 2026-03-09T15:06:41.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.691+0000 7f3cd8ff9700 1 --2- 192.168.123.105:0/2008858689 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3cc8077a60 0x7f3cc8079f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 
2026-03-09T15:06:41.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.692+0000 7f3cdb7fe700 1 --2- 192.168.123.105:0/2008858689 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3cc8077a60 0x7f3cc8079f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:41.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.692+0000 7f3cd8ff9700 1 -- 192.168.123.105:0/2008858689 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(87..87 src has 1..87) v4 ==== 6635+0+0 (secure 0 0 0) 0x7f3ccc014070 con 0x7f3cdc1065c0 2026-03-09T15:06:41.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.692+0000 7f3cdb7fe700 1 --2- 192.168.123.105:0/2008858689 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3cc8077a60 0x7f3cc8079f10 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f3cc4006010 tx=0x7f3cc400b560 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:41.693 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.692+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/2008858689 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3cbc005320 con 0x7f3cdc1065c0 2026-03-09T15:06:41.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.695+0000 7f3cd8ff9700 1 -- 192.168.123.105:0/2008858689 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3ccc063d70 con 0x7f3cdc1065c0 2026-03-09T15:06:41.824 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.823+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/2008858689 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch ps", "target": 
["mon-mgr", ""]}) v1 -- 0x7f3cbc000bf0 con 0x7f3cc8077a60 2026-03-09T15:06:41.829 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.828+0000 7f3cd8ff9700 1 -- 192.168.123.105:0/2008858689 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f3cbc000bf0 con 0x7f3cc8077a60 2026-03-09T15:06:41.830 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:06:41.830 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (3m) 8s ago 10m 24.0M - 0.25.0 c8568f914cd2 7635cece310c 2026-03-09T15:06:41.830 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (10m) 8s ago 10m 9713k - 18.2.0 dc2bc1663786 d3853bf87871 2026-03-09T15:06:41.830 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (9m) 21s ago 9m 11.8M - 18.2.0 dc2bc1663786 e86718d7b18a 2026-03-09T15:06:41.830 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (2m) 8s ago 10m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 35d8c0ae5a58 2026-03-09T15:06:41.830 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (2m) 21s ago 9m 8308k - 19.2.3-678-ge911bdeb 654f31e6858e 82bdad36caf9 2026-03-09T15:06:41.830 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (3m) 8s ago 10m 82.7M - 10.4.0 c8b91775d855 eb6431f63d88 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (9s) 8s ago 8m 20.7M - 19.2.3-678-ge911bdeb 654f31e6858e f41a092cac53 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (8m) 8s ago 8m 270M - 18.2.0 dc2bc1663786 08b2826cd233 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (8m) 21s ago 8m 17.9M - 18.2.0 dc2bc1663786 6c77fb591d5a 2026-03-09T15:06:41.831 
INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (8m) 21s ago 8m 94.9M - 18.2.0 dc2bc1663786 b5ad1c71089a 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (4m) 8s ago 11m 622M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (4m) 21s ago 9m 497M - 19.2.3-678-ge911bdeb 654f31e6858e acf5a6f3f804 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (3m) 8s ago 11m 63.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1e11655f7d87 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (2m) 21s ago 9m 53.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e d1f0309f4d58 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (4m) 8s ago 10m 10.1M - 1.7.0 72c9c2088986 888d071c50d9 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (4m) 21s ago 9m 9579k - 1.7.0 72c9c2088986 22c96a576a60 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (2m) 8s ago 9m 161M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f2883abca2d2 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (2m) 8s ago 9m 127M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b830d7f76498 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (102s) 8s ago 9m 119M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 01cf87b8bc05 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (81s) 21s ago 9m 151M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9359c3ced4d3 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (58s) 21s ago 8m 125M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 985038f550f8 2026-03-09T15:06:41.831 
INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (22s) 21s ago 8m 12.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 15ec92bc2880 2026-03-09T15:06:41.831 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (3m) 8s ago 10m 57.5M - 2.51.0 1d3b7f56885b e6f470b0ba11 2026-03-09T15:06:41.833 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.832+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/2008858689 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3cc8077a60 msgr2=0x7f3cc8079f10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.833 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.832+0000 7f3ce1ecb700 1 --2- 192.168.123.105:0/2008858689 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3cc8077a60 0x7f3cc8079f10 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f3cc4006010 tx=0x7f3cc400b560 comp rx=0 tx=0).stop 2026-03-09T15:06:41.833 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.832+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/2008858689 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3cdc1065c0 msgr2=0x7f3cdc196670 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.833 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.832+0000 7f3ce1ecb700 1 --2- 192.168.123.105:0/2008858689 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3cdc1065c0 0x7f3cdc196670 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f3ccc00eb10 tx=0x7f3ccc00ee20 comp rx=0 tx=0).stop 2026-03-09T15:06:41.834 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.832+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/2008858689 shutdown_connections 2026-03-09T15:06:41.834 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.832+0000 7f3ce1ecb700 1 --2- 192.168.123.105:0/2008858689 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] 
conn(0x7f3cc8077a60 0x7f3cc8079f10 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.834 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.832+0000 7f3ce1ecb700 1 --2- 192.168.123.105:0/2008858689 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3cdc1005a0 0x7f3cdc196130 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.834 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.832+0000 7f3ce1ecb700 1 --2- 192.168.123.105:0/2008858689 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3cdc1065c0 0x7f3cdc196670 secure :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f3ccc00eb10 tx=0x7f3ccc00ee20 comp rx=0 tx=0).stop 2026-03-09T15:06:41.834 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.832+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/2008858689 >> 192.168.123.105:0/2008858689 conn(0x7f3cdc078550 msgr2=0x7f3cdc0fed20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:41.834 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.833+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/2008858689 shutdown_connections 2026-03-09T15:06:41.834 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.833+0000 7f3ce1ecb700 1 -- 192.168.123.105:0/2008858689 wait complete. 
2026-03-09T15:06:41.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.900+0000 7f076f4fb700 1 -- 192.168.123.105:0/4011509948 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0768108750 msgr2=0x7f0768108b20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.900+0000 7f076f4fb700 1 --2- 192.168.123.105:0/4011509948 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0768108750 0x7f0768108b20 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f075c009b00 tx=0x7f075c009e10 comp rx=0 tx=0).stop 2026-03-09T15:06:41.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.904+0000 7f076f4fb700 1 -- 192.168.123.105:0/4011509948 shutdown_connections 2026-03-09T15:06:41.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.904+0000 7f076f4fb700 1 --2- 192.168.123.105:0/4011509948 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0768102750 0x7f0768102bc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.904+0000 7f076f4fb700 1 --2- 192.168.123.105:0/4011509948 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0768108750 0x7f0768108b20 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.904+0000 7f076f4fb700 1 -- 192.168.123.105:0/4011509948 >> 192.168.123.105:0/4011509948 conn(0x7f07680fe250 msgr2=0x7f0768100660 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:41.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.904+0000 7f076f4fb700 1 -- 192.168.123.105:0/4011509948 shutdown_connections 2026-03-09T15:06:41.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.904+0000 7f076f4fb700 1 -- 192.168.123.105:0/4011509948 
wait complete. 2026-03-09T15:06:41.906 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.905+0000 7f076f4fb700 1 Processor -- start 2026-03-09T15:06:41.906 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.905+0000 7f076f4fb700 1 -- start start 2026-03-09T15:06:41.906 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.905+0000 7f076f4fb700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0768102750 0x7f07681983d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:41.906 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.905+0000 7f076f4fb700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0768108750 0x7f0768198910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:41.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.905+0000 7f076f4fb700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f0768198ff0 con 0x7f0768102750 2026-03-09T15:06:41.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.905+0000 7f076f4fb700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f076819cd80 con 0x7f0768108750 2026-03-09T15:06:41.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.906+0000 7f076d297700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0768102750 0x7f07681983d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:41.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.906+0000 7f076ca96700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0768108750 0x7f0768198910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-09T15:06:41.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.906+0000 7f076ca96700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0768108750 0x7f0768198910 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:44954/0 (socket says 192.168.123.105:44954) 2026-03-09T15:06:41.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.906+0000 7f076ca96700 1 -- 192.168.123.105:0/3082632128 learned_addr learned my addr 192.168.123.105:0/3082632128 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:41.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.906+0000 7f076ca96700 1 -- 192.168.123.105:0/3082632128 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0768102750 msgr2=0x7f07681983d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:41.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.906+0000 7f076ca96700 1 --2- 192.168.123.105:0/3082632128 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0768102750 0x7f07681983d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:41.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.906+0000 7f076ca96700 1 -- 192.168.123.105:0/3082632128 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f075c0097e0 con 0x7f0768108750 2026-03-09T15:06:41.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.906+0000 7f076ca96700 1 --2- 192.168.123.105:0/3082632128 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0768108750 0x7f0768198910 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f076400b700 tx=0x7f076400bac0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:41.907 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.906+0000 7f076d297700 1 --2- 192.168.123.105:0/3082632128 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0768102750 0x7f07681983d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:06:41.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.907+0000 7f075a7fc700 1 -- 192.168.123.105:0/3082632128 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0764010840 con 0x7f0768108750 2026-03-09T15:06:41.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.907+0000 7f076f4fb700 1 -- 192.168.123.105:0/3082632128 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f076819d060 con 0x7f0768108750 2026-03-09T15:06:41.909 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.907+0000 7f075a7fc700 1 -- 192.168.123.105:0/3082632128 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f0764010e80 con 0x7f0768108750 2026-03-09T15:06:41.909 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.907+0000 7f075a7fc700 1 -- 192.168.123.105:0/3082632128 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f0764019600 con 0x7f0768108750 2026-03-09T15:06:41.909 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.907+0000 7f076f4fb700 1 -- 192.168.123.105:0/3082632128 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f076819d5b0 con 0x7f0768108750 2026-03-09T15:06:41.909 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.908+0000 7f076f4fb700 1 -- 192.168.123.105:0/3082632128 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f076804ea50 con 0x7f0768108750 2026-03-09T15:06:41.909 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.908+0000 7f075a7fc700 1 -- 192.168.123.105:0/3082632128 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f076400f3e0 con 0x7f0768108750 2026-03-09T15:06:41.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.909+0000 7f075a7fc700 1 --2- 192.168.123.105:0/3082632128 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07540778c0 0x7f0754079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:41.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.909+0000 7f075a7fc700 1 -- 192.168.123.105:0/3082632128 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(87..87 src has 1..87) v4 ==== 6635+0+0 (secure 0 0 0) 0x7f0764098c10 con 0x7f0768108750 2026-03-09T15:06:41.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.909+0000 7f076d297700 1 --2- 192.168.123.105:0/3082632128 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07540778c0 0x7f0754079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:41.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.910+0000 7f076d297700 1 --2- 192.168.123.105:0/3082632128 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07540778c0 0x7f0754079d70 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f075c006010 tx=0x7f075c00b540 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:41.913 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:41.912+0000 7f075a7fc700 1 -- 192.168.123.105:0/3082632128 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f07640611d0 con 0x7f0768108750 
2026-03-09T15:06:42.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.086+0000 7f076f4fb700 1 -- 192.168.123.105:0/3082632128 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f0768066e40 con 0x7f0768108750
2026-03-09T15:06:42.088 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.087+0000 7f075a7fc700 1 -- 192.168.123.105:0/3082632128 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+815 (secure 0 0 0) 0x7f076400f690 con 0x7f0768108750
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: "mds": {
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2,
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)": 2,
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 11
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-09T15:06:42.089 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-09T15:06:42.091 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.090+0000 7f076f4fb700 1 -- 192.168.123.105:0/3082632128 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07540778c0 msgr2=0x7f0754079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:06:42.091 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.090+0000 7f076f4fb700 1 --2- 192.168.123.105:0/3082632128 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07540778c0 0x7f0754079d70 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f075c006010 tx=0x7f075c00b540 comp rx=0 tx=0).stop
2026-03-09T15:06:42.091 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.091+0000 7f076f4fb700 1 -- 192.168.123.105:0/3082632128 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0768108750 msgr2=0x7f0768198910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:06:42.091 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.091+0000 7f076f4fb700 1 --2- 192.168.123.105:0/3082632128 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0768108750 0x7f0768198910 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7f076400b700 tx=0x7f076400bac0 comp rx=0 tx=0).stop
2026-03-09T15:06:42.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.091+0000 7f076f4fb700
1 -- 192.168.123.105:0/3082632128 shutdown_connections 2026-03-09T15:06:42.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.091+0000 7f076f4fb700 1 --2- 192.168.123.105:0/3082632128 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f07540778c0 0x7f0754079d70 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.091+0000 7f076f4fb700 1 --2- 192.168.123.105:0/3082632128 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f0768102750 0x7f07681983d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.091+0000 7f076f4fb700 1 --2- 192.168.123.105:0/3082632128 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f0768108750 0x7f0768198910 unknown :-1 s=CLOSED pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.091+0000 7f076f4fb700 1 -- 192.168.123.105:0/3082632128 >> 192.168.123.105:0/3082632128 conn(0x7f07680fe250 msgr2=0x7f07680ffa60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:42.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.091+0000 7f076f4fb700 1 -- 192.168.123.105:0/3082632128 shutdown_connections 2026-03-09T15:06:42.092 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.091+0000 7f076f4fb700 1 -- 192.168.123.105:0/3082632128 wait complete. 
2026-03-09T15:06:42.163 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.162+0000 7fe1d85f0700 1 -- 192.168.123.105:0/3961732404 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1d00ffd10 msgr2=0x7fe1d01000e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:42.163 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.162+0000 7fe1d85f0700 1 --2- 192.168.123.105:0/3961732404 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1d00ffd10 0x7fe1d01000e0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7fe1cc009b80 tx=0x7fe1cc009e90 comp rx=0 tx=0).stop 2026-03-09T15:06:42.163 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:41 vm05.local ceph-mon[116516]: mds.? [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] up:rejoin 2026-03-09T15:06:42.163 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:41 vm05.local ceph-mon[116516]: fsmap cephfs:1/1 {0=cephfs.vm09.jrhwzz=up:rejoin} 2 up:standby 2026-03-09T15:06:42.163 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:41 vm05.local ceph-mon[116516]: daemon mds.cephfs.vm09.jrhwzz is now active in filesystem cephfs as rank 0 2026-03-09T15:06:42.163 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:06:42.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.163+0000 7fe1d85f0700 1 -- 192.168.123.105:0/3961732404 shutdown_connections 2026-03-09T15:06:42.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.163+0000 7fe1d85f0700 1 --2- 192.168.123.105:0/3961732404 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe1d0100620 0x7fe1d0108b20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.164 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.163+0000 7fe1d85f0700 1 --2- 192.168.123.105:0/3961732404 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1d00ffd10 0x7fe1d01000e0 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.164 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.163+0000 7fe1d85f0700 1 -- 192.168.123.105:0/3961732404 >> 192.168.123.105:0/3961732404 conn(0x7fe1d00747e0 msgr2=0x7fe1d0074be0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:42.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.165+0000 7fe1d85f0700 1 -- 192.168.123.105:0/3961732404 shutdown_connections 2026-03-09T15:06:42.165 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.165+0000 7fe1d85f0700 1 -- 192.168.123.105:0/3961732404 wait complete. 2026-03-09T15:06:42.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.165+0000 7fe1d85f0700 1 Processor -- start 2026-03-09T15:06:42.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.165+0000 7fe1d85f0700 1 -- start start 2026-03-09T15:06:42.166 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.166+0000 7fe1d85f0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1d00ffd10 0x7fe1d019ae30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:42.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.166+0000 7fe1d85f0700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe1d0100620 0x7fe1d0193e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:42.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.166+0000 7fe1d85f0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe1d0194390 con 0x7fe1d00ffd10 2026-03-09T15:06:42.167 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.166+0000 7fe1d85f0700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe1d0194500 con 0x7fe1d0100620 2026-03-09T15:06:42.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.166+0000 7fe1d638c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1d00ffd10 0x7fe1d019ae30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:42.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.166+0000 7fe1d638c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1d00ffd10 0x7fe1d019ae30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:36446/0 (socket says 192.168.123.105:36446) 2026-03-09T15:06:42.167 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.167+0000 7fe1d638c700 1 -- 192.168.123.105:0/3977451235 learned_addr learned my addr 192.168.123.105:0/3977451235 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:42.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.167+0000 7fe1d638c700 1 -- 192.168.123.105:0/3977451235 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe1d0100620 msgr2=0x7fe1d0193e50 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:06:42.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.167+0000 7fe1d638c700 1 --2- 192.168.123.105:0/3977451235 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe1d0100620 0x7fe1d0193e50 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.167+0000 7fe1d638c700 1 -- 192.168.123.105:0/3977451235 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe1cc0097e0 con 0x7fe1d00ffd10 2026-03-09T15:06:42.168 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.167+0000 7fe1d638c700 1 --2- 192.168.123.105:0/3977451235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1d00ffd10 0x7fe1d019ae30 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7fe1cc000c00 tx=0x7fe1cc00b960 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:42.169 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.168+0000 7fe1c77fe700 1 -- 192.168.123.105:0/3977451235 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe1cc01d070 con 0x7fe1d00ffd10 2026-03-09T15:06:42.169 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.168+0000 7fe1c77fe700 1 -- 192.168.123.105:0/3977451235 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fe1cc004500 con 0x7fe1d00ffd10 2026-03-09T15:06:42.169 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.168+0000 7fe1c77fe700 1 -- 192.168.123.105:0/3977451235 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe1cc022470 con 0x7fe1d00ffd10 2026-03-09T15:06:42.169 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.168+0000 7fe1d85f0700 1 -- 192.168.123.105:0/3977451235 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe1d01946a0 con 0x7fe1d00ffd10 2026-03-09T15:06:42.169 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.169+0000 7fe1d85f0700 1 -- 192.168.123.105:0/3977451235 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe1d0194b70 con 0x7fe1d00ffd10 2026-03-09T15:06:42.171 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.170+0000 7fe1d85f0700 1 -- 
192.168.123.105:0/3977451235 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe1d004ea50 con 0x7fe1d00ffd10 2026-03-09T15:06:42.171 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.170+0000 7fe1c77fe700 1 -- 192.168.123.105:0/3977451235 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fe1cc003680 con 0x7fe1d00ffd10 2026-03-09T15:06:42.174 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.170+0000 7fe1c77fe700 1 --2- 192.168.123.105:0/3977451235 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe1bc077660 0x7fe1bc079b10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:42.175 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.171+0000 7fe1c77fe700 1 -- 192.168.123.105:0/3977451235 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(87..87 src has 1..87) v4 ==== 6635+0+0 (secure 0 0 0) 0x7fe1cc027020 con 0x7fe1d00ffd10 2026-03-09T15:06:42.175 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.172+0000 7fe1d5b8b700 1 --2- 192.168.123.105:0/3977451235 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe1bc077660 0x7fe1bc079b10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:42.175 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.172+0000 7fe1d5b8b700 1 --2- 192.168.123.105:0/3977451235 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe1bc077660 0x7fe1bc079b10 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7fe1d0195640 tx=0x7fe1c0006c60 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:42.175 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.174+0000 7fe1c77fe700 1 
-- 192.168.123.105:0/3977451235 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe1cc05fe70 con 0x7fe1d00ffd10 2026-03-09T15:06:42.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.334+0000 7fe1d85f0700 1 -- 192.168.123.105:0/3977451235 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fe1d0195430 con 0x7fe1d00ffd10 2026-03-09T15:06:42.336 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.336+0000 7fe1c77fe700 1 -- 192.168.123.105:0/3977451235 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 22 v22) v1 ==== 76+0+1754 (secure 0 0 0) 0x7fe1cc05fe70 con 0x7fe1d00ffd10 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:e22 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:btime 2026-03-09T15:06:41:935076+0000 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:legacy client fscid: 1 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:Filesystem 'cephfs' (1) 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:fs_name cephfs 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:epoch 22 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:flags 12 joinable allow_snaps allow_multimds_snaps 2026-03-09T15:06:42.337 
INFO:teuthology.orchestra.run.vm05.stdout:created 2026-03-09T14:58:23.182447+0000 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:modified 2026-03-09T15:06:41.935075+0000 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:tableserver 0 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:root 0 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:session_timeout 60 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:session_autoclose 300 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:max_file_size 1099511627776 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:max_xattr_size 65536 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:required_client_features {} 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:last_failure 0 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:last_failure_osd_epoch 87 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:max_mds 1 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:in 0 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:up {0=24317} 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:failed 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:damaged 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:stopped 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:data_pools [3] 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:metadata_pool 2 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:inline_data enabled 
2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:balancer 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:bal_rank_mask -1 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:standby_count_wanted 1 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:qdb_cluster leader: 24317 members: 24317 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.jrhwzz{0:24317} state up:active seq 127 join_fscid=1 addr [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:Standby daemons: 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.ohmitn{-1:34270} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.109:6824/4257546649,v1:192.168.123.109:6825/4257546649] compat {c=[1],r=[1],i=[7ff]}] 2026-03-09T15:06:42.337 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.nrocqt{-1:34272} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.105:6826/3005307080,v1:192.168.123.105:6827/3005307080] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T15:06:42.339 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.339+0000 7fe1c57fa700 1 -- 192.168.123.105:0/3977451235 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe1bc077660 msgr2=0x7fe1bc079b10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:42.339 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.339+0000 7fe1c57fa700 1 --2- 192.168.123.105:0/3977451235 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe1bc077660 0x7fe1bc079b10 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto 
rx=0x7fe1d0195640 tx=0x7fe1c0006c60 comp rx=0 tx=0).stop 2026-03-09T15:06:42.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.339+0000 7fe1c57fa700 1 -- 192.168.123.105:0/3977451235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1d00ffd10 msgr2=0x7fe1d019ae30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:42.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.339+0000 7fe1c57fa700 1 --2- 192.168.123.105:0/3977451235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1d00ffd10 0x7fe1d019ae30 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7fe1cc000c00 tx=0x7fe1cc00b960 comp rx=0 tx=0).stop 2026-03-09T15:06:42.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.339+0000 7fe1c57fa700 1 -- 192.168.123.105:0/3977451235 shutdown_connections 2026-03-09T15:06:42.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.339+0000 7fe1c57fa700 1 --2- 192.168.123.105:0/3977451235 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe1bc077660 0x7fe1bc079b10 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.339+0000 7fe1c57fa700 1 --2- 192.168.123.105:0/3977451235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe1d00ffd10 0x7fe1d019ae30 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.339+0000 7fe1c57fa700 1 --2- 192.168.123.105:0/3977451235 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe1d0100620 0x7fe1d0193e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.339+0000 7fe1c57fa700 1 -- 192.168.123.105:0/3977451235 >> 
192.168.123.105:0/3977451235 conn(0x7fe1d00747e0 msgr2=0x7fe1d010acf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:42.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.339+0000 7fe1c57fa700 1 -- 192.168.123.105:0/3977451235 shutdown_connections 2026-03-09T15:06:42.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.339+0000 7fe1c57fa700 1 -- 192.168.123.105:0/3977451235 wait complete. 2026-03-09T15:06:42.342 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 22 2026-03-09T15:06:42.362 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:41 vm09.local ceph-mon[98742]: mds.? [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] up:rejoin 2026-03-09T15:06:42.362 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:41 vm09.local ceph-mon[98742]: fsmap cephfs:1/1 {0=cephfs.vm09.jrhwzz=up:rejoin} 2 up:standby 2026-03-09T15:06:42.362 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:41 vm09.local ceph-mon[98742]: daemon mds.cephfs.vm09.jrhwzz is now active in filesystem cephfs as rank 0 2026-03-09T15:06:42.362 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:06:42.433 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.431+0000 7fe481ede700 1 -- 192.168.123.105:0/1017078328 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe47c102530 msgr2=0x7fe47c1029a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:42.433 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.432+0000 7fe481ede700 1 --2- 192.168.123.105:0/1017078328 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe47c102530 0x7fe47c1029a0 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7fe470009b50 tx=0x7fe470009e60 comp rx=0 tx=0).stop 2026-03-09T15:06:42.433 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.432+0000 7fe481ede700 1 -- 192.168.123.105:0/1017078328 shutdown_connections 2026-03-09T15:06:42.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.432+0000 7fe481ede700 1 --2- 192.168.123.105:0/1017078328 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe47c102530 0x7fe47c1029a0 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.432+0000 7fe481ede700 1 --2- 192.168.123.105:0/1017078328 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe47c108550 0x7fe47c108920 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.433+0000 7fe481ede700 1 -- 192.168.123.105:0/1017078328 >> 192.168.123.105:0/1017078328 conn(0x7fe47c0fe070 msgr2=0x7fe47c100480 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:06:42.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.433+0000 7fe481ede700 1 -- 192.168.123.105:0/1017078328 shutdown_connections 2026-03-09T15:06:42.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.433+0000 7fe481ede700 1 -- 192.168.123.105:0/1017078328 wait complete. 
2026-03-09T15:06:42.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.433+0000 7fe481ede700 1 Processor -- start 2026-03-09T15:06:42.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.434+0000 7fe481ede700 1 -- start start 2026-03-09T15:06:42.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.434+0000 7fe481ede700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe47c102530 0x7fe47c198240 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:42.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.434+0000 7fe47b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe47c102530 0x7fe47c198240 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:42.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.434+0000 7fe47b7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe47c102530 0x7fe47c198240 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:36456/0 (socket says 192.168.123.105:36456) 2026-03-09T15:06:42.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.434+0000 7fe47b7fe700 1 -- 192.168.123.105:0/365013093 learned_addr learned my addr 192.168.123.105:0/365013093 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:06:42.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.434+0000 7fe481ede700 1 --2- 192.168.123.105:0/365013093 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe47c108550 0x7fe47c198780 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:42.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.434+0000 7fe481ede700 1 -- 192.168.123.105:0/365013093 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe47c198e60 con 0x7fe47c102530 2026-03-09T15:06:42.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.434+0000 7fe481ede700 1 -- 192.168.123.105:0/365013093 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe47c19c940 con 0x7fe47c108550 2026-03-09T15:06:42.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.435+0000 7fe47b7fe700 1 -- 192.168.123.105:0/365013093 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe47c108550 msgr2=0x7fe47c198780 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:42.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.435+0000 7fe47b7fe700 1 --2- 192.168.123.105:0/365013093 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe47c108550 0x7fe47c198780 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.435+0000 7fe47b7fe700 1 -- 192.168.123.105:0/365013093 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe4700097e0 con 0x7fe47c102530 2026-03-09T15:06:42.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.435+0000 7fe47b7fe700 1 --2- 192.168.123.105:0/365013093 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe47c102530 0x7fe47c198240 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7fe46c00ed70 tx=0x7fe46c00c5b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:42.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.437+0000 7fe478ff9700 1 -- 192.168.123.105:0/365013093 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe46c00cd70 con 0x7fe47c102530 2026-03-09T15:06:42.439 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.437+0000 7fe478ff9700 1 -- 192.168.123.105:0/365013093 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fe46c010910 con 0x7fe47c102530 2026-03-09T15:06:42.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.437+0000 7fe478ff9700 1 -- 192.168.123.105:0/365013093 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe46c018980 con 0x7fe47c102530 2026-03-09T15:06:42.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.437+0000 7fe481ede700 1 -- 192.168.123.105:0/365013093 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe47c19cc20 con 0x7fe47c102530 2026-03-09T15:06:42.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.437+0000 7fe481ede700 1 -- 192.168.123.105:0/365013093 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe47c19d0f0 con 0x7fe47c102530 2026-03-09T15:06:42.440 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.438+0000 7fe481ede700 1 -- 192.168.123.105:0/365013093 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe47c10a9e0 con 0x7fe47c102530 2026-03-09T15:06:42.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.440+0000 7fe478ff9700 1 -- 192.168.123.105:0/365013093 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fe46c010430 con 0x7fe47c102530 2026-03-09T15:06:42.441 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.440+0000 7fe478ff9700 1 --2- 192.168.123.105:0/365013093 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe464077710 0x7fe464079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:06:42.441 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.441+0000 7fe478ff9700 1 -- 192.168.123.105:0/365013093 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(87..87 src has 1..87) v4 ==== 6635+0+0 (secure 0 0 0) 0x7fe46c014070 con 0x7fe47c102530 2026-03-09T15:06:42.444 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.444+0000 7fe478ff9700 1 -- 192.168.123.105:0/365013093 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe46c062c50 con 0x7fe47c102530 2026-03-09T15:06:42.446 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.445+0000 7fe47affd700 1 --2- 192.168.123.105:0/365013093 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe464077710 0x7fe464079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:06:42.448 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.447+0000 7fe47affd700 1 --2- 192.168.123.105:0/365013093 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe464077710 0x7fe464079bc0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fe470009540 tx=0x7fe47000b540 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:06:42.594 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.592+0000 7fe481ede700 1 -- 192.168.123.105:0/365013093 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fe47c103c50 con 0x7fe464077710 2026-03-09T15:06:42.595 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.594+0000 7fe478ff9700 1 -- 192.168.123.105:0/365013093 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+416 (secure 0 0 0) 
0x7fe47c103c50 con 0x7fe464077710 2026-03-09T15:06:42.595 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:06:42.595 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-09T15:06:42.595 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-09T15:06:42.595 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T15:06:42.595 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [ 2026-03-09T15:06:42.595 INFO:teuthology.orchestra.run.vm05.stdout: "mgr", 2026-03-09T15:06:42.595 INFO:teuthology.orchestra.run.vm05.stdout: "osd", 2026-03-09T15:06:42.595 INFO:teuthology.orchestra.run.vm05.stdout: "mon", 2026-03-09T15:06:42.596 INFO:teuthology.orchestra.run.vm05.stdout: "crash" 2026-03-09T15:06:42.596 INFO:teuthology.orchestra.run.vm05.stdout: ], 2026-03-09T15:06:42.596 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "13/23 daemons upgraded", 2026-03-09T15:06:42.596 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Currently upgrading mds daemons", 2026-03-09T15:06:42.596 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:06:42.596 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:06:42.598 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.598+0000 7fe4627fc700 1 -- 192.168.123.105:0/365013093 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe464077710 msgr2=0x7fe464079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:42.598 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.598+0000 7fe4627fc700 1 --2- 192.168.123.105:0/365013093 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe464077710 0x7fe464079bc0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fe470009540 tx=0x7fe47000b540 comp rx=0 tx=0).stop 
2026-03-09T15:06:42.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.598+0000 7fe4627fc700 1 -- 192.168.123.105:0/365013093 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe47c102530 msgr2=0x7fe47c198240 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:06:42.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.598+0000 7fe4627fc700 1 --2- 192.168.123.105:0/365013093 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe47c102530 0x7fe47c198240 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7fe46c00ed70 tx=0x7fe46c00c5b0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.598+0000 7fe4627fc700 1 -- 192.168.123.105:0/365013093 shutdown_connections 2026-03-09T15:06:42.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.599+0000 7fe4627fc700 1 --2- 192.168.123.105:0/365013093 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe464077710 0x7fe464079bc0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.599+0000 7fe4627fc700 1 --2- 192.168.123.105:0/365013093 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe47c102530 0x7fe47c198240 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.599+0000 7fe4627fc700 1 --2- 192.168.123.105:0/365013093 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe47c108550 0x7fe47c198780 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:06:42.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.599+0000 7fe4627fc700 1 -- 192.168.123.105:0/365013093 >> 192.168.123.105:0/365013093 conn(0x7fe47c0fe070 msgr2=0x7fe47c0ffbc0 unknown :-1 
s=STATE_NONE l=0).mark_down
2026-03-09T15:06:42.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.599+0000 7fe4627fc700 1 -- 192.168.123.105:0/365013093 shutdown_connections
2026-03-09T15:06:42.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.599+0000 7fe4627fc700 1 -- 192.168.123.105:0/365013093 wait complete.
2026-03-09T15:06:42.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.679+0000 7f342331a700 1 -- 192.168.123.105:0/106727203 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f341c071e40 msgr2=0x7f341c0722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:06:42.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.679+0000 7f342331a700 1 --2- 192.168.123.105:0/106727203 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f341c071e40 0x7f341c0722b0 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f341400d3f0 tx=0x7f341400d700 comp rx=0 tx=0).stop
2026-03-09T15:06:42.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.680+0000 7f342331a700 1 -- 192.168.123.105:0/106727203 shutdown_connections
2026-03-09T15:06:42.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.680+0000 7f342331a700 1 --2- 192.168.123.105:0/106727203 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f341c071e40 0x7f341c0722b0 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:06:42.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.680+0000 7f342331a700 1 --2- 192.168.123.105:0/106727203 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f341c10c8b0 0x7f341c10cc80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:06:42.681 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.680+0000 7f342331a700 1 -- 192.168.123.105:0/106727203 >> 192.168.123.105:0/106727203 conn(0x7f341c06c6c0 msgr2=0x7f341c06cac0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:06:42.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.680+0000 7f342331a700 1 -- 192.168.123.105:0/106727203 shutdown_connections
2026-03-09T15:06:42.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.680+0000 7f342331a700 1 -- 192.168.123.105:0/106727203 wait complete.
2026-03-09T15:06:42.682 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.681+0000 7f342331a700 1 Processor -- start
2026-03-09T15:06:42.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.681+0000 7f342331a700 1 -- start start
2026-03-09T15:06:42.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.681+0000 7f342331a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f341c10c8b0 0x7f341c1327a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:06:42.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.681+0000 7f342331a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f341c132ce0 0x7f341c133150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:06:42.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.681+0000 7f342331a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f341c07eef0 con 0x7f341c132ce0
2026-03-09T15:06:42.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.681+0000 7f342331a700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f341c07f060 con 0x7f341c10c8b0
2026-03-09T15:06:42.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.681+0000 7f34210b6700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f341c10c8b0 0x7f341c1327a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:06:42.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.681+0000 7f34210b6700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f341c10c8b0 0x7f341c1327a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:44984/0 (socket says 192.168.123.105:44984)
2026-03-09T15:06:42.688 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.681+0000 7f34210b6700 1 -- 192.168.123.105:0/3837382624 learned_addr learned my addr 192.168.123.105:0/3837382624 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T15:06:42.688 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.681+0000 7f34210b6700 1 -- 192.168.123.105:0/3837382624 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f341c132ce0 msgr2=0x7f341c133150 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:06:42.688 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.681+0000 7f34210b6700 1 --2- 192.168.123.105:0/3837382624 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f341c132ce0 0x7f341c133150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:06:42.688 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.681+0000 7f34210b6700 1 -- 192.168.123.105:0/3837382624 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3414007ed0 con 0x7f341c10c8b0
2026-03-09T15:06:42.688 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.686+0000 7f34210b6700 1 --2- 192.168.123.105:0/3837382624 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f341c10c8b0 0x7f341c1327a0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f341800b770 tx=0x7f341800bb30 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:06:42.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.687+0000 7f34127fc700 1 -- 192.168.123.105:0/3837382624 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f341800f820 con 0x7f341c10c8b0
2026-03-09T15:06:42.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.687+0000 7f34127fc700 1 -- 192.168.123.105:0/3837382624 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f341800fe60 con 0x7f341c10c8b0
2026-03-09T15:06:42.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.687+0000 7f34127fc700 1 -- 192.168.123.105:0/3837382624 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f341800d610 con 0x7f341c10c8b0
2026-03-09T15:06:42.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.687+0000 7f342331a700 1 -- 192.168.123.105:0/3837382624 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f341c07f2e0 con 0x7f341c10c8b0
2026-03-09T15:06:42.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.687+0000 7f342331a700 1 -- 192.168.123.105:0/3837382624 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f341c07f830 con 0x7f341c10c8b0
2026-03-09T15:06:42.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.687+0000 7f342331a700 1 -- 192.168.123.105:0/3837382624 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f341c04f2a0 con 0x7f341c10c8b0
2026-03-09T15:06:42.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.689+0000 7f34127fc700 1 -- 192.168.123.105:0/3837382624 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f341801e030 con 0x7f341c10c8b0
2026-03-09T15:06:42.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.689+0000 7f34127fc700 1 --2- 192.168.123.105:0/3837382624 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3408077910 0x7f3408079dc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:06:42.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.689+0000 7f34127fc700 1 -- 192.168.123.105:0/3837382624 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(87..87 src has 1..87) v4 ==== 6635+0+0 (secure 0 0 0) 0x7f34180999d0 con 0x7f341c10c8b0
2026-03-09T15:06:42.692 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.690+0000 7f34208b5700 1 --2- 192.168.123.105:0/3837382624 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3408077910 0x7f3408079dc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:06:42.697 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.691+0000 7f34208b5700 1 --2- 192.168.123.105:0/3837382624 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3408077910 0x7f3408079dc0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f341400db80 tx=0x7f3414006040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:06:42.698 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.691+0000 7f34127fc700 1 -- 192.168.123.105:0/3837382624 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3418062090 con 0x7f341c10c8b0
2026-03-09T15:06:42.876 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.875+0000 7f342331a700 1 -- 192.168.123.105:0/3837382624 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f341c04ea50 con 0x7f341c10c8b0
2026-03-09T15:06:42.877 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_WARN 1 filesystem with deprecated feature inline_data; Degraded data redundancy: 159/264 objects degraded (60.227%), 1 pg degraded
2026-03-09T15:06:42.877 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data
2026-03-09T15:06:42.877 INFO:teuthology.orchestra.run.vm05.stdout: fs cephfs has deprecated feature inline_data enabled.
2026-03-09T15:06:42.877 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 159/264 objects degraded (60.227%), 1 pg degraded
2026-03-09T15:06:42.877 INFO:teuthology.orchestra.run.vm05.stdout: pg 3.d is active+recovery_wait+undersized+degraded+remapped, acting [2,4]
2026-03-09T15:06:42.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.876+0000 7f34127fc700 1 -- 192.168.123.105:0/3837382624 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+451 (secure 0 0 0) 0x7f34180617e0 con 0x7f341c10c8b0
2026-03-09T15:06:42.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.878+0000 7f342331a700 1 -- 192.168.123.105:0/3837382624 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3408077910 msgr2=0x7f3408079dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:06:42.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.878+0000 7f342331a700 1 --2- 192.168.123.105:0/3837382624 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3408077910 0x7f3408079dc0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f341400db80 tx=0x7f3414006040 comp rx=0 tx=0).stop
2026-03-09T15:06:42.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.878+0000 7f342331a700 1 -- 192.168.123.105:0/3837382624 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f341c10c8b0 msgr2=0x7f341c1327a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:06:42.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.878+0000 7f342331a700 1 --2- 192.168.123.105:0/3837382624 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f341c10c8b0 0x7f341c1327a0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7f341800b770 tx=0x7f341800bb30 comp rx=0 tx=0).stop
2026-03-09T15:06:42.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.878+0000 7f342331a700 1 -- 192.168.123.105:0/3837382624 shutdown_connections
2026-03-09T15:06:42.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.878+0000 7f342331a700 1 --2- 192.168.123.105:0/3837382624 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3408077910 0x7f3408079dc0 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:06:42.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.878+0000 7f342331a700 1 --2- 192.168.123.105:0/3837382624 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f341c10c8b0 0x7f341c1327a0 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:06:42.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.878+0000 7f342331a700 1 --2- 192.168.123.105:0/3837382624 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f341c132ce0 0x7f341c133150 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:06:42.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.878+0000 7f342331a700 1 -- 192.168.123.105:0/3837382624 >> 192.168.123.105:0/3837382624 conn(0x7f341c06c6c0 msgr2=0x7f341c06ff90 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:06:42.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.879+0000 7f342331a700 1 -- 192.168.123.105:0/3837382624 shutdown_connections
2026-03-09T15:06:42.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:06:42.879+0000 7f342331a700 1 -- 192.168.123.105:0/3837382624 wait complete.
2026-03-09T15:06:42.948 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: from='client.44215 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:06:43.234 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: pgmap v139: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 25 MiB/s rd, 204 B/s wr, 11 op/s; 159/264 objects degraded (60.227%); 0 B/s, 9 objects/s recovering
2026-03-09T15:06:43.234 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: from='client.44219 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:06:43.234 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: from='client.34280 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:06:43.234 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
2026-03-09T15:06:43.234 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: mds.? [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] up:active
2026-03-09T15:06:43.234 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm09.jrhwzz=up:active} 2 up:standby
2026-03-09T15:06:43.234 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3082632128' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:06:43.234 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3977451235' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-09T15:06:43.234 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:43.234 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:43.234 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:06:43.234 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:42 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3837382624' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: from='client.44215 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: pgmap v139: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 25 MiB/s rd, 204 B/s wr, 11 op/s; 159/264 objects degraded (60.227%); 0 B/s, 9 objects/s recovering
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: from='client.44219 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: from='client.34280 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: mds.? [v2:192.168.123.109:6826/2393799497,v1:192.168.123.109:6827/2393799497] up:active
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm09.jrhwzz=up:active} 2 up:standby
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3082632128' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3977451235' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:06:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:42 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3837382624' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-09T15:06:44.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:43 vm05.local ceph-mon[116516]: from='client.34290 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:06:44.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:43 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:44.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:43 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:44.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:43 vm05.local ceph-mon[116516]: mds.? [v2:192.168.123.105:6828/3529134522,v1:192.168.123.105:6829/3529134522] up:boot
2026-03-09T15:06:44.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:43 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm09.jrhwzz=up:active} 3 up:standby
2026-03-09T15:06:44.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:43 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.rrcyql"}]: dispatch
2026-03-09T15:06:44.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:43 vm09.local ceph-mon[98742]: from='client.34290 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:06:44.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:43 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:44.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:43 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:44.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:43 vm09.local ceph-mon[98742]: mds.? [v2:192.168.123.105:6828/3529134522,v1:192.168.123.105:6829/3529134522] up:boot
2026-03-09T15:06:44.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:43 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm09.jrhwzz=up:active} 3 up:standby
2026-03-09T15:06:44.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:43 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm05.rrcyql"}]: dispatch
2026-03-09T15:06:45.012 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:45 vm05.local ceph-mon[116516]: pgmap v140: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 25 MiB/s rd, 204 B/s wr, 11 op/s; 159/264 objects degraded (60.227%); 0 B/s, 9 objects/s recovering
2026-03-09T15:06:45.277 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:45 vm05.local ceph-mon[116516]: osdmap e88: 6 total, 6 up, 6 in
2026-03-09T15:06:45.278 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:45.278 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:45.278 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:45.278 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:45 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:45.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:45 vm09.local ceph-mon[98742]: pgmap v140: 65 pgs: 1 active+recovery_wait+undersized+degraded+remapped, 1 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 25 MiB/s rd, 204 B/s wr, 11 op/s; 159/264 objects degraded (60.227%); 0 B/s, 9 objects/s recovering
2026-03-09T15:06:45.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:45 vm09.local ceph-mon[98742]: osdmap e88: 6 total, 6 up, 6 in
2026-03-09T15:06:45.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:45.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:45.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:45.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:45 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:46.296 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:46 vm09.local ceph-mon[98742]: osdmap e89: 6 total, 6 up, 6 in
2026-03-09T15:06:46.297 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:46 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:46.297 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:46 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:46.297 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:46 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:06:46.297 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:46 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:06:46.297 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:46 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:46.297 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:46 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:06:46.297 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:46 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:06:46.297 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:46 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:06:46.297 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:46 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:06:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:46 vm05.local ceph-mon[116516]: osdmap e89: 6 total, 6 up, 6 in
2026-03-09T15:06:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:06:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:06:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:06:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:06:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:06:46.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:46 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:06:47.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:47 vm09.local ceph-mon[98742]: pgmap v143: 65 pgs: 2 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 26 MiB/s rd, 6.2 KiB/s wr, 14 op/s; 0 B/s, 7 objects/s recovering
2026-03-09T15:06:47.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:47 vm09.local ceph-mon[98742]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 159/264 objects degraded (60.227%), 1 pg degraded)
2026-03-09T15:06:47.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:47 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:06:47.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:47 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:47.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:47 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.ohmitn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T15:06:47.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:47 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:06:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:47 vm05.local ceph-mon[116516]: pgmap v143: 65 pgs: 2 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 26 MiB/s rd, 6.2 KiB/s wr, 14 op/s; 0 B/s, 7 objects/s recovering
2026-03-09T15:06:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:47 vm05.local ceph-mon[116516]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 159/264 objects degraded (60.227%), 1 pg degraded)
2026-03-09T15:06:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:47 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:06:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:47 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:47 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.ohmitn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-09T15:06:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:47 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:06:48.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:48 vm05.local ceph-mon[116516]: Upgrade: Updating mds.cephfs.vm09.ohmitn
2026-03-09T15:06:48.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:48 vm05.local ceph-mon[116516]: Deploying daemon mds.cephfs.vm09.ohmitn on vm09
2026-03-09T15:06:48.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:48 vm05.local ceph-mon[116516]: osdmap e90: 6 total, 6 up, 6 in
2026-03-09T15:06:48.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:48 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm09.jrhwzz=up:active} 2 up:standby
2026-03-09T15:06:48.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:48 vm09.local ceph-mon[98742]: Upgrade: Updating mds.cephfs.vm09.ohmitn
2026-03-09T15:06:48.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:48 vm09.local ceph-mon[98742]: Deploying daemon mds.cephfs.vm09.ohmitn on vm09
2026-03-09T15:06:48.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:48 vm09.local ceph-mon[98742]: osdmap e90: 6 total, 6 up, 6 in
2026-03-09T15:06:48.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:48 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm09.jrhwzz=up:active} 2 up:standby
2026-03-09T15:06:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:49 vm05.local ceph-mon[116516]: pgmap v145: 65 pgs: 2 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 8.7 MiB/s rd, 8.3 KiB/s wr, 12 op/s; 0 B/s, 1 objects/s recovering
2026-03-09T15:06:49.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:49 vm09.local ceph-mon[98742]: pgmap v145: 65 pgs: 2 active+recovering+undersized+remapped, 63 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 8.7 MiB/s rd, 8.3 KiB/s wr, 12 op/s; 0 B/s, 1 objects/s recovering
2026-03-09T15:06:50.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:50 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch
2026-03-09T15:06:50.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:50 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch
2026-03-09T15:06:51.107 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:51 vm09.local ceph-mon[98742]: pgmap v146: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 688 KiB/s rd, 8.2 KiB/s wr, 9 op/s; 0 B/s, 7 objects/s recovering
2026-03-09T15:06:51.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:51 vm05.local ceph-mon[116516]: pgmap v146: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 688 KiB/s rd, 8.2 KiB/s wr, 9 op/s; 0 B/s, 7 objects/s recovering
2026-03-09T15:06:52.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:52 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:52.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:52 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:52.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:52 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:06:52.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:52 vm05.local ceph-mon[116516]: pgmap v147: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 4.6 KiB/s rd, 6.6 KiB/s wr, 7 op/s; 0 B/s, 12 objects/s recovering
2026-03-09T15:06:52.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:52 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:52.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:52 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:52.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:52 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:52.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:52 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:52.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:52 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:06:52.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:52 vm09.local ceph-mon[98742]: pgmap v147: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 4.6 KiB/s rd, 6.6 KiB/s wr, 7 op/s; 0 B/s, 12 objects/s recovering
2026-03-09T15:06:52.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:52 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:52.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:52 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:53.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:53 vm05.local ceph-mon[116516]: mds.? [v2:192.168.123.109:6824/2799240855,v1:192.168.123.109:6825/2799240855] up:boot
2026-03-09T15:06:53.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:53 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm09.jrhwzz=up:active} 3 up:standby
2026-03-09T15:06:53.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:53 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch
2026-03-09T15:06:53.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:53 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:53.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:53 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:53.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:53 vm09.local ceph-mon[98742]: mds.? [v2:192.168.123.109:6824/2799240855,v1:192.168.123.109:6825/2799240855] up:boot
2026-03-09T15:06:53.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:53 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm09.jrhwzz=up:active} 3 up:standby
2026-03-09T15:06:53.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:53 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.ohmitn"}]: dispatch
2026-03-09T15:06:53.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:53 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:53.580 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:53 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm05[116512]: 2026-03-09T15:06:54.366+0000 7fc0654ee640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: pgmap v148: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 1.1 KiB/s rd, 1 op/s; 0 B/s, 10 objects/s recovering
2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: Upgrade: Updating mds.cephfs.vm09.jrhwzz 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.jrhwzz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: Deploying daemon mds.cephfs.vm09.jrhwzz on vm09 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 
vm05.local ceph-mon[116516]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: osdmap e91: 6 total, 6 up, 6 in 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: Standby daemon mds.cephfs.vm05.nrocqt assigned to filesystem cephfs as rank 0 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-09T15:06:54.805 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:54 vm05.local ceph-mon[116516]: fsmap cephfs:1/1 {0=cephfs.vm05.nrocqt=up:replay} 2 up:standby 2026-03-09T15:06:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: pgmap v148: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 1.1 KiB/s rd, 1 op/s; 0 B/s, 10 objects/s recovering 2026-03-09T15:06:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:06:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:06:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:06:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: Upgrade: Updating mds.cephfs.vm09.jrhwzz 2026-03-09T15:06:54.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:54.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.vm09.jrhwzz", "caps": ["mon", "profile mds", "osd", "allow 
rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-09T15:06:54.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:06:54.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: Deploying daemon mds.cephfs.vm09.jrhwzz on vm09 2026-03-09T15:06:54.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: Health check failed: 1 filesystem is degraded (FS_DEGRADED) 2026-03-09T15:06:54.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN) 2026-03-09T15:06:54.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: osdmap e91: 6 total, 6 up, 6 in 2026-03-09T15:06:54.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: Standby daemon mds.cephfs.vm05.nrocqt assigned to filesystem cephfs as rank 0 2026-03-09T15:06:54.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-09T15:06:54.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: fsmap cephfs:0/1 3 up:standby, 1 failed 2026-03-09T15:06:54.867 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:54 vm09.local ceph-mon[98742]: fsmap cephfs:1/1 {0=cephfs.vm05.nrocqt=up:replay} 2 up:standby 2026-03-09T15:06:56.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:56 vm05.local ceph-mon[116516]: pgmap v150: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 2.4 MiB/s rd, 1 op/s; 0 B/s, 15 objects/s recovering 2026-03-09T15:06:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:56 
vm09.local ceph-mon[98742]: pgmap v150: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 2.4 MiB/s rd, 1 op/s; 0 B/s, 15 objects/s recovering 2026-03-09T15:06:57.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:57.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:06:57.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:06:57.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:06:58.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:58 vm05.local ceph-mon[116516]: pgmap v151: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 8.0 MiB/s rd, 2 op/s; 0 B/s, 13 objects/s recovering 2026-03-09T15:06:58.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:58 vm09.local ceph-mon[98742]: pgmap v151: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 8.0 MiB/s rd, 2 op/s; 0 B/s, 13 objects/s recovering 2026-03-09T15:06:59.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:59 vm05.local ceph-mon[116516]: mds.? 
[v2:192.168.123.105:6826/3005307080,v1:192.168.123.105:6827/3005307080] up:reconnect 2026-03-09T15:06:59.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:59 vm05.local ceph-mon[116516]: fsmap cephfs:1/1 {0=cephfs.vm05.nrocqt=up:reconnect} 2 up:standby 2026-03-09T15:06:59.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:59 vm05.local ceph-mon[116516]: reconnect by client.14546 192.168.144.1:0/3878864280 after 0.003 2026-03-09T15:06:59.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:06:59 vm05.local ceph-mon[116516]: reconnect by client.24345 192.168.144.1:0/70265336 after 0.003 2026-03-09T15:06:59.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:59 vm09.local ceph-mon[98742]: mds.? [v2:192.168.123.105:6826/3005307080,v1:192.168.123.105:6827/3005307080] up:reconnect 2026-03-09T15:06:59.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:59 vm09.local ceph-mon[98742]: fsmap cephfs:1/1 {0=cephfs.vm05.nrocqt=up:reconnect} 2 up:standby 2026-03-09T15:06:59.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:59 vm09.local ceph-mon[98742]: reconnect by client.14546 192.168.144.1:0/3878864280 after 0.003 2026-03-09T15:06:59.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:06:59 vm09.local ceph-mon[98742]: reconnect by client.24345 192.168.144.1:0/70265336 after 0.003 2026-03-09T15:07:01.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:00 vm05.local ceph-mon[116516]: pgmap v152: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 19 MiB/s rd, 6 op/s; 0 B/s, 9 objects/s recovering 2026-03-09T15:07:01.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:00 vm05.local ceph-mon[116516]: mds.? 
[v2:192.168.123.105:6826/3005307080,v1:192.168.123.105:6827/3005307080] up:rejoin 2026-03-09T15:07:01.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:00 vm05.local ceph-mon[116516]: fsmap cephfs:1/1 {0=cephfs.vm05.nrocqt=up:rejoin} 2 up:standby 2026-03-09T15:07:01.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:00 vm05.local ceph-mon[116516]: daemon mds.cephfs.vm05.nrocqt is now active in filesystem cephfs as rank 0 2026-03-09T15:07:01.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:00 vm09.local ceph-mon[98742]: pgmap v152: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 19 MiB/s rd, 6 op/s; 0 B/s, 9 objects/s recovering 2026-03-09T15:07:01.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:00 vm09.local ceph-mon[98742]: mds.? [v2:192.168.123.105:6826/3005307080,v1:192.168.123.105:6827/3005307080] up:rejoin 2026-03-09T15:07:01.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:00 vm09.local ceph-mon[98742]: fsmap cephfs:1/1 {0=cephfs.vm05.nrocqt=up:rejoin} 2 up:standby 2026-03-09T15:07:01.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:00 vm09.local ceph-mon[98742]: daemon mds.cephfs.vm05.nrocqt is now active in filesystem cephfs as rank 0 2026-03-09T15:07:02.437 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:02 vm09.local ceph-mon[98742]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-09T15:07:02.437 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:02 vm09.local ceph-mon[98742]: mds.? 
[v2:192.168.123.105:6826/3005307080,v1:192.168.123.105:6827/3005307080] up:active 2026-03-09T15:07:02.437 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:02 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 2 up:standby 2026-03-09T15:07:02.437 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:02 vm09.local ceph-mon[98742]: osdmap e92: 6 total, 6 up, 6 in 2026-03-09T15:07:02.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:02 vm05.local ceph-mon[116516]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded) 2026-03-09T15:07:02.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:02 vm05.local ceph-mon[116516]: mds.? [v2:192.168.123.105:6826/3005307080,v1:192.168.123.105:6827/3005307080] up:active 2026-03-09T15:07:02.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:02 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 2 up:standby 2026-03-09T15:07:02.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:02 vm05.local ceph-mon[116516]: osdmap e92: 6 total, 6 up, 6 in 2026-03-09T15:07:03.279 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:03 vm09.local ceph-mon[98742]: pgmap v154: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 30 MiB/s rd, 127 B/s wr, 9 op/s; 0 B/s, 12 objects/s recovering 2026-03-09T15:07:03.279 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:03 vm09.local ceph-mon[98742]: osdmap e93: 6 total, 6 up, 6 in 2026-03-09T15:07:03.279 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:03 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:03.279 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:03 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:03.279 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:03 vm09.local 
ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:07:03.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:03 vm05.local ceph-mon[116516]: pgmap v154: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 30 MiB/s rd, 127 B/s wr, 9 op/s; 0 B/s, 12 objects/s recovering 2026-03-09T15:07:03.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:03 vm05.local ceph-mon[116516]: osdmap e93: 6 total, 6 up, 6 in 2026-03-09T15:07:03.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:03 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:03.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:03 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:03.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:03 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:07:04.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:04 vm05.local ceph-mon[116516]: mds.? 
[v2:192.168.123.109:6826/632428118,v1:192.168.123.109:6827/632428118] up:boot 2026-03-09T15:07:04.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:04 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 3 up:standby 2026-03-09T15:07:04.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:04 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.jrhwzz"}]: dispatch 2026-03-09T15:07:04.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:04 vm05.local ceph-mon[116516]: pgmap v156: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 29 MiB/s rd, 255 B/s wr, 10 op/s; 0 B/s, 6 objects/s recovering 2026-03-09T15:07:04.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:04 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:04.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:04 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:04.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:04 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:04.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:04 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:04.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:04 vm09.local ceph-mon[98742]: mds.? 
[v2:192.168.123.109:6826/632428118,v1:192.168.123.109:6827/632428118] up:boot 2026-03-09T15:07:04.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:04 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 3 up:standby 2026-03-09T15:07:04.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:04 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "mds metadata", "who": "cephfs.vm09.jrhwzz"}]: dispatch 2026-03-09T15:07:04.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:04 vm09.local ceph-mon[98742]: pgmap v156: 65 pgs: 1 active+recovering+undersized+remapped, 64 active+clean; 257 MiB data, 991 MiB used, 119 GiB / 120 GiB avail; 29 MiB/s rd, 255 B/s wr, 10 op/s; 0 B/s, 6 objects/s recovering 2026-03-09T15:07:04.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:04 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:04.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:04 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:04.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:04 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:04.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:04 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local 
ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:07:05 vm05.local ceph-mon[116516]: Upgrade: Setting container_image for all mds 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.nrocqt"}]: dispatch 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.nrocqt"}]': finished 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.rrcyql"}]: dispatch 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.rrcyql"}]': finished 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm09.jrhwzz"}]: dispatch 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm09.jrhwzz"}]': finished 2026-03-09T15:07:06.305 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm09.ohmitn"}]: dispatch 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm09.ohmitn"}]': finished 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: Upgrade: Setting filesystem cephfs Joinable 2026-03-09T15:07:06.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:05 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "joinable", "val": "true"}]: dispatch 2026-03-09T15:07:06.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:06.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:06.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:07:06.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:07:06.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 
2026-03-09T15:07:06.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: Upgrade: Setting container_image for all mds 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.nrocqt"}]: dispatch 2026-03-09T15:07:06.367 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.nrocqt"}]': finished 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.rrcyql"}]: dispatch 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm05.rrcyql"}]': finished 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm09.jrhwzz"}]: dispatch 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm09.jrhwzz"}]': finished 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm09.ohmitn"}]: dispatch 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds.cephfs.vm09.ohmitn"}]': finished 2026-03-09T15:07:06.367 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: Upgrade: Setting filesystem cephfs Joinable 2026-03-09T15:07:06.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:05 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "joinable", "val": "true"}]: dispatch 2026-03-09T15:07:07.274 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:07 vm05.local ceph-mon[116516]: pgmap v157: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 6.2 KiB/s wr, 13 op/s; 0 B/s, 6 objects/s recovering 2026-03-09T15:07:07.275 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:07 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "joinable", "val": "true"}]': finished 2026-03-09T15:07:07.275 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:07 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 3 up:standby 2026-03-09T15:07:07.275 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:07 vm05.local ceph-mon[116516]: Upgrade: Enabling allow_standby_replay on filesystem cephfs 2026-03-09T15:07:07.275 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:07 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "1"}]: dispatch 2026-03-09T15:07:07.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:07 vm09.local ceph-mon[98742]: pgmap v157: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 6.2 KiB/s wr, 13 op/s; 0 B/s, 6 objects/s recovering 2026-03-09T15:07:07.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:07 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' 
entity='mgr.vm05.lhsexd' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "joinable", "val": "true"}]': finished 2026-03-09T15:07:07.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:07 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 3 up:standby 2026-03-09T15:07:07.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:07 vm09.local ceph-mon[98742]: Upgrade: Enabling allow_standby_replay on filesystem cephfs 2026-03-09T15:07:07.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:07 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "1"}]: dispatch 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", "val": "1"}]': finished 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 3 up:standby 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: Upgrade: Setting container_image for all rgw 2026-03-09T15:07:08.223 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: Upgrade: Setting container_image for all rbd-mirror 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T15:07:08.223 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:08 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:07:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "allow_standby_replay", 
"val": "1"}]': finished 2026-03-09T15:07:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 3 up:standby 2026-03-09T15:07:08.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T15:07:08.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:08.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:08.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: Upgrade: Setting container_image for all rgw 2026-03-09T15:07:08.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:08.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:08.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: Upgrade: Setting container_image for all rbd-mirror 2026-03-09T15:07:08.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:08.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:08.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local 
ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:08.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm05", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T15:07:08.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:08 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:07:09.118 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:09 vm09.local ceph-mon[98742]: pgmap v158: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 7.2 MiB/s rd, 6.2 KiB/s wr, 9 op/s; 0 B/s, 6 objects/s recovering 2026-03-09T15:07:09.118 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:09 vm09.local ceph-mon[98742]: Upgrade: Updating ceph-exporter.vm05 (1/2) 2026-03-09T15:07:09.118 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:09 vm09.local ceph-mon[98742]: Deploying daemon ceph-exporter.vm05 on vm05 2026-03-09T15:07:09.118 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:09 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:09.118 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:09 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:09.118 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:09 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:09.118 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:09 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 
cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T15:07:09.118 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:09 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:07:09.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:09 vm05.local ceph-mon[116516]: pgmap v158: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 7.2 MiB/s rd, 6.2 KiB/s wr, 9 op/s; 0 B/s, 6 objects/s recovering 2026-03-09T15:07:09.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:09 vm05.local ceph-mon[116516]: Upgrade: Updating ceph-exporter.vm05 (1/2) 2026-03-09T15:07:09.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:09 vm05.local ceph-mon[116516]: Deploying daemon ceph-exporter.vm05 on vm05 2026-03-09T15:07:09.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:09 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:09.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:09 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:09.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:09 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:09.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:09 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-09T15:07:09.305 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:09 
vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:07:10.061 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:10 vm09.local ceph-mon[98742]: Upgrade: Updating ceph-exporter.vm09 (2/2) 2026-03-09T15:07:10.062 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:10 vm09.local ceph-mon[98742]: Deploying daemon ceph-exporter.vm09 on vm09 2026-03-09T15:07:10.062 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:10 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:10.062 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:10 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:10.062 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:10 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:07:10.321 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:10 vm05.local ceph-mon[116516]: Upgrade: Updating ceph-exporter.vm09 (2/2) 2026-03-09T15:07:10.321 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:10 vm05.local ceph-mon[116516]: Deploying daemon ceph-exporter.vm09 on vm09 2026-03-09T15:07:10.321 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:10 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:10.321 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:10 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:10.321 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:10 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T15:07:11.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:11 vm05.local ceph-mon[116516]: pgmap v159: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 4.3 MiB/s rd, 5.6 KiB/s wr, 8 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T15:07:11.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:11.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:11.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:11 vm09.local ceph-mon[98742]: pgmap v159: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 4.3 MiB/s rd, 5.6 KiB/s wr, 8 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T15:07:11.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:11.367 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:12 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:12 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:12 vm05.local ceph-mon[116516]: pgmap v160: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 7.7 MiB/s rd, 4.9 KiB/s wr, 8 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T15:07:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:12 vm05.local 
ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:12 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:07:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:12 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:12 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:12 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:12 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:12 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:12 vm09.local ceph-mon[98742]: pgmap v160: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 7.7 MiB/s rd, 4.9 KiB/s wr, 8 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T15:07:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:12 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:12 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:07:12.616 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:12 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:12 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:12 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.043+0000 7fd665285700 1 -- 192.168.123.105:0/3340443307 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd660071e40 msgr2=0x7fd6600722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.043+0000 7fd665285700 1 --2- 192.168.123.105:0/3340443307 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd660071e40 0x7fd6600722b0 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7fd65800cd40 tx=0x7fd65800a320 comp rx=0 tx=0).stop 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.043+0000 7fd665285700 1 -- 192.168.123.105:0/3340443307 shutdown_connections 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.043+0000 7fd665285700 1 --2- 192.168.123.105:0/3340443307 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd660071e40 0x7fd6600722b0 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.043+0000 7fd665285700 1 --2- 192.168.123.105:0/3340443307 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd66010c8f0 0x7fd66010ccc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.048 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.043+0000 7fd665285700 1 -- 192.168.123.105:0/3340443307 >> 192.168.123.105:0/3340443307 conn(0x7fd66006c6c0 msgr2=0x7fd66006cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.043+0000 7fd665285700 1 -- 192.168.123.105:0/3340443307 shutdown_connections 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.043+0000 7fd665285700 1 -- 192.168.123.105:0/3340443307 wait complete. 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.043+0000 7fd665285700 1 Processor -- start 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.043+0000 7fd665285700 1 -- start start 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd665285700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd66010c8f0 0x7fd66007cdf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd665285700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd66007d330 0x7fd66007d7a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd665285700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd660081970 con 0x7fd66010c8f0 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd665285700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd660081ae0 con 0x7fd66007d330 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd65ffff700 1 --2- >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd66010c8f0 0x7fd66007cdf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd65ffff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd66010c8f0 0x7fd66007cdf0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:57110/0 (socket says 192.168.123.105:57110) 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd65ffff700 1 -- 192.168.123.105:0/2132797709 learned_addr learned my addr 192.168.123.105:0/2132797709 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd65f7fe700 1 --2- 192.168.123.105:0/2132797709 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd66007d330 0x7fd66007d7a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd65ffff700 1 -- 192.168.123.105:0/2132797709 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd66007d330 msgr2=0x7fd66007d7a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:13.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd65ffff700 1 --2- 192.168.123.105:0/2132797709 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd66007d330 0x7fd66007d7a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd65ffff700 1 -- 
192.168.123.105:0/2132797709 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd65800c9f0 con 0x7fd66010c8f0 2026-03-09T15:07:13.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd65ffff700 1 --2- 192.168.123.105:0/2132797709 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd66010c8f0 0x7fd66007cdf0 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7fd65000b770 tx=0x7fd65000ba80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:13.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd65d7fa700 1 -- 192.168.123.105:0/2132797709 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd650010840 con 0x7fd66010c8f0 2026-03-09T15:07:13.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd665285700 1 -- 192.168.123.105:0/2132797709 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd660081dc0 con 0x7fd66010c8f0 2026-03-09T15:07:13.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.044+0000 7fd665285700 1 -- 192.168.123.105:0/2132797709 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd660082310 con 0x7fd66010c8f0 2026-03-09T15:07:13.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.045+0000 7fd65d7fa700 1 -- 192.168.123.105:0/2132797709 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fd650010e80 con 0x7fd66010c8f0 2026-03-09T15:07:13.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.045+0000 7fd65d7fa700 1 -- 192.168.123.105:0/2132797709 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd65000d590 con 0x7fd66010c8f0 2026-03-09T15:07:13.052 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.048+0000 7fd665285700 1 -- 192.168.123.105:0/2132797709 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd64c005320 con 0x7fd66010c8f0 2026-03-09T15:07:13.064 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.063+0000 7fd65d7fa700 1 -- 192.168.123.105:0/2132797709 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fd6500109a0 con 0x7fd66010c8f0 2026-03-09T15:07:13.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.063+0000 7fd65d7fa700 1 --2- 192.168.123.105:0/2132797709 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd648077910 0x7fd648079dc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:13.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.063+0000 7fd65d7fa700 1 -- 192.168.123.105:0/2132797709 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fd650099860 con 0x7fd66010c8f0 2026-03-09T15:07:13.065 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.063+0000 7fd65d7fa700 1 -- 192.168.123.105:0/2132797709 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fd6500c9a90 con 0x7fd66010c8f0 2026-03-09T15:07:13.067 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.066+0000 7fd65f7fe700 1 --2- 192.168.123.105:0/2132797709 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd648077910 0x7fd648079dc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:13.073 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.071+0000 7fd65f7fe700 1 --2- 
192.168.123.105:0/2132797709 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd648077910 0x7fd648079dc0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fd65800cd10 tx=0x7fd65800a720 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:13.278 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.277+0000 7fd665285700 1 -- 192.168.123.105:0/2132797709 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fd64c000bf0 con 0x7fd648077910 2026-03-09T15:07:13.279 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.279+0000 7fd65d7fa700 1 -- 192.168.123.105:0/2132797709 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+466 (secure 0 0 0) 0x7fd64c000bf0 con 0x7fd648077910 2026-03-09T15:07:13.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.281+0000 7fd646ffd700 1 -- 192.168.123.105:0/2132797709 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd648077910 msgr2=0x7fd648079dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:13.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.281+0000 7fd646ffd700 1 --2- 192.168.123.105:0/2132797709 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd648077910 0x7fd648079dc0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fd65800cd10 tx=0x7fd65800a720 comp rx=0 tx=0).stop 2026-03-09T15:07:13.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.281+0000 7fd646ffd700 1 -- 192.168.123.105:0/2132797709 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd66010c8f0 msgr2=0x7fd66007cdf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:13.282 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.281+0000 7fd646ffd700 1 --2- 192.168.123.105:0/2132797709 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd66010c8f0 0x7fd66007cdf0 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7fd65000b770 tx=0x7fd65000ba80 comp rx=0 tx=0).stop 2026-03-09T15:07:13.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.281+0000 7fd646ffd700 1 -- 192.168.123.105:0/2132797709 shutdown_connections 2026-03-09T15:07:13.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.281+0000 7fd646ffd700 1 --2- 192.168.123.105:0/2132797709 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd648077910 0x7fd648079dc0 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.281+0000 7fd646ffd700 1 --2- 192.168.123.105:0/2132797709 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd66010c8f0 0x7fd66007cdf0 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.281+0000 7fd646ffd700 1 --2- 192.168.123.105:0/2132797709 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd66007d330 0x7fd66007d7a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.281+0000 7fd646ffd700 1 -- 192.168.123.105:0/2132797709 >> 192.168.123.105:0/2132797709 conn(0x7fd66006c6c0 msgr2=0x7fd660070000 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:13.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.282+0000 7fd646ffd700 1 -- 192.168.123.105:0/2132797709 shutdown_connections 2026-03-09T15:07:13.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.282+0000 7fd646ffd700 1 -- 
192.168.123.105:0/2132797709 wait complete. 2026-03-09T15:07:13.299 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.567+0000 7fc147f11700 1 -- 192.168.123.105:0/3016544054 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc140071e40 msgr2=0x7fc1400722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.567+0000 7fc147f11700 1 --2- 192.168.123.105:0/3016544054 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc140071e40 0x7fc1400722b0 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7fc13800cd40 tx=0x7fc13800a320 comp rx=0 tx=0).stop 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc147f11700 1 -- 192.168.123.105:0/3016544054 shutdown_connections 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc147f11700 1 --2- 192.168.123.105:0/3016544054 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc140071e40 0x7fc1400722b0 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc147f11700 1 --2- 192.168.123.105:0/3016544054 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc14010c8b0 0x7fc14010cc80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc147f11700 1 -- 192.168.123.105:0/3016544054 >> 192.168.123.105:0/3016544054 conn(0x7fc14006c6c0 msgr2=0x7fc14006cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc147f11700 1 -- 192.168.123.105:0/3016544054 shutdown_connections 2026-03-09T15:07:13.569 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc147f11700 1 -- 192.168.123.105:0/3016544054 wait complete. 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc147f11700 1 Processor -- start 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc147f11700 1 -- start start 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc147f11700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc14010c8b0 0x7fc14007ce60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc147f11700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc14007d3a0 0x7fc14007d810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc147f11700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc1400819e0 con 0x7fc14010c8b0 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc147f11700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc140081b50 con 0x7fc14007d3a0 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.568+0000 7fc1454ac700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc14007d3a0 0x7fc14007d810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.569+0000 7fc1454ac700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc14007d3a0 0x7fc14007d810 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:48232/0 (socket says 192.168.123.105:48232) 2026-03-09T15:07:13.569 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.569+0000 7fc1454ac700 1 -- 192.168.123.105:0/2256181682 learned_addr learned my addr 192.168.123.105:0/2256181682 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:07:13.570 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.569+0000 7fc1454ac700 1 -- 192.168.123.105:0/2256181682 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc14010c8b0 msgr2=0x7fc14007ce60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:13.570 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.569+0000 7fc1454ac700 1 --2- 192.168.123.105:0/2256181682 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc14010c8b0 0x7fc14007ce60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.570 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.569+0000 7fc1454ac700 1 -- 192.168.123.105:0/2256181682 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc13800c9f0 con 0x7fc14007d3a0 2026-03-09T15:07:13.570 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.569+0000 7fc1454ac700 1 --2- 192.168.123.105:0/2256181682 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc14007d3a0 0x7fc14007d810 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fc1380062a0 tx=0x7fc13800bba0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:13.570 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.570+0000 7fc136ffd700 1 -- 192.168.123.105:0/2256181682 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc13800dea0 con 
0x7fc14007d3a0 2026-03-09T15:07:13.571 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.570+0000 7fc147f11700 1 -- 192.168.123.105:0/2256181682 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc140081dd0 con 0x7fc14007d3a0 2026-03-09T15:07:13.571 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.570+0000 7fc147f11700 1 -- 192.168.123.105:0/2256181682 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc140082320 con 0x7fc14007d3a0 2026-03-09T15:07:13.571 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.570+0000 7fc136ffd700 1 -- 192.168.123.105:0/2256181682 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fc138009d70 con 0x7fc14007d3a0 2026-03-09T15:07:13.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.571+0000 7fc136ffd700 1 -- 192.168.123.105:0/2256181682 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc13801f690 con 0x7fc14007d3a0 2026-03-09T15:07:13.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.573+0000 7fc136ffd700 1 -- 192.168.123.105:0/2256181682 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc138004030 con 0x7fc14007d3a0 2026-03-09T15:07:13.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.573+0000 7fc147f11700 1 -- 192.168.123.105:0/2256181682 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fc124005320 con 0x7fc14007d3a0 2026-03-09T15:07:13.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.573+0000 7fc136ffd700 1 --2- 192.168.123.105:0/2256181682 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc12c0798d0 0x7fc12c07bd80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 
2026-03-09T15:07:13.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.573+0000 7fc136ffd700 1 -- 192.168.123.105:0/2256181682 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fc13809afe0 con 0x7fc14007d3a0 2026-03-09T15:07:13.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.574+0000 7fc145cad700 1 --2- 192.168.123.105:0/2256181682 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc12c0798d0 0x7fc12c07bd80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:13.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.576+0000 7fc145cad700 1 --2- 192.168.123.105:0/2256181682 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc12c0798d0 0x7fc12c07bd80 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fc13c0097b0 tx=0x7fc13c006d20 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:13.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.580+0000 7fc136ffd700 1 -- 192.168.123.105:0/2256181682 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc138063660 con 0x7fc14007d3a0 2026-03-09T15:07:13.765 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.764+0000 7fc147f11700 1 -- 192.168.123.105:0/2256181682 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fc124000bf0 con 0x7fc12c0798d0 2026-03-09T15:07:13.768 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.767+0000 7fc136ffd700 1 -- 192.168.123.105:0/2256181682 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 
8+0+466 (secure 0 0 0) 0x7fc124000bf0 con 0x7fc12c0798d0 2026-03-09T15:07:13.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.773+0000 7fc134ff9700 1 -- 192.168.123.105:0/2256181682 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc12c0798d0 msgr2=0x7fc12c07bd80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:13.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.773+0000 7fc134ff9700 1 --2- 192.168.123.105:0/2256181682 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc12c0798d0 0x7fc12c07bd80 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fc13c0097b0 tx=0x7fc13c006d20 comp rx=0 tx=0).stop 2026-03-09T15:07:13.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.773+0000 7fc134ff9700 1 -- 192.168.123.105:0/2256181682 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc14007d3a0 msgr2=0x7fc14007d810 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:13.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.773+0000 7fc134ff9700 1 --2- 192.168.123.105:0/2256181682 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc14007d3a0 0x7fc14007d810 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fc1380062a0 tx=0x7fc13800bba0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.780+0000 7fc134ff9700 1 -- 192.168.123.105:0/2256181682 shutdown_connections 2026-03-09T15:07:13.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.780+0000 7fc134ff9700 1 --2- 192.168.123.105:0/2256181682 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc12c0798d0 0x7fc12c07bd80 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.780+0000 7fc134ff9700 1 --2- 
192.168.123.105:0/2256181682 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc14010c8b0 0x7fc14007ce60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.780+0000 7fc134ff9700 1 --2- 192.168.123.105:0/2256181682 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc14007d3a0 0x7fc14007d810 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.780+0000 7fc134ff9700 1 -- 192.168.123.105:0/2256181682 >> 192.168.123.105:0/2256181682 conn(0x7fc14006c6c0 msgr2=0x7fc14006ff50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:13.784 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.783+0000 7fc134ff9700 1 -- 192.168.123.105:0/2256181682 shutdown_connections 2026-03-09T15:07:13.784 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.783+0000 7fc134ff9700 1 -- 192.168.123.105:0/2256181682 wait complete. 
2026-03-09T15:07:13.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:13 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:13.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:13 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:13.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:13 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:13.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:13 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:13.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.891+0000 7f3260feb700 1 -- 192.168.123.105:0/3779707175 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f325c10c8f0 msgr2=0x7f325c10ccc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:13.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.891+0000 7f3260feb700 1 --2- 192.168.123.105:0/3779707175 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f325c10c8f0 0x7f325c10ccc0 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f324c008790 tx=0x7f324c008aa0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.891+0000 7f3260feb700 1 -- 192.168.123.105:0/3779707175 shutdown_connections 2026-03-09T15:07:13.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.891+0000 7f3260feb700 1 --2- 192.168.123.105:0/3779707175 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f325c071e40 0x7f325c0722b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.891+0000 7f3260feb700 1 --2- 192.168.123.105:0/3779707175 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f325c10c8f0 0x7f325c10ccc0 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.891+0000 7f3260feb700 1 -- 192.168.123.105:0/3779707175 >> 192.168.123.105:0/3779707175 conn(0x7f325c06c6c0 msgr2=0x7f325c06cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:13.893 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.892+0000 7f3260feb700 1 -- 192.168.123.105:0/3779707175 shutdown_connections 2026-03-09T15:07:13.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.892+0000 7f3260feb700 1 -- 192.168.123.105:0/3779707175 wait complete. 2026-03-09T15:07:13.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.892+0000 7f3260feb700 1 Processor -- start 2026-03-09T15:07:13.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.892+0000 7f3260feb700 1 -- start start 2026-03-09T15:07:13.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.892+0000 7f3260feb700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f325c071e40 0x7f325c07ce80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:13.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.892+0000 7f3260feb700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f325c07d3c0 0x7f325c07d830 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:13.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.892+0000 7f3260feb700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f325c081a00 con 0x7f325c071e40 2026-03-09T15:07:13.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.892+0000 7f3260feb700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f325c081b70 con 0x7f325c07d3c0 2026-03-09T15:07:13.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.894+0000 7f325affd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f325c07d3c0 0x7f325c07d830 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:13.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.894+0000 7f325affd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f325c07d3c0 0x7f325c07d830 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:48250/0 (socket says 192.168.123.105:48250) 2026-03-09T15:07:13.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.894+0000 7f325affd700 1 -- 192.168.123.105:0/3695036572 learned_addr learned my addr 192.168.123.105:0/3695036572 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:07:13.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.894+0000 7f325b7fe700 1 --2- 192.168.123.105:0/3695036572 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f325c071e40 0x7f325c07ce80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:13.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:13 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:13.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:13 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:13.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:13 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:13.895 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:13 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:13.896 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.895+0000 7f325affd700 1 -- 192.168.123.105:0/3695036572 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f325c071e40 msgr2=0x7f325c07ce80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:13.896 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.895+0000 7f325affd700 1 --2- 192.168.123.105:0/3695036572 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f325c071e40 0x7f325c07ce80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:13.896 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.895+0000 7f325affd700 1 -- 192.168.123.105:0/3695036572 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f324c008440 con 0x7f325c07d3c0 2026-03-09T15:07:13.896 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.895+0000 7f325affd700 1 --2- 192.168.123.105:0/3695036572 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f325c07d3c0 0x7f325c07d830 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f325400f4d0 tx=0x7f325400f890 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:13.896 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.895+0000 7f3258ff9700 1 -- 192.168.123.105:0/3695036572 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3254010040 con 0x7f325c07d3c0 2026-03-09T15:07:13.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.895+0000 7f3260feb700 1 -- 192.168.123.105:0/3695036572 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f325c081df0 con 
0x7f325c07d3c0 2026-03-09T15:07:13.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.895+0000 7f3260feb700 1 -- 192.168.123.105:0/3695036572 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f325c082340 con 0x7f325c07d3c0 2026-03-09T15:07:13.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.896+0000 7f3258ff9700 1 -- 192.168.123.105:0/3695036572 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f3254009bf0 con 0x7f325c07d3c0 2026-03-09T15:07:13.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.896+0000 7f3258ff9700 1 -- 192.168.123.105:0/3695036572 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3254015980 con 0x7f325c07d3c0 2026-03-09T15:07:13.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.898+0000 7f3258ff9700 1 -- 192.168.123.105:0/3695036572 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f3254009710 con 0x7f325c07d3c0 2026-03-09T15:07:13.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.898+0000 7f3260feb700 1 -- 192.168.123.105:0/3695036572 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f325c04f2a0 con 0x7f325c07d3c0 2026-03-09T15:07:13.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.898+0000 7f3258ff9700 1 --2- 192.168.123.105:0/3695036572 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3244077910 0x7f3244079dc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:13.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.899+0000 7f3258ff9700 1 -- 192.168.123.105:0/3695036572 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f325409a240 con 0x7f325c07d3c0 
2026-03-09T15:07:13.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.901+0000 7f3258ff9700 1 -- 192.168.123.105:0/3695036572 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3254062860 con 0x7f325c07d3c0 2026-03-09T15:07:13.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.923+0000 7f325b7fe700 1 --2- 192.168.123.105:0/3695036572 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3244077910 0x7f3244079dc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:13.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:13.936+0000 7f325b7fe700 1 --2- 192.168.123.105:0/3695036572 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3244077910 0x7f3244079dc0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f324c000c00 tx=0x7f324c011040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:14.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.051+0000 7f3260feb700 1 -- 192.168.123.105:0/3695036572 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7f325c082620 con 0x7f3244077910 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.056+0000 7f3258ff9700 1 -- 192.168.123.105:0/3695036572 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7f325c082620 con 0x7f3244077910 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 
*:9093,9094 running (4m) 2s ago 11m 24.0M - 0.25.0 c8568f914cd2 7635cece310c 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (5s) 2s ago 11m 9852k - 19.2.3-678-ge911bdeb 654f31e6858e 7e4630f85fea 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (4s) 3s ago 10m 9559k - 19.2.3-678-ge911bdeb 654f31e6858e 7757fd500ae0 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (3m) 2s ago 11m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 35d8c0ae5a58 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (3m) 3s ago 10m 8308k - 19.2.3-678-ge911bdeb 654f31e6858e 82bdad36caf9 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (4m) 2s ago 10m 84.0M - 10.4.0 c8b91775d855 eb6431f63d88 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (42s) 2s ago 8m 103M - 19.2.3-678-ge911bdeb 654f31e6858e f41a092cac53 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (31s) 2s ago 8m 262M - 19.2.3-678-ge911bdeb 654f31e6858e 5a60c73f6399 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (11s) 3s ago 8m 15.4M - 19.2.3-678-ge911bdeb 654f31e6858e 05062c46f72b 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (22s) 3s ago 8m 20.6M - 19.2.3-678-ge911bdeb 654f31e6858e 615aff4b5fba 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (5m) 2s ago 12m 626M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (4m) 3s ago 10m 497M - 19.2.3-678-ge911bdeb 654f31e6858e acf5a6f3f804 2026-03-09T15:07:14.058 
INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (3m) 2s ago 12m 67.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1e11655f7d87 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (3m) 3s ago 10m 58.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e d1f0309f4d58 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (4m) 2s ago 11m 10.3M - 1.7.0 72c9c2088986 888d071c50d9 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (4m) 3s ago 10m 9617k - 1.7.0 72c9c2088986 22c96a576a60 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (2m) 2s ago 10m 164M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f2883abca2d2 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (2m) 2s ago 9m 127M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b830d7f76498 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (2m) 2s ago 9m 127M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 01cf87b8bc05 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (113s) 3s ago 9m 164M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9359c3ced4d3 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (91s) 3s ago 9m 111M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 985038f550f8 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (54s) 3s ago 9m 101M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 15ec92bc2880 2026-03-09T15:07:14.058 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (4m) 2s ago 10m 58.0M - 2.51.0 1d3b7f56885b e6f470b0ba11 2026-03-09T15:07:14.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.060+0000 7f32427fc700 1 -- 192.168.123.105:0/3695036572 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3244077910 msgr2=0x7f3244079dc0 
secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:14.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.060+0000 7f32427fc700 1 --2- 192.168.123.105:0/3695036572 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3244077910 0x7f3244079dc0 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f324c000c00 tx=0x7f324c011040 comp rx=0 tx=0).stop 2026-03-09T15:07:14.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.060+0000 7f32427fc700 1 -- 192.168.123.105:0/3695036572 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f325c07d3c0 msgr2=0x7f325c07d830 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:14.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.060+0000 7f32427fc700 1 --2- 192.168.123.105:0/3695036572 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f325c07d3c0 0x7f325c07d830 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f325400f4d0 tx=0x7f325400f890 comp rx=0 tx=0).stop 2026-03-09T15:07:14.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.060+0000 7f32427fc700 1 -- 192.168.123.105:0/3695036572 shutdown_connections 2026-03-09T15:07:14.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.060+0000 7f32427fc700 1 --2- 192.168.123.105:0/3695036572 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3244077910 0x7f3244079dc0 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:14.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.060+0000 7f32427fc700 1 --2- 192.168.123.105:0/3695036572 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f325c071e40 0x7f325c07ce80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:14.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.060+0000 7f32427fc700 1 --2- 
192.168.123.105:0/3695036572 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f325c07d3c0 0x7f325c07d830 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.060+0000 7f32427fc700 1 -- 192.168.123.105:0/3695036572 >> 192.168.123.105:0/3695036572 conn(0x7f325c06c6c0 msgr2=0x7f325c0700f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:07:14.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.061+0000 7f32427fc700 1 -- 192.168.123.105:0/3695036572 shutdown_connections
2026-03-09T15:07:14.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.061+0000 7f32427fc700 1 -- 192.168.123.105:0/3695036572 wait complete.
2026-03-09T15:07:14.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.134+0000 7f3dc75e4700 1 -- 192.168.123.105:0/2847172744 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3dc010eab0 msgr2=0x7f3dc010ee80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.135 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.134+0000 7f3dc75e4700 1 --2- 192.168.123.105:0/2847172744 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3dc010eab0 0x7f3dc010ee80 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7f3dbc009b00 tx=0x7f3dbc009e10 comp rx=0 tx=0).stop
2026-03-09T15:07:14.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.134+0000 7f3dc75e4700 1 -- 192.168.123.105:0/2847172744 shutdown_connections
2026-03-09T15:07:14.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.134+0000 7f3dc75e4700 1 --2- 192.168.123.105:0/2847172744 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3dc0071b60 0x7f3dc0071fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.134+0000 7f3dc75e4700 1 --2- 192.168.123.105:0/2847172744 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3dc010eab0 0x7f3dc010ee80 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.134+0000 7f3dc75e4700 1 -- 192.168.123.105:0/2847172744 >> 192.168.123.105:0/2847172744 conn(0x7f3dc006c6c0 msgr2=0x7f3dc006cac0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:07:14.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.135+0000 7f3dc75e4700 1 -- 192.168.123.105:0/2847172744 shutdown_connections
2026-03-09T15:07:14.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.135+0000 7f3dc75e4700 1 -- 192.168.123.105:0/2847172744 wait complete.
2026-03-09T15:07:14.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.135+0000 7f3dc75e4700 1 Processor -- start
2026-03-09T15:07:14.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.135+0000 7f3dc75e4700 1 -- start start
2026-03-09T15:07:14.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.135+0000 7f3dc75e4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3dc0071b60 0x7f3dc0119510 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:07:14.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.135+0000 7f3dc75e4700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3dc0114510 0x7f3dc0114980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:07:14.136 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.135+0000 7f3dc75e4700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3dc0114ec0 con 0x7f3dc0071b60
2026-03-09T15:07:14.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.135+0000 7f3dc75e4700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3dc0115030 con 0x7f3dc0114510
2026-03-09T15:07:14.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.136+0000 7f3dc4b7f700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3dc0114510 0x7f3dc0114980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:07:14.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.136+0000 7f3dc4b7f700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3dc0114510 0x7f3dc0114980 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:48272/0 (socket says 192.168.123.105:48272)
2026-03-09T15:07:14.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.136+0000 7f3dc4b7f700 1 -- 192.168.123.105:0/40767051 learned_addr learned my addr 192.168.123.105:0/40767051 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T15:07:14.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.136+0000 7f3dc5380700 1 --2- 192.168.123.105:0/40767051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3dc0071b60 0x7f3dc0119510 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:07:14.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.136+0000 7f3dc4b7f700 1 -- 192.168.123.105:0/40767051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3dc0071b60 msgr2=0x7f3dc0119510 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.136+0000 7f3dc4b7f700 1 --2- 192.168.123.105:0/40767051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3dc0071b60 0x7f3dc0119510 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.136+0000 7f3dc4b7f700 1 -- 192.168.123.105:0/40767051 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3dbc0097e0 con 0x7f3dc0114510
2026-03-09T15:07:14.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.136+0000 7f3dc4b7f700 1 --2- 192.168.123.105:0/40767051 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3dc0114510 0x7f3dc0114980 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f3db800d350 tx=0x7f3db800d710 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:07:14.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.137+0000 7f3db67fc700 1 -- 192.168.123.105:0/40767051 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3db80155b0 con 0x7f3dc0114510
2026-03-09T15:07:14.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.137+0000 7f3dc75e4700 1 -- 192.168.123.105:0/40767051 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f3dc0115310 con 0x7f3dc0114510
2026-03-09T15:07:14.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.137+0000 7f3dc75e4700 1 -- 192.168.123.105:0/40767051 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f3dc01b7cb0 con 0x7f3dc0114510
2026-03-09T15:07:14.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.138+0000 7f3db67fc700 1 -- 192.168.123.105:0/40767051 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f3db800f040 con 0x7f3dc0114510
2026-03-09T15:07:14.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.138+0000 7f3db67fc700 1 -- 192.168.123.105:0/40767051 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3db80149c0 con 0x7f3dc0114510
2026-03-09T15:07:14.139 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.138+0000 7f3dc75e4700 1 -- 192.168.123.105:0/40767051 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f3da4005320 con 0x7f3dc0114510
2026-03-09T15:07:14.140 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.139+0000 7f3db67fc700 1 -- 192.168.123.105:0/40767051 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f3db8014be0 con 0x7f3dc0114510
2026-03-09T15:07:14.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.139+0000 7f3db67fc700 1 --2- 192.168.123.105:0/40767051 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3dac077910 0x7f3dac079dc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:07:14.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.139+0000 7f3db67fc700 1 -- 192.168.123.105:0/40767051 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f3db8099800 con 0x7f3dc0114510
2026-03-09T15:07:14.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.140+0000 7f3dc5380700 1 --2- 192.168.123.105:0/40767051 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3dac077910 0x7f3dac079dc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:07:14.141 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.140+0000 7f3dc5380700 1 --2- 192.168.123.105:0/40767051 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3dac077910 0x7f3dac079dc0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f3dbc009fd0 tx=0x7f3dbc011040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:07:14.142 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.142+0000 7f3db67fc700 1 -- 192.168.123.105:0/40767051 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3db8061f50 con 0x7f3dc0114510
2026-03-09T15:07:14.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.309+0000 7f3dc75e4700 1 -- 192.168.123.105:0/40767051 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f3da4005cc0 con 0x7f3dc0114510
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.311+0000 7f3db67fc700 1 -- 192.168.123.105:0/40767051 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f3db8014e90 con 0x7f3dc0114510
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: "mds": {
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 14
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-09T15:07:14.312 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-09T15:07:14.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.318+0000 7f3dabfff700 1 -- 192.168.123.105:0/40767051 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3dac077910 msgr2=0x7f3dac079dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.318+0000 7f3dabfff700 1 --2- 192.168.123.105:0/40767051 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3dac077910 0x7f3dac079dc0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f3dbc009fd0 tx=0x7f3dbc011040 comp rx=0 tx=0).stop
2026-03-09T15:07:14.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.318+0000 7f3dabfff700 1 -- 192.168.123.105:0/40767051 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3dc0114510 msgr2=0x7f3dc0114980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.318+0000 7f3dabfff700 1 --2- 192.168.123.105:0/40767051 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3dc0114510 0x7f3dc0114980 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f3db800d350 tx=0x7f3db800d710 comp rx=0 tx=0).stop
2026-03-09T15:07:14.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.318+0000 7f3dabfff700 1 -- 192.168.123.105:0/40767051 shutdown_connections
2026-03-09T15:07:14.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.318+0000 7f3dabfff700 1 --2- 192.168.123.105:0/40767051 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3dac077910 0x7f3dac079dc0 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.318+0000 7f3dabfff700 1 --2- 192.168.123.105:0/40767051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3dc0071b60 0x7f3dc0119510 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.318+0000 7f3dabfff700 1 --2- 192.168.123.105:0/40767051 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3dc0114510 0x7f3dc0114980 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.318+0000 7f3dabfff700 1 -- 192.168.123.105:0/40767051 >> 192.168.123.105:0/40767051 conn(0x7f3dc006c6c0 msgr2=0x7f3dc0070070 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:07:14.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.319+0000 7f3dabfff700 1 -- 192.168.123.105:0/40767051 shutdown_connections
2026-03-09T15:07:14.319 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.319+0000 7f3dabfff700 1 -- 192.168.123.105:0/40767051 wait complete.
2026-03-09T15:07:14.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.401+0000 7f9cc348c700 1 -- 192.168.123.105:0/2333876029 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9cbc10c8f0 msgr2=0x7f9cbc10ccc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.401+0000 7f9cc348c700 1 --2- 192.168.123.105:0/2333876029 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9cbc10c8f0 0x7f9cbc10ccc0 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7f9cb8007780 tx=0x7f9cb8007a90 comp rx=0 tx=0).stop
2026-03-09T15:07:14.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.401+0000 7f9cc348c700 1 -- 192.168.123.105:0/2333876029 shutdown_connections
2026-03-09T15:07:14.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.401+0000 7f9cc348c700 1 --2- 192.168.123.105:0/2333876029 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9cbc071e40 0x7f9cbc0722b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.401+0000 7f9cc348c700 1 --2- 192.168.123.105:0/2333876029 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9cbc10c8f0 0x7f9cbc10ccc0 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.401+0000 7f9cc348c700 1 -- 192.168.123.105:0/2333876029 >> 192.168.123.105:0/2333876029 conn(0x7f9cbc06c6c0 msgr2=0x7f9cbc06cac0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:07:14.403 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.401+0000 7f9cc348c700 1 -- 192.168.123.105:0/2333876029 shutdown_connections
2026-03-09T15:07:14.403 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.401+0000 7f9cc348c700 1 -- 192.168.123.105:0/2333876029 wait complete.
2026-03-09T15:07:14.403 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc348c700 1 Processor -- start
2026-03-09T15:07:14.403 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc348c700 1 -- start start
2026-03-09T15:07:14.403 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc348c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9cbc071e40 0x7f9cbc07ccb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:07:14.403 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc348c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9cbc07d1f0 0x7f9cbc07d660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:07:14.403 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc348c700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9cbc083d20 con 0x7f9cbc071e40
2026-03-09T15:07:14.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc348c700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9cbc081830 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc1c89700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9cbc07d1f0 0x7f9cbc07d660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:07:14.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc1c89700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9cbc07d1f0 0x7f9cbc07d660 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:48286/0 (socket says 192.168.123.105:48286)
2026-03-09T15:07:14.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc1c89700 1 -- 192.168.123.105:0/3316039384 learned_addr learned my addr 192.168.123.105:0/3316039384 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T15:07:14.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc248a700 1 --2- 192.168.123.105:0/3316039384 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9cbc071e40 0x7f9cbc07ccb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:07:14.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc1c89700 1 -- 192.168.123.105:0/3316039384 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9cbc071e40 msgr2=0x7f9cbc07ccb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc1c89700 1 --2- 192.168.123.105:0/3316039384 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9cbc071e40 0x7f9cbc07ccb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.402+0000 7f9cc1c89700 1 -- 192.168.123.105:0/3316039384 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9cb8007430 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.403+0000 7f9cc1c89700 1 --2- 192.168.123.105:0/3316039384 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9cbc07d1f0 0x7f9cbc07d660 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f9cb400c390 tx=0x7f9cb400c6a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:07:14.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.403+0000 7f9cb37fe700 1 -- 192.168.123.105:0/3316039384 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9cb400e030 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.405 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.403+0000 7f9cc348c700 1 -- 192.168.123.105:0/3316039384 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9cbc081b10 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.405 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.403+0000 7f9cc348c700 1 -- 192.168.123.105:0/3316039384 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9cbc082060 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.405 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.404+0000 7f9cb37fe700 1 -- 192.168.123.105:0/3316039384 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f9cb400f040 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.405 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.404+0000 7f9cb37fe700 1 -- 192.168.123.105:0/3316039384 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9cb4014650 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.405+0000 7f9cb37fe700 1 -- 192.168.123.105:0/3316039384 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f9cb40147b0 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.406+0000 7f9cb37fe700 1 --2- 192.168.123.105:0/3316039384 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9ca80779e0 0x7f9ca8079e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:07:14.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.406+0000 7f9cc248a700 1 --2- 192.168.123.105:0/3316039384 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9ca80779e0 0x7f9ca8079e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:07:14.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.406+0000 7f9cb37fe700 1 -- 192.168.123.105:0/3316039384 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f9cb409ad20 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.406+0000 7f9cc248a700 1 --2- 192.168.123.105:0/3316039384 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9ca80779e0 0x7f9ca8079e90 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f9cb8007e60 tx=0x7f9cb80058e0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:07:14.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.407+0000 7f9cc348c700 1 -- 192.168.123.105:0/3316039384 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9ca0005320 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.410 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.410+0000 7f9cb37fe700 1 -- 192.168.123.105:0/3316039384 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9cb40633c0 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.557+0000 7f9cc348c700 1 -- 192.168.123.105:0/3316039384 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7f9ca0006200 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.558+0000 7f9cb37fe700 1 -- 192.168.123.105:0/3316039384 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 34 v34) v1 ==== 76+0+2002 (secure 0 0 0) 0x7f9cb4062b10 con 0x7f9cbc07d1f0
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:e34
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:btime 2026-03-09T15:07:07.008677+0000
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:enable_multiple, ever_enabled_multiple: 1,1
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:legacy client fscid: 1
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:Filesystem 'cephfs' (1)
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:fs_name cephfs
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:epoch 34
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:created 2026-03-09T14:58:23.182447+0000
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:modified 2026-03-09T15:07:07.008672+0000
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:tableserver 0
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:root 0
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:session_timeout 60
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:session_autoclose 300
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:max_file_size 1099511627776
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:max_xattr_size 65536
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:required_client_features {}
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:last_failure 0
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:last_failure_osd_epoch 91
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:max_mds 1
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:in 0
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:up {0=34272}
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:failed
2026-03-09T15:07:14.559 INFO:teuthology.orchestra.run.vm05.stdout:damaged
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:stopped
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:data_pools [3]
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:metadata_pool 2
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:inline_data enabled
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:balancer
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:bal_rank_mask -1
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:standby_count_wanted 1
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:qdb_cluster leader: 34272 members: 34272
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.nrocqt{0:34272} state up:active seq 10 join_fscid=1 addr [v2:192.168.123.105:6826/3005307080,v1:192.168.123.105:6827/3005307080] compat {c=[1],r=[1],i=[1fff]}]
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.rrcyql{0:34292} state up:standby-replay seq 1 join_fscid=1 addr [v2:192.168.123.105:6828/3529134522,v1:192.168.123.105:6829/3529134522] compat {c=[1],r=[1],i=[1fff]}]
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:Standby daemons:
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.ohmitn{-1:44239} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.109:6824/2799240855,v1:192.168.123.109:6825/2799240855] compat {c=[1],r=[1],i=[1fff]}]
2026-03-09T15:07:14.560 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.jrhwzz{-1:44243} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.109:6826/632428118,v1:192.168.123.109:6827/632428118] compat {c=[1],r=[1],i=[1fff]}]
2026-03-09T15:07:14.564 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.563+0000 7f9cb17fa700 1 -- 192.168.123.105:0/3316039384 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9ca80779e0 msgr2=0x7f9ca8079e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.564 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.563+0000 7f9cb17fa700 1 --2- 192.168.123.105:0/3316039384 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9ca80779e0 0x7f9ca8079e90 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f9cb8007e60 tx=0x7f9cb80058e0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.564 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.564+0000 7f9cb17fa700 1 -- 192.168.123.105:0/3316039384 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9cbc07d1f0 msgr2=0x7f9cbc07d660 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.564 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.564+0000 7f9cb17fa700 1 --2- 192.168.123.105:0/3316039384 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9cbc07d1f0 0x7f9cbc07d660 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f9cb400c390 tx=0x7f9cb400c6a0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.564+0000 7f9cb17fa700 1 -- 192.168.123.105:0/3316039384 shutdown_connections
2026-03-09T15:07:14.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.564+0000 7f9cb17fa700 1 --2- 192.168.123.105:0/3316039384 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9ca80779e0 0x7f9ca8079e90 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.564+0000 7f9cb17fa700 1 --2- 192.168.123.105:0/3316039384 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9cbc071e40 0x7f9cbc07ccb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.564+0000 7f9cb17fa700 1 --2- 192.168.123.105:0/3316039384 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9cbc07d1f0 0x7f9cbc07d660 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.564+0000 7f9cb17fa700 1 -- 192.168.123.105:0/3316039384 >> 192.168.123.105:0/3316039384 conn(0x7f9cbc06c6c0 msgr2=0x7f9cbc070940 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:07:14.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.564+0000 7f9cb17fa700 1 -- 192.168.123.105:0/3316039384 shutdown_connections
2026-03-09T15:07:14.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.564+0000 7f9cb17fa700 1 -- 192.168.123.105:0/3316039384 wait complete.
2026-03-09T15:07:14.569 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 34
2026-03-09T15:07:14.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.643+0000 7f846aa75700 1 -- 192.168.123.105:0/2473286056 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8464071e40 msgr2=0x7f84640722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.643+0000 7f846aa75700 1 --2- 192.168.123.105:0/2473286056 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8464071e40 0x7f84640722b0 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f845c00cd40 tx=0x7f845c00a320 comp rx=0 tx=0).stop
2026-03-09T15:07:14.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.643+0000 7f846aa75700 1 -- 192.168.123.105:0/2473286056 shutdown_connections
2026-03-09T15:07:14.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.643+0000 7f846aa75700 1 --2- 192.168.123.105:0/2473286056 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8464071e40 0x7f84640722b0 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.643+0000 7f846aa75700 1 --2- 192.168.123.105:0/2473286056 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f846410c8b0 0x7f846410cc80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.643+0000 7f846aa75700 1 -- 192.168.123.105:0/2473286056 >> 192.168.123.105:0/2473286056 conn(0x7f846406c6c0 msgr2=0x7f846406cac0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:07:14.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.643+0000 7f846aa75700 1 -- 192.168.123.105:0/2473286056 shutdown_connections
2026-03-09T15:07:14.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.643+0000 7f846aa75700 1 -- 192.168.123.105:0/2473286056 wait complete.
2026-03-09T15:07:14.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.644+0000 7f846aa75700 1 Processor -- start
2026-03-09T15:07:14.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.644+0000 7f846aa75700 1 -- start start
2026-03-09T15:07:14.645 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.644+0000 7f846aa75700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8464071e40 0x7f846407cf30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:07:14.645 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.644+0000 7f846aa75700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f846410c8b0 0x7f846407d470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:07:14.645 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.644+0000 7f846aa75700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f846407db50 con 0x7f8464071e40
2026-03-09T15:07:14.645 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.644+0000 7f846aa75700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f846407dcc0 con 0x7f846410c8b0
2026-03-09T15:07:14.645 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.644+0000 7f8463fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f846410c8b0 0x7f846407d470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:07:14.645 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.644+0000 7f8463fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f846410c8b0 0x7f846407d470 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:48302/0 (socket says 192.168.123.105:48302)
2026-03-09T15:07:14.645 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.644+0000 7f8463fff700 1 -- 192.168.123.105:0/1387034964 learned_addr learned my addr 192.168.123.105:0/1387034964 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T15:07:14.645 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.644+0000 7f8468811700 1 --2- 192.168.123.105:0/1387034964 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8464071e40 0x7f846407cf30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:07:14.645 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.645+0000 7f8463fff700 1 -- 192.168.123.105:0/1387034964 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8464071e40 msgr2=0x7f846407cf30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.645 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.645+0000 7f8463fff700 1 --2- 192.168.123.105:0/1387034964 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8464071e40 0x7f846407cf30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.646 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.645+0000 7f8463fff700 1 -- 192.168.123.105:0/1387034964 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f845c00c9f0 con 0x7f846410c8b0
2026-03-09T15:07:14.646 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.645+0000 7f8463fff700 1 --2- 192.168.123.105:0/1387034964 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f846410c8b0 0x7f846407d470 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f845c009040 tx=0x7f845c009350 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:07:14.646 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.645+0000 7f8461ffb700 1 -- 192.168.123.105:0/1387034964 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f845c00dea0 con 0x7f846410c8b0
2026-03-09T15:07:14.647 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.645+0000 7f846aa75700 1 -- 192.168.123.105:0/1387034964 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f8464081ab0 con 0x7f846410c8b0
2026-03-09T15:07:14.647 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.645+0000 7f846aa75700 1 -- 192.168.123.105:0/1387034964 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f8464082000 con 0x7f846410c8b0
2026-03-09T15:07:14.647 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.645+0000 7f8461ffb700 1 -- 192.168.123.105:0/1387034964 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f845c00adb0 con 0x7f846410c8b0
2026-03-09T15:07:14.647 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.646+0000 7f8461ffb700 1 -- 192.168.123.105:0/1387034964 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f845c0076f0 con 0x7f846410c8b0
2026-03-09T15:07:14.647 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.647+0000 7f846aa75700 1 -- 192.168.123.105:0/1387034964 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f8450005320 con 0x7f846410c8b0
2026-03-09T15:07:14.647 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.647+0000 7f8461ffb700 1 -- 192.168.123.105:0/1387034964 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f845c00c110 con 0x7f846410c8b0
2026-03-09T15:07:14.648 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.647+0000 7f8461ffb700 1 --2- 192.168.123.105:0/1387034964 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f844c07bb20 0x7f844c07dfd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:07:14.648 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.647+0000 7f8461ffb700 1 -- 192.168.123.105:0/1387034964 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f845c09ab10 con 0x7f846410c8b0
2026-03-09T15:07:14.649 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.649+0000 7f8468811700 1 --2- 192.168.123.105:0/1387034964 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f844c07bb20 0x7f844c07dfd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:07:14.651 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.650+0000 7f8461ffb700 1 -- 192.168.123.105:0/1387034964 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f845c063130 con 0x7f846410c8b0
2026-03-09T15:07:14.659 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.659+0000 7f8468811700 1 --2- 192.168.123.105:0/1387034964 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f844c07bb20 0x7f844c07dfd0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f8454009c00 tx=0x7f8454009380 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:07:14.785 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.784+0000 7f846aa75700 1 -- 192.168.123.105:0/1387034964 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7f8450000bf0 con 0x7f844c07bb20
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.785+0000 7f8461ffb700 1 -- 192.168.123.105:0/1387034964 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+466 (secure 0 0 0) 0x7f8450000bf0 con 0x7f844c07bb20
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true,
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "ceph-exporter",
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "mds",
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "crash",
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "mgr",
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "mon",
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "osd"
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: ],
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "18/23 daemons upgraded",
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Currently upgrading ceph-exporter daemons",
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-09T15:07:14.786 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-09T15:07:14.790 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.789+0000 7f844b7fe700 1 -- 192.168.123.105:0/1387034964 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f844c07bb20 msgr2=0x7f844c07dfd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.790 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.789+0000 7f844b7fe700 1 --2- 192.168.123.105:0/1387034964 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f844c07bb20 0x7f844c07dfd0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f8454009c00 tx=0x7f8454009380 comp rx=0 tx=0).stop
2026-03-09T15:07:14.790 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.789+0000 7f844b7fe700 1 -- 192.168.123.105:0/1387034964 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f846410c8b0 msgr2=0x7f846407d470 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.790 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.789+0000 7f844b7fe700 1 --2- 192.168.123.105:0/1387034964 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f846410c8b0 0x7f846407d470 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f845c009040 tx=0x7f845c009350 comp rx=0 tx=0).stop
2026-03-09T15:07:14.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.791+0000 7f844b7fe700 1 -- 192.168.123.105:0/1387034964 shutdown_connections
2026-03-09T15:07:14.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.791+0000 7f844b7fe700 1 --2- 192.168.123.105:0/1387034964 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f844c07bb20 0x7f844c07dfd0 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.791+0000 7f844b7fe700 1 --2- 192.168.123.105:0/1387034964 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f8464071e40 0x7f846407cf30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.791+0000 7f844b7fe700 1 --2- 192.168.123.105:0/1387034964 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f846410c8b0 0x7f846407d470 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.791+0000 7f844b7fe700 1 -- 192.168.123.105:0/1387034964 >> 192.168.123.105:0/1387034964 conn(0x7f846406c6c0 msgr2=0x7f8464070380 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:07:14.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.791+0000 7f844b7fe700 1 -- 192.168.123.105:0/1387034964 shutdown_connections
2026-03-09T15:07:14.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.791+0000 7f844b7fe700 1 -- 192.168.123.105:0/1387034964 wait complete.
2026-03-09T15:07:14.820 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='client.34304 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:07:14.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.876+0000 7f4766189700 1 -- 192.168.123.105:0/4087474408 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f47600ffea0 msgr2=0x7f4760100270 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.876+0000 7f4766189700 1 --2- 192.168.123.105:0/4087474408 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f47600ffea0 0x7f4760100270 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f4750009a60 tx=0x7f4750009d70 comp rx=0 tx=0).stop
2026-03-09T15:07:14.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.876+0000 7f4766189700 1 -- 192.168.123.105:0/4087474408 shutdown_connections
2026-03-09T15:07:14.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.876+0000 7f4766189700 1 --2- 192.168.123.105:0/4087474408 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4760100840 0x7f4760103e70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.876+0000 7f4766189700 1 --2- 192.168.123.105:0/4087474408 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f47600ffea0 0x7f4760100270 secure :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f4750009a60 tx=0x7f4750009d70 comp rx=0 tx=0).stop
2026-03-09T15:07:14.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.876+0000 7f4766189700 1 -- 192.168.123.105:0/4087474408 >> 192.168.123.105:0/4087474408 conn(0x7f4760075240 msgr2=0x7f4760075640 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:07:14.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.876+0000 7f4766189700 1 -- 192.168.123.105:0/4087474408 shutdown_connections
2026-03-09T15:07:14.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.876+0000 7f4766189700 1 -- 192.168.123.105:0/4087474408 wait complete.
2026-03-09T15:07:14.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.877+0000 7f4766189700 1 Processor -- start
2026-03-09T15:07:14.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.877+0000 7f4766189700 1 -- start start
2026-03-09T15:07:14.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.877+0000 7f4766189700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4760100840 0x7f476006dd00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:07:14.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.877+0000 7f4766189700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f476006e240 0x7f476006e6b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:07:14.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.877+0000 7f4766189700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4760072de0 con 0x7f476006e240
2026-03-09T15:07:14.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.877+0000 7f4766189700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4760110d50 con 0x7f4760100840
2026-03-09T15:07:14.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.877+0000 7f475f7fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4760100840 0x7f476006dd00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:07:14.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.877+0000 7f475f7fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4760100840 0x7f476006dd00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:48316/0 (socket says 192.168.123.105:48316)
2026-03-09T15:07:14.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.877+0000 7f475f7fe700 1 -- 192.168.123.105:0/2143986296 learned_addr learned my addr 192.168.123.105:0/2143986296 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T15:07:14.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.880+0000 7f475f7fe700 1 -- 192.168.123.105:0/2143986296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f476006e240 msgr2=0x7f476006e6b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:14.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.880+0000 7f475f7fe700 1 --2- 192.168.123.105:0/2143986296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f476006e240 0x7f476006e6b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:14.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.880+0000 7f475f7fe700 1 -- 192.168.123.105:0/2143986296 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4750009710 con 0x7f4760100840
2026-03-09T15:07:14.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.881+0000 7f475f7fe700 1 --2- 192.168.123.105:0/2143986296 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4760100840 0x7f476006dd00 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f4750000c00 tx=0x7f47500095f0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:07:14.884 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.883+0000 7f475cff9700 1 -- 192.168.123.105:0/2143986296 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f475001d070 con 0x7f4760100840
2026-03-09T15:07:14.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.883+0000 7f4766189700 1 -- 192.168.123.105:0/2143986296 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4760110ef0 con 0x7f4760100840
2026-03-09T15:07:14.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.883+0000 7f4766189700 1 -- 192.168.123.105:0/2143986296 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f47601113e0 con 0x7f4760100840
2026-03-09T15:07:14.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.883+0000 7f475cff9700 1 -- 192.168.123.105:0/2143986296 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f4750004360 con 0x7f4760100840
2026-03-09T15:07:14.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.883+0000 7f475cff9700 1 -- 192.168.123.105:0/2143986296 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f475000f670 con 0x7f4760100840
2026-03-09T15:07:14.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.885+0000 7f475cff9700 1 -- 192.168.123.105:0/2143986296 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f4750022470 con 0x7f4760100840
2026-03-09T15:07:14.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.885+0000 7f4766189700 1 -- 192.168.123.105:0/2143986296 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f474c005320 con 0x7f4760100840
2026-03-09T15:07:14.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.885+0000 7f475cff9700 1 --2- 192.168.123.105:0/2143986296 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f47480778c0 0x7f4748079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:07:14.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.886+0000 7f475cff9700 1 -- 192.168.123.105:0/2143986296 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f475009ad40 con 0x7f4760100840
2026-03-09T15:07:14.888 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.888+0000 7f475effd700 1 --2- 192.168.123.105:0/2143986296 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f47480778c0 0x7f4748079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:07:14.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.888+0000 7f475cff9700 1 -- 192.168.123.105:0/2143986296 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f4750063410 con 0x7f4760100840
2026-03-09T15:07:14.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:14.899+0000 7f475effd700 1 --2- 192.168.123.105:0/2143986296 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f47480778c0 0x7f4748079d70 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f4754005950 tx=0x7f47540058e0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: pgmap v161: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 16 MiB/s rd, 4.3 KiB/s wr, 9 op/s; 0 B/s, 0 objects/s recovering
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='client.44249 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: Upgrade: Setting filesystem cephfs Joinable
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "joinable", "val": "true"}]: dispatch
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='client.44253 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/40767051' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:07:15.116 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:14 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3316039384' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-09T15:07:15.116 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.114+0000 7f4766189700 1 -- 192.168.123.105:0/2143986296 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7f474c005190 con 0x7f4760100840
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='client.34304 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: pgmap v161: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 16 MiB/s rd, 4.3 KiB/s wr, 9 op/s; 0 B/s, 0 objects/s recovering
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='client.44249 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: Upgrade: Setting filesystem cephfs Joinable
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "fs set", "fs_name": "cephfs", "var": "joinable", "val": "true"}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='client.44253 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/40767051' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T15:07:15.117 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:14 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3316039384' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
2026-03-09T15:07:15.117 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.116+0000 7f475cff9700 1 -- 192.168.123.105:0/2143986296 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+201 (secure 0 0 0) 0x7f4750062b60 con 0x7f4760100840
2026-03-09T15:07:15.119 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_WARN 1 filesystem with deprecated feature inline_data
2026-03-09T15:07:15.119 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data
2026-03-09T15:07:15.119 INFO:teuthology.orchestra.run.vm05.stdout: fs cephfs has deprecated feature inline_data enabled.
2026-03-09T15:07:15.125 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.124+0000 7f47467fc700 1 -- 192.168.123.105:0/2143986296 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f47480778c0 msgr2=0x7f4748079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:15.125 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.124+0000 7f47467fc700 1 --2- 192.168.123.105:0/2143986296 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f47480778c0 0x7f4748079d70 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f4754005950 tx=0x7f47540058e0 comp rx=0 tx=0).stop
2026-03-09T15:07:15.125 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.124+0000 7f47467fc700 1 -- 192.168.123.105:0/2143986296 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4760100840 msgr2=0x7f476006dd00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:07:15.125 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.124+0000 7f47467fc700 1 --2- 192.168.123.105:0/2143986296 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4760100840 0x7f476006dd00 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f4750000c00 tx=0x7f47500095f0 comp rx=0 tx=0).stop
2026-03-09T15:07:15.125 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.124+0000 7f47467fc700 1 -- 192.168.123.105:0/2143986296 shutdown_connections
2026-03-09T15:07:15.126 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.124+0000 7f47467fc700 1 --2- 192.168.123.105:0/2143986296 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f47480778c0 0x7f4748079d70 secure :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f4754005950 tx=0x7f47540058e0 comp rx=0 tx=0).stop
2026-03-09T15:07:15.126 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.124+0000 7f47467fc700 1 --2- 192.168.123.105:0/2143986296 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4760100840 0x7f476006dd00 secure :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f4750000c00 tx=0x7f47500095f0 comp rx=0 tx=0).stop
2026-03-09T15:07:15.126 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.124+0000 7f47467fc700 1 --2- 192.168.123.105:0/2143986296 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f476006e240 0x7f476006e6b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:07:15.126 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.124+0000 7f47467fc700 1 -- 192.168.123.105:0/2143986296 >> 192.168.123.105:0/2143986296 conn(0x7f4760075240 msgr2=0x7f47600fea00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:07:15.126 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.125+0000 7f47467fc700 1 -- 192.168.123.105:0/2143986296 shutdown_connections
2026-03-09T15:07:15.126 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:15.125+0000 7f47467fc700 1 -- 192.168.123.105:0/2143986296 wait complete.
2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='client.44263 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "joinable", "val": "true"}]': finished 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: Upgrade: Setting container_image for all ceph-exporter 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:16.554 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm05"}]: dispatch 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm05"}]': finished 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm09"}]: dispatch 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm09"}]': finished 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: Upgrade: Setting container_image for all iscsi 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local 
ceph-mon[116516]: Upgrade: Setting container_image for all nfs 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: Upgrade: Setting container_image for all nvmeof 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 
vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: Upgrade: Finalizing container_image settings 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local 
ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", 
"name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 
2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: Upgrade: Complete! 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/2143986296' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:16.555 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:07:16.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='client.44263 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "fs set", "fs_name": "cephfs", "var": "joinable", "val": "true"}]': finished 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: fsmap cephfs:1 {0=cephfs.vm05.nrocqt=up:active} 1 up:standby-replay 2 up:standby 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 
cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: Upgrade: Setting container_image for all ceph-exporter 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm05"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm05"}]': finished 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm09"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter.vm09"}]': finished 2026-03-09T15:07:16.617 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: Upgrade: Setting container_image for all iscsi 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: Upgrade: Setting container_image for all nfs 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: Upgrade: Setting container_image for all nvmeof 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: Upgrade: Finalizing container_image settings 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 
vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T15:07:16.617 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "client.rgw"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T15:07:16.618 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 
192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: Upgrade: Complete! 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/2143986296' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-09T15:07:16.618 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:17 vm05.local ceph-mon[116516]: pgmap v162: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 19 MiB/s rd, 4.0 KiB/s wr, 9 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T15:07:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:17 vm09.local ceph-mon[98742]: pgmap v162: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 19 MiB/s rd, 4.0 KiB/s wr, 9 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T15:07:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:07:19.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:19 vm05.local ceph-mon[116516]: pgmap v163: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 5 op/s 2026-03-09T15:07:19.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:19 vm09.local ceph-mon[98742]: pgmap v163: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 20 MiB/s rd, 5 op/s 2026-03-09T15:07:21.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:21 vm05.local ceph-mon[116516]: pgmap v164: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 6 op/s 2026-03-09T15:07:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:21 vm09.local ceph-mon[98742]: pgmap v164: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 21 MiB/s rd, 6 op/s 2026-03-09T15:07:23.554 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:23 vm05.local ceph-mon[116516]: pgmap v165: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 18 MiB/s rd, 5 op/s 2026-03-09T15:07:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:23 vm09.local ceph-mon[98742]: pgmap v165: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 18 MiB/s rd, 5 op/s 2026-03-09T15:07:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:25 vm09.local ceph-mon[98742]: pgmap v166: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 15 MiB/s rd, 4 op/s 2026-03-09T15:07:25.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:25 vm05.local ceph-mon[116516]: pgmap v166: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 15 MiB/s rd, 4 op/s 2026-03-09T15:07:27.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:27 vm05.local ceph-mon[116516]: pgmap v167: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 6.5 MiB/s rd, 2 op/s 2026-03-09T15:07:27.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:27 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:07:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:27 vm09.local ceph-mon[98742]: pgmap v167: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 6.5 MiB/s rd, 2 op/s 2026-03-09T15:07:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:27 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:07:29.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:29 vm05.local ceph-mon[116516]: pgmap v168: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.7 
MiB/s rd, 1 op/s 2026-03-09T15:07:29.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:29 vm09.local ceph-mon[98742]: pgmap v168: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.7 MiB/s rd, 1 op/s 2026-03-09T15:07:31.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:31 vm05.local ceph-mon[116516]: pgmap v169: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.0 MiB/s rd, 1 op/s 2026-03-09T15:07:31.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:31 vm09.local ceph-mon[98742]: pgmap v169: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.0 MiB/s rd, 1 op/s 2026-03-09T15:07:33.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:33 vm05.local ceph-mon[116516]: pgmap v170: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:07:33.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:33 vm09.local ceph-mon[98742]: pgmap v170: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:07:35.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:35 vm05.local ceph-mon[116516]: pgmap v171: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:07:35.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:35 vm09.local ceph-mon[98742]: pgmap v171: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:07:37.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:37 vm05.local ceph-mon[116516]: pgmap v172: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:07:37.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:37 vm09.local ceph-mon[98742]: pgmap v172: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 
op/s 2026-03-09T15:07:39.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:39 vm05.local ceph-mon[116516]: pgmap v173: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:07:39.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:39 vm09.local ceph-mon[98742]: pgmap v173: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:07:41.499 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:41 vm05.local ceph-mon[116516]: pgmap v174: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:07:41.518 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:41 vm09.local ceph-mon[98742]: pgmap v174: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:07:42.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:07:42.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:07:43.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:43 vm05.local ceph-mon[116516]: pgmap v175: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:07:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:43 vm09.local ceph-mon[98742]: pgmap v175: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:07:45.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.193+0000 7f21c1c94700 1 -- 192.168.123.105:0/3298390616 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21bc106560 msgr2=0x7f21bc106930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:45.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.193+0000 7f21c1c94700 1 --2- 192.168.123.105:0/3298390616 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21bc106560 0x7f21bc106930 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f21a4009b00 tx=0x7f21a4009e10 comp rx=0 tx=0).stop 2026-03-09T15:07:45.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.195+0000 7f21c1c94700 1 -- 192.168.123.105:0/3298390616 shutdown_connections 2026-03-09T15:07:45.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.195+0000 7f21c1c94700 1 --2- 192.168.123.105:0/3298390616 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f21bc100540 0x7f21bc1009b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:45.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.195+0000 7f21c1c94700 1 --2- 192.168.123.105:0/3298390616 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21bc106560 0x7f21bc106930 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:45.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.195+0000 7f21c1c94700 1 -- 192.168.123.105:0/3298390616 >> 192.168.123.105:0/3298390616 conn(0x7f21bc0fbfc0 msgr2=0x7f21bc0fe3d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:45.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.195+0000 7f21c1c94700 1 -- 192.168.123.105:0/3298390616 shutdown_connections 2026-03-09T15:07:45.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.195+0000 7f21c1c94700 1 -- 192.168.123.105:0/3298390616 wait complete. 
2026-03-09T15:07:45.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.195+0000 7f21c1c94700 1 Processor -- start 2026-03-09T15:07:45.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.195+0000 7f21c1c94700 1 -- start start 2026-03-09T15:07:45.197 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21c1c94700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21bc100540 0x7f21bc073050 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:45.197 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21c1c94700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f21bc106560 0x7f21bc073590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:45.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21c1c94700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f21bc074e00 con 0x7f21bc100540 2026-03-09T15:07:45.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21c1c94700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f21bc074f70 con 0x7f21bc106560 2026-03-09T15:07:45.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21bb7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21bc100540 0x7f21bc073050 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:45.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21bb7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21bc100540 0x7f21bc073050 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:60180/0 (socket says 192.168.123.105:60180) 2026-03-09T15:07:45.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21bb7fe700 1 -- 192.168.123.105:0/4293541879 learned_addr learned my addr 192.168.123.105:0/4293541879 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:07:45.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21bb7fe700 1 -- 192.168.123.105:0/4293541879 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f21bc106560 msgr2=0x7f21bc073590 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:07:45.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21baffd700 1 --2- 192.168.123.105:0/4293541879 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f21bc106560 0x7f21bc073590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:45.198 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21bb7fe700 1 --2- 192.168.123.105:0/4293541879 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f21bc106560 0x7f21bc073590 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21bb7fe700 1 -- 192.168.123.105:0/4293541879 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f21a40097e0 con 0x7f21bc100540 2026-03-09T15:07:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21bb7fe700 1 --2- 192.168.123.105:0/4293541879 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21bc100540 0x7f21bc073050 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f21a4004930 tx=0x7f21a4004a10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:07:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21b8ff9700 1 -- 192.168.123.105:0/4293541879 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f21a401d070 con 0x7f21bc100540 2026-03-09T15:07:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21c1c94700 1 -- 192.168.123.105:0/4293541879 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f21bc073b30 con 0x7f21bc100540 2026-03-09T15:07:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21c1c94700 1 -- 192.168.123.105:0/4293541879 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f21bc1a6b50 con 0x7f21bc100540 2026-03-09T15:07:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21b8ff9700 1 -- 192.168.123.105:0/4293541879 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f21a400bc50 con 0x7f21bc100540 2026-03-09T15:07:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.196+0000 7f21b8ff9700 1 -- 192.168.123.105:0/4293541879 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f21a400f830 con 0x7f21bc100540 2026-03-09T15:07:45.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.197+0000 7f21b8ff9700 1 -- 192.168.123.105:0/4293541879 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f21a400f990 con 0x7f21bc100540 2026-03-09T15:07:45.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.198+0000 7f21b8ff9700 1 --2- 192.168.123.105:0/4293541879 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f21a8077990 0x7f21a8079e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:45.200 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.198+0000 7f21b8ff9700 1 -- 192.168.123.105:0/4293541879 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f21a409b1f0 con 0x7f21bc100540 2026-03-09T15:07:45.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.198+0000 7f21c1c94700 1 -- 192.168.123.105:0/4293541879 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f219c005320 con 0x7f21bc100540 2026-03-09T15:07:45.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.198+0000 7f21baffd700 1 --2- 192.168.123.105:0/4293541879 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f21a8077990 0x7f21a8079e40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:45.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.199+0000 7f21baffd700 1 --2- 192.168.123.105:0/4293541879 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f21a8077990 0x7f21a8079e40 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f21bc074840 tx=0x7f21ac005cf0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:45.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.201+0000 7f21b8ff9700 1 -- 192.168.123.105:0/4293541879 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f21a4063940 con 0x7f21bc100540 2026-03-09T15:07:45.324 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.323+0000 7f21c1c94700 1 -- 192.168.123.105:0/4293541879 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 
-- 0x7f219c000bf0 con 0x7f21a8077990 2026-03-09T15:07:45.325 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.324+0000 7f21b8ff9700 1 -- 192.168.123.105:0/4293541879 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+175 (secure 0 0 0) 0x7f219c000bf0 con 0x7f21a8077990 2026-03-09T15:07:45.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.326+0000 7f21c1c94700 1 -- 192.168.123.105:0/4293541879 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f21a8077990 msgr2=0x7f21a8079e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:45.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.326+0000 7f21c1c94700 1 --2- 192.168.123.105:0/4293541879 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f21a8077990 0x7f21a8079e40 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7f21bc074840 tx=0x7f21ac005cf0 comp rx=0 tx=0).stop 2026-03-09T15:07:45.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.326+0000 7f21c1c94700 1 -- 192.168.123.105:0/4293541879 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21bc100540 msgr2=0x7f21bc073050 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:45.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.326+0000 7f21c1c94700 1 --2- 192.168.123.105:0/4293541879 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21bc100540 0x7f21bc073050 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f21a4004930 tx=0x7f21a4004a10 comp rx=0 tx=0).stop 2026-03-09T15:07:45.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.326+0000 7f21c1c94700 1 -- 192.168.123.105:0/4293541879 shutdown_connections 2026-03-09T15:07:45.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.326+0000 7f21c1c94700 1 --2- 192.168.123.105:0/4293541879 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f21a8077990 0x7f21a8079e40 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:45.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.326+0000 7f21c1c94700 1 --2- 192.168.123.105:0/4293541879 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f21bc100540 0x7f21bc073050 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:45.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.326+0000 7f21c1c94700 1 --2- 192.168.123.105:0/4293541879 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f21bc106560 0x7f21bc073590 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:45.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.326+0000 7f21c1c94700 1 -- 192.168.123.105:0/4293541879 >> 192.168.123.105:0/4293541879 conn(0x7f21bc0fbfc0 msgr2=0x7f21bc0fd9a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:45.327 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.326+0000 7f21c1c94700 1 -- 192.168.123.105:0/4293541879 shutdown_connections 2026-03-09T15:07:45.328 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.326+0000 7f21c1c94700 1 -- 192.168.123.105:0/4293541879 wait complete. 
2026-03-09T15:07:45.391 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-09T15:07:45.458 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:45 vm05.local ceph-mon[116516]: pgmap v176: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:07:45.572 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:07:45.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:45 vm09.local ceph-mon[98742]: pgmap v176: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:07:45.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.821+0000 7fb66e1ed700 1 -- 192.168.123.105:0/2528641308 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb668069090 msgr2=0x7fb668104e80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:45.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.821+0000 7fb66e1ed700 1 --2- 192.168.123.105:0/2528641308 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb668069090 0x7fb668104e80 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7fb664009b00 tx=0x7fb664009e10 comp rx=0 tx=0).stop 2026-03-09T15:07:45.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.822+0000 7fb66e1ed700 1 -- 192.168.123.105:0/2528641308 shutdown_connections 2026-03-09T15:07:45.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.822+0000 7fb66e1ed700 1 --2- 192.168.123.105:0/2528641308 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb668069090 0x7fb668104e80 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:45.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.822+0000 7fb66e1ed700 1 --2- 192.168.123.105:0/2528641308 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6680686f0 0x7fb668068ac0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:45.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.822+0000 7fb66e1ed700 1 -- 192.168.123.105:0/2528641308 >> 192.168.123.105:0/2528641308 conn(0x7fb668075240 msgr2=0x7fb668075640 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:45.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.822+0000 7fb66e1ed700 1 -- 192.168.123.105:0/2528641308 shutdown_connections 2026-03-09T15:07:45.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.822+0000 7fb66e1ed700 1 -- 192.168.123.105:0/2528641308 wait complete. 2026-03-09T15:07:45.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.823+0000 7fb66e1ed700 1 Processor -- start 2026-03-09T15:07:45.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.823+0000 7fb66e1ed700 1 -- start start 2026-03-09T15:07:45.824 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.823+0000 7fb66e1ed700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6680686f0 0x7fb6681981a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:45.824 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.823+0000 7fb66e1ed700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb668069090 0x7fb6681986e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:45.824 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.823+0000 7fb66e1ed700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb668198dc0 con 0x7fb668069090 
2026-03-09T15:07:45.824 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.823+0000 7fb66e1ed700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb66819cb50 con 0x7fb6680686f0 2026-03-09T15:07:45.824 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.823+0000 7fb66c9ea700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb668069090 0x7fb6681986e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:45.824 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.823+0000 7fb66c9ea700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb668069090 0x7fb6681986e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:60194/0 (socket says 192.168.123.105:60194) 2026-03-09T15:07:45.824 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.823+0000 7fb66c9ea700 1 -- 192.168.123.105:0/3631302583 learned_addr learned my addr 192.168.123.105:0/3631302583 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:07:45.824 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.823+0000 7fb66d1eb700 1 --2- 192.168.123.105:0/3631302583 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6680686f0 0x7fb6681981a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:45.824 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.824+0000 7fb66c9ea700 1 -- 192.168.123.105:0/3631302583 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6680686f0 msgr2=0x7fb6681981a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:45.824 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.824+0000 
7fb66c9ea700 1 --2- 192.168.123.105:0/3631302583 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6680686f0 0x7fb6681981a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:45.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.824+0000 7fb66c9ea700 1 -- 192.168.123.105:0/3631302583 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb6640097e0 con 0x7fb668069090 2026-03-09T15:07:45.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.824+0000 7fb66c9ea700 1 --2- 192.168.123.105:0/3631302583 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb668069090 0x7fb6681986e0 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fb664005f50 tx=0x7fb664004970 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:45.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.824+0000 7fb65e7fc700 1 -- 192.168.123.105:0/3631302583 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb66401d070 con 0x7fb668069090 2026-03-09T15:07:45.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.824+0000 7fb65e7fc700 1 -- 192.168.123.105:0/3631302583 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fb66400bc50 con 0x7fb668069090 2026-03-09T15:07:45.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.824+0000 7fb65e7fc700 1 -- 192.168.123.105:0/3631302583 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb66400f700 con 0x7fb668069090 2026-03-09T15:07:45.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.824+0000 7fb66e1ed700 1 -- 192.168.123.105:0/3631302583 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb66819cdd0 con 0x7fb668069090 
2026-03-09T15:07:45.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.824+0000 7fb66e1ed700 1 -- 192.168.123.105:0/3631302583 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb66819d2c0 con 0x7fb668069090 2026-03-09T15:07:45.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.825+0000 7fb66e1ed700 1 -- 192.168.123.105:0/3631302583 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb66804f2a0 con 0x7fb668069090 2026-03-09T15:07:45.829 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.828+0000 7fb65e7fc700 1 -- 192.168.123.105:0/3631302583 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fb664022a50 con 0x7fb668069090 2026-03-09T15:07:45.829 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.828+0000 7fb65e7fc700 1 --2- 192.168.123.105:0/3631302583 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb654077990 0x7fb654079e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:45.829 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.829+0000 7fb66d1eb700 1 --2- 192.168.123.105:0/3631302583 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb654077990 0x7fb654079e40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:45.829 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.829+0000 7fb65e7fc700 1 -- 192.168.123.105:0/3631302583 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fb66409be40 con 0x7fb668069090 2026-03-09T15:07:45.830 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.829+0000 7fb66d1eb700 1 --2- 192.168.123.105:0/3631302583 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb654077990 0x7fb654079e40 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7fb658006fd0 tx=0x7fb658008040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:45.830 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.829+0000 7fb65e7fc700 1 -- 192.168.123.105:0/3631302583 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fb66409c220 con 0x7fb668069090 2026-03-09T15:07:45.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.946+0000 7fb66e1ed700 1 -- 192.168.123.105:0/3631302583 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7fb66819d610 con 0x7fb654077990 2026-03-09T15:07:45.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.954+0000 7fb65e7fc700 1 -- 192.168.123.105:0/3631302583 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3552 (secure 0 0 0) 0x7fb66819d610 con 0x7fb654077990 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (4m) 34s ago 11m 24.0M - 0.25.0 c8568f914cd2 7635cece310c 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (37s) 34s ago 11m 9852k - 19.2.3-678-ge911bdeb 654f31e6858e 7e4630f85fea 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (36s) 35s ago 11m 9559k - 19.2.3-678-ge911bdeb 654f31e6858e 7757fd500ae0 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (3m) 34s ago 11m 7834k - 
19.2.3-678-ge911bdeb 654f31e6858e 35d8c0ae5a58 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (3m) 35s ago 11m 8308k - 19.2.3-678-ge911bdeb 654f31e6858e 82bdad36caf9 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (4m) 34s ago 11m 84.0M - 10.4.0 c8b91775d855 eb6431f63d88 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (74s) 34s ago 9m 103M - 19.2.3-678-ge911bdeb 654f31e6858e f41a092cac53 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (63s) 34s ago 9m 262M - 19.2.3-678-ge911bdeb 654f31e6858e 5a60c73f6399 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (43s) 35s ago 9m 15.4M - 19.2.3-678-ge911bdeb 654f31e6858e 05062c46f72b 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (54s) 35s ago 9m 20.6M - 19.2.3-678-ge911bdeb 654f31e6858e 615aff4b5fba 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (5m) 34s ago 12m 626M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (5m) 35s ago 10m 497M - 19.2.3-678-ge911bdeb 654f31e6858e acf5a6f3f804 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (4m) 34s ago 12m 67.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1e11655f7d87 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (3m) 35s ago 10m 58.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e d1f0309f4d58 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (5m) 34s ago 11m 10.3M - 1.7.0 72c9c2088986 888d071c50d9 2026-03-09T15:07:45.956 
INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (5m) 35s ago 10m 9617k - 1.7.0 72c9c2088986 22c96a576a60 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (3m) 34s ago 10m 164M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f2883abca2d2 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (3m) 34s ago 10m 127M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b830d7f76498 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (2m) 34s ago 10m 127M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 01cf87b8bc05 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (2m) 35s ago 10m 164M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9359c3ced4d3 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (2m) 35s ago 9m 111M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 985038f550f8 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (86s) 35s ago 9m 101M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 15ec92bc2880 2026-03-09T15:07:45.956 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (4m) 34s ago 11m 58.0M - 2.51.0 1d3b7f56885b e6f470b0ba11 2026-03-09T15:07:45.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.958+0000 7fb66e1ed700 1 -- 192.168.123.105:0/3631302583 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb654077990 msgr2=0x7fb654079e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:45.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.958+0000 7fb66e1ed700 1 --2- 192.168.123.105:0/3631302583 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb654077990 0x7fb654079e40 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7fb658006fd0 tx=0x7fb658008040 comp rx=0 tx=0).stop 2026-03-09T15:07:45.960 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.959+0000 7fb66e1ed700 1 -- 192.168.123.105:0/3631302583 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb668069090 msgr2=0x7fb6681986e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:45.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.959+0000 7fb66e1ed700 1 --2- 192.168.123.105:0/3631302583 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb668069090 0x7fb6681986e0 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fb664005f50 tx=0x7fb664004970 comp rx=0 tx=0).stop 2026-03-09T15:07:45.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.959+0000 7fb66e1ed700 1 -- 192.168.123.105:0/3631302583 shutdown_connections 2026-03-09T15:07:45.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.959+0000 7fb66e1ed700 1 --2- 192.168.123.105:0/3631302583 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb654077990 0x7fb654079e40 secure :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7fb658006fd0 tx=0x7fb658008040 comp rx=0 tx=0).stop 2026-03-09T15:07:45.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.959+0000 7fb66e1ed700 1 --2- 192.168.123.105:0/3631302583 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb6680686f0 0x7fb6681981a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:45.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.959+0000 7fb66e1ed700 1 --2- 192.168.123.105:0/3631302583 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb668069090 0x7fb6681986e0 secure :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fb664005f50 tx=0x7fb664004970 comp rx=0 tx=0).stop 2026-03-09T15:07:45.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.959+0000 7fb66e1ed700 1 -- 192.168.123.105:0/3631302583 >> 192.168.123.105:0/3631302583 conn(0x7fb668075240 
msgr2=0x7fb6680fe740 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:45.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.960+0000 7fb66e1ed700 1 -- 192.168.123.105:0/3631302583 shutdown_connections 2026-03-09T15:07:45.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:45.960+0000 7fb66e1ed700 1 -- 192.168.123.105:0/3631302583 wait complete. 2026-03-09T15:07:46.006 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status' 2026-03-09T15:07:46.192 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:07:46.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.428+0000 7fcdda138700 1 -- 192.168.123.105:0/2768638150 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcdd4102780 msgr2=0x7fcdd4102bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:46.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.428+0000 7fcdda138700 1 --2- 192.168.123.105:0/2768638150 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcdd4102780 0x7fcdd4102bf0 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fcdc4009b00 tx=0x7fcdc4009e10 comp rx=0 tx=0).stop 2026-03-09T15:07:46.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.429+0000 7fcdda138700 1 -- 192.168.123.105:0/2768638150 shutdown_connections 2026-03-09T15:07:46.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.429+0000 7fcdda138700 1 --2- 192.168.123.105:0/2768638150 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcdd4102780 0x7fcdd4102bf0 unknown :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:07:46.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.429+0000 7fcdda138700 1 --2- 192.168.123.105:0/2768638150 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcdd4108780 0x7fcdd4108b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:46.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.429+0000 7fcdda138700 1 -- 192.168.123.105:0/2768638150 >> 192.168.123.105:0/2768638150 conn(0x7fcdd40fe280 msgr2=0x7fcdd4100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:46.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.429+0000 7fcdda138700 1 -- 192.168.123.105:0/2768638150 shutdown_connections 2026-03-09T15:07:46.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.429+0000 7fcdda138700 1 -- 192.168.123.105:0/2768638150 wait complete. 2026-03-09T15:07:46.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.429+0000 7fcdda138700 1 Processor -- start 2026-03-09T15:07:46.430 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.430+0000 7fcdda138700 1 -- start start 2026-03-09T15:07:46.431 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.430+0000 7fcdda138700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcdd4102780 0x7fcdd4198440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:46.431 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.430+0000 7fcdda138700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcdd4108780 0x7fcdd4198980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:46.431 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.430+0000 7fcdda138700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcdd4199060 con 0x7fcdd4102780 2026-03-09T15:07:46.431 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.430+0000 7fcdda138700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcdd419cdf0 con 0x7fcdd4108780 2026-03-09T15:07:46.431 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.430+0000 7fcdd2ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcdd4108780 0x7fcdd4198980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:46.431 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.430+0000 7fcdd2ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcdd4108780 0x7fcdd4198980 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:44042/0 (socket says 192.168.123.105:44042) 2026-03-09T15:07:46.431 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.430+0000 7fcdd2ffd700 1 -- 192.168.123.105:0/1241096126 learned_addr learned my addr 192.168.123.105:0/1241096126 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:07:46.431 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.430+0000 7fcdd37fe700 1 --2- 192.168.123.105:0/1241096126 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcdd4102780 0x7fcdd4198440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:46.431 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.431+0000 7fcdd2ffd700 1 -- 192.168.123.105:0/1241096126 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcdd4102780 msgr2=0x7fcdd4198440 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:46.432 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.431+0000 7fcdd2ffd700 1 --2- 
192.168.123.105:0/1241096126 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcdd4102780 0x7fcdd4198440 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:46.432 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.431+0000 7fcdd2ffd700 1 -- 192.168.123.105:0/1241096126 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcdc40097e0 con 0x7fcdd4108780 2026-03-09T15:07:46.432 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.431+0000 7fcdd37fe700 1 --2- 192.168.123.105:0/1241096126 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcdd4102780 0x7fcdd4198440 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T15:07:46.432 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.431+0000 7fcdd2ffd700 1 --2- 192.168.123.105:0/1241096126 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcdd4108780 0x7fcdd4198980 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7fcdc40048c0 tx=0x7fcdc40048f0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:46.432 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.431+0000 7fcdd0ff9700 1 -- 192.168.123.105:0/1241096126 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcdc401d070 con 0x7fcdd4108780 2026-03-09T15:07:46.432 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.431+0000 7fcdda138700 1 -- 192.168.123.105:0/1241096126 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcdd419d070 con 0x7fcdd4108780 2026-03-09T15:07:46.433 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.431+0000 7fcdda138700 1 -- 192.168.123.105:0/1241096126 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- 
mon_subscribe({osdmap=0}) v3 -- 0x7fcdd419d560 con 0x7fcdd4108780 2026-03-09T15:07:46.433 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.431+0000 7fcdd0ff9700 1 -- 192.168.123.105:0/1241096126 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fcdc400f460 con 0x7fcdd4108780 2026-03-09T15:07:46.433 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.431+0000 7fcdd0ff9700 1 -- 192.168.123.105:0/1241096126 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcdc4017610 con 0x7fcdd4108780 2026-03-09T15:07:46.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.432+0000 7fcdda138700 1 -- 192.168.123.105:0/1241096126 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcdb4005320 con 0x7fcdd4108780 2026-03-09T15:07:46.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.433+0000 7fcdd0ff9700 1 -- 192.168.123.105:0/1241096126 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fcdc4044b60 con 0x7fcdd4108780 2026-03-09T15:07:46.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.433+0000 7fcdd0ff9700 1 --2- 192.168.123.105:0/1241096126 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcdc00778c0 0x7fcdc0079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:46.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.433+0000 7fcdd37fe700 1 --2- 192.168.123.105:0/1241096126 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcdc00778c0 0x7fcdc0079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:46.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.433+0000 
7fcdd0ff9700 1 -- 192.168.123.105:0/1241096126 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fcdc409b0d0 con 0x7fcdd4108780 2026-03-09T15:07:46.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.433+0000 7fcdd37fe700 1 --2- 192.168.123.105:0/1241096126 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcdc00778c0 0x7fcdc0079d70 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7fcdd41038c0 tx=0x7fcdbc007400 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:46.436 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.435+0000 7fcdd0ff9700 1 -- 192.168.123.105:0/1241096126 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fcdc40636f0 con 0x7fcdd4108780 2026-03-09T15:07:46.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:46 vm05.local ceph-mon[116516]: from='client.34328 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:07:46.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:46 vm05.local ceph-mon[116516]: pgmap v177: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:07:46.555 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:46 vm05.local ceph-mon[116516]: from='client.34332 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:07:46.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.555+0000 7fcdda138700 1 -- 192.168.123.105:0/1241096126 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: {"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}) v1 -- 0x7fcdb4000bf0 con 0x7fcdc00778c0 2026-03-09T15:07:46.557 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.556+0000 7fcdd0ff9700 1 -- 192.168.123.105:0/1241096126 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+175 (secure 0 0 0) 0x7fcdb4000bf0 con 0x7fcdc00778c0 2026-03-09T15:07:46.557 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:07:46.557 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": null, 2026-03-09T15:07:46.557 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": false, 2026-03-09T15:07:46.557 INFO:teuthology.orchestra.run.vm05.stdout: "which": "", 2026-03-09T15:07:46.557 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [], 2026-03-09T15:07:46.557 INFO:teuthology.orchestra.run.vm05.stdout: "progress": null, 2026-03-09T15:07:46.557 INFO:teuthology.orchestra.run.vm05.stdout: "message": "", 2026-03-09T15:07:46.557 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-09T15:07:46.557 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:07:46.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.558+0000 7fcdda138700 1 -- 192.168.123.105:0/1241096126 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcdc00778c0 msgr2=0x7fcdc0079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:46.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.558+0000 7fcdda138700 1 --2- 192.168.123.105:0/1241096126 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcdc00778c0 0x7fcdc0079d70 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7fcdd41038c0 tx=0x7fcdbc007400 comp rx=0 tx=0).stop 2026-03-09T15:07:46.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.559+0000 7fcdda138700 1 -- 192.168.123.105:0/1241096126 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcdd4108780 msgr2=0x7fcdd4198980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T15:07:46.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.559+0000 7fcdda138700 1 --2- 192.168.123.105:0/1241096126 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcdd4108780 0x7fcdd4198980 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7fcdc40048c0 tx=0x7fcdc40048f0 comp rx=0 tx=0).stop 2026-03-09T15:07:46.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.559+0000 7fcdda138700 1 -- 192.168.123.105:0/1241096126 shutdown_connections 2026-03-09T15:07:46.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.559+0000 7fcdda138700 1 --2- 192.168.123.105:0/1241096126 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcdc00778c0 0x7fcdc0079d70 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:46.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.559+0000 7fcdda138700 1 --2- 192.168.123.105:0/1241096126 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcdd4102780 0x7fcdd4198440 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:46.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.559+0000 7fcdda138700 1 --2- 192.168.123.105:0/1241096126 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcdd4108780 0x7fcdd4198980 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:46.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.559+0000 7fcdda138700 1 -- 192.168.123.105:0/1241096126 >> 192.168.123.105:0/1241096126 conn(0x7fcdd40fe280 msgr2=0x7fcdd40ffc00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:46.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.559+0000 7fcdda138700 1 -- 192.168.123.105:0/1241096126 shutdown_connections 2026-03-09T15:07:46.560 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.559+0000 
7fcdda138700 1 -- 192.168.123.105:0/1241096126 wait complete. 2026-03-09T15:07:46.598 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail' 2026-03-09T15:07:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:46 vm09.local ceph-mon[98742]: from='client.34328 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:07:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:46 vm09.local ceph-mon[98742]: pgmap v177: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:07:46.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:46 vm09.local ceph-mon[98742]: from='client.34332 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:07:46.732 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:07:46.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.961+0000 7fc02d11a700 1 -- 192.168.123.105:0/986363373 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0281066c0 msgr2=0x7fc028106a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:46.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.961+0000 7fc02d11a700 1 --2- 192.168.123.105:0/986363373 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0281066c0 0x7fc028106a90 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7fc010009b00 tx=0x7fc010009e10 comp rx=0 tx=0).stop 2026-03-09T15:07:46.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.962+0000 7fc02d11a700 1 -- 192.168.123.105:0/986363373 
shutdown_connections 2026-03-09T15:07:46.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.962+0000 7fc02d11a700 1 --2- 192.168.123.105:0/986363373 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc028068490 0x7fc028068900 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:46.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.962+0000 7fc02d11a700 1 --2- 192.168.123.105:0/986363373 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc0281066c0 0x7fc028106a90 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:46.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.962+0000 7fc02d11a700 1 -- 192.168.123.105:0/986363373 >> 192.168.123.105:0/986363373 conn(0x7fc0280754a0 msgr2=0x7fc0280758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:46.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.962+0000 7fc02d11a700 1 -- 192.168.123.105:0/986363373 shutdown_connections 2026-03-09T15:07:46.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.962+0000 7fc02d11a700 1 -- 192.168.123.105:0/986363373 wait complete. 
2026-03-09T15:07:46.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.963+0000 7fc02d11a700 1 Processor -- start 2026-03-09T15:07:46.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.963+0000 7fc02d11a700 1 -- start start 2026-03-09T15:07:46.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.963+0000 7fc02d11a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc028068490 0x7fc028193fd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:46.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.963+0000 7fc02d11a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc0281066c0 0x7fc028194510 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:46.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.963+0000 7fc02d11a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc028194b60 con 0x7fc028068490 2026-03-09T15:07:46.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.963+0000 7fc02d11a700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc028194ca0 con 0x7fc0281066c0 2026-03-09T15:07:46.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.963+0000 7fc02659c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc0281066c0 0x7fc028194510 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:46.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.963+0000 7fc02659c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc0281066c0 0x7fc028194510 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:44054/0 (socket says 192.168.123.105:44054) 2026-03-09T15:07:46.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.963+0000 7fc02659c700 1 -- 192.168.123.105:0/1028683600 learned_addr learned my addr 192.168.123.105:0/1028683600 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:07:46.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.964+0000 7fc026d9d700 1 --2- 192.168.123.105:0/1028683600 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc028068490 0x7fc028193fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:46.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.964+0000 7fc02659c700 1 -- 192.168.123.105:0/1028683600 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc028068490 msgr2=0x7fc028193fd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:46.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.964+0000 7fc02659c700 1 --2- 192.168.123.105:0/1028683600 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc028068490 0x7fc028193fd0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:46.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.964+0000 7fc02659c700 1 -- 192.168.123.105:0/1028683600 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc0100097e0 con 0x7fc0281066c0 2026-03-09T15:07:46.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.964+0000 7fc026d9d700 1 --2- 192.168.123.105:0/1028683600 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc028068490 0x7fc028193fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T15:07:46.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.964+0000 7fc02659c700 1 --2- 192.168.123.105:0/1028683600 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc0281066c0 0x7fc028194510 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7fc01800c930 tx=0x7fc01800cc40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:46.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.964+0000 7fc01ffff700 1 -- 192.168.123.105:0/1028683600 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc018007ab0 con 0x7fc0281066c0 2026-03-09T15:07:46.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.965+0000 7fc02d11a700 1 -- 192.168.123.105:0/1028683600 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc028198af0 con 0x7fc0281066c0 2026-03-09T15:07:46.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.965+0000 7fc02d11a700 1 -- 192.168.123.105:0/1028683600 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc028199040 con 0x7fc0281066c0 2026-03-09T15:07:46.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.965+0000 7fc01ffff700 1 -- 192.168.123.105:0/1028683600 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fc01800ce80 con 0x7fc0281066c0 2026-03-09T15:07:46.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.965+0000 7fc01ffff700 1 -- 192.168.123.105:0/1028683600 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc018018730 con 0x7fc0281066c0 2026-03-09T15:07:46.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.965+0000 7fc02d11a700 1 -- 192.168.123.105:0/1028683600 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7fc02804ea50 con 0x7fc0281066c0 2026-03-09T15:07:46.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.966+0000 7fc01ffff700 1 -- 192.168.123.105:0/1028683600 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc018007c10 con 0x7fc0281066c0 2026-03-09T15:07:46.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.967+0000 7fc01ffff700 1 --2- 192.168.123.105:0/1028683600 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc01407bcd0 0x7fc01407e180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:46.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.967+0000 7fc026d9d700 1 --2- 192.168.123.105:0/1028683600 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc01407bcd0 0x7fc01407e180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:46.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.967+0000 7fc01ffff700 1 -- 192.168.123.105:0/1028683600 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fc018099210 con 0x7fc0281066c0 2026-03-09T15:07:46.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.967+0000 7fc026d9d700 1 --2- 192.168.123.105:0/1028683600 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc01407bcd0 0x7fc01407e180 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fc010006010 tx=0x7fc01000b540 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:46.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:46.969+0000 7fc01ffff700 1 -- 192.168.123.105:0/1028683600 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc0180618b0 con 0x7fc0281066c0 2026-03-09T15:07:47.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.123+0000 7fc02d11a700 1 -- 192.168.123.105:0/1028683600 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "health", "detail": "detail"} v 0) v1 -- 0x7fc028198c80 con 0x7fc0281066c0 2026-03-09T15:07:47.125 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.125+0000 7fc01ffff700 1 -- 192.168.123.105:0/1028683600 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "health", "detail": "detail"}]=0 v0) v1 ==== 74+0+201 (secure 0 0 0) 0x7fc028198c80 con 0x7fc0281066c0 2026-03-09T15:07:47.127 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_WARN 1 filesystem with deprecated feature inline_data 2026-03-09T15:07:47.127 INFO:teuthology.orchestra.run.vm05.stdout:[WRN] FS_INLINE_DATA_DEPRECATED: 1 filesystem with deprecated feature inline_data 2026-03-09T15:07:47.127 INFO:teuthology.orchestra.run.vm05.stdout: fs cephfs has deprecated feature inline_data enabled. 
2026-03-09T15:07:47.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.128+0000 7fc02d11a700 1 -- 192.168.123.105:0/1028683600 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc01407bcd0 msgr2=0x7fc01407e180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:47.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.128+0000 7fc02d11a700 1 --2- 192.168.123.105:0/1028683600 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc01407bcd0 0x7fc01407e180 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fc010006010 tx=0x7fc01000b540 comp rx=0 tx=0).stop 2026-03-09T15:07:47.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.128+0000 7fc02d11a700 1 -- 192.168.123.105:0/1028683600 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc0281066c0 msgr2=0x7fc028194510 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:47.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.128+0000 7fc02d11a700 1 --2- 192.168.123.105:0/1028683600 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc0281066c0 0x7fc028194510 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7fc01800c930 tx=0x7fc01800cc40 comp rx=0 tx=0).stop 2026-03-09T15:07:47.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.129+0000 7fc02d11a700 1 -- 192.168.123.105:0/1028683600 shutdown_connections 2026-03-09T15:07:47.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.129+0000 7fc02d11a700 1 --2- 192.168.123.105:0/1028683600 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc01407bcd0 0x7fc01407e180 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:47.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.129+0000 7fc02d11a700 1 --2- 192.168.123.105:0/1028683600 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc028068490 0x7fc028193fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:47.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.129+0000 7fc02d11a700 1 --2- 192.168.123.105:0/1028683600 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc0281066c0 0x7fc028194510 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:47.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.129+0000 7fc02d11a700 1 -- 192.168.123.105:0/1028683600 >> 192.168.123.105:0/1028683600 conn(0x7fc0280754a0 msgr2=0x7fc0280febe0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:47.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.129+0000 7fc02d11a700 1 -- 192.168.123.105:0/1028683600 shutdown_connections 2026-03-09T15:07:47.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.129+0000 7fc02d11a700 1 -- 192.168.123.105:0/1028683600 wait complete. 2026-03-09T15:07:47.174 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions' 2026-03-09T15:07:47.308 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:07:47.381 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:47 vm05.local ceph-mon[116516]: from='client.44273 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:07:47.381 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:47 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/1028683600' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:07:47.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.545+0000 7fb1f33d6700 1 -- 192.168.123.105:0/854060130 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1ec101810 msgr2=0x7fb1ec101be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:47.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.545+0000 7fb1f33d6700 1 --2- 192.168.123.105:0/854060130 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1ec101810 0x7fb1ec101be0 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7fb1dc009b30 tx=0x7fb1dc009e40 comp rx=0 tx=0).stop 2026-03-09T15:07:47.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.546+0000 7fb1f33d6700 1 -- 192.168.123.105:0/854060130 shutdown_connections 2026-03-09T15:07:47.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.546+0000 7fb1f33d6700 1 --2- 192.168.123.105:0/854060130 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1ec102120 0x7fb1ec10a620 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:47.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.546+0000 7fb1f33d6700 1 --2- 192.168.123.105:0/854060130 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1ec101810 0x7fb1ec101be0 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:47.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.546+0000 7fb1f33d6700 1 -- 192.168.123.105:0/854060130 >> 192.168.123.105:0/854060130 conn(0x7fb1ec076270 msgr2=0x7fb1ec076670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:47.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.546+0000 7fb1f33d6700 1 -- 192.168.123.105:0/854060130 shutdown_connections 2026-03-09T15:07:47.547 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.546+0000 7fb1f33d6700 1 -- 192.168.123.105:0/854060130 wait complete. 2026-03-09T15:07:47.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.547+0000 7fb1f33d6700 1 Processor -- start 2026-03-09T15:07:47.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.547+0000 7fb1f33d6700 1 -- start start 2026-03-09T15:07:47.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.547+0000 7fb1f33d6700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1ec101810 0x7fb1ec196280 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:47.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.547+0000 7fb1f33d6700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1ec102120 0x7fb1ec1967c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:47.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.547+0000 7fb1f33d6700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb1ec196ea0 con 0x7fb1ec101810 2026-03-09T15:07:47.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.547+0000 7fb1f33d6700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb1ec19ac30 con 0x7fb1ec102120 2026-03-09T15:07:47.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.547+0000 7fb1f0971700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1ec102120 0x7fb1ec1967c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:47.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.548+0000 7fb1f1172700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1ec101810 0x7fb1ec196280 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:47.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.548+0000 7fb1f0971700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1ec102120 0x7fb1ec1967c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:44084/0 (socket says 192.168.123.105:44084) 2026-03-09T15:07:47.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.548+0000 7fb1f0971700 1 -- 192.168.123.105:0/362817437 learned_addr learned my addr 192.168.123.105:0/362817437 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:07:47.549 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.548+0000 7fb1f1172700 1 -- 192.168.123.105:0/362817437 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1ec102120 msgr2=0x7fb1ec1967c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:47.549 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.548+0000 7fb1f1172700 1 --2- 192.168.123.105:0/362817437 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1ec102120 0x7fb1ec1967c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:47.549 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.548+0000 7fb1f1172700 1 -- 192.168.123.105:0/362817437 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb1dc0097e0 con 0x7fb1ec101810 2026-03-09T15:07:47.549 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.548+0000 7fb1f1172700 1 --2- 192.168.123.105:0/362817437 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1ec101810 0x7fb1ec196280 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7fb1dc006010 tx=0x7fb1dc004930 
comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:47.550 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.548+0000 7fb1e27fc700 1 -- 192.168.123.105:0/362817437 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb1dc01d070 con 0x7fb1ec101810 2026-03-09T15:07:47.550 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.548+0000 7fb1e27fc700 1 -- 192.168.123.105:0/362817437 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fb1dc022470 con 0x7fb1ec101810 2026-03-09T15:07:47.550 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.548+0000 7fb1e27fc700 1 -- 192.168.123.105:0/362817437 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb1dc00f650 con 0x7fb1ec101810 2026-03-09T15:07:47.550 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.548+0000 7fb1f33d6700 1 -- 192.168.123.105:0/362817437 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb1ec19aeb0 con 0x7fb1ec101810 2026-03-09T15:07:47.550 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.548+0000 7fb1f33d6700 1 -- 192.168.123.105:0/362817437 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb1ec19b3a0 con 0x7fb1ec101810 2026-03-09T15:07:47.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.550+0000 7fb1e27fc700 1 -- 192.168.123.105:0/362817437 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fb1dc00f7b0 con 0x7fb1ec101810 2026-03-09T15:07:47.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.550+0000 7fb1f33d6700 1 -- 192.168.123.105:0/362817437 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb1ec107d20 con 0x7fb1ec101810 
2026-03-09T15:07:47.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.551+0000 7fb1e27fc700 1 --2- 192.168.123.105:0/362817437 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb1d8077870 0x7fb1d8079d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:07:47.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.551+0000 7fb1e27fc700 1 -- 192.168.123.105:0/362817437 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fb1dc09b230 con 0x7fb1ec101810 2026-03-09T15:07:47.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.552+0000 7fb1f0971700 1 --2- 192.168.123.105:0/362817437 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb1d8077870 0x7fb1d8079d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:07:47.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.553+0000 7fb1f0971700 1 --2- 192.168.123.105:0/362817437 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb1d8077870 0x7fb1d8079d20 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7fb1ec1978a0 tx=0x7fb1e800b4a0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:07:47.554 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.553+0000 7fb1e27fc700 1 -- 192.168.123.105:0/362817437 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fb1dc05fea0 con 0x7fb1ec101810 2026-03-09T15:07:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:47 vm09.local ceph-mon[98742]: from='client.44273 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-09T15:07:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:47 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/1028683600' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T15:07:47.714 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.713+0000 7fb1f33d6700 1 -- 192.168.123.105:0/362817437 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7fb1ec1975e0 con 0x7fb1ec101810 2026-03-09T15:07:47.715 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.714+0000 7fb1e27fc700 1 -- 192.168.123.105:0/362817437 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7fb1dc05fea0 con 0x7fb1ec101810 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb 
(e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-09T15:07:47.716 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:07:47.717 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:07:47.717 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 14 2026-03-09T15:07:47.717 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:07:47.717 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:07:47.718 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.717+0000 7fb1f33d6700 1 -- 192.168.123.105:0/362817437 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb1d8077870 msgr2=0x7fb1d8079d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:47.718 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.717+0000 7fb1f33d6700 1 --2- 192.168.123.105:0/362817437 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb1d8077870 0x7fb1d8079d20 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7fb1ec1978a0 tx=0x7fb1e800b4a0 comp rx=0 tx=0).stop 2026-03-09T15:07:47.718 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.717+0000 7fb1f33d6700 1 -- 192.168.123.105:0/362817437 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1ec101810 msgr2=0x7fb1ec196280 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:07:47.718 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.718+0000 7fb1f33d6700 1 --2- 192.168.123.105:0/362817437 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1ec101810 0x7fb1ec196280 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7fb1dc006010 tx=0x7fb1dc004930 comp rx=0 tx=0).stop 2026-03-09T15:07:47.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.718+0000 7fb1f33d6700 1 -- 192.168.123.105:0/362817437 shutdown_connections 
2026-03-09T15:07:47.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.718+0000 7fb1f33d6700 1 --2- 192.168.123.105:0/362817437 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb1d8077870 0x7fb1d8079d20 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:47.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.718+0000 7fb1f33d6700 1 --2- 192.168.123.105:0/362817437 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb1ec101810 0x7fb1ec196280 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:47.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.718+0000 7fb1f33d6700 1 --2- 192.168.123.105:0/362817437 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb1ec102120 0x7fb1ec1967c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:07:47.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.718+0000 7fb1f33d6700 1 -- 192.168.123.105:0/362817437 >> 192.168.123.105:0/362817437 conn(0x7fb1ec076270 msgr2=0x7fb1ec104e60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:07:47.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.718+0000 7fb1f33d6700 1 -- 192.168.123.105:0/362817437 shutdown_connections 2026-03-09T15:07:47.719 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:07:47.719+0000 7fb1f33d6700 1 -- 192.168.123.105:0/362817437 wait complete. 
2026-03-09T15:07:47.775 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'echo "wait for servicemap items w/ changing names to refresh"' 2026-03-09T15:07:47.911 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:07:48.090 INFO:teuthology.orchestra.run.vm05.stdout:wait for servicemap items w/ changing names to refresh 2026-03-09T15:07:48.123 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 60' 2026-03-09T15:07:48.256 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:07:48.280 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:48 vm05.local ceph-mon[116516]: pgmap v178: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:07:48.280 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:48 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/362817437' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:48.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:48 vm09.local ceph-mon[98742]: pgmap v178: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:07:48.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:48 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/362817437' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:07:50.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:50 vm09.local ceph-mon[98742]: pgmap v179: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:07:50.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:50 vm05.local ceph-mon[116516]: pgmap v179: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:07:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:52 vm05.local ceph-mon[116516]: pgmap v180: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:07:52.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:52 vm09.local ceph-mon[98742]: pgmap v180: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:07:54.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:54 vm05.local ceph-mon[116516]: pgmap v181: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:07:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:54 vm09.local ceph-mon[98742]: pgmap v181: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:07:56.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:56 vm05.local ceph-mon[116516]: pgmap v182: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:07:56.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:07:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:56 vm09.local ceph-mon[98742]: pgmap v182: 
65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:07:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:07:58.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:07:58 vm05.local ceph-mon[116516]: pgmap v183: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:07:58.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:07:58 vm09.local ceph-mon[98742]: pgmap v183: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:00.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:00 vm05.local ceph-mon[116516]: pgmap v184: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:00.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:00 vm09.local ceph-mon[98742]: pgmap v184: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:02.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:02 vm05.local ceph-mon[116516]: pgmap v185: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:02 vm09.local ceph-mon[98742]: pgmap v185: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:04.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:04 vm05.local ceph-mon[116516]: pgmap v186: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:04.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:04 vm09.local ceph-mon[98742]: 
pgmap v186: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:06.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:06 vm05.local ceph-mon[116516]: pgmap v187: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:06.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:06 vm09.local ceph-mon[98742]: pgmap v187: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:08.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:08 vm05.local ceph-mon[116516]: pgmap v188: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:08.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:08 vm09.local ceph-mon[98742]: pgmap v188: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:10.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:10 vm05.local ceph-mon[116516]: pgmap v189: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:10.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:10 vm09.local ceph-mon[98742]: pgmap v189: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:11.768 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:08:11.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:08:12.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 
15:08:12 vm05.local ceph-mon[116516]: pgmap v190: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:12.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:12 vm09.local ceph-mon[98742]: pgmap v190: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:14.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:14 vm05.local ceph-mon[116516]: pgmap v191: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:14.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:14 vm09.local ceph-mon[98742]: pgmap v191: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:15.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:15 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:08:15.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:15 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:08:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:16 vm05.local ceph-mon[116516]: pgmap v192: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:08:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T15:08:16.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:08:16.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:16 vm09.local ceph-mon[98742]: pgmap v192: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:16.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:08:16.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:08:16.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:08:18.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:18 vm05.local ceph-mon[116516]: pgmap v193: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:18.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:18 vm09.local ceph-mon[98742]: pgmap v193: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:20.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:20 vm05.local ceph-mon[116516]: pgmap v194: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:20.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:20 vm09.local ceph-mon[98742]: pgmap v194: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:22.804 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:22 vm05.local ceph-mon[116516]: pgmap v195: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:22.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:22 vm09.local ceph-mon[98742]: pgmap v195: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:24.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:24 vm05.local ceph-mon[116516]: pgmap v196: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:24 vm09.local ceph-mon[98742]: pgmap v196: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:26 vm09.local ceph-mon[98742]: pgmap v197: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:26 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:08:27.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:26 vm05.local ceph-mon[116516]: pgmap v197: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:27.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:26 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:08:28.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:28 vm09.local ceph-mon[98742]: pgmap v198: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s 
rd, 2 op/s 2026-03-09T15:08:29.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:28 vm05.local ceph-mon[116516]: pgmap v198: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:30.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:30 vm09.local ceph-mon[98742]: pgmap v199: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:31.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:30 vm05.local ceph-mon[116516]: pgmap v199: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:32.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:32 vm09.local ceph-mon[98742]: pgmap v200: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:33.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:32 vm05.local ceph-mon[116516]: pgmap v200: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:34.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:34 vm09.local ceph-mon[98742]: pgmap v201: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:34 vm05.local ceph-mon[116516]: pgmap v201: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:36.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:36 vm09.local ceph-mon[98742]: pgmap v202: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:37.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:36 vm05.local ceph-mon[116516]: pgmap v202: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 
2026-03-09T15:08:38.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:38 vm09.local ceph-mon[98742]: pgmap v203: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:39.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:38 vm05.local ceph-mon[116516]: pgmap v203: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:40.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:40 vm09.local ceph-mon[98742]: pgmap v204: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:41.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:40 vm05.local ceph-mon[116516]: pgmap v204: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:41.933 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:08:41.955 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:08:42.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:42 vm09.local ceph-mon[98742]: pgmap v205: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:42 vm05.local ceph-mon[116516]: pgmap v205: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:08:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:44 vm05.local ceph-mon[116516]: pgmap v206: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 
120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:44 vm09.local ceph-mon[98742]: pgmap v206: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:47.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:46 vm05.local ceph-mon[116516]: pgmap v207: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:47.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:46 vm09.local ceph-mon[98742]: pgmap v207: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:48.461 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-09T15:08:48.608 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:48.682 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:48 vm05.local ceph-mon[116516]: pgmap v208: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:08:48.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.897+0000 7ff26ab10700 1 -- 192.168.123.105:0/628526648 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff264068490 msgr2=0x7ff264068900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:48.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.897+0000 7ff26ab10700 1 --2- 192.168.123.105:0/628526648 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff264068490 0x7ff264068900 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7ff258009b00 
tx=0x7ff258009e10 comp rx=0 tx=0).stop 2026-03-09T15:08:48.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.898+0000 7ff26ab10700 1 -- 192.168.123.105:0/628526648 shutdown_connections 2026-03-09T15:08:48.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.898+0000 7ff26ab10700 1 --2- 192.168.123.105:0/628526648 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff264068490 0x7ff264068900 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:48.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.898+0000 7ff26ab10700 1 --2- 192.168.123.105:0/628526648 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2641013c0 0x7ff264101790 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:48.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.898+0000 7ff26ab10700 1 -- 192.168.123.105:0/628526648 >> 192.168.123.105:0/628526648 conn(0x7ff2640754a0 msgr2=0x7ff2640758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:48.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.898+0000 7ff26ab10700 1 -- 192.168.123.105:0/628526648 shutdown_connections 2026-03-09T15:08:48.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.898+0000 7ff26ab10700 1 -- 192.168.123.105:0/628526648 wait complete. 
2026-03-09T15:08:48.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.899+0000 7ff26ab10700 1 Processor -- start 2026-03-09T15:08:48.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.899+0000 7ff26ab10700 1 -- start start 2026-03-09T15:08:48.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.899+0000 7ff26ab10700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff264068490 0x7ff264198340 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:48.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.900+0000 7ff2688ac700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff264068490 0x7ff264198340 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:48.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.900+0000 7ff2688ac700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff264068490 0x7ff264198340 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:42726/0 (socket says 192.168.123.105:42726) 2026-03-09T15:08:48.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.900+0000 7ff2688ac700 1 -- 192.168.123.105:0/1844674703 learned_addr learned my addr 192.168.123.105:0/1844674703 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:48.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.900+0000 7ff26ab10700 1 --2- 192.168.123.105:0/1844674703 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2641013c0 0x7ff264198880 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:48.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.900+0000 7ff26ab10700 1 -- 192.168.123.105:0/1844674703 
--> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff264198f60 con 0x7ff264068490 2026-03-09T15:08:48.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.900+0000 7ff26ab10700 1 -- 192.168.123.105:0/1844674703 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff26419ccf0 con 0x7ff2641013c0 2026-03-09T15:08:48.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.900+0000 7ff263fff700 1 --2- 192.168.123.105:0/1844674703 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2641013c0 0x7ff264198880 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:48.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.900+0000 7ff2688ac700 1 -- 192.168.123.105:0/1844674703 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2641013c0 msgr2=0x7ff264198880 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:48.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.900+0000 7ff2688ac700 1 --2- 192.168.123.105:0/1844674703 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2641013c0 0x7ff264198880 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:48.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.900+0000 7ff2688ac700 1 -- 192.168.123.105:0/1844674703 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff2580097e0 con 0x7ff264068490 2026-03-09T15:08:48.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.901+0000 7ff2688ac700 1 --2- 192.168.123.105:0/1844674703 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff264068490 0x7ff264198340 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7ff25400dc40 tx=0x7ff25400df50 comp rx=0 
tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:48.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.901+0000 7ff261ffb700 1 -- 192.168.123.105:0/1844674703 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff2540098e0 con 0x7ff264068490 2026-03-09T15:08:48.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.901+0000 7ff261ffb700 1 -- 192.168.123.105:0/1844674703 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7ff254010460 con 0x7ff264068490 2026-03-09T15:08:48.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.901+0000 7ff26ab10700 1 -- 192.168.123.105:0/1844674703 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff26419cfd0 con 0x7ff264068490 2026-03-09T15:08:48.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.901+0000 7ff26ab10700 1 -- 192.168.123.105:0/1844674703 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff26419d520 con 0x7ff264068490 2026-03-09T15:08:48.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.901+0000 7ff261ffb700 1 -- 192.168.123.105:0/1844674703 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff25400f5d0 con 0x7ff264068490 2026-03-09T15:08:48.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.903+0000 7ff261ffb700 1 -- 192.168.123.105:0/1844674703 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ff25400f730 con 0x7ff264068490 2026-03-09T15:08:48.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.903+0000 7ff26ab10700 1 -- 192.168.123.105:0/1844674703 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff26404ea50 con 0x7ff264068490 
2026-03-09T15:08:48.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.903+0000 7ff261ffb700 1 --2- 192.168.123.105:0/1844674703 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff24c0778c0 0x7ff24c079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:48.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.903+0000 7ff261ffb700 1 -- 192.168.123.105:0/1844674703 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7ff254099a40 con 0x7ff264068490 2026-03-09T15:08:48.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.906+0000 7ff263fff700 1 --2- 192.168.123.105:0/1844674703 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff24c0778c0 0x7ff24c079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:48.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.906+0000 7ff263fff700 1 --2- 192.168.123.105:0/1844674703 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff24c0778c0 0x7ff24c079d70 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7ff264199960 tx=0x7ff258005fb0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:48.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:48.906+0000 7ff261ffb700 1 -- 192.168.123.105:0/1844674703 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ff254063130 con 0x7ff264068490 2026-03-09T15:08:49.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.032+0000 7ff26ab10700 1 -- 192.168.123.105:0/1844674703 --> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] -- mgr_command(tid 0: 
{"prefix": "orch ps", "target": ["mon-mgr", ""]}) v1 -- 0x7ff26419d800 con 0x7ff24c0778c0 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.039+0000 7ff261ffb700 1 -- 192.168.123.105:0/1844674703 <== mgr.34104 v2:192.168.123.105:6800/3615781754 1 ==== mgr_command_reply(tid 0: 0 ) v1 ==== 8+0+3576 (secure 0 0 0) 0x7ff26419d800 con 0x7ff24c0778c0 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.vm05 vm05 *:9093,9094 running (5m) 97s ago 12m 24.0M - 0.25.0 c8568f914cd2 7635cece310c 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm05 vm05 running (100s) 97s ago 13m 9852k - 19.2.3-678-ge911bdeb 654f31e6858e 7e4630f85fea 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:ceph-exporter.vm09 vm09 running (99s) 98s ago 12m 9559k - 19.2.3-678-ge911bdeb 654f31e6858e 7757fd500ae0 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm05 vm05 running (4m) 97s ago 13m 7834k - 19.2.3-678-ge911bdeb 654f31e6858e 35d8c0ae5a58 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:crash.vm09 vm09 running (4m) 98s ago 12m 8308k - 19.2.3-678-ge911bdeb 654f31e6858e 82bdad36caf9 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:grafana.vm05 vm05 *:3000 running (5m) 97s ago 12m 84.0M - 10.4.0 c8b91775d855 eb6431f63d88 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.nrocqt vm05 running (2m) 97s ago 10m 103M - 19.2.3-678-ge911bdeb 654f31e6858e f41a092cac53 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm05.rrcyql vm05 running (2m) 97s ago 10m 262M - 19.2.3-678-ge911bdeb 654f31e6858e 5a60c73f6399 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.jrhwzz vm09 running (106s) 98s ago 10m 15.4M - 
19.2.3-678-ge911bdeb 654f31e6858e 05062c46f72b 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:mds.cephfs.vm09.ohmitn vm09 running (117s) 98s ago 10m 20.6M - 19.2.3-678-ge911bdeb 654f31e6858e 615aff4b5fba 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm05.lhsexd vm05 *:8443,9283,8765 running (6m) 97s ago 13m 626M - 19.2.3-678-ge911bdeb 654f31e6858e 65927226544e 2026-03-09T15:08:49.040 INFO:teuthology.orchestra.run.vm05.stdout:mgr.vm09.cfuwdz vm09 *:8443,9283,8765 running (6m) 98s ago 12m 497M - 19.2.3-678-ge911bdeb 654f31e6858e acf5a6f3f804 2026-03-09T15:08:49.041 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm05 vm05 running (5m) 97s ago 13m 67.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1e11655f7d87 2026-03-09T15:08:49.041 INFO:teuthology.orchestra.run.vm05.stdout:mon.vm09 vm09 running (4m) 98s ago 11m 58.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e d1f0309f4d58 2026-03-09T15:08:49.041 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm05 vm05 *:9100 running (6m) 97s ago 12m 10.3M - 1.7.0 72c9c2088986 888d071c50d9 2026-03-09T15:08:49.041 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.vm09 vm09 *:9100 running (6m) 98s ago 12m 9617k - 1.7.0 72c9c2088986 22c96a576a60 2026-03-09T15:08:49.041 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (4m) 97s ago 11m 164M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f2883abca2d2 2026-03-09T15:08:49.041 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (4m) 97s ago 11m 127M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b830d7f76498 2026-03-09T15:08:49.041 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (3m) 97s ago 11m 127M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 01cf87b8bc05 2026-03-09T15:08:49.041 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm09 running (3m) 98s ago 11m 164M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9359c3ced4d3 2026-03-09T15:08:49.041 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm09 running (3m) 98s ago 10m 111M 4096M 
19.2.3-678-ge911bdeb 654f31e6858e 985038f550f8 2026-03-09T15:08:49.041 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm09 running (2m) 98s ago 10m 101M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 15ec92bc2880 2026-03-09T15:08:49.041 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.vm05 vm05 *:9095 running (6m) 97s ago 12m 58.0M - 2.51.0 1d3b7f56885b e6f470b0ba11 2026-03-09T15:08:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.042+0000 7ff26ab10700 1 -- 192.168.123.105:0/1844674703 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff24c0778c0 msgr2=0x7ff24c079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.042+0000 7ff26ab10700 1 --2- 192.168.123.105:0/1844674703 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff24c0778c0 0x7ff24c079d70 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7ff264199960 tx=0x7ff258005fb0 comp rx=0 tx=0).stop 2026-03-09T15:08:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.042+0000 7ff26ab10700 1 -- 192.168.123.105:0/1844674703 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff264068490 msgr2=0x7ff264198340 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.042+0000 7ff26ab10700 1 --2- 192.168.123.105:0/1844674703 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff264068490 0x7ff264198340 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7ff25400dc40 tx=0x7ff25400df50 comp rx=0 tx=0).stop 2026-03-09T15:08:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.042+0000 7ff26ab10700 1 -- 192.168.123.105:0/1844674703 shutdown_connections 2026-03-09T15:08:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.042+0000 7ff26ab10700 1 --2- 192.168.123.105:0/1844674703 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff24c0778c0 0x7ff24c079d70 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.042+0000 7ff26ab10700 1 --2- 192.168.123.105:0/1844674703 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff264068490 0x7ff264198340 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.042+0000 7ff26ab10700 1 --2- 192.168.123.105:0/1844674703 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff2641013c0 0x7ff264198880 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.043+0000 7ff26ab10700 1 -- 192.168.123.105:0/1844674703 >> 192.168.123.105:0/1844674703 conn(0x7ff2640754a0 msgr2=0x7ff2640fddd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:49.043 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.043+0000 7ff26ab10700 1 -- 192.168.123.105:0/1844674703 shutdown_connections 2026-03-09T15:08:49.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.043+0000 7ff26ab10700 1 -- 192.168.123.105:0/1844674703 wait complete. 
2026-03-09T15:08:49.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:48 vm09.local ceph-mon[98742]: pgmap v208: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:08:49.119 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions' 2026-03-09T15:08:49.272 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:49.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.554+0000 7f991bc4d700 1 -- 192.168.123.105:0/1294084953 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99140730f0 msgr2=0x7f99140734c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:49.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.554+0000 7f991bc4d700 1 --2- 192.168.123.105:0/1294084953 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99140730f0 0x7f99140734c0 secure :-1 s=READY pgs=122 cs=0 l=1 rev1=1 crypto rx=0x7f9904009b00 tx=0x7f9904009e10 comp rx=0 tx=0).stop 2026-03-09T15:08:49.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.555+0000 7f991bc4d700 1 -- 192.168.123.105:0/1294084953 shutdown_connections 2026-03-09T15:08:49.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.555+0000 7f991bc4d700 1 --2- 192.168.123.105:0/1294084953 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9914073a00 0x7f9914110ff0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:49.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.555+0000 7f991bc4d700 1 --2- 192.168.123.105:0/1294084953 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99140730f0 0x7f99140734c0 unknown :-1 s=CLOSED pgs=122 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:49.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.555+0000 7f991bc4d700 1 -- 192.168.123.105:0/1294084953 >> 192.168.123.105:0/1294084953 conn(0x7f99140fc000 msgr2=0x7f99140fe410 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:49.556 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.556+0000 7f991bc4d700 1 -- 192.168.123.105:0/1294084953 shutdown_connections 2026-03-09T15:08:49.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.556+0000 7f991bc4d700 1 -- 192.168.123.105:0/1294084953 wait complete. 2026-03-09T15:08:49.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.556+0000 7f991bc4d700 1 Processor -- start 2026-03-09T15:08:49.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.556+0000 7f991bc4d700 1 -- start start 2026-03-09T15:08:49.557 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.557+0000 7f991bc4d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99140730f0 0x7f991419c650 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:49.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.557+0000 7f991bc4d700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9914073a00 0x7f991419cb90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:49.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.557+0000 7f991bc4d700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f991419d1e0 con 0x7f99140730f0 2026-03-09T15:08:49.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.557+0000 7f991bc4d700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7f991419d320 con 0x7f9914073a00 2026-03-09T15:08:49.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.557+0000 7f99199e9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99140730f0 0x7f991419c650 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:49.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.557+0000 7f99191e8700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9914073a00 0x7f991419cb90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:49.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.557+0000 7f99191e8700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9914073a00 0x7f991419cb90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45574/0 (socket says 192.168.123.105:45574) 2026-03-09T15:08:49.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.557+0000 7f99191e8700 1 -- 192.168.123.105:0/1461488516 learned_addr learned my addr 192.168.123.105:0/1461488516 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:49.558 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.558+0000 7f99199e9700 1 -- 192.168.123.105:0/1461488516 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9914073a00 msgr2=0x7f991419cb90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:49.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.558+0000 7f99199e9700 1 --2- 192.168.123.105:0/1461488516 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9914073a00 0x7f991419cb90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp 
rx=0 tx=0).stop 2026-03-09T15:08:49.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.558+0000 7f99199e9700 1 -- 192.168.123.105:0/1461488516 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f99040097e0 con 0x7f99140730f0 2026-03-09T15:08:49.559 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.558+0000 7f99199e9700 1 --2- 192.168.123.105:0/1461488516 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99140730f0 0x7f991419c650 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7f9904006010 tx=0x7f990400bb10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:49.561 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.558+0000 7f990affd700 1 -- 192.168.123.105:0/1461488516 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f990401d070 con 0x7f99140730f0 2026-03-09T15:08:49.561 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.558+0000 7f990affd700 1 -- 192.168.123.105:0/1461488516 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f990400bd30 con 0x7f99140730f0 2026-03-09T15:08:49.561 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.558+0000 7f990affd700 1 -- 192.168.123.105:0/1461488516 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f99040056d0 con 0x7f99140730f0 2026-03-09T15:08:49.561 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.558+0000 7f991bc4d700 1 -- 192.168.123.105:0/1461488516 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f99141a1170 con 0x7f99140730f0 2026-03-09T15:08:49.561 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.558+0000 7f991bc4d700 1 -- 192.168.123.105:0/1461488516 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({osdmap=0}) v3 -- 0x7f99141a16c0 con 0x7f99140730f0 2026-03-09T15:08:49.562 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.559+0000 7f991bc4d700 1 -- 192.168.123.105:0/1461488516 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f991410e770 con 0x7f99140730f0 2026-03-09T15:08:49.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.564+0000 7f990affd700 1 -- 192.168.123.105:0/1461488516 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f990400f8d0 con 0x7f99140730f0 2026-03-09T15:08:49.565 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.564+0000 7f990affd700 1 --2- 192.168.123.105:0/1461488516 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9900077990 0x7f9900079e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:49.566 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.565+0000 7f990affd700 1 -- 192.168.123.105:0/1461488516 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f990409ba20 con 0x7f99140730f0 2026-03-09T15:08:49.566 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.565+0000 7f990affd700 1 -- 192.168.123.105:0/1461488516 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f99040cba90 con 0x7f99140730f0 2026-03-09T15:08:49.566 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.565+0000 7f99191e8700 1 --2- 192.168.123.105:0/1461488516 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9900077990 0x7f9900079e40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:49.566 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.566+0000 7f99191e8700 1 --2- 192.168.123.105:0/1461488516 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9900077990 0x7f9900079e40 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f991419dc70 tx=0x7f9910005f90 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:49.733 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.732+0000 7f991bc4d700 1 -- 192.168.123.105:0/1461488516 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f99141a1300 con 0x7f99140730f0 2026-03-09T15:08:49.734 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.734+0000 7f990affd700 1 -- 192.168.123.105:0/1461488516 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f99141a1300 con 0x7f99140730f0 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 6 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: }, 
2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: "mds": { 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 14 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-09T15:08:49.735 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-09T15:08:49.737 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.737+0000 7f991bc4d700 1 -- 192.168.123.105:0/1461488516 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9900077990 msgr2=0x7f9900079e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:49.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.737+0000 7f991bc4d700 1 --2- 192.168.123.105:0/1461488516 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9900077990 0x7f9900079e40 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f991419dc70 tx=0x7f9910005f90 comp rx=0 tx=0).stop 2026-03-09T15:08:49.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.737+0000 7f991bc4d700 1 -- 192.168.123.105:0/1461488516 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99140730f0 msgr2=0x7f991419c650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:49.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.737+0000 7f991bc4d700 1 --2- 192.168.123.105:0/1461488516 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99140730f0 0x7f991419c650 secure :-1 s=READY pgs=123 cs=0 l=1 rev1=1 crypto rx=0x7f9904006010 tx=0x7f990400bb10 comp rx=0 
tx=0).stop 2026-03-09T15:08:49.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.737+0000 7f991bc4d700 1 -- 192.168.123.105:0/1461488516 shutdown_connections 2026-03-09T15:08:49.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.738+0000 7f991bc4d700 1 --2- 192.168.123.105:0/1461488516 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9900077990 0x7f9900079e40 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:49.738 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.738+0000 7f991bc4d700 1 --2- 192.168.123.105:0/1461488516 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99140730f0 0x7f991419c650 unknown :-1 s=CLOSED pgs=123 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:49.739 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.738+0000 7f991bc4d700 1 --2- 192.168.123.105:0/1461488516 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9914073a00 0x7f991419cb90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:49.739 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.738+0000 7f991bc4d700 1 -- 192.168.123.105:0/1461488516 >> 192.168.123.105:0/1461488516 conn(0x7f99140fc000 msgr2=0x7f9914102b00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:49.739 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.738+0000 7f991bc4d700 1 -- 192.168.123.105:0/1461488516 shutdown_connections 2026-03-09T15:08:49.739 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:49.738+0000 7f991bc4d700 1 -- 192.168.123.105:0/1461488516 wait complete. 
2026-03-09T15:08:49.826 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:49 vm05.local ceph-mon[116516]: from='client.34348 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:08:49.827 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | length == 1'"'"'' 2026-03-09T15:08:49.993 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:50.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:49 vm09.local ceph-mon[98742]: from='client.34348 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T15:08:50.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.272+0000 7f71a465a700 1 -- 192.168.123.105:0/2645746833 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f719c106ba0 msgr2=0x7f719c107010 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:50.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.272+0000 7f71a465a700 1 --2- 192.168.123.105:0/2645746833 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f719c106ba0 0x7f719c107010 secure :-1 s=READY pgs=124 cs=0 l=1 rev1=1 crypto rx=0x7f718c009b50 tx=0x7f718c009e60 comp rx=0 tx=0).stop 2026-03-09T15:08:50.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.273+0000 7f71a465a700 1 -- 192.168.123.105:0/2645746833 shutdown_connections 2026-03-09T15:08:50.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.273+0000 7f71a465a700 1 --2- 192.168.123.105:0/2645746833 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f719c106ba0 0x7f719c107010 unknown :-1 
s=CLOSED pgs=124 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:50.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.273+0000 7f71a465a700 1 --2- 192.168.123.105:0/2645746833 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f719c10cbb0 0x7f719c10cf80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:50.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.273+0000 7f71a465a700 1 -- 192.168.123.105:0/2645746833 >> 192.168.123.105:0/2645746833 conn(0x7f719c074b10 msgr2=0x7f719c076f20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:50.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.273+0000 7f71a465a700 1 -- 192.168.123.105:0/2645746833 shutdown_connections 2026-03-09T15:08:50.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.273+0000 7f71a465a700 1 -- 192.168.123.105:0/2645746833 wait complete. 2026-03-09T15:08:50.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.273+0000 7f71a465a700 1 Processor -- start 2026-03-09T15:08:50.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.274+0000 7f71a465a700 1 -- start start 2026-03-09T15:08:50.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.274+0000 7f71a465a700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f719c106ba0 0x7f719c1a25a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:50.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.274+0000 7f71a465a700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f719c10cbb0 0x7f719c1a2ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:50.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.274+0000 7f71a465a700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 
0x7f719c1a3170 con 0x7f719c10cbb0 2026-03-09T15:08:50.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.274+0000 7f71a23f6700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f719c106ba0 0x7f719c1a25a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:50.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.274+0000 7f71a23f6700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f719c106ba0 0x7f719c1a25a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45588/0 (socket says 192.168.123.105:45588) 2026-03-09T15:08:50.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.274+0000 7f71a23f6700 1 -- 192.168.123.105:0/1469400935 learned_addr learned my addr 192.168.123.105:0/1469400935 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:50.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.274+0000 7f71a465a700 1 -- 192.168.123.105:0/1469400935 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f719c19c620 con 0x7f719c106ba0 2026-03-09T15:08:50.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.274+0000 7f71a1bf5700 1 --2- 192.168.123.105:0/1469400935 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f719c10cbb0 0x7f719c1a2ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:50.276 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.275+0000 7f71a1bf5700 1 -- 192.168.123.105:0/1469400935 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f719c106ba0 msgr2=0x7f719c1a25a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:50.276 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.275+0000 7f71a1bf5700 1 --2- 192.168.123.105:0/1469400935 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f719c106ba0 0x7f719c1a25a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:50.276 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.275+0000 7f71a1bf5700 1 -- 192.168.123.105:0/1469400935 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f718c0097e0 con 0x7f719c10cbb0 2026-03-09T15:08:50.276 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.275+0000 7f71a1bf5700 1 --2- 192.168.123.105:0/1469400935 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f719c10cbb0 0x7f719c1a2ae0 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f718c005950 tx=0x7f718c004ef0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:50.277 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.275+0000 7f71937fe700 1 -- 192.168.123.105:0/1469400935 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f718c01d070 con 0x7f719c10cbb0 2026-03-09T15:08:50.277 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.275+0000 7f71937fe700 1 -- 192.168.123.105:0/1469400935 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f718c00bbb0 con 0x7f719c10cbb0 2026-03-09T15:08:50.277 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.275+0000 7f71937fe700 1 -- 192.168.123.105:0/1469400935 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f718c00f770 con 0x7f719c10cbb0 2026-03-09T15:08:50.277 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.275+0000 7f71a465a700 1 -- 192.168.123.105:0/1469400935 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_subscribe({mgrmap=0+}) v3 -- 0x7f719c19c8a0 con 0x7f719c10cbb0 2026-03-09T15:08:50.277 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.275+0000 7f71a23f6700 1 --2- 192.168.123.105:0/1469400935 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f719c106ba0 0x7f719c1a25a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:08:50.277 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.275+0000 7f71a465a700 1 -- 192.168.123.105:0/1469400935 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f719c19cd90 con 0x7f719c10cbb0 2026-03-09T15:08:50.278 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.277+0000 7f71a465a700 1 -- 192.168.123.105:0/1469400935 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f719c04ea50 con 0x7f719c10cbb0 2026-03-09T15:08:50.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.280+0000 7f71937fe700 1 -- 192.168.123.105:0/1469400935 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f718c00bd20 con 0x7f719c10cbb0 2026-03-09T15:08:50.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.281+0000 7f71937fe700 1 --2- 192.168.123.105:0/1469400935 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f71880779e0 0x7f7188079e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:50.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.281+0000 7f71a23f6700 1 --2- 192.168.123.105:0/1469400935 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f71880779e0 0x7f7188079e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:50.282 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.281+0000 7f71937fe700 1 -- 192.168.123.105:0/1469400935 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f718c09bcc0 con 0x7f719c10cbb0 2026-03-09T15:08:50.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.281+0000 7f71a23f6700 1 --2- 192.168.123.105:0/1469400935 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f71880779e0 0x7f7188079e90 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f7198009e90 tx=0x7f7198009450 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:50.282 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.281+0000 7f71937fe700 1 -- 192.168.123.105:0/1469400935 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f718c09c0a0 con 0x7f719c10cbb0 2026-03-09T15:08:50.444 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.443+0000 7f71a465a700 1 -- 192.168.123.105:0/1469400935 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f719c066e40 con 0x7f719c10cbb0 2026-03-09T15:08:50.446 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.446+0000 7f71937fe700 1 -- 192.168.123.105:0/1469400935 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f718c064420 con 0x7f719c10cbb0 2026-03-09T15:08:50.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.448+0000 7f71a465a700 1 -- 192.168.123.105:0/1469400935 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f71880779e0 msgr2=0x7f7188079e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:50.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.448+0000 
7f71a465a700 1 --2- 192.168.123.105:0/1469400935 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f71880779e0 0x7f7188079e90 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f7198009e90 tx=0x7f7198009450 comp rx=0 tx=0).stop 2026-03-09T15:08:50.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.448+0000 7f71a465a700 1 -- 192.168.123.105:0/1469400935 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f719c10cbb0 msgr2=0x7f719c1a2ae0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:50.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.448+0000 7f71a465a700 1 --2- 192.168.123.105:0/1469400935 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f719c10cbb0 0x7f719c1a2ae0 secure :-1 s=READY pgs=125 cs=0 l=1 rev1=1 crypto rx=0x7f718c005950 tx=0x7f718c004ef0 comp rx=0 tx=0).stop 2026-03-09T15:08:50.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.448+0000 7f71a465a700 1 -- 192.168.123.105:0/1469400935 shutdown_connections 2026-03-09T15:08:50.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.448+0000 7f71a465a700 1 --2- 192.168.123.105:0/1469400935 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f71880779e0 0x7f7188079e90 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:50.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.448+0000 7f71a465a700 1 --2- 192.168.123.105:0/1469400935 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f719c106ba0 0x7f719c1a25a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:50.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.448+0000 7f71a465a700 1 --2- 192.168.123.105:0/1469400935 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f719c10cbb0 0x7f719c1a2ae0 unknown :-1 s=CLOSED pgs=125 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:50.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.448+0000 7f71a465a700 1 -- 192.168.123.105:0/1469400935 >> 192.168.123.105:0/1469400935 conn(0x7f719c074b10 msgr2=0x7f719c0763e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:50.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.449+0000 7f71a465a700 1 -- 192.168.123.105:0/1469400935 shutdown_connections 2026-03-09T15:08:50.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.449+0000 7f71a465a700 1 -- 192.168.123.105:0/1469400935 wait complete. 2026-03-09T15:08:50.459 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-09T15:08:50.516 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | keys'"'"' | grep $sha1' 2026-03-09T15:08:50.680 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:50.705 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:50 vm05.local ceph-mon[116516]: pgmap v209: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:50.705 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:50 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/1461488516' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:08:50.705 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:50 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/1469400935' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:08:50.941 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.939+0000 7f9e79f45700 1 -- 192.168.123.105:0/2838542357 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e74102750 msgr2=0x7f9e74102bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:50.941 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.939+0000 7f9e79f45700 1 --2- 192.168.123.105:0/2838542357 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e74102750 0x7f9e74102bc0 secure :-1 s=READY pgs=126 cs=0 l=1 rev1=1 crypto rx=0x7f9e64009b00 tx=0x7f9e64009e10 comp rx=0 tx=0).stop 2026-03-09T15:08:50.941 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.940+0000 7f9e79f45700 1 -- 192.168.123.105:0/2838542357 shutdown_connections 2026-03-09T15:08:50.941 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.940+0000 7f9e79f45700 1 --2- 192.168.123.105:0/2838542357 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e74102750 0x7f9e74102bc0 unknown :-1 s=CLOSED pgs=126 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:50.941 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.940+0000 7f9e79f45700 1 --2- 192.168.123.105:0/2838542357 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9e74108750 0x7f9e74108b20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:50.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.940+0000 7f9e79f45700 1 -- 192.168.123.105:0/2838542357 >> 192.168.123.105:0/2838542357 conn(0x7f9e740fe250 msgr2=0x7f9e74100660 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:50.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.940+0000 7f9e79f45700 1 -- 192.168.123.105:0/2838542357 shutdown_connections 2026-03-09T15:08:50.942 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.941+0000 7f9e79f45700 1 -- 192.168.123.105:0/2838542357 wait complete. 2026-03-09T15:08:50.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.941+0000 7f9e79f45700 1 Processor -- start 2026-03-09T15:08:50.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.941+0000 7f9e79f45700 1 -- start start 2026-03-09T15:08:50.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.942+0000 7f9e79f45700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9e74102750 0x7f9e741982f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:50.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.942+0000 7f9e79f45700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e74108750 0x7f9e74198830 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:50.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.942+0000 7f9e79f45700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9e74198e80 con 0x7f9e74108750 2026-03-09T15:08:50.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.942+0000 7f9e79f45700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9e74198fc0 con 0x7f9e74102750 2026-03-09T15:08:50.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.942+0000 7f9e72ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e74108750 0x7f9e74198830 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:50.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.942+0000 7f9e72ffd700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e74108750 0x7f9e74198830 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:42778/0 (socket says 192.168.123.105:42778) 2026-03-09T15:08:50.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.942+0000 7f9e72ffd700 1 -- 192.168.123.105:0/1533353547 learned_addr learned my addr 192.168.123.105:0/1533353547 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:50.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.942+0000 7f9e72ffd700 1 -- 192.168.123.105:0/1533353547 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9e74102750 msgr2=0x7f9e741982f0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:08:50.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.942+0000 7f9e72ffd700 1 --2- 192.168.123.105:0/1533353547 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9e74102750 0x7f9e741982f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:50.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.942+0000 7f9e72ffd700 1 -- 192.168.123.105:0/1533353547 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9e640097e0 con 0x7f9e74108750 2026-03-09T15:08:50.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.943+0000 7f9e72ffd700 1 --2- 192.168.123.105:0/1533353547 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e74108750 0x7f9e74198830 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f9e64006010 tx=0x7f9e64004c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:50.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.943+0000 7f9e70ff9700 1 -- 192.168.123.105:0/1533353547 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9e6401d070 con 0x7f9e74108750 
2026-03-09T15:08:50.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.943+0000 7f9e70ff9700 1 -- 192.168.123.105:0/1533353547 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f9e6400bcf0 con 0x7f9e74108750 2026-03-09T15:08:50.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.943+0000 7f9e79f45700 1 -- 192.168.123.105:0/1533353547 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9e7419cdb0 con 0x7f9e74108750 2026-03-09T15:08:50.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.943+0000 7f9e79f45700 1 -- 192.168.123.105:0/1533353547 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9e7419d2a0 con 0x7f9e74108750 2026-03-09T15:08:50.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.943+0000 7f9e70ff9700 1 -- 192.168.123.105:0/1533353547 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9e6400f870 con 0x7f9e74108750 2026-03-09T15:08:50.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.945+0000 7f9e70ff9700 1 -- 192.168.123.105:0/1533353547 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f9e6400f9d0 con 0x7f9e74108750 2026-03-09T15:08:50.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.945+0000 7f9e70ff9700 1 --2- 192.168.123.105:0/1533353547 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9e60077870 0x7f9e60079d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:50.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.945+0000 7f9e70ff9700 1 -- 192.168.123.105:0/1533353547 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f9e6409b330 con 0x7f9e74108750 2026-03-09T15:08:50.946 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.945+0000 7f9e737fe700 1 --2- 192.168.123.105:0/1533353547 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9e60077870 0x7f9e60079d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:50.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.945+0000 7f9e79f45700 1 -- 192.168.123.105:0/1533353547 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9e54005320 con 0x7f9e74108750 2026-03-09T15:08:50.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.950+0000 7f9e70ff9700 1 -- 192.168.123.105:0/1533353547 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9e64064ae0 con 0x7f9e74108750 2026-03-09T15:08:50.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:50.950+0000 7f9e737fe700 1 --2- 192.168.123.105:0/1533353547 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9e60077870 0x7f9e60079d20 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f9e74103890 tx=0x7f9e5c005cb0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:51.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:50 vm09.local ceph-mon[98742]: pgmap v209: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:51.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:50 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/1461488516' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:08:51.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:50 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/1469400935' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:08:51.121 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.120+0000 7f9e79f45700 1 -- 192.168.123.105:0/1533353547 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "versions"} v 0) v1 -- 0x7f9e54006200 con 0x7f9e74108750 2026-03-09T15:08:51.121 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.120+0000 7f9e70ff9700 1 -- 192.168.123.105:0/1533353547 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "versions"}]=0 v0) v1 ==== 56+0+633 (secure 0 0 0) 0x7f9e64027030 con 0x7f9e74108750 2026-03-09T15:08:51.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.123+0000 7f9e79f45700 1 -- 192.168.123.105:0/1533353547 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9e60077870 msgr2=0x7f9e60079d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:51.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.123+0000 7f9e79f45700 1 --2- 192.168.123.105:0/1533353547 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9e60077870 0x7f9e60079d20 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f9e74103890 tx=0x7f9e5c005cb0 comp rx=0 tx=0).stop 2026-03-09T15:08:51.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.123+0000 7f9e79f45700 1 -- 192.168.123.105:0/1533353547 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e74108750 msgr2=0x7f9e74198830 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:51.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.123+0000 7f9e79f45700 1 --2- 192.168.123.105:0/1533353547 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e74108750 0x7f9e74198830 secure :-1 s=READY pgs=127 cs=0 l=1 rev1=1 crypto rx=0x7f9e64006010 tx=0x7f9e64004c80 comp rx=0 tx=0).stop 
2026-03-09T15:08:51.124 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.124+0000 7f9e79f45700 1 -- 192.168.123.105:0/1533353547 shutdown_connections 2026-03-09T15:08:51.125 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.124+0000 7f9e79f45700 1 --2- 192.168.123.105:0/1533353547 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9e60077870 0x7f9e60079d20 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:51.125 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.124+0000 7f9e79f45700 1 --2- 192.168.123.105:0/1533353547 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9e74102750 0x7f9e741982f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:51.125 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.124+0000 7f9e79f45700 1 --2- 192.168.123.105:0/1533353547 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9e74108750 0x7f9e74198830 unknown :-1 s=CLOSED pgs=127 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:51.125 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.124+0000 7f9e79f45700 1 -- 192.168.123.105:0/1533353547 >> 192.168.123.105:0/1533353547 conn(0x7f9e740fe250 msgr2=0x7f9e740ff9a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:51.125 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.124+0000 7f9e79f45700 1 -- 192.168.123.105:0/1533353547 shutdown_connections 2026-03-09T15:08:51.125 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.124+0000 7f9e79f45700 1 -- 192.168.123.105:0/1533353547 wait complete. 
2026-03-09T15:08:51.134 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)" 2026-03-09T15:08:51.201 DEBUG:teuthology.parallel:result is None 2026-03-09T15:08:51.201 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T15:08:51.204 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm05.local 2026-03-09T15:08:51.204 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- bash -c 'ceph fs dump' 2026-03-09T15:08:51.364 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:51.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.630+0000 7fb7ee145700 1 -- 192.168.123.105:0/1786498617 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb7e8108790 msgr2=0x7fb7e8108b60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:51.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.630+0000 7fb7ee145700 1 --2- 192.168.123.105:0/1786498617 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb7e8108790 0x7fb7e8108b60 secure :-1 s=READY pgs=128 cs=0 l=1 rev1=1 crypto rx=0x7fb7d0009b00 tx=0x7fb7d0009e10 comp rx=0 tx=0).stop 2026-03-09T15:08:51.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.631+0000 7fb7ee145700 1 -- 192.168.123.105:0/1786498617 shutdown_connections 2026-03-09T15:08:51.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.631+0000 7fb7ee145700 1 --2- 192.168.123.105:0/1786498617 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb7e8102790 0x7fb7e8102c00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:51.632 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.631+0000 7fb7ee145700 1 --2- 192.168.123.105:0/1786498617 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb7e8108790 0x7fb7e8108b60 unknown :-1 s=CLOSED pgs=128 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:51.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.631+0000 7fb7ee145700 1 -- 192.168.123.105:0/1786498617 >> 192.168.123.105:0/1786498617 conn(0x7fb7e80fe2b0 msgr2=0x7fb7e81006c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:51.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.632+0000 7fb7ee145700 1 -- 192.168.123.105:0/1786498617 shutdown_connections 2026-03-09T15:08:51.632 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.632+0000 7fb7ee145700 1 -- 192.168.123.105:0/1786498617 wait complete. 2026-03-09T15:08:51.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.632+0000 7fb7ee145700 1 Processor -- start 2026-03-09T15:08:51.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.632+0000 7fb7ee145700 1 -- start start 2026-03-09T15:08:51.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.633+0000 7fb7ee145700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb7e8102790 0x7fb7e81983c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:51.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.633+0000 7fb7ee145700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb7e8108790 0x7fb7e8198900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:51.633 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.633+0000 7fb7ee145700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb7e8198fe0 con 0x7fb7e8102790 2026-03-09T15:08:51.633 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.633+0000 7fb7e6ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb7e8108790 0x7fb7e8198900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:51.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.633+0000 7fb7e6ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb7e8108790 0x7fb7e8198900 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45620/0 (socket says 192.168.123.105:45620) 2026-03-09T15:08:51.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.633+0000 7fb7e6ffd700 1 -- 192.168.123.105:0/641707562 learned_addr learned my addr 192.168.123.105:0/641707562 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:51.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.633+0000 7fb7e77fe700 1 --2- 192.168.123.105:0/641707562 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb7e8102790 0x7fb7e81983c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:51.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.633+0000 7fb7ee145700 1 -- 192.168.123.105:0/641707562 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb7e819cd70 con 0x7fb7e8108790 2026-03-09T15:08:51.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.633+0000 7fb7e77fe700 1 -- 192.168.123.105:0/641707562 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb7e8108790 msgr2=0x7fb7e8198900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:51.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.633+0000 
7fb7e77fe700 1 --2- 192.168.123.105:0/641707562 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb7e8108790 0x7fb7e8198900 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:51.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.633+0000 7fb7e77fe700 1 -- 192.168.123.105:0/641707562 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb7d00097e0 con 0x7fb7e8102790 2026-03-09T15:08:51.634 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.634+0000 7fb7e77fe700 1 --2- 192.168.123.105:0/641707562 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb7e8102790 0x7fb7e81983c0 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7fb7d0006010 tx=0x7fb7d0004c30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:51.635 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.634+0000 7fb7e4ff9700 1 -- 192.168.123.105:0/641707562 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb7d001d070 con 0x7fb7e8102790 2026-03-09T15:08:51.637 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.634+0000 7fb7e4ff9700 1 -- 192.168.123.105:0/641707562 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fb7d000bc50 con 0x7fb7e8102790 2026-03-09T15:08:51.637 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.634+0000 7fb7ee145700 1 -- 192.168.123.105:0/641707562 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb7e819cff0 con 0x7fb7e8102790 2026-03-09T15:08:51.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.634+0000 7fb7e4ff9700 1 -- 192.168.123.105:0/641707562 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb7d0021620 con 0x7fb7e8102790 
2026-03-09T15:08:51.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.634+0000 7fb7ee145700 1 -- 192.168.123.105:0/641707562 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb7e819d4e0 con 0x7fb7e8102790 2026-03-09T15:08:51.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.636+0000 7fb7e4ff9700 1 -- 192.168.123.105:0/641707562 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fb7d002b430 con 0x7fb7e8102790 2026-03-09T15:08:51.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.636+0000 7fb7e4ff9700 1 --2- 192.168.123.105:0/641707562 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb7d40779e0 0x7fb7d4079e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:51.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.636+0000 7fb7e4ff9700 1 -- 192.168.123.105:0/641707562 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fb7d009c030 con 0x7fb7e8102790 2026-03-09T15:08:51.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.637+0000 7fb7e6ffd700 1 --2- 192.168.123.105:0/641707562 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb7d40779e0 0x7fb7d4079e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:51.638 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.637+0000 7fb7ee145700 1 -- 192.168.123.105:0/641707562 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb7e804ea50 con 0x7fb7e8102790 2026-03-09T15:08:51.643 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.639+0000 7fb7e6ffd700 1 --2- 192.168.123.105:0/641707562 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb7d40779e0 0x7fb7d4079e90 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7fb7e81999e0 tx=0x7fb7d8009480 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:51.644 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.643+0000 7fb7e4ff9700 1 -- 192.168.123.105:0/641707562 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fb7d00646d0 con 0x7fb7e8102790 2026-03-09T15:08:51.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.786+0000 7fb7ee145700 1 -- 192.168.123.105:0/641707562 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump"} v 0) v1 -- 0x7fb7e8199770 con 0x7fb7e8102790 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.787+0000 7fb7e4ff9700 1 -- 192.168.123.105:0/641707562 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump"}]=0 dumped fsmap epoch 35 v35) v1 ==== 76+0+2002 (secure 0 0 0) 0x7fb7d0026070 con 0x7fb7e8102790 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:e35 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:btime 2026-03-09T15:07:14:873419+0000 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:legacy client fscid: 1 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:51.788 
INFO:teuthology.orchestra.run.vm05.stdout:Filesystem 'cephfs' (1) 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:fs_name cephfs 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:epoch 35 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:flags 32 joinable allow_snaps allow_multimds_snaps allow_standby_replay 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:created 2026-03-09T14:58:23.182447+0000 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:modified 2026-03-09T15:07:13.969385+0000 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:tableserver 0 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:root 0 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:session_timeout 60 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:session_autoclose 300 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:max_file_size 1099511627776 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:max_xattr_size 65536 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:required_client_features {} 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:last_failure 0 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:last_failure_osd_epoch 91 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:max_mds 1 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:in 0 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:up {0=34272} 2026-03-09T15:08:51.788 
INFO:teuthology.orchestra.run.vm05.stdout:failed 2026-03-09T15:08:51.788 INFO:teuthology.orchestra.run.vm05.stdout:damaged 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:stopped 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:data_pools [3] 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:metadata_pool 2 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:inline_data enabled 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:balancer 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:bal_rank_mask -1 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:standby_count_wanted 1 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:qdb_cluster leader: 34272 members: 34272 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.nrocqt{0:34272} state up:active seq 10 join_fscid=1 addr [v2:192.168.123.105:6826/3005307080,v1:192.168.123.105:6827/3005307080] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm05.rrcyql{0:34292} state up:standby-replay seq 1 join_fscid=1 addr [v2:192.168.123.105:6828/3529134522,v1:192.168.123.105:6829/3529134522] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:Standby daemons: 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.ohmitn{-1:44239} state up:standby seq 1 join_fscid=1 addr [v2:192.168.123.109:6824/2799240855,v1:192.168.123.109:6825/2799240855] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T15:08:51.789 INFO:teuthology.orchestra.run.vm05.stdout:[mds.cephfs.vm09.jrhwzz{-1:44243} state up:standby seq 1 join_fscid=1 addr 
[v2:192.168.123.109:6826/632428118,v1:192.168.123.109:6827/632428118] compat {c=[1],r=[1],i=[1fff]}] 2026-03-09T15:08:51.790 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.790+0000 7fb7ee145700 1 -- 192.168.123.105:0/641707562 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb7d40779e0 msgr2=0x7fb7d4079e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:51.790 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.790+0000 7fb7ee145700 1 --2- 192.168.123.105:0/641707562 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb7d40779e0 0x7fb7d4079e90 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7fb7e81999e0 tx=0x7fb7d8009480 comp rx=0 tx=0).stop 2026-03-09T15:08:51.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.790+0000 7fb7ee145700 1 -- 192.168.123.105:0/641707562 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb7e8102790 msgr2=0x7fb7e81983c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:51.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.790+0000 7fb7ee145700 1 --2- 192.168.123.105:0/641707562 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb7e8102790 0x7fb7e81983c0 secure :-1 s=READY pgs=129 cs=0 l=1 rev1=1 crypto rx=0x7fb7d0006010 tx=0x7fb7d0004c30 comp rx=0 tx=0).stop 2026-03-09T15:08:51.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.790+0000 7fb7ee145700 1 -- 192.168.123.105:0/641707562 shutdown_connections 2026-03-09T15:08:51.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.790+0000 7fb7ee145700 1 --2- 192.168.123.105:0/641707562 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb7d40779e0 0x7fb7d4079e90 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:51.791 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.790+0000 7fb7ee145700 1 --2- 192.168.123.105:0/641707562 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb7e8102790 0x7fb7e81983c0 unknown :-1 s=CLOSED pgs=129 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:51.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.790+0000 7fb7ee145700 1 --2- 192.168.123.105:0/641707562 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb7e8108790 0x7fb7e8198900 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:51.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.790+0000 7fb7ee145700 1 -- 192.168.123.105:0/641707562 >> 192.168.123.105:0/641707562 conn(0x7fb7e80fe2b0 msgr2=0x7fb7e80ffad0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:51.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.790+0000 7fb7ee145700 1 -- 192.168.123.105:0/641707562 shutdown_connections 2026-03-09T15:08:51.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:51.790+0000 7fb7ee145700 1 -- 192.168.123.105:0/641707562 wait complete. 2026-03-09T15:08:51.792 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 35 2026-03-09T15:08:51.858 INFO:teuthology.run_tasks:Running task fs.post_upgrade_checks... 2026-03-09T15:08:51.861 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 2026-03-09T15:08:52.023 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:52.051 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:51 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/1533353547' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:08:52.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:51 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/1533353547' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T15:08:52.304 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.301+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/2031988146 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc28102040 msgr2=0x7fbc2810a540 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:52.304 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.301+0000 7fbc2dcc0700 1 --2- 192.168.123.105:0/2031988146 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc28102040 0x7fbc2810a540 secure :-1 s=READY pgs=130 cs=0 l=1 rev1=1 crypto rx=0x7fbc18009b50 tx=0x7fbc18009e60 comp rx=0 tx=0).stop 2026-03-09T15:08:52.304 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.303+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/2031988146 shutdown_connections 2026-03-09T15:08:52.304 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.303+0000 7fbc2dcc0700 1 --2- 192.168.123.105:0/2031988146 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc28102040 0x7fbc2810a540 unknown :-1 s=CLOSED pgs=130 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:52.304 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.303+0000 7fbc2dcc0700 1 --2- 192.168.123.105:0/2031988146 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbc28101730 0x7fbc28101b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:52.304 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.303+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/2031988146 >> 192.168.123.105:0/2031988146 conn(0x7fbc28076210 msgr2=0x7fbc28076610 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T15:08:52.304 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.303+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/2031988146 shutdown_connections 2026-03-09T15:08:52.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.304+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/2031988146 wait complete. 2026-03-09T15:08:52.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.304+0000 7fbc2dcc0700 1 Processor -- start 2026-03-09T15:08:52.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.304+0000 7fbc2dcc0700 1 -- start start 2026-03-09T15:08:52.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.304+0000 7fbc2dcc0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc28101730 0x7fbc281961b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:52.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.304+0000 7fbc2dcc0700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbc28102040 0x7fbc281966f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:52.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.304+0000 7fbc2dcc0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbc28196dd0 con 0x7fbc28101730 2026-03-09T15:08:52.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.305+0000 7fbc26ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbc28102040 0x7fbc281966f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:52.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.305+0000 7fbc26ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbc28102040 0x7fbc281966f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45644/0 (socket says 192.168.123.105:45644) 2026-03-09T15:08:52.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.305+0000 7fbc26ffd700 1 -- 192.168.123.105:0/4216507744 learned_addr learned my addr 192.168.123.105:0/4216507744 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:52.306 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.305+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/4216507744 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbc2819ab60 con 0x7fbc28102040 2026-03-09T15:08:52.306 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.305+0000 7fbc277fe700 1 --2- 192.168.123.105:0/4216507744 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc28101730 0x7fbc281961b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:52.306 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.305+0000 7fbc277fe700 1 -- 192.168.123.105:0/4216507744 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbc28102040 msgr2=0x7fbc281966f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:52.306 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.305+0000 7fbc277fe700 1 --2- 192.168.123.105:0/4216507744 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbc28102040 0x7fbc281966f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:52.306 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.305+0000 7fbc277fe700 1 -- 192.168.123.105:0/4216507744 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbc180097e0 con 0x7fbc28101730 2026-03-09T15:08:52.306 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.305+0000 7fbc277fe700 1 --2- 192.168.123.105:0/4216507744 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc28101730 0x7fbc281961b0 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7fbc1000d6d0 tx=0x7fbc1000d9e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:52.307 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.306+0000 7fbc24ff9700 1 -- 192.168.123.105:0/4216507744 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbc100041d0 con 0x7fbc28101730 2026-03-09T15:08:52.307 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.306+0000 7fbc24ff9700 1 -- 192.168.123.105:0/4216507744 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fbc10004d10 con 0x7fbc28101730 2026-03-09T15:08:52.307 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.306+0000 7fbc24ff9700 1 -- 192.168.123.105:0/4216507744 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbc1000f690 con 0x7fbc28101730 2026-03-09T15:08:52.307 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.306+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/4216507744 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbc2819ade0 con 0x7fbc28101730 2026-03-09T15:08:52.307 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.306+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/4216507744 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbc2819b330 con 0x7fbc28101730 2026-03-09T15:08:52.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.307+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/4216507744 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 
0x7fbc28107bb0 con 0x7fbc28101730 2026-03-09T15:08:52.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.308+0000 7fbc24ff9700 1 -- 192.168.123.105:0/4216507744 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fbc1000f900 con 0x7fbc28101730 2026-03-09T15:08:52.311 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.311+0000 7fbc24ff9700 1 --2- 192.168.123.105:0/4216507744 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbc140778c0 0x7fbc14079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:52.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.311+0000 7fbc24ff9700 1 -- 192.168.123.105:0/4216507744 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fbc10099b70 con 0x7fbc28101730 2026-03-09T15:08:52.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.311+0000 7fbc26ffd700 1 --2- 192.168.123.105:0/4216507744 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbc140778c0 0x7fbc14079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:52.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.311+0000 7fbc24ff9700 1 -- 192.168.123.105:0/4216507744 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fbc10062190 con 0x7fbc28101730 2026-03-09T15:08:52.312 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.311+0000 7fbc26ffd700 1 --2- 192.168.123.105:0/4216507744 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbc140778c0 0x7fbc14079d70 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7fbc281977d0 tx=0x7fbc1800b540 comp rx=0 tx=0).ready 
entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:52.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.450+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/4216507744 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "format": "json"} v 0) v1 -- 0x7fbc280689f0 con 0x7fbc28101730 2026-03-09T15:08:52.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.451+0000 7fbc24ff9700 1 -- 192.168.123.105:0/4216507744 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "format": "json"}]=0 dumped fsmap epoch 35 v35) v1 ==== 94+0+5270 (secure 0 0 0) 0x7fbc1001d090 con 0x7fbc28101730 2026-03-09T15:08:52.452 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:52.452 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":35,"btime":"2026-03-09T15:07:14:873419+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44239,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/2799240855","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2799240855},{"type":"v1","addr":"192.168.123.109:6825","nonce":2799240855}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is 
stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":25},{"gid":44243,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6827/632428118","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":632428118},{"type":"v1","addr":"192.168.123.109:6827","nonce":632428118}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":31}],"filesystems":[{"mdsmap":{"epoch":35,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:07:13.969385+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":91,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no 
anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34272},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34272":{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":27,"state":"up:active","state_seq":10,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}},"gid_34292":{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":0,"incarnation":0,"state":"up:standby-replay","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34272,"qdb_cluster":[34272]},"id":1}]} 2026-03-09T15:08:52.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.454+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/4216507744 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbc140778c0 msgr2=0x7fbc14079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:52.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.454+0000 7fbc2dcc0700 1 --2- 192.168.123.105:0/4216507744 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbc140778c0 0x7fbc14079d70 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7fbc281977d0 tx=0x7fbc1800b540 comp rx=0 tx=0).stop 2026-03-09T15:08:52.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.454+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/4216507744 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc28101730 msgr2=0x7fbc281961b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:52.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.454+0000 7fbc2dcc0700 1 --2- 192.168.123.105:0/4216507744 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc28101730 0x7fbc281961b0 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7fbc1000d6d0 tx=0x7fbc1000d9e0 comp rx=0 tx=0).stop 2026-03-09T15:08:52.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.454+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/4216507744 shutdown_connections 2026-03-09T15:08:52.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.455+0000 7fbc2dcc0700 1 --2- 192.168.123.105:0/4216507744 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbc140778c0 0x7fbc14079d70 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T15:08:52.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.455+0000 7fbc2dcc0700 1 --2- 192.168.123.105:0/4216507744 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbc28101730 0x7fbc281961b0 unknown :-1 s=CLOSED pgs=131 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:52.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.455+0000 7fbc2dcc0700 1 --2- 192.168.123.105:0/4216507744 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbc28102040 0x7fbc281966f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:52.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.455+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/4216507744 >> 192.168.123.105:0/4216507744 conn(0x7fbc28076210 msgr2=0x7fbc28104cf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:52.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.455+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/4216507744 shutdown_connections 2026-03-09T15:08:52.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.455+0000 7fbc2dcc0700 1 -- 192.168.123.105:0/4216507744 wait complete. 
2026-03-09T15:08:52.456 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 35 2026-03-09T15:08:52.508 DEBUG:tasks.fs:checking fs fscid=1,name=cephfs state = {'epoch': 9, 'max_mds': 1, 'flags': 50} 2026-03-09T15:08:52.508 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 10 2026-03-09T15:08:52.668 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:52.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:52 vm09.local ceph-mon[98742]: pgmap v210: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:52.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:52 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/641707562' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T15:08:52.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:52 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/4216507744' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-09T15:08:52.946 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:52 vm05.local ceph-mon[116516]: pgmap v210: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:52.947 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:52 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/641707562' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch 2026-03-09T15:08:52.947 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:52 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/4216507744' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json"}]: dispatch 2026-03-09T15:08:52.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.943+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2320563780 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4dfc100590 msgr2=0x7f4dfc100a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:52.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.943+0000 7f4e03fc2700 1 --2- 192.168.123.105:0/2320563780 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4dfc100590 0x7f4dfc100a00 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7f4df8009b00 tx=0x7f4df8009e10 comp rx=0 tx=0).stop 2026-03-09T15:08:52.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.946+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2320563780 shutdown_connections 2026-03-09T15:08:52.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.946+0000 7f4e03fc2700 1 --2- 192.168.123.105:0/2320563780 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4dfc100590 0x7f4dfc100a00 unknown :-1 s=CLOSED pgs=132 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:52.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.946+0000 7f4e03fc2700 1 --2- 192.168.123.105:0/2320563780 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4dfc1065b0 0x7f4dfc106980 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:52.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.946+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2320563780 >> 192.168.123.105:0/2320563780 conn(0x7f4dfc0fc090 msgr2=0x7f4dfc0fe4a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:52.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.946+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2320563780 shutdown_connections 2026-03-09T15:08:52.947 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.946+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2320563780 wait complete. 2026-03-09T15:08:52.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.947+0000 7f4e03fc2700 1 Processor -- start 2026-03-09T15:08:52.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.947+0000 7f4e03fc2700 1 -- start start 2026-03-09T15:08:52.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.947+0000 7f4e03fc2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4dfc100590 0x7f4dfc073050 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:52.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.947+0000 7f4e03fc2700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4dfc1065b0 0x7f4dfc073590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:52.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.947+0000 7f4e03fc2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4dfc19dfb0 con 0x7f4dfc100590 2026-03-09T15:08:52.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.947+0000 7f4e03fc2700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4dfc19e0f0 con 0x7f4dfc1065b0 2026-03-09T15:08:52.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.947+0000 7f4e01d5e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4dfc100590 0x7f4dfc073050 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:52.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.947+0000 7f4e01d5e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4dfc100590 0x7f4dfc073050 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:42830/0 (socket says 192.168.123.105:42830) 2026-03-09T15:08:52.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.947+0000 7f4e01d5e700 1 -- 192.168.123.105:0/2994039391 learned_addr learned my addr 192.168.123.105:0/2994039391 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:52.949 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.947+0000 7f4e0155d700 1 --2- 192.168.123.105:0/2994039391 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4dfc1065b0 0x7f4dfc073590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:52.949 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.948+0000 7f4e01d5e700 1 -- 192.168.123.105:0/2994039391 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4dfc1065b0 msgr2=0x7f4dfc073590 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:52.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.948+0000 7f4e01d5e700 1 --2- 192.168.123.105:0/2994039391 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4dfc1065b0 0x7f4dfc073590 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:52.953 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.948+0000 7f4e01d5e700 1 -- 192.168.123.105:0/2994039391 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f4df80097e0 con 0x7f4dfc100590 2026-03-09T15:08:52.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.948+0000 7f4e01d5e700 1 --2- 192.168.123.105:0/2994039391 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4dfc100590 0x7f4dfc073050 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto 
rx=0x7f4dec00ba70 tx=0x7f4dec00bd80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:52.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.948+0000 7f4df2ffd700 1 -- 192.168.123.105:0/2994039391 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4dec00c700 con 0x7f4dfc100590 2026-03-09T15:08:52.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.948+0000 7f4df2ffd700 1 -- 192.168.123.105:0/2994039391 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f4dec00cd40 con 0x7f4dfc100590 2026-03-09T15:08:52.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.948+0000 7f4df2ffd700 1 -- 192.168.123.105:0/2994039391 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f4dec012340 con 0x7f4dfc100590 2026-03-09T15:08:52.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.948+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2994039391 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f4dfc073b30 con 0x7f4dfc100590 2026-03-09T15:08:52.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.948+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2994039391 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f4dfc1a6c30 con 0x7f4dfc100590 2026-03-09T15:08:52.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.950+0000 7f4df2ffd700 1 -- 192.168.123.105:0/2994039391 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f4dec0124e0 con 0x7f4dfc100590 2026-03-09T15:08:52.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.950+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2994039391 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 
0x7f4dfc04ea50 con 0x7f4dfc100590 2026-03-09T15:08:52.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.953+0000 7f4df2ffd700 1 --2- 192.168.123.105:0/2994039391 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f4de8077990 0x7f4de8079e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:52.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.953+0000 7f4e0155d700 1 --2- 192.168.123.105:0/2994039391 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f4de8077990 0x7f4de8079e40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:52.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.954+0000 7f4df2ffd700 1 -- 192.168.123.105:0/2994039391 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f4dec098c30 con 0x7f4dfc100590 2026-03-09T15:08:52.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.954+0000 7f4e0155d700 1 --2- 192.168.123.105:0/2994039391 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f4de8077990 0x7f4de8079e40 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f4dfc074840 tx=0x7f4df8005fb0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:52.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:52.954+0000 7f4df2ffd700 1 -- 192.168.123.105:0/2994039391 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f4dec09c240 con 0x7f4dfc100590 2026-03-09T15:08:53.100 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.099+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2994039391 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- 
mon_command({"prefix": "fs dump", "epoch": 10, "format": "json"} v 0) v1 -- 0x7f4dfc1a6f10 con 0x7f4dfc100590 2026-03-09T15:08:53.101 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.099+0000 7f4df2ffd700 1 -- 192.168.123.105:0/2994039391 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 10, "format": "json"}]=0 dumped fsmap epoch 10 v35) v1 ==== 107+0+4920 (secure 0 0 0) 0x7f4dec062320 con 0x7f4dfc100590 2026-03-09T15:08:53.101 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:53.101 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":10,"btime":"1970-01-01T00:00:00:000000+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":14518,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.105:6829/1321316558","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":1321316558},{"type":"v1","addr":"192.168.123.105:6829","nonce":1321316558}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":10},{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":-1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":8}],"filesystems":[{"mdsmap":{"epoch":9,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T14:58:30.215642+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14502},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14502":{"gid":14502,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":4,"state":"up:active","state_seq":3,"addr":"192.168.123.105:6827/2659122886","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":2659122886},{"type":"v1","addr":"192.168.123.105:6827","nonce":2659122886}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}},"gid_14510":{"gid":14510,"name":"cephfs.vm09.ohmitn","rank":0,"incarnation":0,"state":"up:standby-replay","state_seq":2,"addr":"192.168.123.109:6825/1947130211","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":1947130211},{"type":"v1","addr":"192.168.123.109:6825","nonce":1947130211}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:08:53.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.102+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2994039391 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f4de8077990 msgr2=0x7f4de8079e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:53.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.102+0000 7f4e03fc2700 1 --2- 192.168.123.105:0/2994039391 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f4de8077990 0x7f4de8079e40 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f4dfc074840 tx=0x7f4df8005fb0 comp rx=0 tx=0).stop 2026-03-09T15:08:53.103 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.103+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2994039391 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4dfc100590 msgr2=0x7f4dfc073050 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:53.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.103+0000 7f4e03fc2700 1 --2- 192.168.123.105:0/2994039391 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4dfc100590 0x7f4dfc073050 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x7f4dec00ba70 tx=0x7f4dec00bd80 comp rx=0 tx=0).stop 2026-03-09T15:08:53.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.103+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2994039391 shutdown_connections 2026-03-09T15:08:53.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.103+0000 7f4e03fc2700 1 --2- 192.168.123.105:0/2994039391 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f4de8077990 0x7f4de8079e40 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:53.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.103+0000 7f4e03fc2700 1 --2- 192.168.123.105:0/2994039391 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4dfc100590 0x7f4dfc073050 unknown :-1 s=CLOSED pgs=133 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:08:53.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.103+0000 7f4e03fc2700 1 --2- 192.168.123.105:0/2994039391 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f4dfc1065b0 0x7f4dfc073590 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:53.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.103+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2994039391 >> 192.168.123.105:0/2994039391 conn(0x7f4dfc0fc090 msgr2=0x7f4dfc0fd930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:53.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.103+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2994039391 shutdown_connections 2026-03-09T15:08:53.104 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.103+0000 7f4e03fc2700 1 -- 192.168.123.105:0/2994039391 wait complete. 2026-03-09T15:08:53.105 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 10 2026-03-09T15:08:53.152 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 11 2026-03-09T15:08:53.306 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:53.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.653+0000 7f6169144700 1 -- 192.168.123.105:0/3238850196 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6164106590 msgr2=0x7f6164106960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:53.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.653+0000 7f6169144700 1 --2- 192.168.123.105:0/3238850196 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6164106590 0x7f6164106960 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7f6154009ab0 tx=0x7f6154009dc0 comp rx=0 tx=0).stop 
2026-03-09T15:08:53.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.654+0000 7f6169144700 1 -- 192.168.123.105:0/3238850196 shutdown_connections 2026-03-09T15:08:53.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.654+0000 7f6169144700 1 --2- 192.168.123.105:0/3238850196 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6164100570 0x7f61641009e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:53.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.654+0000 7f6169144700 1 --2- 192.168.123.105:0/3238850196 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6164106590 0x7f6164106960 unknown :-1 s=CLOSED pgs=134 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:53.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.654+0000 7f6169144700 1 -- 192.168.123.105:0/3238850196 >> 192.168.123.105:0/3238850196 conn(0x7f61640fc050 msgr2=0x7f61640fe460 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:53.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.654+0000 7f6169144700 1 -- 192.168.123.105:0/3238850196 shutdown_connections 2026-03-09T15:08:53.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.654+0000 7f6169144700 1 -- 192.168.123.105:0/3238850196 wait complete. 
2026-03-09T15:08:53.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.655+0000 7f6169144700 1 Processor -- start 2026-03-09T15:08:53.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.655+0000 7f6169144700 1 -- start start 2026-03-09T15:08:53.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.655+0000 7f6169144700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6164100570 0x7f6164196260 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:53.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.655+0000 7f6169144700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6164106590 0x7f61641967a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:53.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.655+0000 7f6169144700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6164196e80 con 0x7f6164100570 2026-03-09T15:08:53.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.655+0000 7f6169144700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f616419ac10 con 0x7f6164106590 2026-03-09T15:08:53.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.656+0000 7f616259c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6164106590 0x7f61641967a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:53.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.656+0000 7f616259c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6164106590 0x7f61641967a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:45688/0 (socket says 192.168.123.105:45688) 2026-03-09T15:08:53.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.656+0000 7f616259c700 1 -- 192.168.123.105:0/2401271962 learned_addr learned my addr 192.168.123.105:0/2401271962 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:53.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.656+0000 7f6162d9d700 1 --2- 192.168.123.105:0/2401271962 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6164100570 0x7f6164196260 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:53.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.656+0000 7f6162d9d700 1 -- 192.168.123.105:0/2401271962 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6164106590 msgr2=0x7f61641967a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:53.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.656+0000 7f6162d9d700 1 --2- 192.168.123.105:0/2401271962 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6164106590 0x7f61641967a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:53.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.656+0000 7f6162d9d700 1 -- 192.168.123.105:0/2401271962 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6154009710 con 0x7f6164100570 2026-03-09T15:08:53.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.656+0000 7f6162d9d700 1 --2- 192.168.123.105:0/2401271962 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6164100570 0x7f6164196260 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f6154003a10 tx=0x7f615400f6c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:08:53.659 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.657+0000 7f615bfff700 1 -- 192.168.123.105:0/2401271962 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f615401d070 con 0x7f6164100570 2026-03-09T15:08:53.659 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.657+0000 7f615bfff700 1 -- 192.168.123.105:0/2401271962 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f6154021410 con 0x7f6164100570 2026-03-09T15:08:53.659 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.657+0000 7f615bfff700 1 -- 192.168.123.105:0/2401271962 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6154017600 con 0x7f6164100570 2026-03-09T15:08:53.659 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.657+0000 7f6169144700 1 -- 192.168.123.105:0/2401271962 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f616419ae90 con 0x7f6164100570 2026-03-09T15:08:53.659 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.657+0000 7f6169144700 1 -- 192.168.123.105:0/2401271962 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f616419b380 con 0x7f6164100570 2026-03-09T15:08:53.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.659+0000 7f615bfff700 1 -- 192.168.123.105:0/2401271962 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f6154021a80 con 0x7f6164100570 2026-03-09T15:08:53.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.659+0000 7f615bfff700 1 --2- 192.168.123.105:0/2401271962 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f61500778c0 0x7f6150079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:53.660 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.659+0000 7f615bfff700 1 -- 192.168.123.105:0/2401271962 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f615409acb0 con 0x7f6164100570 2026-03-09T15:08:53.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.659+0000 7f616259c700 1 --2- 192.168.123.105:0/2401271962 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f61500778c0 0x7f6150079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:53.660 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.659+0000 7f6169144700 1 -- 192.168.123.105:0/2401271962 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6164108a20 con 0x7f6164100570 2026-03-09T15:08:53.661 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.660+0000 7f616259c700 1 --2- 192.168.123.105:0/2401271962 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f61500778c0 0x7f6150079d70 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7f6164197880 tx=0x7f614c009500 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:53.664 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.663+0000 7f615bfff700 1 -- 192.168.123.105:0/2401271962 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f6154063380 con 0x7f6164100570 2026-03-09T15:08:53.817 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.816+0000 7f6169144700 1 -- 192.168.123.105:0/2401271962 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 11, "format": "json"} v 0) v1 -- 0x7f6164066e40 con 
0x7f6164100570 2026-03-09T15:08:53.817 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.817+0000 7f615bfff700 1 -- 192.168.123.105:0/2401271962 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 11, "format": "json"}]=0 dumped fsmap epoch 11 v35) v1 ==== 107+0+4920 (secure 0 0 0) 0x7f6154026070 con 0x7f6164100570 2026-03-09T15:08:53.818 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:53.818 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":11,"btime":"1970-01-01T00:00:00:000000+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":14518,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.105:6829/1321316558","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":1321316558},{"type":"v1","addr":"192.168.123.105:6829","nonce":1321316558}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":10},{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":11}],"filesystems":[{"mdsmap":{"epoch":9,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T14:58:30.215642+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14502},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14502":{"gid":14502,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":4,"state":"up:active","state_seq":3,"addr":"192.168.123.105:6827/2659122886","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":2659122886},{"type":"v1","addr":"192.168.123.105:6827","nonce":2659122886}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}},"gid_14510":{"gid":14510,"name":"cephfs.vm09.ohmitn","rank":0,"incarnation":0,"state":"up:standby-replay","state_seq":2,"addr":"192.168.123.109:6825/1947130211","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":1947130211},{"type":"v1","addr":"192.168.123.109:6825","nonce":1947130211}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:08:53.820 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.820+0000 7f6169144700 1 -- 192.168.123.105:0/2401271962 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f61500778c0 msgr2=0x7f6150079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:53.820 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.820+0000 7f6169144700 1 --2- 192.168.123.105:0/2401271962 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f61500778c0 0x7f6150079d70 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7f6164197880 tx=0x7f614c009500 comp rx=0 tx=0).stop 2026-03-09T15:08:53.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.820+0000 7f6169144700 1 -- 192.168.123.105:0/2401271962 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6164100570 msgr2=0x7f6164196260 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:53.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.820+0000 7f6169144700 1 --2- 192.168.123.105:0/2401271962 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6164100570 0x7f6164196260 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7f6154003a10 tx=0x7f615400f6c0 comp rx=0 tx=0).stop 2026-03-09T15:08:53.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.820+0000 7f6169144700 1 -- 192.168.123.105:0/2401271962 shutdown_connections 2026-03-09T15:08:53.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.820+0000 7f6169144700 1 --2- 192.168.123.105:0/2401271962 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f61500778c0 0x7f6150079d70 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:53.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.820+0000 7f6169144700 1 --2- 192.168.123.105:0/2401271962 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6164100570 0x7f6164196260 unknown :-1 s=CLOSED pgs=135 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:08:53.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.820+0000 7f6169144700 1 --2- 192.168.123.105:0/2401271962 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6164106590 0x7f61641967a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:53.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.820+0000 7f6169144700 1 -- 192.168.123.105:0/2401271962 >> 192.168.123.105:0/2401271962 conn(0x7f61640fc050 msgr2=0x7f61640fda00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:53.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.820+0000 7f6169144700 1 -- 192.168.123.105:0/2401271962 shutdown_connections 2026-03-09T15:08:53.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:53.820+0000 7f6169144700 1 -- 192.168.123.105:0/2401271962 wait complete. 2026-03-09T15:08:53.822 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 11 2026-03-09T15:08:53.848 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:53 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/2994039391' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 10, "format": "json"}]: dispatch 2026-03-09T15:08:53.874 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 12 2026-03-09T15:08:54.034 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:54.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:53 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/2994039391' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 10, "format": "json"}]: dispatch 2026-03-09T15:08:54.296 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.294+0000 7f1264961700 1 -- 192.168.123.105:0/2024869531 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f125c075b90 msgr2=0x7f125c110ee0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:54.296 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.294+0000 7f1264961700 1 --2- 192.168.123.105:0/2024869531 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f125c075b90 0x7f125c110ee0 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7f1250009b50 tx=0x7f1250009e60 comp rx=0 tx=0).stop 2026-03-09T15:08:54.296 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.295+0000 7f1264961700 1 -- 192.168.123.105:0/2024869531 shutdown_connections 2026-03-09T15:08:54.296 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.295+0000 7f1264961700 1 --2- 192.168.123.105:0/2024869531 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f125c075b90 0x7f125c110ee0 unknown :-1 s=CLOSED pgs=136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:54.296 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.295+0000 7f1264961700 1 --2- 192.168.123.105:0/2024869531 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f125c075280 0x7f125c075650 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:54.296 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.295+0000 7f1264961700 1 -- 192.168.123.105:0/2024869531 >> 192.168.123.105:0/2024869531 conn(0x7f125c0fe280 msgr2=0x7f125c100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:54.296 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.295+0000 7f1264961700 1 -- 192.168.123.105:0/2024869531 shutdown_connections 
2026-03-09T15:08:54.296 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.296+0000 7f1264961700 1 -- 192.168.123.105:0/2024869531 wait complete. 2026-03-09T15:08:54.297 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.296+0000 7f1264961700 1 Processor -- start 2026-03-09T15:08:54.297 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.296+0000 7f1264961700 1 -- start start 2026-03-09T15:08:54.297 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.297+0000 7f1264961700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f125c075280 0x7f125c1a2560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:54.297 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.297+0000 7f1264961700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f125c075b90 0x7f125c1a2aa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:54.298 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.297+0000 7f1264961700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f125c1a3130 con 0x7f125c075280 2026-03-09T15:08:54.298 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.297+0000 7f1261efc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f125c075b90 0x7f125c1a2aa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:54.298 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.297+0000 7f1264961700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f125c19c5e0 con 0x7f125c075b90 2026-03-09T15:08:54.298 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.297+0000 7f1261efc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f125c075b90 
0x7f125c1a2aa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45700/0 (socket says 192.168.123.105:45700) 2026-03-09T15:08:54.298 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.297+0000 7f1261efc700 1 -- 192.168.123.105:0/4000655051 learned_addr learned my addr 192.168.123.105:0/4000655051 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:54.298 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.297+0000 7f1261efc700 1 -- 192.168.123.105:0/4000655051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f125c075280 msgr2=0x7f125c1a2560 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:08:54.298 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.297+0000 7f12626fd700 1 --2- 192.168.123.105:0/4000655051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f125c075280 0x7f125c1a2560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:54.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.298+0000 7f1261efc700 1 --2- 192.168.123.105:0/4000655051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f125c075280 0x7f125c1a2560 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:54.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.298+0000 7f1261efc700 1 -- 192.168.123.105:0/4000655051 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f12500097e0 con 0x7f125c075b90 2026-03-09T15:08:54.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.298+0000 7f12626fd700 1 --2- 192.168.123.105:0/4000655051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f125c075280 0x7f125c1a2560 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:08:54.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.298+0000 7f1261efc700 1 --2- 192.168.123.105:0/4000655051 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f125c075b90 0x7f125c1a2aa0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f125000ba60 tx=0x7f125000bb40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:54.299 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.298+0000 7f12577fe700 1 -- 192.168.123.105:0/4000655051 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f125001d070 con 0x7f125c075b90 2026-03-09T15:08:54.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.298+0000 7f1264961700 1 -- 192.168.123.105:0/4000655051 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f125c19c860 con 0x7f125c075b90 2026-03-09T15:08:54.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.298+0000 7f1264961700 1 -- 192.168.123.105:0/4000655051 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f125c19cdb0 con 0x7f125c075b90 2026-03-09T15:08:54.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.299+0000 7f12577fe700 1 -- 192.168.123.105:0/4000655051 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f125000f460 con 0x7f125c075b90 2026-03-09T15:08:54.300 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.299+0000 7f12577fe700 1 -- 192.168.123.105:0/4000655051 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1250021620 con 0x7f125c075b90 2026-03-09T15:08:54.301 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.300+0000 7f1264961700 1 -- 192.168.123.105:0/4000655051 --> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f125c10e650 con 0x7f125c075b90 2026-03-09T15:08:54.301 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.300+0000 7f12577fe700 1 -- 192.168.123.105:0/4000655051 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f125000fab0 con 0x7f125c075b90 2026-03-09T15:08:54.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.300+0000 7f12577fe700 1 --2- 192.168.123.105:0/4000655051 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f124c0778c0 0x7f124c079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:54.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.301+0000 7f12626fd700 1 --2- 192.168.123.105:0/4000655051 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f124c0778c0 0x7f124c079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:54.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.301+0000 7f12577fe700 1 -- 192.168.123.105:0/4000655051 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f125009afe0 con 0x7f125c075b90 2026-03-09T15:08:54.302 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.301+0000 7f12626fd700 1 --2- 192.168.123.105:0/4000655051 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f124c0778c0 0x7f124c079d70 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f1248009dd0 tx=0x7f1248009450 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:54.305 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.304+0000 7f12577fe700 1 -- 192.168.123.105:0/4000655051 
<== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f1250063580 con 0x7f125c075b90 2026-03-09T15:08:54.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.446+0000 7f1264961700 1 -- 192.168.123.105:0/4000655051 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 12, "format": "json"} v 0) v1 -- 0x7f125c04ea50 con 0x7f125c075b90 2026-03-09T15:08:54.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.450+0000 7f12577fe700 1 -- 192.168.123.105:0/4000655051 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 12, "format": "json"}]=0 dumped fsmap epoch 12 v35) v1 ==== 107+0+4141 (secure 0 0 0) 0x7f1250062cd0 con 0x7f125c075b90 2026-03-09T15:08:54.451 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:54.451 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":12,"btime":"2026-03-09T15:06:23:551105+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":14518,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.105:6829/1321316558","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":1321316558},{"type":"v1","addr":"192.168.123.105:6829","nonce":1321316558}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default 
file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":10},{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":11}],"filesystems":[{"mdsmap":{"epoch":12,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:23.551102+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor 
table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14502},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14502":{"gid":14502,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":4,"state":"up:active","state_seq":3,"addr":"192.168.123.105:6827/2659122886","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":2659122886},{"type":"v1","addr":"192.168.123.105:6827","nonce":2659122886}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":14502,"qdb_cluster":[14502]},"id":1}]} 2026-03-09T15:08:54.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.453+0000 7f1264961700 1 -- 192.168.123.105:0/4000655051 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f124c0778c0 msgr2=0x7f124c079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:54.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.453+0000 7f1264961700 1 --2- 192.168.123.105:0/4000655051 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f124c0778c0 0x7f124c079d70 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f1248009dd0 tx=0x7f1248009450 comp rx=0 tx=0).stop 2026-03-09T15:08:54.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.453+0000 7f1264961700 1 -- 192.168.123.105:0/4000655051 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 
conn(0x7f125c075b90 msgr2=0x7f125c1a2aa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:54.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.453+0000 7f1264961700 1 --2- 192.168.123.105:0/4000655051 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f125c075b90 0x7f125c1a2aa0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f125000ba60 tx=0x7f125000bb40 comp rx=0 tx=0).stop 2026-03-09T15:08:54.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.453+0000 7f1264961700 1 -- 192.168.123.105:0/4000655051 shutdown_connections 2026-03-09T15:08:54.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.454+0000 7f1264961700 1 --2- 192.168.123.105:0/4000655051 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f124c0778c0 0x7f124c079d70 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:54.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.454+0000 7f1264961700 1 --2- 192.168.123.105:0/4000655051 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f125c075280 0x7f125c1a2560 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:54.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.454+0000 7f1264961700 1 --2- 192.168.123.105:0/4000655051 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f125c075b90 0x7f125c1a2aa0 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:54.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.454+0000 7f1264961700 1 -- 192.168.123.105:0/4000655051 >> 192.168.123.105:0/4000655051 conn(0x7f125c0fe280 msgr2=0x7f125c0feee0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:54.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.454+0000 7f1264961700 1 -- 192.168.123.105:0/4000655051 shutdown_connections 
2026-03-09T15:08:54.455 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.454+0000 7f1264961700 1 -- 192.168.123.105:0/4000655051 wait complete. 2026-03-09T15:08:54.456 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 12 2026-03-09T15:08:54.523 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 12 2026-03-09T15:08:54.523 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 13 2026-03-09T15:08:54.675 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:54.700 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:54 vm05.local ceph-mon[116516]: pgmap v211: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:08:54.700 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:54 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/2401271962' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 11, "format": "json"}]: dispatch 2026-03-09T15:08:54.700 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:54 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/4000655051' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 12, "format": "json"}]: dispatch 2026-03-09T15:08:54.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.950+0000 7f71e69e4700 1 -- 192.168.123.105:0/3116329061 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71e0102780 msgr2=0x7f71e0102bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:54.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.950+0000 7f71e69e4700 1 --2- 192.168.123.105:0/3116329061 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71e0102780 0x7f71e0102bf0 secure :-1 s=READY pgs=137 cs=0 l=1 rev1=1 crypto rx=0x7f71d0009b50 tx=0x7f71d0009e60 comp rx=0 tx=0).stop 2026-03-09T15:08:54.953 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.952+0000 7f71e69e4700 1 -- 192.168.123.105:0/3116329061 shutdown_connections 2026-03-09T15:08:54.953 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.952+0000 7f71e69e4700 1 --2- 192.168.123.105:0/3116329061 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71e0102780 0x7f71e0102bf0 unknown :-1 s=CLOSED pgs=137 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:54.953 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.952+0000 7f71e69e4700 1 --2- 192.168.123.105:0/3116329061 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f71e0108780 0x7f71e0108b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:54.953 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.952+0000 7f71e69e4700 1 -- 192.168.123.105:0/3116329061 >> 192.168.123.105:0/3116329061 conn(0x7f71e00fe280 msgr2=0x7f71e0100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:54.953 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.953+0000 7f71e69e4700 1 -- 192.168.123.105:0/3116329061 shutdown_connections 
2026-03-09T15:08:54.953 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.953+0000 7f71e69e4700 1 -- 192.168.123.105:0/3116329061 wait complete. 2026-03-09T15:08:54.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.953+0000 7f71e69e4700 1 Processor -- start 2026-03-09T15:08:54.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.953+0000 7f71e69e4700 1 -- start start 2026-03-09T15:08:54.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.954+0000 7f71e69e4700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f71e0102780 0x7f71e0198390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:54.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.954+0000 7f71e69e4700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71e0108780 0x7f71e01988d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:54.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.954+0000 7f71e69e4700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f71e0198f20 con 0x7f71e0108780 2026-03-09T15:08:54.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.954+0000 7f71e69e4700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f71e0199060 con 0x7f71e0102780 2026-03-09T15:08:54.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.954+0000 7f71df7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71e0108780 0x7f71e01988d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:54.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.954+0000 7f71df7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71e0108780 
0x7f71e01988d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:42882/0 (socket says 192.168.123.105:42882) 2026-03-09T15:08:54.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.954+0000 7f71df7fe700 1 -- 192.168.123.105:0/4060874003 learned_addr learned my addr 192.168.123.105:0/4060874003 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:54.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.954+0000 7f71dffff700 1 --2- 192.168.123.105:0/4060874003 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f71e0102780 0x7f71e0198390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:54.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.954+0000 7f71df7fe700 1 -- 192.168.123.105:0/4060874003 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f71e0102780 msgr2=0x7f71e0198390 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:54.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.955+0000 7f71df7fe700 1 --2- 192.168.123.105:0/4060874003 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f71e0102780 0x7f71e0198390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:54.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.955+0000 7f71df7fe700 1 -- 192.168.123.105:0/4060874003 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f71d00097e0 con 0x7f71e0108780 2026-03-09T15:08:54.956 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.955+0000 7f71df7fe700 1 --2- 192.168.123.105:0/4060874003 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71e0108780 0x7f71e01988d0 secure :-1 s=READY 
pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f71d0005f50 tx=0x7f71d0004ab0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:54.956 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.955+0000 7f71dd7fa700 1 -- 192.168.123.105:0/4060874003 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f71d001d070 con 0x7f71e0108780 2026-03-09T15:08:54.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.955+0000 7f71dd7fa700 1 -- 192.168.123.105:0/4060874003 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f71d000bcf0 con 0x7f71e0108780 2026-03-09T15:08:54.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.955+0000 7f71e69e4700 1 -- 192.168.123.105:0/4060874003 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f71e019ce00 con 0x7f71e0108780 2026-03-09T15:08:54.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.955+0000 7f71dd7fa700 1 -- 192.168.123.105:0/4060874003 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f71d000f8d0 con 0x7f71e0108780 2026-03-09T15:08:54.957 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.955+0000 7f71e69e4700 1 -- 192.168.123.105:0/4060874003 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f71e019d2f0 con 0x7f71e0108780 2026-03-09T15:08:54.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.957+0000 7f71dd7fa700 1 -- 192.168.123.105:0/4060874003 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f71d000fa30 con 0x7f71e0108780 2026-03-09T15:08:54.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.957+0000 7f71e69e4700 1 -- 192.168.123.105:0/4060874003 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f71e004ea50 con 0x7f71e0108780 2026-03-09T15:08:54.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.958+0000 7f71dd7fa700 1 --2- 192.168.123.105:0/4060874003 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f71cc0779e0 0x7f71cc079e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:54.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.958+0000 7f71dd7fa700 1 -- 192.168.123.105:0/4060874003 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f71d009c400 con 0x7f71e0108780 2026-03-09T15:08:54.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.958+0000 7f71dffff700 1 --2- 192.168.123.105:0/4060874003 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f71cc0779e0 0x7f71cc079e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:54.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.958+0000 7f71dffff700 1 --2- 192.168.123.105:0/4060874003 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f71cc0779e0 0x7f71cc079e90 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f71e01038c0 tx=0x7f71c8005d70 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:54.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:54.961+0000 7f71dd7fa700 1 -- 192.168.123.105:0/4060874003 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f71d0064b50 con 0x7f71e0108780 2026-03-09T15:08:55.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.101+0000 7f71e69e4700 1 -- 192.168.123.105:0/4060874003 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 13, "format": "json"} v 0) v1 -- 0x7f71e0199740 con 0x7f71e0108780 2026-03-09T15:08:55.102 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.101+0000 7f71dd7fa700 1 -- 192.168.123.105:0/4060874003 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 13, "format": "json"}]=0 dumped fsmap epoch 13 v35) v1 ==== 107+0+4123 (secure 0 0 0) 0x7f71d00642a0 con 0x7f71e0108780 2026-03-09T15:08:55.102 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:55.102 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":13,"btime":"2026-03-09T15:06:24:405529+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":14518,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.105:6829/1321316558","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":1321316558},{"type":"v1","addr":"192.168.123.105:6829","nonce":1321316558}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":10},{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":11},{"gid":34270,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/4257546649","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":4257546649},{"type":"v1","addr":"192.168.123.109:6825","nonce":4257546649}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":13}],"filesystems":[{"mdsmap":{"epoch":13,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:24.405528+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":84,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{},"failed":[0],"damaged":[],"stopped":[],"info":{},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:08:55.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.104+0000 7f71e69e4700 1 -- 192.168.123.105:0/4060874003 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f71cc0779e0 msgr2=0x7f71cc079e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:55.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.104+0000 7f71e69e4700 1 --2- 192.168.123.105:0/4060874003 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f71cc0779e0 0x7f71cc079e90 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f71e01038c0 tx=0x7f71c8005d70 comp rx=0 tx=0).stop 2026-03-09T15:08:55.105 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.104+0000 7f71e69e4700 1 -- 192.168.123.105:0/4060874003 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71e0108780 msgr2=0x7f71e01988d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:55.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.104+0000 7f71e69e4700 1 --2- 192.168.123.105:0/4060874003 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71e0108780 0x7f71e01988d0 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f71d0005f50 tx=0x7f71d0004ab0 comp rx=0 tx=0).stop 2026-03-09T15:08:55.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.104+0000 7f71e69e4700 1 -- 192.168.123.105:0/4060874003 shutdown_connections 2026-03-09T15:08:55.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.104+0000 7f71e69e4700 1 --2- 192.168.123.105:0/4060874003 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f71cc0779e0 0x7f71cc079e90 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:55.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.104+0000 7f71e69e4700 1 --2- 192.168.123.105:0/4060874003 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f71e0102780 0x7f71e0198390 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:55.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.104+0000 7f71e69e4700 1 --2- 192.168.123.105:0/4060874003 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f71e0108780 0x7f71e01988d0 unknown :-1 s=CLOSED pgs=138 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:55.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.104+0000 7f71e69e4700 1 -- 192.168.123.105:0/4060874003 >> 192.168.123.105:0/4060874003 conn(0x7f71e00fe280 msgr2=0x7f71e00ffbd0 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T15:08:55.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.104+0000 7f71e69e4700 1 -- 192.168.123.105:0/4060874003 shutdown_connections 2026-03-09T15:08:55.105 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.104+0000 7f71e69e4700 1 -- 192.168.123.105:0/4060874003 wait complete. 2026-03-09T15:08:55.106 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 13 2026-03-09T15:08:55.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:54 vm09.local ceph-mon[98742]: pgmap v211: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T15:08:55.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:54 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/2401271962' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 11, "format": "json"}]: dispatch 2026-03-09T15:08:55.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:54 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/4000655051' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 12, "format": "json"}]: dispatch 2026-03-09T15:08:55.177 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 13 2026-03-09T15:08:55.177 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 14 2026-03-09T15:08:55.340 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:55.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.617+0000 7fd8932a1700 1 -- 192.168.123.105:0/2026854711 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd88c0ff2f0 msgr2=0x7fd88c0ff6c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:55.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.617+0000 7fd8932a1700 1 --2- 192.168.123.105:0/2026854711 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd88c0ff2f0 0x7fd88c0ff6c0 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7fd884009b00 tx=0x7fd884009e10 comp rx=0 tx=0).stop 2026-03-09T15:08:55.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.618+0000 7fd8932a1700 1 -- 192.168.123.105:0/2026854711 shutdown_connections 2026-03-09T15:08:55.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.618+0000 7fd8932a1700 1 --2- 192.168.123.105:0/2026854711 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd88c0ffc00 0x7fd88c10c960 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:55.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.618+0000 7fd8932a1700 1 --2- 192.168.123.105:0/2026854711 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd88c0ff2f0 0x7fd88c0ff6c0 unknown :-1 s=CLOSED pgs=139 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T15:08:55.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.618+0000 7fd8932a1700 1 -- 192.168.123.105:0/2026854711 >> 192.168.123.105:0/2026854711 conn(0x7fd88c0faf00 msgr2=0x7fd88c0fd310 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:55.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.618+0000 7fd8932a1700 1 -- 192.168.123.105:0/2026854711 shutdown_connections 2026-03-09T15:08:55.619 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.619+0000 7fd8932a1700 1 -- 192.168.123.105:0/2026854711 wait complete. 2026-03-09T15:08:55.620 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.619+0000 7fd8932a1700 1 Processor -- start 2026-03-09T15:08:55.620 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.619+0000 7fd8932a1700 1 -- start start 2026-03-09T15:08:55.620 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.619+0000 7fd8932a1700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd88c0ff2f0 0x7fd88c198380 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:55.620 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.619+0000 7fd8932a1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd88c0ffc00 0x7fd88c1988c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:55.620 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.619+0000 7fd8932a1700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd88c198fa0 con 0x7fd88c0ffc00 2026-03-09T15:08:55.620 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.619+0000 7fd8932a1700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fd88c19cd30 con 0x7fd88c0ff2f0 2026-03-09T15:08:55.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.620+0000 7fd89083c700 1 
--2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd88c0ffc00 0x7fd88c1988c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:55.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.620+0000 7fd89083c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd88c0ffc00 0x7fd88c1988c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:42902/0 (socket says 192.168.123.105:42902) 2026-03-09T15:08:55.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.620+0000 7fd89083c700 1 -- 192.168.123.105:0/2761535066 learned_addr learned my addr 192.168.123.105:0/2761535066 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:55.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.620+0000 7fd89103d700 1 --2- 192.168.123.105:0/2761535066 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd88c0ff2f0 0x7fd88c198380 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:55.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.620+0000 7fd89083c700 1 -- 192.168.123.105:0/2761535066 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd88c0ff2f0 msgr2=0x7fd88c198380 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:55.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.620+0000 7fd89083c700 1 --2- 192.168.123.105:0/2761535066 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd88c0ff2f0 0x7fd88c198380 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:55.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.620+0000 7fd89083c700 1 -- 
192.168.123.105:0/2761535066 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fd8840097e0 con 0x7fd88c0ffc00 2026-03-09T15:08:55.621 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.620+0000 7fd89083c700 1 --2- 192.168.123.105:0/2761535066 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd88c0ffc00 0x7fd88c1988c0 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7fd888009f50 tx=0x7fd888009f80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:55.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.620+0000 7fd8827fc700 1 -- 192.168.123.105:0/2761535066 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd88800cbe0 con 0x7fd88c0ffc00 2026-03-09T15:08:55.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.620+0000 7fd8827fc700 1 -- 192.168.123.105:0/2761535066 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fd88800cd40 con 0x7fd88c0ffc00 2026-03-09T15:08:55.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.621+0000 7fd8827fc700 1 -- 192.168.123.105:0/2761535066 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fd8880077f0 con 0x7fd88c0ffc00 2026-03-09T15:08:55.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.621+0000 7fd8932a1700 1 -- 192.168.123.105:0/2761535066 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fd88c19d010 con 0x7fd88c0ffc00 2026-03-09T15:08:55.622 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.621+0000 7fd8932a1700 1 -- 192.168.123.105:0/2761535066 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fd88c19d4e0 con 0x7fd88c0ffc00 2026-03-09T15:08:55.623 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.622+0000 7fd8932a1700 1 -- 192.168.123.105:0/2761535066 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fd88c10a0d0 con 0x7fd88c0ffc00 2026-03-09T15:08:55.624 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.622+0000 7fd8827fc700 1 -- 192.168.123.105:0/2761535066 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fd888007950 con 0x7fd88c0ffc00 2026-03-09T15:08:55.624 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.623+0000 7fd8827fc700 1 --2- 192.168.123.105:0/2761535066 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd8780776d0 0x7fd878079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:55.624 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.623+0000 7fd8827fc700 1 -- 192.168.123.105:0/2761535066 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fd888006e40 con 0x7fd88c0ffc00 2026-03-09T15:08:55.624 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.623+0000 7fd89103d700 1 --2- 192.168.123.105:0/2761535066 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd8780776d0 0x7fd878079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:55.624 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.623+0000 7fd89103d700 1 --2- 192.168.123.105:0/2761535066 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd8780776d0 0x7fd878079b80 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7fd884005f50 tx=0x7fd884005dc0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
2026-03-09T15:08:55.631 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.630+0000 7fd8827fc700 1 -- 192.168.123.105:0/2761535066 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fd888061c00 con 0x7fd88c0ffc00 2026-03-09T15:08:55.776 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:55 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/4060874003' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 13, "format": "json"}]: dispatch 2026-03-09T15:08:55.777 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.775+0000 7fd8932a1700 1 -- 192.168.123.105:0/2761535066 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 14, "format": "json"} v 0) v1 -- 0x7fd88c0689d0 con 0x7fd88c0ffc00 2026-03-09T15:08:55.779 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.778+0000 7fd8827fc700 1 -- 192.168.123.105:0/2761535066 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 14, "format": "json"}]=0 dumped fsmap epoch 14 v35) v1 ==== 107+0+4134 (secure 0 0 0) 0x7fd888061350 con 0x7fd88c0ffc00 2026-03-09T15:08:55.779 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:55.779 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":14,"btime":"2026-03-09T15:06:24:422952+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":11},{"gid":34270,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/4257546649","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":4257546649},{"type":"v1","addr":"192.168.123.109:6825","nonce":4257546649}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":13}],"filesystems":[{"mdsmap":{"epoch":14,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:24.422930+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":84,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14518},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14518":{"gid":14518,"name":"cephfs.vm05.rrcyql","rank":0,"incarnation":14,"state":"up:replay","state_seq":2,"addr":"192.168.123.105:6829/1321316558","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":1321316558},{"type":"v1","addr":"192.168.123.105:6829","nonce":1321316558}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:08:55.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.781+0000 7fd8932a1700 1 -- 192.168.123.105:0/2761535066 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd8780776d0 msgr2=0x7fd878079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:55.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.781+0000 7fd8932a1700 1 --2- 192.168.123.105:0/2761535066 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd8780776d0 0x7fd878079b80 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7fd884005f50 tx=0x7fd884005dc0 comp rx=0 tx=0).stop 2026-03-09T15:08:55.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.781+0000 7fd8932a1700 1 -- 192.168.123.105:0/2761535066 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd88c0ffc00 msgr2=0x7fd88c1988c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:55.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.781+0000 7fd8932a1700 1 --2- 192.168.123.105:0/2761535066 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd88c0ffc00 0x7fd88c1988c0 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x7fd888009f50 tx=0x7fd888009f80 comp rx=0 tx=0).stop 2026-03-09T15:08:55.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.782+0000 7fd8932a1700 1 -- 192.168.123.105:0/2761535066 shutdown_connections 2026-03-09T15:08:55.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.782+0000 7fd8932a1700 1 --2- 192.168.123.105:0/2761535066 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fd8780776d0 0x7fd878079b80 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:08:55.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.782+0000 7fd8932a1700 1 --2- 192.168.123.105:0/2761535066 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fd88c0ff2f0 0x7fd88c198380 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:55.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.782+0000 7fd8932a1700 1 --2- 192.168.123.105:0/2761535066 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fd88c0ffc00 0x7fd88c1988c0 unknown :-1 s=CLOSED pgs=140 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:55.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.782+0000 7fd8932a1700 1 -- 192.168.123.105:0/2761535066 >> 192.168.123.105:0/2761535066 conn(0x7fd88c0faf00 msgr2=0x7fd88c0fc460 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:55.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.783+0000 7fd8932a1700 1 -- 192.168.123.105:0/2761535066 shutdown_connections 2026-03-09T15:08:55.784 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:55.783+0000 7fd8932a1700 1 -- 192.168.123.105:0/2761535066 wait complete. 2026-03-09T15:08:55.784 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 14 2026-03-09T15:08:55.856 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 14 2026-03-09T15:08:55.857 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 15 2026-03-09T15:08:56.010 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:56.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:55 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/4060874003' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 13, "format": "json"}]: dispatch 2026-03-09T15:08:56.287 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.284+0000 7f6e6cec2700 1 -- 192.168.123.105:0/981970243 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e68106560 msgr2=0x7f6e68106930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:56.287 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.284+0000 7f6e6cec2700 1 --2- 192.168.123.105:0/981970243 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e68106560 0x7f6e68106930 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7f6e50009b50 tx=0x7f6e50009e60 comp rx=0 tx=0).stop 2026-03-09T15:08:56.287 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.286+0000 7f6e6cec2700 1 -- 192.168.123.105:0/981970243 shutdown_connections 2026-03-09T15:08:56.287 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.286+0000 7f6e6cec2700 1 --2- 192.168.123.105:0/981970243 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6e68100540 0x7f6e681009b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:56.287 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.286+0000 7f6e6cec2700 1 --2- 192.168.123.105:0/981970243 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e68106560 0x7f6e68106930 unknown :-1 s=CLOSED pgs=141 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:56.287 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.286+0000 7f6e6cec2700 1 -- 192.168.123.105:0/981970243 >> 192.168.123.105:0/981970243 conn(0x7f6e680fbfc0 msgr2=0x7f6e680fe3d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:56.287 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.287+0000 7f6e6cec2700 1 -- 192.168.123.105:0/981970243 shutdown_connections 2026-03-09T15:08:56.288 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.287+0000 7f6e6cec2700 1 -- 192.168.123.105:0/981970243 wait complete. 2026-03-09T15:08:56.288 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.287+0000 7f6e6cec2700 1 Processor -- start 2026-03-09T15:08:56.288 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.287+0000 7f6e6cec2700 1 -- start start 2026-03-09T15:08:56.288 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e6cec2700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6e68100540 0x7f6e681983f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:56.289 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e6cec2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e68106560 0x7f6e68198930 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:56.289 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e6cec2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6e68199010 con 0x7f6e68106560 2026-03-09T15:08:56.289 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e6cec2700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f6e6819cda0 con 0x7f6e68100540 2026-03-09T15:08:56.289 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e65d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e68106560 0x7f6e68198930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:56.289 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e65d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e68106560 0x7f6e68198930 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:42930/0 (socket says 192.168.123.105:42930) 2026-03-09T15:08:56.289 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e65d9b700 1 -- 192.168.123.105:0/1103527944 learned_addr learned my addr 192.168.123.105:0/1103527944 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:56.289 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e65d9b700 1 -- 192.168.123.105:0/1103527944 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6e68100540 msgr2=0x7f6e681983f0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:08:56.289 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e65d9b700 1 --2- 192.168.123.105:0/1103527944 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6e68100540 0x7f6e681983f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:56.289 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e65d9b700 1 -- 192.168.123.105:0/1103527944 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f6e500097e0 con 0x7f6e68106560 2026-03-09T15:08:56.289 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e65d9b700 1 --2- 192.168.123.105:0/1103527944 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e68106560 0x7f6e68198930 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7f6e5800eb10 tx=0x7f6e5800eed0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:56.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e5f7fe700 1 -- 192.168.123.105:0/1103527944 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6e5800cca0 con 
0x7f6e68106560 2026-03-09T15:08:56.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.288+0000 7f6e5f7fe700 1 -- 192.168.123.105:0/1103527944 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f6e5800ce00 con 0x7f6e68106560 2026-03-09T15:08:56.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.289+0000 7f6e5f7fe700 1 -- 192.168.123.105:0/1103527944 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f6e580189c0 con 0x7f6e68106560 2026-03-09T15:08:56.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.289+0000 7f6e6cec2700 1 -- 192.168.123.105:0/1103527944 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f6e6819d080 con 0x7f6e68106560 2026-03-09T15:08:56.291 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.289+0000 7f6e6cec2700 1 -- 192.168.123.105:0/1103527944 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f6e6819d550 con 0x7f6e68106560 2026-03-09T15:08:56.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.290+0000 7f6e5f7fe700 1 -- 192.168.123.105:0/1103527944 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f6e58018b20 con 0x7f6e68106560 2026-03-09T15:08:56.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.291+0000 7f6e5f7fe700 1 --2- 192.168.123.105:0/1103527944 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f6e540776c0 0x7f6e54079b70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:56.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.291+0000 7f6e6659c700 1 --2- 192.168.123.105:0/1103527944 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f6e540776c0 0x7f6e54079b70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 
crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:56.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.291+0000 7f6e5f7fe700 1 -- 192.168.123.105:0/1103527944 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f6e58014070 con 0x7f6e68106560 2026-03-09T15:08:56.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.291+0000 7f6e6659c700 1 --2- 192.168.123.105:0/1103527944 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f6e540776c0 0x7f6e54079b70 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f6e50000c00 tx=0x7f6e50005fb0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:56.295 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.292+0000 7f6e6cec2700 1 -- 192.168.123.105:0/1103527944 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f6e68108a10 con 0x7f6e68106560 2026-03-09T15:08:56.295 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.294+0000 7f6e5f7fe700 1 -- 192.168.123.105:0/1103527944 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f6e58062800 con 0x7f6e68106560 2026-03-09T15:08:56.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.449+0000 7f6e6cec2700 1 -- 192.168.123.105:0/1103527944 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 15, "format": "json"} v 0) v1 -- 0x7f6e68066e40 con 0x7f6e68106560 2026-03-09T15:08:56.454 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.453+0000 7f6e5f7fe700 1 -- 192.168.123.105:0/1103527944 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 15, "format": "json"}]=0 dumped fsmap epoch 
15 v35) v1 ==== 107+0+4139 (secure 0 0 0) 0x7f6e58061f50 con 0x7f6e68106560 2026-03-09T15:08:56.454 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:56.454 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":15,"btime":"2026-03-09T15:06:30:084247+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":11},{"gid":34270,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/4257546649","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":4257546649},{"type":"v1","addr":"192.168.123.109:6825","nonce":4257546649}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable 
ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13}],"filesystems":[{"mdsmap":{"epoch":15,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:30.033652+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":84,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14518},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14518":{"gid":14518,"name":"cephfs.vm05.rrcyql","rank":0,"incarnation":14,"state":"up:reconnect","state_seq":122,"addr":"192.168.123.105:6829/1321316558","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":1321316558},{"type":"v1","addr":"192.168.123.105:6829","nonce":1321316558}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds 
uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:08:56.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.456+0000 7f6e6cec2700 1 -- 192.168.123.105:0/1103527944 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f6e540776c0 msgr2=0x7f6e54079b70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:56.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.456+0000 7f6e6cec2700 1 --2- 192.168.123.105:0/1103527944 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f6e540776c0 0x7f6e54079b70 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f6e50000c00 tx=0x7f6e50005fb0 comp rx=0 tx=0).stop 2026-03-09T15:08:56.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.456+0000 7f6e6cec2700 1 -- 192.168.123.105:0/1103527944 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e68106560 msgr2=0x7f6e68198930 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:56.457 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.456+0000 7f6e6cec2700 1 --2- 192.168.123.105:0/1103527944 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e68106560 0x7f6e68198930 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7f6e5800eb10 tx=0x7f6e5800eed0 comp rx=0 tx=0).stop 2026-03-09T15:08:56.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.456+0000 7f6e6cec2700 1 -- 192.168.123.105:0/1103527944 shutdown_connections 2026-03-09T15:08:56.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.456+0000 7f6e6cec2700 1 --2- 192.168.123.105:0/1103527944 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f6e540776c0 0x7f6e54079b70 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:56.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.456+0000 7f6e6cec2700 1 --2- 192.168.123.105:0/1103527944 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f6e68100540 0x7f6e681983f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:56.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.456+0000 7f6e6cec2700 1 --2- 192.168.123.105:0/1103527944 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f6e68106560 0x7f6e68198930 unknown :-1 s=CLOSED pgs=142 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:56.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.456+0000 7f6e6cec2700 1 -- 192.168.123.105:0/1103527944 >> 192.168.123.105:0/1103527944 conn(0x7f6e680fbfc0 msgr2=0x7f6e680fda40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:56.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.456+0000 7f6e6cec2700 1 -- 192.168.123.105:0/1103527944 shutdown_connections 2026-03-09T15:08:56.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.456+0000 7f6e6cec2700 1 -- 192.168.123.105:0/1103527944 wait complete. 
2026-03-09T15:08:56.458 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 15 2026-03-09T15:08:56.506 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 15 2026-03-09T15:08:56.506 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 16 2026-03-09T15:08:56.669 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:56.935 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.933+0000 7f1a3d888700 1 -- 192.168.123.105:0/3588842431 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1a380730f0 msgr2=0x7f1a380734c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:56.935 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.933+0000 7f1a3d888700 1 --2- 192.168.123.105:0/3588842431 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1a380730f0 0x7f1a380734c0 secure :-1 s=READY pgs=143 cs=0 l=1 rev1=1 crypto rx=0x7f1a20009b00 tx=0x7f1a20009e10 comp rx=0 tx=0).stop 2026-03-09T15:08:56.935 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.934+0000 7f1a3d888700 1 -- 192.168.123.105:0/3588842431 shutdown_connections 2026-03-09T15:08:56.935 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.934+0000 7f1a3d888700 1 --2- 192.168.123.105:0/3588842431 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1a38073a00 0x7f1a38110ff0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:56.935 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.934+0000 7f1a3d888700 1 --2- 192.168.123.105:0/3588842431 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1a380730f0 0x7f1a380734c0 unknown :-1 s=CLOSED pgs=143 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:56.935 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.934+0000 7f1a3d888700 1 -- 192.168.123.105:0/3588842431 >> 192.168.123.105:0/3588842431 conn(0x7f1a380fc000 msgr2=0x7f1a380fe410 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:56.935 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.934+0000 7f1a3d888700 1 -- 192.168.123.105:0/3588842431 shutdown_connections 2026-03-09T15:08:56.935 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.935+0000 7f1a3d888700 1 -- 192.168.123.105:0/3588842431 wait complete. 2026-03-09T15:08:56.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.935+0000 7f1a3d888700 1 Processor -- start 2026-03-09T15:08:56.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.935+0000 7f1a3d888700 1 -- start start 2026-03-09T15:08:56.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.936+0000 7f1a3d888700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1a380730f0 0x7f1a381a2590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:56.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.936+0000 7f1a3d888700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1a38073a00 0x7f1a381a2ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:56.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.936+0000 7f1a3d888700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1a381a3160 con 0x7f1a380730f0 2026-03-09T15:08:56.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.936+0000 7f1a3d888700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f1a3819c610 con 0x7f1a38073a00 2026-03-09T15:08:56.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.936+0000 7f1a367fc700 1 --2- >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1a38073a00 0x7f1a381a2ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:56.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.936+0000 7f1a367fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1a38073a00 0x7f1a381a2ad0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45784/0 (socket says 192.168.123.105:45784) 2026-03-09T15:08:56.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.936+0000 7f1a367fc700 1 -- 192.168.123.105:0/4019553939 learned_addr learned my addr 192.168.123.105:0/4019553939 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:56.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.936+0000 7f1a36ffd700 1 --2- 192.168.123.105:0/4019553939 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1a380730f0 0x7f1a381a2590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:56.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.936+0000 7f1a36ffd700 1 -- 192.168.123.105:0/4019553939 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1a38073a00 msgr2=0x7f1a381a2ad0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:56.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.936+0000 7f1a36ffd700 1 --2- 192.168.123.105:0/4019553939 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1a38073a00 0x7f1a381a2ad0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:56.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.936+0000 7f1a36ffd700 1 -- 
192.168.123.105:0/4019553939 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f1a200097e0 con 0x7f1a380730f0 2026-03-09T15:08:56.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.937+0000 7f1a367fc700 1 --2- 192.168.123.105:0/4019553939 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1a38073a00 0x7f1a381a2ad0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T15:08:56.938 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.937+0000 7f1a36ffd700 1 --2- 192.168.123.105:0/4019553939 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1a380730f0 0x7f1a381a2590 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7f1a20009fd0 tx=0x7f1a20004930 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:56.939 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.938+0000 7f1a3c886700 1 -- 192.168.123.105:0/4019553939 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1a2001d070 con 0x7f1a380730f0 2026-03-09T15:08:56.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.938+0000 7f1a3c886700 1 -- 192.168.123.105:0/4019553939 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f1a20022470 con 0x7f1a380730f0 2026-03-09T15:08:56.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.938+0000 7f1a3c886700 1 -- 192.168.123.105:0/4019553939 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1a2000f700 con 0x7f1a380730f0 2026-03-09T15:08:56.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.938+0000 7f1a3d888700 1 -- 192.168.123.105:0/4019553939 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f1a3819c8f0 con 0x7f1a380730f0 
2026-03-09T15:08:56.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.938+0000 7f1a3d888700 1 -- 192.168.123.105:0/4019553939 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f1a3819ce40 con 0x7f1a380730f0 2026-03-09T15:08:56.940 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:56 vm05.local ceph-mon[116516]: pgmap v212: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:56.940 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:56 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/2761535066' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 14, "format": "json"}]: dispatch 2026-03-09T15:08:56.940 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:08:56.940 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:56 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/1103527944' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 15, "format": "json"}]: dispatch 2026-03-09T15:08:56.941 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.939+0000 7f1a3c886700 1 -- 192.168.123.105:0/4019553939 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f1a20022ac0 con 0x7f1a380730f0 2026-03-09T15:08:56.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.941+0000 7f1a3c886700 1 --2- 192.168.123.105:0/4019553939 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f1a24077870 0x7f1a24079d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:56.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.941+0000 7f1a3c886700 1 -- 192.168.123.105:0/4019553939 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f1a2009b110 con 0x7f1a380730f0 2026-03-09T15:08:56.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.944+0000 7f1a367fc700 1 --2- 192.168.123.105:0/4019553939 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f1a24077870 0x7f1a24079d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:56.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.944+0000 7f1a3d888700 1 -- 192.168.123.105:0/4019553939 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f1a3819ca80 con 0x7f1a380730f0 2026-03-09T15:08:56.948 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.946+0000 7f1a367fc700 1 --2- 192.168.123.105:0/4019553939 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f1a24077870 0x7f1a24079d20 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto 
rx=0x7f1a380fd840 tx=0x7f1a28009480 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:56.949 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:56.948+0000 7f1a3c886700 1 -- 192.168.123.105:0/4019553939 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f1a3819ca80 con 0x7f1a380730f0 2026-03-09T15:08:57.094 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.093+0000 7f1a3d888700 1 -- 192.168.123.105:0/4019553939 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 16, "format": "json"} v 0) v1 -- 0x7f1a38066e40 con 0x7f1a380730f0 2026-03-09T15:08:57.095 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.094+0000 7f1a3c886700 1 -- 192.168.123.105:0/4019553939 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 16, "format": "json"}]=0 dumped fsmap epoch 16 v35) v1 ==== 107+0+4136 (secure 0 0 0) 0x7f1a200637e0 con 0x7f1a380730f0 2026-03-09T15:08:57.096 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:57.096 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":16,"btime":"2026-03-09T15:06:31:100063+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":11},{"gid":34270,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/4257546649","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":4257546649},{"type":"v1","addr":"192.168.123.109:6825","nonce":4257546649}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"epoch":13}],"filesystems":[{"mdsmap":{"epoch":16,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:30.105024+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":84,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14518},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14518":{"gid":14518,"name":"cephfs.vm05.rrcyql","rank":0,"incarnation":14,"state":"up:rejoin","state_seq":123,"addr":"192.168.123.105:6829/1321316558","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":1321316558},{"type":"v1","addr":"192.168.123.105:6829","nonce":1321316558}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:08:57.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.097+0000 7f1a3d888700 1 -- 192.168.123.105:0/4019553939 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f1a24077870 msgr2=0x7f1a24079d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:57.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.097+0000 7f1a3d888700 1 --2- 192.168.123.105:0/4019553939 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f1a24077870 0x7f1a24079d20 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f1a380fd840 tx=0x7f1a28009480 comp rx=0 tx=0).stop 2026-03-09T15:08:57.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.097+0000 7f1a3d888700 1 -- 192.168.123.105:0/4019553939 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1a380730f0 msgr2=0x7f1a381a2590 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:57.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.097+0000 7f1a3d888700 1 --2- 192.168.123.105:0/4019553939 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1a380730f0 0x7f1a381a2590 secure :-1 s=READY pgs=144 cs=0 l=1 rev1=1 crypto rx=0x7f1a20009fd0 tx=0x7f1a20004930 comp rx=0 tx=0).stop 2026-03-09T15:08:57.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.098+0000 7f1a3d888700 1 -- 192.168.123.105:0/4019553939 shutdown_connections 2026-03-09T15:08:57.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.098+0000 7f1a3d888700 1 --2- 192.168.123.105:0/4019553939 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f1a24077870 0x7f1a24079d20 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:08:57.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.098+0000 7f1a3d888700 1 --2- 192.168.123.105:0/4019553939 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f1a380730f0 0x7f1a381a2590 unknown :-1 s=CLOSED pgs=144 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:57.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.098+0000 7f1a3d888700 1 --2- 192.168.123.105:0/4019553939 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f1a38073a00 0x7f1a381a2ad0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:57.098 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.098+0000 7f1a3d888700 1 -- 192.168.123.105:0/4019553939 >> 192.168.123.105:0/4019553939 conn(0x7f1a380fc000 msgr2=0x7f1a38102b00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:57.099 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.098+0000 7f1a3d888700 1 -- 192.168.123.105:0/4019553939 shutdown_connections 2026-03-09T15:08:57.099 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.098+0000 7f1a3d888700 1 -- 192.168.123.105:0/4019553939 wait complete. 2026-03-09T15:08:57.099 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 16 2026-03-09T15:08:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:56 vm09.local ceph-mon[98742]: pgmap v212: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:08:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:56 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/2761535066' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 14, "format": "json"}]: dispatch 2026-03-09T15:08:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:08:57.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:56 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/1103527944' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 15, "format": "json"}]: dispatch 2026-03-09T15:08:57.165 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 16 2026-03-09T15:08:57.166 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 17 2026-03-09T15:08:57.318 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:57.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.588+0000 7f1042d70700 1 -- 192.168.123.105:0/1288560577 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f103c102810 msgr2=0x7f103c102c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:57.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.588+0000 7f1042d70700 1 --2- 192.168.123.105:0/1288560577 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f103c102810 0x7f103c102c80 secure :-1 s=READY pgs=145 cs=0 l=1 rev1=1 crypto rx=0x7f1028009b00 tx=0x7f1028009e10 comp rx=0 tx=0).stop 2026-03-09T15:08:57.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.589+0000 7f1042d70700 1 -- 192.168.123.105:0/1288560577 shutdown_connections 2026-03-09T15:08:57.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.589+0000 7f1042d70700 1 --2- 
192.168.123.105:0/1288560577 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f103c102810 0x7f103c102c80 unknown :-1 s=CLOSED pgs=145 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:57.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.589+0000 7f1042d70700 1 --2- 192.168.123.105:0/1288560577 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f103c108810 0x7f103c108be0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:57.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.589+0000 7f1042d70700 1 -- 192.168.123.105:0/1288560577 >> 192.168.123.105:0/1288560577 conn(0x7f103c0fe330 msgr2=0x7f103c100740 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:57.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.589+0000 7f1042d70700 1 -- 192.168.123.105:0/1288560577 shutdown_connections 2026-03-09T15:08:57.590 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.590+0000 7f1042d70700 1 -- 192.168.123.105:0/1288560577 wait complete. 
2026-03-09T15:08:57.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.590+0000 7f1042d70700 1 Processor -- start 2026-03-09T15:08:57.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.590+0000 7f1042d70700 1 -- start start 2026-03-09T15:08:57.591 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.591+0000 7f1042d70700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f103c108810 0x7f103c198620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:57.592 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.591+0000 7f1042d70700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f103c198b60 0x7f103c19cfd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:57.592 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.591+0000 7f1040b0c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f103c108810 0x7f103c198620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:57.592 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.591+0000 7f1040b0c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f103c108810 0x7f103c198620 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:42974/0 (socket says 192.168.123.105:42974) 2026-03-09T15:08:57.592 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.591+0000 7f1040b0c700 1 -- 192.168.123.105:0/1374914332 learned_addr learned my addr 192.168.123.105:0/1374914332 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:57.592 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.591+0000 7f1042d70700 1 -- 192.168.123.105:0/1374914332 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f103c199170 con 0x7f103c108810 2026-03-09T15:08:57.592 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.591+0000 7f1042d70700 1 -- 192.168.123.105:0/1374914332 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f103c1992e0 con 0x7f103c198b60 2026-03-09T15:08:57.592 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.591+0000 7f1040b0c700 1 -- 192.168.123.105:0/1374914332 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f103c198b60 msgr2=0x7f103c19cfd0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:08:57.592 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.591+0000 7f1040b0c700 1 --2- 192.168.123.105:0/1374914332 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f103c198b60 0x7f103c19cfd0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:57.592 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.591+0000 7f1040b0c700 1 -- 192.168.123.105:0/1374914332 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f10280097e0 con 0x7f103c108810 2026-03-09T15:08:57.592 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.592+0000 7f1040b0c700 1 --2- 192.168.123.105:0/1374914332 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f103c108810 0x7f103c198620 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7f103000d930 tx=0x7f103000dcf0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:57.593 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.592+0000 7f1039ffb700 1 -- 192.168.123.105:0/1374914332 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f10300041d0 con 0x7f103c108810 2026-03-09T15:08:57.593 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.592+0000 7f1039ffb700 1 -- 192.168.123.105:0/1374914332 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f1030004330 con 0x7f103c108810 2026-03-09T15:08:57.593 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.592+0000 7f1039ffb700 1 -- 192.168.123.105:0/1374914332 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f1030010460 con 0x7f103c108810 2026-03-09T15:08:57.594 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.592+0000 7f1042d70700 1 -- 192.168.123.105:0/1374914332 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f103c19d5d0 con 0x7f103c108810 2026-03-09T15:08:57.594 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.592+0000 7f1042d70700 1 -- 192.168.123.105:0/1374914332 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f103c19db20 con 0x7f103c108810 2026-03-09T15:08:57.594 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.593+0000 7f1042d70700 1 -- 192.168.123.105:0/1374914332 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f103c10ad80 con 0x7f103c108810 2026-03-09T15:08:57.597 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.596+0000 7f1039ffb700 1 -- 192.168.123.105:0/1374914332 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f1030009730 con 0x7f103c108810 2026-03-09T15:08:57.598 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.596+0000 7f1039ffb700 1 --2- 192.168.123.105:0/1374914332 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f102c077990 0x7f102c079e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:57.598 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.596+0000 7f1039ffb700 1 -- 192.168.123.105:0/1374914332 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f1030099680 con 0x7f103c108810 2026-03-09T15:08:57.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.596+0000 7f103bfff700 1 --2- 192.168.123.105:0/1374914332 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f102c077990 0x7f102c079e40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:57.599 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.598+0000 7f103bfff700 1 --2- 192.168.123.105:0/1374914332 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f102c077990 0x7f102c079e40 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f102800b5c0 tx=0x7f1028005dc0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:57.601 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.600+0000 7f1039ffb700 1 -- 192.168.123.105:0/1374914332 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f1030061d20 con 0x7f103c108810 2026-03-09T15:08:57.754 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.753+0000 7f1042d70700 1 -- 192.168.123.105:0/1374914332 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 17, "format": "json"} v 0) v1 -- 0x7f103c19de00 con 0x7f103c108810 2026-03-09T15:08:57.755 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.754+0000 7f1039ffb700 1 -- 192.168.123.105:0/1374914332 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 17, "format": "json"}]=0 dumped fsmap epoch 17 v35) v1 
==== 107+0+4996 (secure 0 0 0) 0x7f1030061470 con 0x7f103c108810 2026-03-09T15:08:57.755 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:57.756 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":17,"btime":"2026-03-09T15:06:32:106330+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":11},{"gid":34270,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/4257546649","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":4257546649},{"type":"v1","addr":"192.168.123.109:6825","nonce":4257546649}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable 
ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":17}],"filesystems":[{"mdsmap":{"epoch":17,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:32.106327+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":84,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned 
encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":14518},"failed":[],"damaged":[],"stopped":[],"info":{"gid_14518":{"gid":14518,"name":"cephfs.vm05.rrcyql","rank":0,"incarnation":14,"state":"up:active","state_seq":124,"addr":"192.168.123.105:6829/1321316558","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":1321316558},{"type":"v1","addr":"192.168.123.105:6829","nonce":1321316558}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":14518,"qdb_cluster":[14518]},"id":1}]} 2026-03-09T15:08:57.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.757+0000 7f1042d70700 1 -- 192.168.123.105:0/1374914332 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f102c077990 msgr2=0x7f102c079e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:57.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.757+0000 7f1042d70700 1 --2- 192.168.123.105:0/1374914332 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f102c077990 0x7f102c079e40 secure :-1 s=READY pgs=98 cs=0 l=1 rev1=1 crypto rx=0x7f102800b5c0 tx=0x7f1028005dc0 comp rx=0 tx=0).stop 2026-03-09T15:08:57.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.757+0000 
7f1042d70700 1 -- 192.168.123.105:0/1374914332 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f103c108810 msgr2=0x7f103c198620 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:57.758 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.757+0000 7f1042d70700 1 --2- 192.168.123.105:0/1374914332 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f103c108810 0x7f103c198620 secure :-1 s=READY pgs=146 cs=0 l=1 rev1=1 crypto rx=0x7f103000d930 tx=0x7f103000dcf0 comp rx=0 tx=0).stop 2026-03-09T15:08:57.759 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.758+0000 7f1042d70700 1 -- 192.168.123.105:0/1374914332 shutdown_connections 2026-03-09T15:08:57.759 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.758+0000 7f1042d70700 1 --2- 192.168.123.105:0/1374914332 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f102c077990 0x7f102c079e40 unknown :-1 s=CLOSED pgs=98 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:57.759 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.758+0000 7f1042d70700 1 --2- 192.168.123.105:0/1374914332 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f103c108810 0x7f103c198620 unknown :-1 s=CLOSED pgs=146 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:57.759 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.758+0000 7f1042d70700 1 --2- 192.168.123.105:0/1374914332 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f103c198b60 0x7f103c19cfd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:57.759 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.758+0000 7f1042d70700 1 -- 192.168.123.105:0/1374914332 >> 192.168.123.105:0/1374914332 conn(0x7f103c0fe330 msgr2=0x7f103c0ffd40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:57.759 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.758+0000 7f1042d70700 1 -- 192.168.123.105:0/1374914332 shutdown_connections 2026-03-09T15:08:57.759 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:57.758+0000 7f1042d70700 1 -- 192.168.123.105:0/1374914332 wait complete. 2026-03-09T15:08:57.760 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 17 2026-03-09T15:08:57.807 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 17 2026-03-09T15:08:57.807 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 18 2026-03-09T15:08:57.976 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:58.014 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:57 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/4019553939' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 16, "format": "json"}]: dispatch 2026-03-09T15:08:58.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:57 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/4019553939' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 16, "format": "json"}]: dispatch 2026-03-09T15:08:58.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.252+0000 7f99e2d2f700 1 -- 192.168.123.105:0/374230594 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99dc0ff2f0 msgr2=0x7f99dc0ff6c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:58.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.252+0000 7f99e2d2f700 1 --2- 192.168.123.105:0/374230594 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99dc0ff2f0 0x7f99dc0ff6c0 secure :-1 s=READY pgs=147 cs=0 l=1 rev1=1 crypto rx=0x7f99cc009b00 tx=0x7f99cc009e10 comp rx=0 tx=0).stop 2026-03-09T15:08:58.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.253+0000 7f99e2d2f700 1 -- 192.168.123.105:0/374230594 shutdown_connections 2026-03-09T15:08:58.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.253+0000 7f99e2d2f700 1 --2- 192.168.123.105:0/374230594 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f99dc0ffc00 0x7f99dc10c960 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:58.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.253+0000 7f99e2d2f700 1 --2- 192.168.123.105:0/374230594 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99dc0ff2f0 0x7f99dc0ff6c0 unknown :-1 s=CLOSED pgs=147 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:58.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.253+0000 7f99e2d2f700 1 -- 192.168.123.105:0/374230594 >> 192.168.123.105:0/374230594 conn(0x7f99dc0faf00 msgr2=0x7f99dc0fd310 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:58.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.253+0000 7f99e2d2f700 1 -- 192.168.123.105:0/374230594 shutdown_connections 2026-03-09T15:08:58.255 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.253+0000 7f99e2d2f700 1 -- 192.168.123.105:0/374230594 wait complete. 2026-03-09T15:08:58.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.254+0000 7f99e2d2f700 1 Processor -- start 2026-03-09T15:08:58.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.254+0000 7f99e2d2f700 1 -- start start 2026-03-09T15:08:58.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.254+0000 7f99e2d2f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99dc0ff2f0 0x7f99dc1982c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:58.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.254+0000 7f99e2d2f700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f99dc0ffc00 0x7f99dc198800 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:58.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.254+0000 7f99e2d2f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f99dc198ee0 con 0x7f99dc0ff2f0 2026-03-09T15:08:58.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.254+0000 7f99e2d2f700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f99dc19cc70 con 0x7f99dc0ffc00 2026-03-09T15:08:58.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.255+0000 7f99dbfff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f99dc0ffc00 0x7f99dc198800 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:58.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.255+0000 7f99dbfff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f99dc0ffc00 0x7f99dc198800 unknown :-1 
s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:45810/0 (socket says 192.168.123.105:45810) 2026-03-09T15:08:58.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.255+0000 7f99dbfff700 1 -- 192.168.123.105:0/4291865155 learned_addr learned my addr 192.168.123.105:0/4291865155 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:58.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.255+0000 7f99e0acb700 1 --2- 192.168.123.105:0/4291865155 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99dc0ff2f0 0x7f99dc1982c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:58.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.255+0000 7f99dbfff700 1 -- 192.168.123.105:0/4291865155 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99dc0ff2f0 msgr2=0x7f99dc1982c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:58.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.255+0000 7f99dbfff700 1 --2- 192.168.123.105:0/4291865155 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99dc0ff2f0 0x7f99dc1982c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:58.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.255+0000 7f99dbfff700 1 -- 192.168.123.105:0/4291865155 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f99cc0097e0 con 0x7f99dc0ffc00 2026-03-09T15:08:58.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.255+0000 7f99e0acb700 1 --2- 192.168.123.105:0/4291865155 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99dc0ff2f0 0x7f99dc1982c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:08:58.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.256+0000 7f99dbfff700 1 --2- 192.168.123.105:0/4291865155 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f99dc0ffc00 0x7f99dc198800 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f99d000eb10 tx=0x7f99d000eed0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:58.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.256+0000 7f99d9ffb700 1 -- 192.168.123.105:0/4291865155 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f99d000cca0 con 0x7f99dc0ffc00 2026-03-09T15:08:58.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.256+0000 7f99e2d2f700 1 -- 192.168.123.105:0/4291865155 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f99dc19cf50 con 0x7f99dc0ffc00 2026-03-09T15:08:58.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.256+0000 7f99e2d2f700 1 -- 192.168.123.105:0/4291865155 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f99dc19d4a0 con 0x7f99dc0ffc00 2026-03-09T15:08:58.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.256+0000 7f99d9ffb700 1 -- 192.168.123.105:0/4291865155 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f99d000ce00 con 0x7f99dc0ffc00 2026-03-09T15:08:58.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.257+0000 7f99d9ffb700 1 -- 192.168.123.105:0/4291865155 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f99d00189c0 con 0x7f99dc0ffc00 2026-03-09T15:08:58.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.257+0000 7f99d9ffb700 1 -- 192.168.123.105:0/4291865155 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 
38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f99d0018bf0 con 0x7f99dc0ffc00 2026-03-09T15:08:58.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.257+0000 7f99e2d2f700 1 -- 192.168.123.105:0/4291865155 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f99dc10a0d0 con 0x7f99dc0ffc00 2026-03-09T15:08:58.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.258+0000 7f99d9ffb700 1 --2- 192.168.123.105:0/4291865155 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f99c4077870 0x7f99c4079d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:58.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.258+0000 7f99d9ffb700 1 -- 192.168.123.105:0/4291865155 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f99d0014070 con 0x7f99dc0ffc00 2026-03-09T15:08:58.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.258+0000 7f99e0acb700 1 --2- 192.168.123.105:0/4291865155 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f99c4077870 0x7f99c4079d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:58.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.259+0000 7f99e0acb700 1 --2- 192.168.123.105:0/4291865155 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f99c4077870 0x7f99c4079d20 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f99cc000c00 tx=0x7f99cc005fb0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:58.262 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.261+0000 7f99d9ffb700 1 -- 192.168.123.105:0/4291865155 <== mon.1 v2:192.168.123.109:3300/0 6 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f99d0062d00 con 0x7f99dc0ffc00 2026-03-09T15:08:58.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.405+0000 7f99e2d2f700 1 -- 192.168.123.105:0/4291865155 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 18, "format": "json"} v 0) v1 -- 0x7f99dc04ea50 con 0x7f99dc0ffc00 2026-03-09T15:08:58.406 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.405+0000 7f99d9ffb700 1 -- 192.168.123.105:0/4291865155 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 18, "format": "json"}]=0 dumped fsmap epoch 18 v35) v1 ==== 107+0+4191 (secure 0 0 0) 0x7f99d0062450 con 0x7f99dc0ffc00 2026-03-09T15:08:58.407 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:58.407 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":18,"btime":"2026-03-09T15:06:35:380101+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":2,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode 
in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":11},{"gid":34270,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/4257546649","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":4257546649},{"type":"v1","addr":"192.168.123.109:6825","nonce":4257546649}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":17}],"filesystems":[{"mdsmap":{"epoch":18,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:35.380098+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":87,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{},"failed":[0],"damaged":[],"stopped":[],"info":{},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:08:58.410 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.409+0000 7f99e2d2f700 1 -- 192.168.123.105:0/4291865155 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f99c4077870 msgr2=0x7f99c4079d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:58.410 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.409+0000 7f99e2d2f700 1 --2- 192.168.123.105:0/4291865155 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f99c4077870 0x7f99c4079d20 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f99cc000c00 tx=0x7f99cc005fb0 comp rx=0 tx=0).stop 2026-03-09T15:08:58.410 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.409+0000 7f99e2d2f700 1 -- 192.168.123.105:0/4291865155 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f99dc0ffc00 msgr2=0x7f99dc198800 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:58.410 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.409+0000 7f99e2d2f700 1 --2- 192.168.123.105:0/4291865155 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f99dc0ffc00 0x7f99dc198800 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f99d000eb10 tx=0x7f99d000eed0 comp rx=0 tx=0).stop 2026-03-09T15:08:58.410 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.409+0000 7f99e2d2f700 1 -- 192.168.123.105:0/4291865155 shutdown_connections 2026-03-09T15:08:58.410 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.409+0000 7f99e2d2f700 1 --2- 192.168.123.105:0/4291865155 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f99c4077870 0x7f99c4079d20 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:58.410 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.409+0000 7f99e2d2f700 1 --2- 192.168.123.105:0/4291865155 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f99dc0ff2f0 0x7f99dc1982c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:58.410 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.409+0000 7f99e2d2f700 1 --2- 192.168.123.105:0/4291865155 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f99dc0ffc00 0x7f99dc198800 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:58.410 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.409+0000 7f99e2d2f700 1 -- 192.168.123.105:0/4291865155 >> 192.168.123.105:0/4291865155 conn(0x7f99dc0faf00 msgr2=0x7f99dc0fc430 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T15:08:58.410 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.410+0000 7f99e2d2f700 1 -- 192.168.123.105:0/4291865155 shutdown_connections 2026-03-09T15:08:58.410 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.410+0000 7f99e2d2f700 1 -- 192.168.123.105:0/4291865155 wait complete. 2026-03-09T15:08:58.411 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 18 2026-03-09T15:08:58.460 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 18 2026-03-09T15:08:58.460 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 19 2026-03-09T15:08:58.618 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:58.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.900+0000 7f400a014700 1 -- 192.168.123.105:0/845072235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4004068490 msgr2=0x7f4004068900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:58.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.900+0000 7f400a014700 1 --2- 192.168.123.105:0/845072235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4004068490 0x7f4004068900 secure :-1 s=READY pgs=148 cs=0 l=1 rev1=1 crypto rx=0x7f3ff4009b00 tx=0x7f3ff4009e10 comp rx=0 tx=0).stop 2026-03-09T15:08:58.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.901+0000 7f400a014700 1 -- 192.168.123.105:0/845072235 shutdown_connections 2026-03-09T15:08:58.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.901+0000 7f400a014700 1 --2- 192.168.123.105:0/845072235 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4004068490 0x7f4004068900 unknown :-1 s=CLOSED pgs=148 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:08:58.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.901+0000 7f400a014700 1 --2- 192.168.123.105:0/845072235 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f40041066c0 0x7f4004106a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:58.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.901+0000 7f400a014700 1 -- 192.168.123.105:0/845072235 >> 192.168.123.105:0/845072235 conn(0x7f40040754a0 msgr2=0x7f40040758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:58.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.901+0000 7f400a014700 1 -- 192.168.123.105:0/845072235 shutdown_connections 2026-03-09T15:08:58.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.901+0000 7f400a014700 1 -- 192.168.123.105:0/845072235 wait complete. 2026-03-09T15:08:58.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.902+0000 7f400a014700 1 Processor -- start 2026-03-09T15:08:58.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.902+0000 7f400a014700 1 -- start start 2026-03-09T15:08:58.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.902+0000 7f400a014700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4004068490 0x7f40041962b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:58.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.902+0000 7f400a014700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f40041066c0 0x7f40041967f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:58.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.902+0000 7f400a014700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f4004196ed0 con 0x7f4004068490 2026-03-09T15:08:58.903 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.902+0000 7f400a014700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f400419ac60 con 0x7f40041066c0 2026-03-09T15:08:58.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.902+0000 7f4002ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f40041066c0 0x7f40041967f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:58.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.902+0000 7f4002ffd700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f40041066c0 0x7f40041967f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:57152/0 (socket says 192.168.123.105:57152) 2026-03-09T15:08:58.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.902+0000 7f4002ffd700 1 -- 192.168.123.105:0/3446290214 learned_addr learned my addr 192.168.123.105:0/3446290214 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:58.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.903+0000 7f40037fe700 1 --2- 192.168.123.105:0/3446290214 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4004068490 0x7f40041962b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:58.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.903+0000 7f4002ffd700 1 -- 192.168.123.105:0/3446290214 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4004068490 msgr2=0x7f40041962b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:58.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.903+0000 7f4002ffd700 1 --2- 
192.168.123.105:0/3446290214 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4004068490 0x7f40041962b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:58.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.903+0000 7f4002ffd700 1 -- 192.168.123.105:0/3446290214 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f3ff40097e0 con 0x7f40041066c0 2026-03-09T15:08:58.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.903+0000 7f40037fe700 1 --2- 192.168.123.105:0/3446290214 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4004068490 0x7f40041962b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:08:58.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.903+0000 7f4002ffd700 1 --2- 192.168.123.105:0/3446290214 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f40041066c0 0x7f40041967f0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f3ff400b5c0 tx=0x7f3ff40049b0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:58.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.904+0000 7f4000ff9700 1 -- 192.168.123.105:0/3446290214 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3ff401d070 con 0x7f40041066c0 2026-03-09T15:08:58.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.904+0000 7f400a014700 1 -- 192.168.123.105:0/3446290214 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f400419aee0 con 0x7f40041066c0 2026-03-09T15:08:58.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.904+0000 7f4000ff9700 1 -- 192.168.123.105:0/3446290214 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 
(secure 0 0 0) 0x7f3ff400bc50 con 0x7f40041066c0 2026-03-09T15:08:58.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.904+0000 7f4000ff9700 1 -- 192.168.123.105:0/3446290214 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f3ff400f6b0 con 0x7f40041066c0 2026-03-09T15:08:58.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.904+0000 7f400a014700 1 -- 192.168.123.105:0/3446290214 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f400419b3d0 con 0x7f40041066c0 2026-03-09T15:08:58.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.905+0000 7f400a014700 1 -- 192.168.123.105:0/3446290214 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f4004103950 con 0x7f40041066c0 2026-03-09T15:08:58.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.905+0000 7f4000ff9700 1 -- 192.168.123.105:0/3446290214 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f3ff4044b60 con 0x7f40041066c0 2026-03-09T15:08:58.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.906+0000 7f4000ff9700 1 --2- 192.168.123.105:0/3446290214 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3ff0077920 0x7f3ff0079dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:58.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.906+0000 7f4000ff9700 1 -- 192.168.123.105:0/3446290214 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f3ff409b5f0 con 0x7f40041066c0 2026-03-09T15:08:58.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.906+0000 7f40037fe700 1 --2- 192.168.123.105:0/3446290214 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] 
conn(0x7f3ff0077920 0x7f3ff0079dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:58.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.907+0000 7f40037fe700 1 --2- 192.168.123.105:0/3446290214 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3ff0077920 0x7f3ff0079dd0 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f3fec007900 tx=0x7f3fec008040 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:58.911 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:58.910+0000 7f4000ff9700 1 -- 192.168.123.105:0/3446290214 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f3ff4063c10 con 0x7f40041066c0 2026-03-09T15:08:59.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:58 vm05.local ceph-mon[116516]: pgmap v213: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:59.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:58 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/1374914332' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 17, "format": "json"}]: dispatch 2026-03-09T15:08:59.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:58 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/4291865155' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 18, "format": "json"}]: dispatch 2026-03-09T15:08:59.058 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.057+0000 7f400a014700 1 -- 192.168.123.105:0/3446290214 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 19, "format": "json"} v 0) v1 -- 0x7f400404ea50 con 0x7f40041066c0 2026-03-09T15:08:59.059 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.058+0000 7f4000ff9700 1 -- 192.168.123.105:0/3446290214 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 19, "format": "json"}]=0 dumped fsmap epoch 19 v35) v1 ==== 107+0+4202 (secure 0 0 0) 0x7f3ff4063360 con 0x7f40041066c0 2026-03-09T15:08:59.059 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:59.059 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":19,"btime":"2026-03-09T15:06:35:391498+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34270,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/4257546649","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":4257546649},{"type":"v1","addr":"192.168.123.109:6825","nonce":4257546649}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate 
object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":17}],"filesystems":[{"mdsmap":{"epoch":19,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:35.391474+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":87,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline 
data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":24317},"failed":[],"damaged":[],"stopped":[],"info":{"gid_24317":{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":0,"incarnation":19,"state":"up:replay","state_seq":2,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:08:59.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.060+0000 7f400a014700 1 -- 192.168.123.105:0/3446290214 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3ff0077920 msgr2=0x7f3ff0079dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:59.061 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.060+0000 7f400a014700 1 --2- 192.168.123.105:0/3446290214 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3ff0077920 0x7f3ff0079dd0 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f3fec007900 tx=0x7f3fec008040 comp rx=0 tx=0).stop 2026-03-09T15:08:59.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.061+0000 7f400a014700 1 -- 192.168.123.105:0/3446290214 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f40041066c0 msgr2=0x7f40041967f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:59.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.061+0000 7f400a014700 1 --2- 192.168.123.105:0/3446290214 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f40041066c0 0x7f40041967f0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f3ff400b5c0 tx=0x7f3ff40049b0 comp rx=0 tx=0).stop 2026-03-09T15:08:59.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.061+0000 7f400a014700 1 -- 192.168.123.105:0/3446290214 shutdown_connections 2026-03-09T15:08:59.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.061+0000 7f400a014700 1 --2- 192.168.123.105:0/3446290214 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f3ff0077920 0x7f3ff0079dd0 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:59.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.061+0000 7f400a014700 1 --2- 192.168.123.105:0/3446290214 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f4004068490 0x7f40041962b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:59.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.061+0000 7f400a014700 1 --2- 192.168.123.105:0/3446290214 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f40041066c0 0x7f40041967f0 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:59.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.061+0000 7f400a014700 1 -- 192.168.123.105:0/3446290214 >> 192.168.123.105:0/3446290214 conn(0x7f40040754a0 msgr2=0x7f40040fee20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:59.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.061+0000 7f400a014700 1 -- 
192.168.123.105:0/3446290214 shutdown_connections 2026-03-09T15:08:59.062 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.062+0000 7f400a014700 1 -- 192.168.123.105:0/3446290214 wait complete. 2026-03-09T15:08:59.063 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 19 2026-03-09T15:08:59.114 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 19 2026-03-09T15:08:59.114 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 20 2026-03-09T15:08:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:58 vm09.local ceph-mon[98742]: pgmap v213: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:08:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:58 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/1374914332' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 17, "format": "json"}]: dispatch 2026-03-09T15:08:59.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:58 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/4291865155' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 18, "format": "json"}]: dispatch 2026-03-09T15:08:59.271 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:59.532 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.531+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1228903736 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8a0071e40 msgr2=0x7fc8a00722b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:59.532 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.531+0000 7fc8a48cf700 1 --2- 192.168.123.105:0/1228903736 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8a0071e40 0x7fc8a00722b0 secure :-1 s=READY pgs=149 cs=0 l=1 rev1=1 crypto rx=0x7fc89800d700 tx=0x7fc89800da10 comp rx=0 tx=0).stop 2026-03-09T15:08:59.533 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.532+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1228903736 shutdown_connections 2026-03-09T15:08:59.533 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.532+0000 7fc8a48cf700 1 --2- 192.168.123.105:0/1228903736 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8a0071e40 0x7fc8a00722b0 unknown :-1 s=CLOSED pgs=149 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:59.533 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.532+0000 7fc8a48cf700 1 --2- 192.168.123.105:0/1228903736 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc8a010c8f0 0x7fc8a010ccc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:59.533 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.532+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1228903736 >> 192.168.123.105:0/1228903736 conn(0x7fc8a006c6c0 msgr2=0x7fc8a006cac0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:59.533 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.532+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1228903736 shutdown_connections 2026-03-09T15:08:59.533 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.532+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1228903736 wait complete. 2026-03-09T15:08:59.533 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.533+0000 7fc8a48cf700 1 Processor -- start 2026-03-09T15:08:59.533 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.533+0000 7fc8a48cf700 1 -- start start 2026-03-09T15:08:59.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.533+0000 7fc8a48cf700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc8a0071e40 0x7fc8a00833e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:59.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.533+0000 7fc8a48cf700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8a010c8f0 0x7fc8a0083920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:59.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.533+0000 7fc8a48cf700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc8a010a4a0 con 0x7fc8a010c8f0 2026-03-09T15:08:59.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.533+0000 7fc89e59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8a010c8f0 0x7fc8a0083920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:59.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.533+0000 7fc89e59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8a010c8f0 0x7fc8a0083920 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:58166/0 (socket says 192.168.123.105:58166) 2026-03-09T15:08:59.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.533+0000 7fc89e59c700 1 -- 192.168.123.105:0/1599538308 learned_addr learned my addr 192.168.123.105:0/1599538308 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:08:59.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.533+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1599538308 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fc8a0108400 con 0x7fc8a0071e40 2026-03-09T15:08:59.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.534+0000 7fc89ed9d700 1 --2- 192.168.123.105:0/1599538308 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc8a0071e40 0x7fc8a00833e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:59.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.534+0000 7fc89e59c700 1 -- 192.168.123.105:0/1599538308 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc8a0071e40 msgr2=0x7fc8a00833e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:59.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.534+0000 7fc89e59c700 1 --2- 192.168.123.105:0/1599538308 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc8a0071e40 0x7fc8a00833e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:59.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.534+0000 7fc89e59c700 1 -- 192.168.123.105:0/1599538308 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fc898007ed0 con 0x7fc8a010c8f0 2026-03-09T15:08:59.535 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.534+0000 7fc89e59c700 1 --2- 192.168.123.105:0/1599538308 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8a010c8f0 0x7fc8a0083920 secure :-1 s=READY pgs=150 cs=0 l=1 rev1=1 crypto rx=0x7fc89800f8e0 tx=0x7fc898004320 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:59.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.535+0000 7fc887fff700 1 -- 192.168.123.105:0/1599538308 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc898004720 con 0x7fc8a010c8f0 2026-03-09T15:08:59.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.535+0000 7fc887fff700 1 -- 192.168.123.105:0/1599538308 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fc898003e90 con 0x7fc8a010c8f0 2026-03-09T15:08:59.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.535+0000 7fc887fff700 1 -- 192.168.123.105:0/1599538308 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fc89800e770 con 0x7fc8a010c8f0 2026-03-09T15:08:59.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.535+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1599538308 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fc8a0108680 con 0x7fc8a010c8f0 2026-03-09T15:08:59.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.535+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1599538308 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fc8a0108b70 con 0x7fc8a010c8f0 2026-03-09T15:08:59.537 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.536+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1599538308 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 
0x7fc8a0076620 con 0x7fc8a010c8f0 2026-03-09T15:08:59.541 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.540+0000 7fc887fff700 1 -- 192.168.123.105:0/1599538308 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fc898004880 con 0x7fc8a010c8f0 2026-03-09T15:08:59.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.541+0000 7fc887fff700 1 --2- 192.168.123.105:0/1599538308 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc8880778c0 0x7fc888079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:08:59.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.541+0000 7fc89ed9d700 1 --2- 192.168.123.105:0/1599538308 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc8880778c0 0x7fc888079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:08:59.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.541+0000 7fc887fff700 1 -- 192.168.123.105:0/1599538308 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fc898013070 con 0x7fc8a010c8f0 2026-03-09T15:08:59.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.541+0000 7fc887fff700 1 -- 192.168.123.105:0/1599538308 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fc89809c710 con 0x7fc8a010c8f0 2026-03-09T15:08:59.543 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.542+0000 7fc89ed9d700 1 --2- 192.168.123.105:0/1599538308 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc8880778c0 0x7fc888079d70 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7fc890009910 tx=0x7fc890008040 comp rx=0 tx=0).ready 
entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:08:59.684 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.683+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1599538308 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 20, "format": "json"} v 0) v1 -- 0x7fc8a004f2a0 con 0x7fc8a010c8f0 2026-03-09T15:08:59.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.685+0000 7fc887fff700 1 -- 192.168.123.105:0/1599538308 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 20, "format": "json"}]=0 dumped fsmap epoch 20 v35) v1 ==== 107+0+4207 (secure 0 0 0) 0x7fc898064ae0 con 0x7fc8a010c8f0 2026-03-09T15:08:59.687 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:08:59.687 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":20,"btime":"2026-03-09T15:06:39:882564+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34270,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/4257546649","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":4257546649},{"type":"v1","addr":"192.168.123.109:6825","nonce":4257546649}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned 
encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":17}],"filesystems":[{"mdsmap":{"epoch":20,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:39.832418+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":87,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file 
layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":24317},"failed":[],"damaged":[],"stopped":[],"info":{"gid_24317":{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":0,"incarnation":19,"state":"up:reconnect","state_seq":125,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:08:59.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.688+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1599538308 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc8880778c0 msgr2=0x7fc888079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:59.689 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.689+0000 7fc8a48cf700 1 --2- 192.168.123.105:0/1599538308 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc8880778c0 0x7fc888079d70 secure :-1 s=READY pgs=101 cs=0 l=1 rev1=1 crypto rx=0x7fc890009910 tx=0x7fc890008040 comp rx=0 tx=0).stop 2026-03-09T15:08:59.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.689+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1599538308 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8a010c8f0 
msgr2=0x7fc8a0083920 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:08:59.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.689+0000 7fc8a48cf700 1 --2- 192.168.123.105:0/1599538308 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8a010c8f0 0x7fc8a0083920 secure :-1 s=READY pgs=150 cs=0 l=1 rev1=1 crypto rx=0x7fc89800f8e0 tx=0x7fc898004320 comp rx=0 tx=0).stop 2026-03-09T15:08:59.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.689+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1599538308 shutdown_connections 2026-03-09T15:08:59.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.689+0000 7fc8a48cf700 1 --2- 192.168.123.105:0/1599538308 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fc8880778c0 0x7fc888079d70 unknown :-1 s=CLOSED pgs=101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:59.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.689+0000 7fc8a48cf700 1 --2- 192.168.123.105:0/1599538308 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fc8a0071e40 0x7fc8a00833e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:59.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.690+0000 7fc8a48cf700 1 --2- 192.168.123.105:0/1599538308 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fc8a010c8f0 0x7fc8a0083920 unknown :-1 s=CLOSED pgs=150 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:08:59.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.690+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1599538308 >> 192.168.123.105:0/1599538308 conn(0x7fc8a006c6c0 msgr2=0x7fc8a010bc80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:08:59.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.690+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1599538308 shutdown_connections 2026-03-09T15:08:59.691 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:08:59.690+0000 7fc8a48cf700 1 -- 192.168.123.105:0/1599538308 wait complete. 2026-03-09T15:08:59.692 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 20 2026-03-09T15:08:59.742 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 20 2026-03-09T15:08:59.742 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 21 2026-03-09T15:08:59.896 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:08:59.938 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:59 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3446290214' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 19, "format": "json"}]: dispatch 2026-03-09T15:08:59.938 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:08:59 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/1599538308' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 20, "format": "json"}]: dispatch 2026-03-09T15:09:00.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:59 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3446290214' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 19, "format": "json"}]: dispatch 2026-03-09T15:09:00.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:08:59 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/1599538308' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 20, "format": "json"}]: dispatch 2026-03-09T15:09:00.249 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.247+0000 7efee35b8700 1 -- 192.168.123.105:0/4084942054 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efedc102780 msgr2=0x7efedc102bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:00.249 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.247+0000 7efee35b8700 1 --2- 192.168.123.105:0/4084942054 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efedc102780 0x7efedc102bf0 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7efed8009b00 tx=0x7efed8009e10 comp rx=0 tx=0).stop 2026-03-09T15:09:00.249 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.248+0000 7efee35b8700 1 -- 192.168.123.105:0/4084942054 shutdown_connections 2026-03-09T15:09:00.249 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.248+0000 7efee35b8700 1 --2- 192.168.123.105:0/4084942054 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efedc102780 0x7efedc102bf0 unknown :-1 s=CLOSED pgs=151 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:00.249 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.248+0000 7efee35b8700 1 --2- 192.168.123.105:0/4084942054 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efedc108780 0x7efedc108b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:00.249 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.248+0000 7efee35b8700 1 -- 192.168.123.105:0/4084942054 >> 192.168.123.105:0/4084942054 conn(0x7efedc0fe280 msgr2=0x7efedc100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:00.249 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.248+0000 7efee35b8700 1 -- 192.168.123.105:0/4084942054 shutdown_connections 
2026-03-09T15:09:00.249 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.248+0000 7efee35b8700 1 -- 192.168.123.105:0/4084942054 wait complete. 2026-03-09T15:09:00.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.249+0000 7efee35b8700 1 Processor -- start 2026-03-09T15:09:00.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.249+0000 7efee35b8700 1 -- start start 2026-03-09T15:09:00.250 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.249+0000 7efee35b8700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efedc102780 0x7efedc198390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:00.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.249+0000 7efee35b8700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efedc108780 0x7efedc1988d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:00.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.250+0000 7efee35b8700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efedc198f20 con 0x7efedc108780 2026-03-09T15:09:00.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.250+0000 7efee35b8700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7efedc199060 con 0x7efedc102780 2026-03-09T15:09:00.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.250+0000 7efee0b53700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efedc108780 0x7efedc1988d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:00.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.250+0000 7efee0b53700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efedc108780 
0x7efedc1988d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:58178/0 (socket says 192.168.123.105:58178) 2026-03-09T15:09:00.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.250+0000 7efee0b53700 1 -- 192.168.123.105:0/1130931442 learned_addr learned my addr 192.168.123.105:0/1130931442 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:00.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.250+0000 7efee0b53700 1 -- 192.168.123.105:0/1130931442 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efedc102780 msgr2=0x7efedc198390 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:09:00.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.250+0000 7efee0b53700 1 --2- 192.168.123.105:0/1130931442 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efedc102780 0x7efedc198390 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:00.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.250+0000 7efee0b53700 1 -- 192.168.123.105:0/1130931442 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7efed80097e0 con 0x7efedc108780 2026-03-09T15:09:00.251 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.250+0000 7efee0b53700 1 --2- 192.168.123.105:0/1130931442 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efedc108780 0x7efedc1988d0 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7efed8009ad0 tx=0x7efed80052e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:00.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.250+0000 7efece7fc700 1 -- 192.168.123.105:0/1130931442 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 
0x7efed801d070 con 0x7efedc108780 2026-03-09T15:09:00.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.250+0000 7efece7fc700 1 -- 192.168.123.105:0/1130931442 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7efed800bc50 con 0x7efedc108780 2026-03-09T15:09:00.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.250+0000 7efece7fc700 1 -- 192.168.123.105:0/1130931442 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7efed800f7d0 con 0x7efedc108780 2026-03-09T15:09:00.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.251+0000 7efee35b8700 1 -- 192.168.123.105:0/1130931442 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7efedc19ce00 con 0x7efedc108780 2026-03-09T15:09:00.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.251+0000 7efee35b8700 1 -- 192.168.123.105:0/1130931442 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7efedc19d2f0 con 0x7efedc108780 2026-03-09T15:09:00.253 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.252+0000 7efee35b8700 1 -- 192.168.123.105:0/1130931442 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7efedc04ea50 con 0x7efedc108780 2026-03-09T15:09:00.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.253+0000 7efece7fc700 1 -- 192.168.123.105:0/1130931442 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7efed800f930 con 0x7efedc108780 2026-03-09T15:09:00.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.253+0000 7efece7fc700 1 --2- 192.168.123.105:0/1130931442 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7efec80779e0 0x7efec8079e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0).connect 2026-03-09T15:09:00.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.253+0000 7efece7fc700 1 -- 192.168.123.105:0/1130931442 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7efed809b4a0 con 0x7efedc108780 2026-03-09T15:09:00.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.256+0000 7efece7fc700 1 -- 192.168.123.105:0/1130931442 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7efed8063b40 con 0x7efedc108780 2026-03-09T15:09:00.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.256+0000 7efee1354700 1 --2- 192.168.123.105:0/1130931442 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7efec80779e0 0x7efec8079e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:00.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.257+0000 7efee1354700 1 --2- 192.168.123.105:0/1130931442 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7efec80779e0 0x7efec8079e90 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7efedc1038c0 tx=0x7efed0005cb0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:00.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.402+0000 7efee35b8700 1 -- 192.168.123.105:0/1130931442 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 21, "format": "json"} v 0) v1 -- 0x7efedc066e40 con 0x7efedc108780 2026-03-09T15:09:00.404 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.403+0000 7efece7fc700 1 -- 192.168.123.105:0/1130931442 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 21, "format": 
"json"}]=0 dumped fsmap epoch 21 v35) v1 ==== 107+0+4204 (secure 0 0 0) 0x7efed8063290 con 0x7efedc108780 2026-03-09T15:09:00.404 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:00.404 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":21,"btime":"2026-03-09T15:06:40:928071+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34270,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/4257546649","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":4257546649},{"type":"v1","addr":"192.168.123.109:6825","nonce":4257546649}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base 
v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":17}],"filesystems":[{"mdsmap":{"epoch":21,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:39.932380+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":87,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":24317},"failed":[],"damaged":[],"stopped":[],"info":{"gid_24317":{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":0,"incarnation":19,"state":"up:rejoin","state_seq":126,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable 
ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:09:00.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.406+0000 7efee35b8700 1 -- 192.168.123.105:0/1130931442 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7efec80779e0 msgr2=0x7efec8079e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:00.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.406+0000 7efee35b8700 1 --2- 192.168.123.105:0/1130931442 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7efec80779e0 0x7efec8079e90 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7efedc1038c0 tx=0x7efed0005cb0 comp rx=0 tx=0).stop 2026-03-09T15:09:00.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.406+0000 7efee35b8700 1 -- 192.168.123.105:0/1130931442 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efedc108780 msgr2=0x7efedc1988d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:00.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.406+0000 7efee35b8700 1 --2- 192.168.123.105:0/1130931442 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efedc108780 0x7efedc1988d0 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7efed8009ad0 tx=0x7efed80052e0 comp rx=0 tx=0).stop 2026-03-09T15:09:00.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.406+0000 7efee35b8700 1 -- 192.168.123.105:0/1130931442 shutdown_connections 2026-03-09T15:09:00.407 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.406+0000 7efee35b8700 1 --2- 192.168.123.105:0/1130931442 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7efec80779e0 0x7efec8079e90 secure :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7efedc1038c0 tx=0x7efed0005cb0 comp rx=0 tx=0).stop 2026-03-09T15:09:00.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.406+0000 7efee35b8700 1 --2- 192.168.123.105:0/1130931442 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7efedc102780 0x7efedc198390 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:00.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.406+0000 7efee35b8700 1 --2- 192.168.123.105:0/1130931442 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7efedc108780 0x7efedc1988d0 unknown :-1 s=CLOSED pgs=152 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:00.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.406+0000 7efee35b8700 1 -- 192.168.123.105:0/1130931442 >> 192.168.123.105:0/1130931442 conn(0x7efedc0fe280 msgr2=0x7efedc0ffbd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:00.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.407+0000 7efee35b8700 1 -- 192.168.123.105:0/1130931442 shutdown_connections 2026-03-09T15:09:00.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.407+0000 7efee35b8700 1 -- 192.168.123.105:0/1130931442 wait complete. 
2026-03-09T15:09:00.408 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 21 2026-03-09T15:09:00.449 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 21 2026-03-09T15:09:00.449 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 22 2026-03-09T15:09:00.603 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:00.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.870+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/2788532100 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbf840730f0 msgr2=0x7fbf840734c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:00.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.870+0000 7fbf8a8b9700 1 --2- 192.168.123.105:0/2788532100 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbf840730f0 0x7fbf840734c0 secure :-1 s=READY pgs=153 cs=0 l=1 rev1=1 crypto rx=0x7fbf74009b50 tx=0x7fbf74009e60 comp rx=0 tx=0).stop 2026-03-09T15:09:00.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.871+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/2788532100 shutdown_connections 2026-03-09T15:09:00.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.871+0000 7fbf8a8b9700 1 --2- 192.168.123.105:0/2788532100 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbf84073a00 0x7fbf8410c850 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:00.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.871+0000 7fbf8a8b9700 1 --2- 192.168.123.105:0/2788532100 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbf840730f0 0x7fbf840734c0 unknown :-1 s=CLOSED pgs=153 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:00.872 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.871+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/2788532100 >> 192.168.123.105:0/2788532100 conn(0x7fbf840fc050 msgr2=0x7fbf840fe460 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:00.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.872+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/2788532100 shutdown_connections 2026-03-09T15:09:00.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.872+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/2788532100 wait complete. 2026-03-09T15:09:00.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.872+0000 7fbf8a8b9700 1 Processor -- start 2026-03-09T15:09:00.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.873+0000 7fbf8a8b9700 1 -- start start 2026-03-09T15:09:00.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.873+0000 7fbf8a8b9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbf840730f0 0x7fbf841982a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:00.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.873+0000 7fbf8a8b9700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbf84073a00 0x7fbf84198910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:00.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.873+0000 7fbf8a8b9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbf8419cc50 con 0x7fbf840730f0 2026-03-09T15:09:00.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.873+0000 7fbf8a8b9700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fbf8419cdc0 con 0x7fbf84073a00 2026-03-09T15:09:00.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.873+0000 7fbf837fe700 1 --2- >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbf84073a00 0x7fbf84198910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:00.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.873+0000 7fbf837fe700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbf84073a00 0x7fbf84198910 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:57200/0 (socket says 192.168.123.105:57200) 2026-03-09T15:09:00.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.873+0000 7fbf837fe700 1 -- 192.168.123.105:0/3398297017 learned_addr learned my addr 192.168.123.105:0/3398297017 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:00.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.874+0000 7fbf837fe700 1 -- 192.168.123.105:0/3398297017 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbf840730f0 msgr2=0x7fbf841982a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:00.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.874+0000 7fbf83fff700 1 --2- 192.168.123.105:0/3398297017 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbf840730f0 0x7fbf841982a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:00.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.874+0000 7fbf837fe700 1 --2- 192.168.123.105:0/3398297017 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbf840730f0 0x7fbf841982a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:00.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.874+0000 7fbf837fe700 1 -- 
192.168.123.105:0/3398297017 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fbf740097e0 con 0x7fbf84073a00 2026-03-09T15:09:00.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.874+0000 7fbf83fff700 1 --2- 192.168.123.105:0/3398297017 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbf840730f0 0x7fbf841982a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:09:00.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.874+0000 7fbf837fe700 1 --2- 192.168.123.105:0/3398297017 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbf84073a00 0x7fbf84198910 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7fbf6c00eb10 tx=0x7fbf6c00eed0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:00.875 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.874+0000 7fbf817fa700 1 -- 192.168.123.105:0/3398297017 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbf6c00cca0 con 0x7fbf84073a00 2026-03-09T15:09:00.876 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.875+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/3398297017 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fbf8419d0a0 con 0x7fbf84073a00 2026-03-09T15:09:00.876 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.875+0000 7fbf817fa700 1 -- 192.168.123.105:0/3398297017 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fbf6c00ce00 con 0x7fbf84073a00 2026-03-09T15:09:00.876 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.875+0000 7fbf817fa700 1 -- 192.168.123.105:0/3398297017 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fbf6c0189c0 con 0x7fbf84073a00 
2026-03-09T15:09:00.876 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.875+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/3398297017 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fbf8419d5f0 con 0x7fbf84073a00 2026-03-09T15:09:00.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.876+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/3398297017 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fbf84109f50 con 0x7fbf84073a00 2026-03-09T15:09:00.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.876+0000 7fbf817fa700 1 -- 192.168.123.105:0/3398297017 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fbf6c018b20 con 0x7fbf84073a00 2026-03-09T15:09:00.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.877+0000 7fbf817fa700 1 --2- 192.168.123.105:0/3398297017 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbf700778c0 0x7fbf70079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:00.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.877+0000 7fbf83fff700 1 --2- 192.168.123.105:0/3398297017 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbf700778c0 0x7fbf70079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:00.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.877+0000 7fbf817fa700 1 -- 192.168.123.105:0/3398297017 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fbf6c014070 con 0x7fbf84073a00 2026-03-09T15:09:00.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.877+0000 7fbf83fff700 1 --2- 192.168.123.105:0/3398297017 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbf700778c0 0x7fbf70079d70 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fbf7400b5c0 tx=0x7fbf740058e0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:00.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:00.880+0000 7fbf817fa700 1 -- 192.168.123.105:0/3398297017 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fbf6c0d09f0 con 0x7fbf84073a00 2026-03-09T15:09:01.026 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:00 vm05.local ceph-mon[116516]: pgmap v214: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:09:01.026 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:00 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/1130931442' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 21, "format": "json"}]: dispatch 2026-03-09T15:09:01.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.025+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/3398297017 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 22, "format": "json"} v 0) v1 -- 0x7fbf8404ea50 con 0x7fbf84073a00 2026-03-09T15:09:01.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.026+0000 7fbf817fa700 1 -- 192.168.123.105:0/3398297017 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 22, "format": "json"}]=0 dumped fsmap epoch 22 v35) v1 ==== 107+0+4213 (secure 0 0 0) 0x7fbf6c0627d0 con 0x7fbf84073a00 2026-03-09T15:09:01.030 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:01.030 
INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":22,"btime":"2026-03-09T15:06:41:935076+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34270,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/4257546649","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":4257546649},{"type":"v1","addr":"192.168.123.109:6825","nonce":4257546649}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in 
omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":17}],"filesystems":[{"mdsmap":{"epoch":22,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:41.935075+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":87,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":24317},"failed":[],"damaged":[],"stopped":[],"info":{"gid_24317":{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":0,"incarnation":19,"state":"up:active","state_seq":127,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline 
data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":24317,"qdb_cluster":[24317]},"id":1}]} 2026-03-09T15:09:01.032 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.031+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/3398297017 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbf700778c0 msgr2=0x7fbf70079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:01.032 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.031+0000 7fbf8a8b9700 1 --2- 192.168.123.105:0/3398297017 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbf700778c0 0x7fbf70079d70 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7fbf7400b5c0 tx=0x7fbf740058e0 comp rx=0 tx=0).stop 2026-03-09T15:09:01.032 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.032+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/3398297017 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbf84073a00 msgr2=0x7fbf84198910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:01.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.032+0000 7fbf8a8b9700 1 --2- 192.168.123.105:0/3398297017 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbf84073a00 0x7fbf84198910 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7fbf6c00eb10 tx=0x7fbf6c00eed0 comp rx=0 tx=0).stop 2026-03-09T15:09:01.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.032+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/3398297017 shutdown_connections 2026-03-09T15:09:01.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.032+0000 7fbf8a8b9700 1 --2- 192.168.123.105:0/3398297017 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fbf700778c0 
0x7fbf70079d70 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:01.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.032+0000 7fbf8a8b9700 1 --2- 192.168.123.105:0/3398297017 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fbf840730f0 0x7fbf841982a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:01.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.032+0000 7fbf8a8b9700 1 --2- 192.168.123.105:0/3398297017 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fbf84073a00 0x7fbf84198910 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:01.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.032+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/3398297017 >> 192.168.123.105:0/3398297017 conn(0x7fbf840fc050 msgr2=0x7fbf84107090 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:01.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.032+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/3398297017 shutdown_connections 2026-03-09T15:09:01.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.032+0000 7fbf8a8b9700 1 -- 192.168.123.105:0/3398297017 wait complete. 
2026-03-09T15:09:01.034 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 22 2026-03-09T15:09:01.082 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 22 2026-03-09T15:09:01.082 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 23 2026-03-09T15:09:01.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:00 vm09.local ceph-mon[98742]: pgmap v214: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:09:01.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:00 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/1130931442' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 21, "format": "json"}]: dispatch 2026-03-09T15:09:01.239 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:01.531 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.529+0000 7f3097ce0700 1 -- 192.168.123.105:0/1681722236 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3090102780 msgr2=0x7f3090102bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:01.531 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.529+0000 7f3097ce0700 1 --2- 192.168.123.105:0/1681722236 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3090102780 0x7f3090102bf0 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7f308c009b00 tx=0x7f308c009e10 comp rx=0 tx=0).stop 2026-03-09T15:09:01.531 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.530+0000 7f3097ce0700 1 -- 192.168.123.105:0/1681722236 shutdown_connections 2026-03-09T15:09:01.531 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.530+0000 7f3097ce0700 1 --2- 192.168.123.105:0/1681722236 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3090102780 0x7f3090102bf0 unknown :-1 s=CLOSED pgs=154 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:01.531 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.530+0000 7f3097ce0700 1 --2- 192.168.123.105:0/1681722236 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3090108780 0x7f3090108b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:01.531 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.530+0000 7f3097ce0700 1 -- 192.168.123.105:0/1681722236 >> 192.168.123.105:0/1681722236 conn(0x7f30900fe280 msgr2=0x7f3090100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:01.531 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.530+0000 7f3097ce0700 1 -- 192.168.123.105:0/1681722236 shutdown_connections 2026-03-09T15:09:01.531 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.531+0000 7f3097ce0700 1 -- 192.168.123.105:0/1681722236 wait complete. 
2026-03-09T15:09:01.532 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.531+0000 7f3097ce0700 1 Processor -- start 2026-03-09T15:09:01.532 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.531+0000 7f3097ce0700 1 -- start start 2026-03-09T15:09:01.532 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.532+0000 7f3097ce0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3090102780 0x7f3090198390 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:01.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.532+0000 7f3097ce0700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3090108780 0x7f30901988d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:01.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.532+0000 7f3097ce0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f3090198fb0 con 0x7f3090102780 2026-03-09T15:09:01.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.532+0000 7f3097ce0700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f309019ccf0 con 0x7f3090108780 2026-03-09T15:09:01.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.532+0000 7f309527b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3090108780 0x7f30901988d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:01.534 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.532+0000 7f309527b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3090108780 0x7f30901988d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:57228/0 (socket says 192.168.123.105:57228) 2026-03-09T15:09:01.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.532+0000 7f309527b700 1 -- 192.168.123.105:0/3524561568 learned_addr learned my addr 192.168.123.105:0/3524561568 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:01.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.532+0000 7f309527b700 1 -- 192.168.123.105:0/3524561568 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3090102780 msgr2=0x7f3090198390 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:09:01.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.534+0000 7f3095a7c700 1 --2- 192.168.123.105:0/3524561568 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3090102780 0x7f3090198390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:01.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.534+0000 7f309527b700 1 --2- 192.168.123.105:0/3524561568 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3090102780 0x7f3090198390 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:01.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.534+0000 7f309527b700 1 -- 192.168.123.105:0/3524561568 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f308c0097e0 con 0x7f3090108780 2026-03-09T15:09:01.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.534+0000 7f3095a7c700 1 --2- 192.168.123.105:0/3524561568 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3090102780 0x7f3090198390 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-09T15:09:01.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.534+0000 7f309527b700 1 --2- 192.168.123.105:0/3524561568 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3090108780 0x7f30901988d0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f308c009ad0 tx=0x7f308c0052e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:01.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.534+0000 7f3086ffd700 1 -- 192.168.123.105:0/3524561568 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f308c01d070 con 0x7f3090108780 2026-03-09T15:09:01.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.534+0000 7f3086ffd700 1 -- 192.168.123.105:0/3524561568 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f308c00bc50 con 0x7f3090108780 2026-03-09T15:09:01.535 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.534+0000 7f3086ffd700 1 -- 192.168.123.105:0/3524561568 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f308c00f790 con 0x7f3090108780 2026-03-09T15:09:01.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.534+0000 7f3097ce0700 1 -- 192.168.123.105:0/3524561568 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f309019cf70 con 0x7f3090108780 2026-03-09T15:09:01.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.534+0000 7f3097ce0700 1 -- 192.168.123.105:0/3524561568 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f309019d380 con 0x7f3090108780 2026-03-09T15:09:01.536 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.535+0000 7f3097ce0700 1 -- 192.168.123.105:0/3524561568 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) v1 -- 0x7f309004ea50 con 0x7f3090108780 2026-03-09T15:09:01.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.538+0000 7f3086ffd700 1 -- 192.168.123.105:0/3524561568 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f308c022470 con 0x7f3090108780 2026-03-09T15:09:01.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.538+0000 7f3086ffd700 1 --2- 192.168.123.105:0/3524561568 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f307c077910 0x7f307c079dc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:01.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.539+0000 7f3095a7c700 1 --2- 192.168.123.105:0/3524561568 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f307c077910 0x7f307c079dc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:01.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.539+0000 7f3086ffd700 1 -- 192.168.123.105:0/3524561568 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f308c09bfb0 con 0x7f3090108780 2026-03-09T15:09:01.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.539+0000 7f3095a7c700 1 --2- 192.168.123.105:0/3524561568 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f307c077910 0x7f307c079dc0 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7f3080005fd0 tx=0x7f3080005dc0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:01.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.540+0000 7f3086ffd700 1 -- 192.168.123.105:0/3524561568 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": 
"get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f308c064770 con 0x7f3090108780 2026-03-09T15:09:01.686 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.685+0000 7f3097ce0700 1 -- 192.168.123.105:0/3524561568 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 23, "format": "json"} v 0) v1 -- 0x7f3090066e40 con 0x7f3090108780 2026-03-09T15:09:01.687 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.686+0000 7f3086ffd700 1 -- 192.168.123.105:0/3524561568 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 23, "format": "json"}]=0 dumped fsmap epoch 23 v35) v1 ==== 107+0+5064 (secure 0 0 0) 0x7f308c063ec0 con 0x7f3090108780 2026-03-09T15:09:01.688 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:01.688 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":23,"btime":"2026-03-09T15:06:43:749131+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34270,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/4257546649","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":4257546649},{"type":"v1","addr":"192.168.123.109:6825","nonce":4257546649}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate 
object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"epoch":13},{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":17},{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":23}],"filesystems":[{"mdsmap":{"epoch":22,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:41.935075+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":87,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":24317},"failed":[],"damaged":[],"stopped":[],"info":{"gid_24317":{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":0,"incarnation":19,"state":"up:active","state_seq":127,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":24317,"qdb_cluster":[24317]},"id":1}]} 2026-03-09T15:09:01.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.689+0000 7f3097ce0700 1 -- 192.168.123.105:0/3524561568 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f307c077910 msgr2=0x7f307c079dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:01.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.689+0000 7f3097ce0700 1 --2- 192.168.123.105:0/3524561568 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f307c077910 0x7f307c079dc0 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7f3080005fd0 tx=0x7f3080005dc0 comp rx=0 tx=0).stop 2026-03-09T15:09:01.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.690+0000 7f3097ce0700 1 -- 192.168.123.105:0/3524561568 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3090108780 msgr2=0x7f30901988d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:01.690 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.690+0000 7f3097ce0700 1 --2- 192.168.123.105:0/3524561568 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3090108780 0x7f30901988d0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f308c009ad0 tx=0x7f308c0052e0 comp rx=0 tx=0).stop 2026-03-09T15:09:01.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.690+0000 7f3097ce0700 1 -- 192.168.123.105:0/3524561568 shutdown_connections 2026-03-09T15:09:01.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.690+0000 7f3097ce0700 1 --2- 192.168.123.105:0/3524561568 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f307c077910 0x7f307c079dc0 unknown :-1 s=CLOSED pgs=104 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T15:09:01.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.690+0000 7f3097ce0700 1 --2- 192.168.123.105:0/3524561568 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f3090102780 0x7f3090198390 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:01.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.690+0000 7f3097ce0700 1 --2- 192.168.123.105:0/3524561568 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f3090108780 0x7f30901988d0 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:01.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.690+0000 7f3097ce0700 1 -- 192.168.123.105:0/3524561568 >> 192.168.123.105:0/3524561568 conn(0x7f30900fe280 msgr2=0x7f30900ffbd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:01.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.691+0000 7f3097ce0700 1 -- 192.168.123.105:0/3524561568 shutdown_connections 2026-03-09T15:09:01.691 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:01.691+0000 7f3097ce0700 1 -- 192.168.123.105:0/3524561568 wait complete. 2026-03-09T15:09:01.692 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 23 2026-03-09T15:09:01.764 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 23 2026-03-09T15:09:01.764 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 24 2026-03-09T15:09:01.789 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:01 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3398297017' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 22, "format": "json"}]: dispatch 2026-03-09T15:09:01.789 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:01 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/3524561568' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 23, "format": "json"}]: dispatch 2026-03-09T15:09:01.920 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:02.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:01 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3398297017' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 22, "format": "json"}]: dispatch 2026-03-09T15:09:02.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:01 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3524561568' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 23, "format": "json"}]: dispatch 2026-03-09T15:09:02.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.198+0000 7feaa142f700 1 -- 192.168.123.105:0/447375679 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea9c073910 msgr2=0x7fea9c1111c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:02.199 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.198+0000 7feaa142f700 1 --2- 192.168.123.105:0/447375679 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea9c073910 0x7fea9c1111c0 secure :-1 s=READY pgs=155 cs=0 l=1 rev1=1 crypto rx=0x7fea8c009b00 tx=0x7fea8c009e10 comp rx=0 tx=0).stop 2026-03-09T15:09:02.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.199+0000 7feaa142f700 1 -- 192.168.123.105:0/447375679 shutdown_connections 2026-03-09T15:09:02.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.199+0000 7feaa142f700 1 --2- 192.168.123.105:0/447375679 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea9c073910 0x7fea9c1111c0 unknown :-1 s=CLOSED pgs=155 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:02.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.199+0000 7feaa142f700 1 --2- 
192.168.123.105:0/447375679 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea9c073000 0x7fea9c0733d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:02.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.199+0000 7feaa142f700 1 -- 192.168.123.105:0/447375679 >> 192.168.123.105:0/447375679 conn(0x7fea9c078550 msgr2=0x7fea9c078950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:02.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.199+0000 7feaa142f700 1 -- 192.168.123.105:0/447375679 shutdown_connections 2026-03-09T15:09:02.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.199+0000 7feaa142f700 1 -- 192.168.123.105:0/447375679 wait complete. 2026-03-09T15:09:02.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.200+0000 7feaa142f700 1 Processor -- start 2026-03-09T15:09:02.200 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.200+0000 7feaa142f700 1 -- start start 2026-03-09T15:09:02.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.200+0000 7feaa142f700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea9c073000 0x7fea9c1a2580 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:02.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.200+0000 7feaa142f700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea9c073910 0x7fea9c1a2ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:02.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.200+0000 7feaa142f700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fea9c1a30c0 con 0x7fea9c073000 2026-03-09T15:09:02.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.200+0000 7feaa142f700 1 -- --> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fea9c19c600 con 0x7fea9c073910 2026-03-09T15:09:02.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.200+0000 7fea9a7fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea9c073910 0x7fea9c1a2ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:02.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.200+0000 7fea9a7fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea9c073910 0x7fea9c1a2ac0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:57250/0 (socket says 192.168.123.105:57250) 2026-03-09T15:09:02.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.200+0000 7fea9a7fc700 1 -- 192.168.123.105:0/3305803759 learned_addr learned my addr 192.168.123.105:0/3305803759 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:02.201 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.201+0000 7fea9a7fc700 1 -- 192.168.123.105:0/3305803759 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea9c073000 msgr2=0x7fea9c1a2580 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:09:02.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.201+0000 7fea9affd700 1 --2- 192.168.123.105:0/3305803759 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea9c073000 0x7fea9c1a2580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:02.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.201+0000 7fea9a7fc700 1 --2- 192.168.123.105:0/3305803759 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea9c073000 
0x7fea9c1a2580 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:02.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.201+0000 7fea9a7fc700 1 -- 192.168.123.105:0/3305803759 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fea8c0097e0 con 0x7fea9c073910 2026-03-09T15:09:02.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.201+0000 7fea9affd700 1 --2- 192.168.123.105:0/3305803759 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea9c073000 0x7fea9c1a2580 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-09T15:09:02.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.201+0000 7fea9a7fc700 1 --2- 192.168.123.105:0/3305803759 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea9c073910 0x7fea9c1a2ac0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7fea8c009ad0 tx=0x7fea8c0052e0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:02.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.201+0000 7fea93fff700 1 -- 192.168.123.105:0/3305803759 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fea8c01d070 con 0x7fea9c073910 2026-03-09T15:09:02.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.202+0000 7fea93fff700 1 -- 192.168.123.105:0/3305803759 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fea8c00bc50 con 0x7fea9c073910 2026-03-09T15:09:02.202 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.202+0000 7feaa142f700 1 -- 192.168.123.105:0/3305803759 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fea9c19c8e0 con 0x7fea9c073910 2026-03-09T15:09:02.203 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.202+0000 7fea93fff700 1 -- 192.168.123.105:0/3305803759 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fea8c00f790 con 0x7fea9c073910 2026-03-09T15:09:02.203 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.202+0000 7feaa142f700 1 -- 192.168.123.105:0/3305803759 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fea9c19ce30 con 0x7fea9c073910 2026-03-09T15:09:02.203 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.203+0000 7feaa142f700 1 -- 192.168.123.105:0/3305803759 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fea9c10e940 con 0x7fea9c073910 2026-03-09T15:09:02.205 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.204+0000 7fea93fff700 1 -- 192.168.123.105:0/3305803759 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fea8c022470 con 0x7fea9c073910 2026-03-09T15:09:02.205 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.204+0000 7fea93fff700 1 --2- 192.168.123.105:0/3305803759 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fea88077910 0x7fea88079dc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:02.205 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.204+0000 7fea93fff700 1 -- 192.168.123.105:0/3305803759 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fea8c09beb0 con 0x7fea9c073910 2026-03-09T15:09:02.205 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.204+0000 7fea9affd700 1 --2- 192.168.123.105:0/3305803759 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fea88077910 0x7fea88079dc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 
rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:02.206 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.205+0000 7fea9affd700 1 --2- 192.168.123.105:0/3305803759 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fea88077910 0x7fea88079dc0 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7fea84005fd0 tx=0x7fea84005e20 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:02.207 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.206+0000 7fea93fff700 1 -- 192.168.123.105:0/3305803759 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fea8c064550 con 0x7fea9c073910 2026-03-09T15:09:02.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.351+0000 7feaa142f700 1 -- 192.168.123.105:0/3305803759 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 24, "format": "json"} v 0) v1 -- 0x7fea9c066ea0 con 0x7fea9c073910 2026-03-09T15:09:02.354 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.352+0000 7fea93fff700 1 -- 192.168.123.105:0/3305803759 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 24, "format": "json"}]=0 dumped fsmap epoch 24 v35) v1 ==== 107+0+4281 (secure 0 0 0) 0x7fea8c063ca0 con 0x7fea9c073910 2026-03-09T15:09:02.354 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:02.354 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":24,"btime":"2026-03-09T15:06:47:346371+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in 
omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":17},{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":23}],"filesystems":[{"mdsmap":{"epoch":22,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:41.935075+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":87,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":24317},"failed":[],"damaged":[],"stopped":[],"info":{"gid_24317":{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":0,"incarnation":19,"state":"up:active","state_seq":127,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":24317,"qdb_cluster":[24317]},"id":1}]} 2026-03-09T15:09:02.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.356+0000 7feaa142f700 1 -- 192.168.123.105:0/3305803759 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fea88077910 msgr2=0x7fea88079dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:02.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.356+0000 7feaa142f700 1 --2- 192.168.123.105:0/3305803759 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fea88077910 0x7fea88079dc0 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7fea84005fd0 tx=0x7fea84005e20 comp rx=0 tx=0).stop 2026-03-09T15:09:02.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.356+0000 7feaa142f700 1 -- 192.168.123.105:0/3305803759 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea9c073910 msgr2=0x7fea9c1a2ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:02.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.356+0000 7feaa142f700 1 --2- 192.168.123.105:0/3305803759 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea9c073910 0x7fea9c1a2ac0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7fea8c009ad0 tx=0x7fea8c0052e0 comp rx=0 tx=0).stop 2026-03-09T15:09:02.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.356+0000 7feaa142f700 1 -- 192.168.123.105:0/3305803759 shutdown_connections 2026-03-09T15:09:02.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.356+0000 7feaa142f700 1 --2- 192.168.123.105:0/3305803759 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fea88077910 0x7fea88079dc0 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T15:09:02.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.356+0000 7feaa142f700 1 --2- 192.168.123.105:0/3305803759 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fea9c073000 0x7fea9c1a2580 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:02.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.356+0000 7feaa142f700 1 --2- 192.168.123.105:0/3305803759 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fea9c073910 0x7fea9c1a2ac0 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:02.357 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.357+0000 7feaa142f700 1 -- 192.168.123.105:0/3305803759 >> 192.168.123.105:0/3305803759 conn(0x7fea9c078550 msgr2=0x7fea9c102e20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:02.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.357+0000 7feaa142f700 1 -- 192.168.123.105:0/3305803759 shutdown_connections 2026-03-09T15:09:02.358 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.357+0000 7feaa142f700 1 -- 192.168.123.105:0/3305803759 wait complete. 
2026-03-09T15:09:02.359 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 24 2026-03-09T15:09:02.402 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 24 2026-03-09T15:09:02.402 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 25 2026-03-09T15:09:02.603 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:02.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:02 vm05.local ceph-mon[116516]: pgmap v215: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:09:02.895 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:02 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3305803759' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 24, "format": "json"}]: dispatch 2026-03-09T15:09:02.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.893+0000 7ff9c5e68700 1 -- 192.168.123.105:0/1995395513 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff9c00ff3a0 msgr2=0x7ff9c00ff770 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:02.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.893+0000 7ff9c5e68700 1 --2- 192.168.123.105:0/1995395513 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff9c00ff3a0 0x7ff9c00ff770 secure :-1 s=READY pgs=156 cs=0 l=1 rev1=1 crypto rx=0x7ff9b0009b50 tx=0x7ff9b0009e60 comp rx=0 tx=0).stop 2026-03-09T15:09:02.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.896+0000 7ff9c5e68700 1 -- 192.168.123.105:0/1995395513 shutdown_connections 2026-03-09T15:09:02.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.896+0000 7ff9c5e68700 1 --2- 192.168.123.105:0/1995395513 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff9c00ffd40 0x7ff9c010ca10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:02.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.896+0000 7ff9c5e68700 1 --2- 192.168.123.105:0/1995395513 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff9c00ff3a0 0x7ff9c00ff770 unknown :-1 s=CLOSED pgs=156 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:02.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.896+0000 7ff9c5e68700 1 -- 192.168.123.105:0/1995395513 >> 192.168.123.105:0/1995395513 conn(0x7ff9c0076270 msgr2=0x7ff9c0076670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:02.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.896+0000 7ff9c5e68700 1 -- 192.168.123.105:0/1995395513 shutdown_connections 2026-03-09T15:09:02.897 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.896+0000 7ff9c5e68700 1 -- 192.168.123.105:0/1995395513 wait complete. 
2026-03-09T15:09:02.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.897+0000 7ff9c5e68700 1 Processor -- start 2026-03-09T15:09:02.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.897+0000 7ff9c5e68700 1 -- start start 2026-03-09T15:09:02.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.897+0000 7ff9c5e68700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff9c00ff3a0 0x7ff9c0101680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:02.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.897+0000 7ff9c5e68700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff9c00ffd40 0x7ff9c0101bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:02.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.897+0000 7ff9c5e68700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff9c01057c0 con 0x7ff9c00ff3a0 2026-03-09T15:09:02.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.897+0000 7ff9c5e68700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7ff9c0102100 con 0x7ff9c00ffd40 2026-03-09T15:09:02.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.897+0000 7ff9bf7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff9c00ff3a0 0x7ff9c0101680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:02.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.897+0000 7ff9bf7fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff9c00ff3a0 0x7ff9c0101680 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:58258/0 (socket says 192.168.123.105:58258) 2026-03-09T15:09:02.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.897+0000 7ff9bf7fe700 1 -- 192.168.123.105:0/3321957215 learned_addr learned my addr 192.168.123.105:0/3321957215 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.898+0000 7ff9bf7fe700 1 -- 192.168.123.105:0/3321957215 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff9c00ffd40 msgr2=0x7ff9c0101bc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.898+0000 7ff9bf7fe700 1 --2- 192.168.123.105:0/3321957215 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff9c00ffd40 0x7ff9c0101bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.898+0000 7ff9bf7fe700 1 -- 192.168.123.105:0/3321957215 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7ff9b00097e0 con 0x7ff9c00ff3a0 2026-03-09T15:09:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.898+0000 7ff9bf7fe700 1 --2- 192.168.123.105:0/3321957215 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff9c00ff3a0 0x7ff9c0101680 secure :-1 s=READY pgs=157 cs=0 l=1 rev1=1 crypto rx=0x7ff9b0009b20 tx=0x7ff9b0004af0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.898+0000 7ff9bd7fa700 1 -- 192.168.123.105:0/3321957215 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff9b001d070 con 0x7ff9c00ff3a0 2026-03-09T15:09:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.898+0000 7ff9bd7fa700 1 -- 
192.168.123.105:0/3321957215 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7ff9b000bcf0 con 0x7ff9c00ff3a0 2026-03-09T15:09:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.898+0000 7ff9bd7fa700 1 -- 192.168.123.105:0/3321957215 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7ff9b0021880 con 0x7ff9c00ff3a0 2026-03-09T15:09:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.899+0000 7ff9c5e68700 1 -- 192.168.123.105:0/3321957215 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7ff9c01022a0 con 0x7ff9c00ff3a0 2026-03-09T15:09:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.899+0000 7ff9c5e68700 1 -- 192.168.123.105:0/3321957215 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7ff9c0071aa0 con 0x7ff9c00ff3a0 2026-03-09T15:09:02.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.900+0000 7ff9b67fc700 1 -- 192.168.123.105:0/3321957215 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7ff9a40052f0 con 0x7ff9c00ff3a0 2026-03-09T15:09:02.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.901+0000 7ff9bd7fa700 1 -- 192.168.123.105:0/3321957215 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7ff9b000fbd0 con 0x7ff9c00ff3a0 2026-03-09T15:09:02.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.901+0000 7ff9bd7fa700 1 --2- 192.168.123.105:0/3321957215 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff9ac077990 0x7ff9ac079e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:02.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.901+0000 7ff9bd7fa700 1 -- 
192.168.123.105:0/3321957215 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7ff9b009b0d0 con 0x7ff9c00ff3a0 2026-03-09T15:09:02.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.902+0000 7ff9b7fff700 1 --2- 192.168.123.105:0/3321957215 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff9ac077990 0x7ff9ac079e40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:02.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.902+0000 7ff9b7fff700 1 --2- 192.168.123.105:0/3321957215 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff9ac077990 0x7ff9ac079e40 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7ff9c0102e20 tx=0x7ff9a8006cd0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:02.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:02.904+0000 7ff9bd7fa700 1 -- 192.168.123.105:0/3321957215 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7ff9b0063770 con 0x7ff9c00ff3a0 2026-03-09T15:09:03.048 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.047+0000 7ff9b67fc700 1 -- 192.168.123.105:0/3321957215 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 25, "format": "json"} v 0) v1 -- 0x7ff9a4005160 con 0x7ff9c00ff3a0 2026-03-09T15:09:03.049 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.048+0000 7ff9bd7fa700 1 -- 192.168.123.105:0/3321957215 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 25, "format": "json"}]=0 dumped fsmap epoch 25 v35) v1 ==== 107+0+5132 (secure 0 0 0) 0x7ff9b0062ec0 con 0x7ff9c00ff3a0 2026-03-09T15:09:03.050 
INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:03.050 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":25,"btime":"2026-03-09T15:06:52:288957+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":17},{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on 
dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":23},{"gid":44239,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/2799240855","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2799240855},{"type":"v1","addr":"192.168.123.109:6825","nonce":2799240855}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":25}],"filesystems":[{"mdsmap":{"epoch":22,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:41.935075+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":87,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds 
uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{"mds_0":24317},"failed":[],"damaged":[],"stopped":[],"info":{"gid_24317":{"gid":24317,"name":"cephfs.vm09.jrhwzz","rank":0,"incarnation":19,"state":"up:active","state_seq":127,"addr":"192.168.123.109:6827/2393799497","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":2393799497},{"type":"v1","addr":"192.168.123.109:6827","nonce":2393799497}]},"join_fscid":1,"export_targets":[],"features":4540138322906710015,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":24317,"qdb_cluster":[24317]},"id":1}]} 2026-03-09T15:09:03.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.051+0000 7ff9b67fc700 1 -- 192.168.123.105:0/3321957215 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff9ac077990 msgr2=0x7ff9ac079e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:03.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.051+0000 7ff9b67fc700 1 --2- 192.168.123.105:0/3321957215 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff9ac077990 0x7ff9ac079e40 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7ff9c0102e20 tx=0x7ff9a8006cd0 comp rx=0 tx=0).stop 2026-03-09T15:09:03.052 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.051+0000 7ff9b67fc700 1 -- 192.168.123.105:0/3321957215 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff9c00ff3a0 msgr2=0x7ff9c0101680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:03.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.051+0000 7ff9b67fc700 1 --2- 192.168.123.105:0/3321957215 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff9c00ff3a0 0x7ff9c0101680 secure :-1 s=READY pgs=157 cs=0 l=1 rev1=1 crypto rx=0x7ff9b0009b20 tx=0x7ff9b0004af0 comp rx=0 tx=0).stop 2026-03-09T15:09:03.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.052+0000 7ff9b67fc700 1 -- 192.168.123.105:0/3321957215 shutdown_connections 2026-03-09T15:09:03.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.052+0000 7ff9b67fc700 1 --2- 192.168.123.105:0/3321957215 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7ff9ac077990 0x7ff9ac079e40 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:03.052 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.052+0000 7ff9b67fc700 1 --2- 192.168.123.105:0/3321957215 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7ff9c00ff3a0 0x7ff9c0101680 unknown :-1 s=CLOSED pgs=157 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:03.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.052+0000 7ff9b67fc700 1 --2- 192.168.123.105:0/3321957215 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7ff9c00ffd40 0x7ff9c0101bc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:03.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.052+0000 7ff9b67fc700 1 -- 192.168.123.105:0/3321957215 >> 192.168.123.105:0/3321957215 conn(0x7ff9c0076270 msgr2=0x7ff9c00fdb80 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-09T15:09:03.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.052+0000 7ff9b67fc700 1 -- 192.168.123.105:0/3321957215 shutdown_connections 2026-03-09T15:09:03.053 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.052+0000 7ff9b67fc700 1 -- 192.168.123.105:0/3321957215 wait complete. 2026-03-09T15:09:03.054 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 25 2026-03-09T15:09:03.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:02 vm09.local ceph-mon[98742]: pgmap v215: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:09:03.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:02 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3305803759' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 24, "format": "json"}]: dispatch 2026-03-09T15:09:03.119 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 25 2026-03-09T15:09:03.119 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 26 2026-03-09T15:09:03.273 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:03.543 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.541+0000 7fb5534ad700 1 -- 192.168.123.105:0/3728851594 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb54c100540 msgr2=0x7fb54c1009b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:03.543 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.541+0000 7fb5534ad700 1 --2- 192.168.123.105:0/3728851594 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb54c100540 0x7fb54c1009b0 secure :-1 s=READY pgs=158 cs=0 l=1 rev1=1 crypto rx=0x7fb548009b50 tx=0x7fb548009e60 comp rx=0 tx=0).stop 2026-03-09T15:09:03.543 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.542+0000 7fb5534ad700 1 -- 192.168.123.105:0/3728851594 shutdown_connections 2026-03-09T15:09:03.543 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.542+0000 7fb5534ad700 1 --2- 192.168.123.105:0/3728851594 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb54c100540 0x7fb54c1009b0 unknown :-1 s=CLOSED pgs=158 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:03.543 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.542+0000 7fb5534ad700 1 --2- 192.168.123.105:0/3728851594 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb54c106560 0x7fb54c106930 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:03.543 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.542+0000 7fb5534ad700 1 -- 192.168.123.105:0/3728851594 >> 192.168.123.105:0/3728851594 conn(0x7fb54c0fc000 msgr2=0x7fb54c0fe410 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:03.543 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.542+0000 7fb5534ad700 1 -- 192.168.123.105:0/3728851594 shutdown_connections 2026-03-09T15:09:03.543 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.542+0000 7fb5534ad700 1 -- 192.168.123.105:0/3728851594 wait complete. 
2026-03-09T15:09:03.543 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.543+0000 7fb5534ad700 1 Processor -- start 2026-03-09T15:09:03.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.543+0000 7fb5534ad700 1 -- start start 2026-03-09T15:09:03.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.543+0000 7fb5534ad700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb54c100540 0x7fb54c198300 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:03.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.543+0000 7fb5534ad700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb54c106560 0x7fb54c198840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:03.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.543+0000 7fb5534ad700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb54c198f20 con 0x7fb54c106560 2026-03-09T15:09:03.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.543+0000 7fb5534ad700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fb54c19ccb0 con 0x7fb54c100540 2026-03-09T15:09:03.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.543+0000 7fb551249700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb54c100540 0x7fb54c198300 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:03.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.544+0000 7fb551249700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb54c100540 0x7fb54c198300 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am 
v2:192.168.123.105:57284/0 (socket says 192.168.123.105:57284) 2026-03-09T15:09:03.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.544+0000 7fb551249700 1 -- 192.168.123.105:0/170535850 learned_addr learned my addr 192.168.123.105:0/170535850 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:03.544 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.544+0000 7fb550a48700 1 --2- 192.168.123.105:0/170535850 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb54c106560 0x7fb54c198840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:03.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.544+0000 7fb551249700 1 -- 192.168.123.105:0/170535850 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb54c106560 msgr2=0x7fb54c198840 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:03.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.544+0000 7fb551249700 1 --2- 192.168.123.105:0/170535850 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb54c106560 0x7fb54c198840 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:03.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.544+0000 7fb551249700 1 -- 192.168.123.105:0/170535850 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fb5480097e0 con 0x7fb54c100540 2026-03-09T15:09:03.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.544+0000 7fb550a48700 1 --2- 192.168.123.105:0/170535850 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb54c106560 0x7fb54c198840 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-09T15:09:03.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.544+0000 7fb551249700 1 --2- 192.168.123.105:0/170535850 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb54c100540 0x7fb54c198300 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fb53c00eb10 tx=0x7fb53c00ee20 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:03.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.545+0000 7fb5427fc700 1 -- 192.168.123.105:0/170535850 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb53c00cc40 con 0x7fb54c100540 2026-03-09T15:09:03.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.545+0000 7fb5534ad700 1 -- 192.168.123.105:0/170535850 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fb54c19cf90 con 0x7fb54c100540 2026-03-09T15:09:03.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.545+0000 7fb5534ad700 1 -- 192.168.123.105:0/170535850 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fb54c19d4b0 con 0x7fb54c100540 2026-03-09T15:09:03.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.545+0000 7fb5427fc700 1 -- 192.168.123.105:0/170535850 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fb53c00cda0 con 0x7fb54c100540 2026-03-09T15:09:03.547 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.545+0000 7fb5427fc700 1 -- 192.168.123.105:0/170535850 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fb53c018810 con 0x7fb54c100540 2026-03-09T15:09:03.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.546+0000 7fb5427fc700 1 -- 192.168.123.105:0/170535850 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fb53c018aa0 con 
0x7fb54c100540 2026-03-09T15:09:03.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.546+0000 7fb5534ad700 1 -- 192.168.123.105:0/170535850 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fb54c04ea50 con 0x7fb54c100540 2026-03-09T15:09:03.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.547+0000 7fb5427fc700 1 --2- 192.168.123.105:0/170535850 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb5380778e0 0x7fb538079d90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:03.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.547+0000 7fb5427fc700 1 -- 192.168.123.105:0/170535850 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fb53c014070 con 0x7fb54c100540 2026-03-09T15:09:03.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.547+0000 7fb550a48700 1 --2- 192.168.123.105:0/170535850 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb5380778e0 0x7fb538079d90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:03.548 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.547+0000 7fb550a48700 1 --2- 192.168.123.105:0/170535850 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb5380778e0 0x7fb538079d90 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7fb548009b20 tx=0x7fb5480058e0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:03.550 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.550+0000 7fb5427fc700 1 -- 192.168.123.105:0/170535850 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 
==== 72+0+195034 (secure 0 0 0) 0x7fb53c062ba0 con 0x7fb54c100540 2026-03-09T15:09:03.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.694+0000 7fb5534ad700 1 -- 192.168.123.105:0/170535850 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 26, "format": "json"} v 0) v1 -- 0x7fb54c066e40 con 0x7fb54c100540 2026-03-09T15:09:03.696 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.695+0000 7fb5427fc700 1 -- 192.168.123.105:0/170535850 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 26, "format": "json"}]=0 dumped fsmap epoch 26 v35) v1 ==== 107+0+4327 (secure 0 0 0) 0x7fb53c0622f0 con 0x7fb54c100540 2026-03-09T15:09:03.696 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:03.696 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":26,"btime":"2026-03-09T15:06:54:367783+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned 
encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":17},{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":23},{"gid":44239,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/2799240855","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2799240855},{"type":"v1","addr":"192.168.123.109:6825","nonce":2799240855}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":25}],"filesystems":[{"mdsmap":{"epoch":26,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:54.367783+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":91,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"max_mds":1,"in":[0],"up":{},"failed":[0],"damaged":[],"stopped":[],"info":{},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:09:03.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.698+0000 7fb5534ad700 1 -- 192.168.123.105:0/170535850 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb5380778e0 msgr2=0x7fb538079d90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:03.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.698+0000 7fb5534ad700 1 --2- 192.168.123.105:0/170535850 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb5380778e0 0x7fb538079d90 secure :-1 s=READY pgs=107 cs=0 l=1 rev1=1 crypto rx=0x7fb548009b20 tx=0x7fb5480058e0 comp rx=0 tx=0).stop 2026-03-09T15:09:03.699 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.698+0000 7fb5534ad700 1 -- 192.168.123.105:0/170535850 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb54c100540 msgr2=0x7fb54c198300 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:03.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.698+0000 7fb5534ad700 1 --2- 192.168.123.105:0/170535850 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb54c100540 0x7fb54c198300 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fb53c00eb10 tx=0x7fb53c00ee20 comp rx=0 tx=0).stop 2026-03-09T15:09:03.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.699+0000 7fb5534ad700 1 -- 192.168.123.105:0/170535850 shutdown_connections 2026-03-09T15:09:03.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.699+0000 7fb5534ad700 1 --2- 192.168.123.105:0/170535850 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fb5380778e0 0x7fb538079d90 unknown :-1 s=CLOSED pgs=107 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:03.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.699+0000 7fb5534ad700 1 --2- 192.168.123.105:0/170535850 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fb54c100540 0x7fb54c198300 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:03.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.699+0000 7fb5534ad700 1 --2- 192.168.123.105:0/170535850 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fb54c106560 0x7fb54c198840 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:03.699 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.699+0000 7fb5534ad700 1 -- 192.168.123.105:0/170535850 >> 192.168.123.105:0/170535850 conn(0x7fb54c0fc000 msgr2=0x7fb54c0fd7e0 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-09T15:09:03.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.699+0000 7fb5534ad700 1 -- 192.168.123.105:0/170535850 shutdown_connections 2026-03-09T15:09:03.700 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:03.699+0000 7fb5534ad700 1 -- 192.168.123.105:0/170535850 wait complete. 2026-03-09T15:09:03.701 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 26 2026-03-09T15:09:03.768 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 26 2026-03-09T15:09:03.768 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 27 2026-03-09T15:09:03.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:03 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3321957215' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 25, "format": "json"}]: dispatch 2026-03-09T15:09:03.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:03 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/170535850' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 26, "format": "json"}]: dispatch 2026-03-09T15:09:03.924 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:04.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:03 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3321957215' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 25, "format": "json"}]: dispatch 2026-03-09T15:09:04.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:03 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/170535850' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 26, "format": "json"}]: dispatch 2026-03-09T15:09:04.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.177+0000 7f9eb3b83700 1 -- 192.168.123.105:0/562472079 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac100540 msgr2=0x7f9eac1009b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:04.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.177+0000 7f9eb3b83700 1 --2- 192.168.123.105:0/562472079 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac100540 0x7f9eac1009b0 secure :-1 s=READY pgs=159 cs=0 l=1 rev1=1 crypto rx=0x7f9e9c009b50 tx=0x7f9e9c009e60 comp rx=0 tx=0).stop 2026-03-09T15:09:04.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.178+0000 7f9eb3b83700 1 -- 192.168.123.105:0/562472079 shutdown_connections 2026-03-09T15:09:04.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.178+0000 7f9eb3b83700 1 --2- 192.168.123.105:0/562472079 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac100540 0x7f9eac1009b0 unknown :-1 s=CLOSED pgs=159 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:04.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.178+0000 7f9eb3b83700 1 --2- 192.168.123.105:0/562472079 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9eac106560 0x7f9eac106930 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:04.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.178+0000 7f9eb3b83700 1 -- 192.168.123.105:0/562472079 >> 192.168.123.105:0/562472079 conn(0x7f9eac0fbfc0 msgr2=0x7f9eac0fe3d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:04.179 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.178+0000 7f9eb3b83700 1 -- 192.168.123.105:0/562472079 shutdown_connections 2026-03-09T15:09:04.179 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.178+0000 7f9eb3b83700 1 -- 192.168.123.105:0/562472079 wait complete. 2026-03-09T15:09:04.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.179+0000 7f9eb3b83700 1 Processor -- start 2026-03-09T15:09:04.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.179+0000 7f9eb3b83700 1 -- start start 2026-03-09T15:09:04.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.179+0000 7f9eb3b83700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9eac100540 0x7f9eac1983b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:04.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.179+0000 7f9eb3b83700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac106560 0x7f9eac1988f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:04.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.179+0000 7f9eb3b83700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9eac198fd0 con 0x7f9eac106560 2026-03-09T15:09:04.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.179+0000 7f9eb3b83700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f9eac19cd60 con 0x7f9eac100540 2026-03-09T15:09:04.180 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.180+0000 7f9eb191f700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9eac100540 0x7f9eac1983b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:04.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.180+0000 7f9eb111e700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac106560 0x7f9eac1988f0 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:04.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.180+0000 7f9eb191f700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9eac100540 0x7f9eac1983b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:57306/0 (socket says 192.168.123.105:57306) 2026-03-09T15:09:04.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.180+0000 7f9eb191f700 1 -- 192.168.123.105:0/921902916 learned_addr learned my addr 192.168.123.105:0/921902916 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:04.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.180+0000 7f9eb111e700 1 -- 192.168.123.105:0/921902916 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9eac100540 msgr2=0x7f9eac1983b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:04.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.180+0000 7f9eb111e700 1 --2- 192.168.123.105:0/921902916 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9eac100540 0x7f9eac1983b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:04.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.180+0000 7f9eb111e700 1 -- 192.168.123.105:0/921902916 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f9e9c0097e0 con 0x7f9eac106560 2026-03-09T15:09:04.181 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.180+0000 7f9eb111e700 1 --2- 192.168.123.105:0/921902916 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac106560 0x7f9eac1988f0 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7f9e9c000c00 tx=0x7f9e9c00baa0 
comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:04.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.181+0000 7f9ea2ffd700 1 -- 192.168.123.105:0/921902916 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9e9c01d070 con 0x7f9eac106560 2026-03-09T15:09:04.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.181+0000 7f9ea2ffd700 1 -- 192.168.123.105:0/921902916 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f9e9c004e80 con 0x7f9eac106560 2026-03-09T15:09:04.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.181+0000 7f9ea2ffd700 1 -- 192.168.123.105:0/921902916 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f9e9c00f460 con 0x7f9eac106560 2026-03-09T15:09:04.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.181+0000 7f9eb3b83700 1 -- 192.168.123.105:0/921902916 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f9eac19cfe0 con 0x7f9eac106560 2026-03-09T15:09:04.183 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.181+0000 7f9eb3b83700 1 -- 192.168.123.105:0/921902916 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f9eac19d470 con 0x7f9eac106560 2026-03-09T15:09:04.186 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.182+0000 7f9eb3b83700 1 -- 192.168.123.105:0/921902916 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f9eac04ea50 con 0x7f9eac106560 2026-03-09T15:09:04.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.186+0000 7f9ea2ffd700 1 -- 192.168.123.105:0/921902916 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f9e9c003680 con 0x7f9eac106560 
2026-03-09T15:09:04.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.186+0000 7f9ea2ffd700 1 --2- 192.168.123.105:0/921902916 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9e980779e0 0x7f9e98079e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:04.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.186+0000 7f9ea2ffd700 1 -- 192.168.123.105:0/921902916 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f9e9c09b800 con 0x7f9eac106560 2026-03-09T15:09:04.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.186+0000 7f9ea2ffd700 1 -- 192.168.123.105:0/921902916 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f9e9c09bc80 con 0x7f9eac106560 2026-03-09T15:09:04.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.186+0000 7f9eb191f700 1 --2- 192.168.123.105:0/921902916 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9e980779e0 0x7f9e98079e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:04.188 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.187+0000 7f9eb191f700 1 --2- 192.168.123.105:0/921902916 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9e980779e0 0x7f9e98079e90 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f9ea8009cc0 tx=0x7f9ea8009400 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:04.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.341+0000 7f9eb3b83700 1 -- 192.168.123.105:0/921902916 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 27, 
"format": "json"} v 0) v1 -- 0x7f9eac066e40 con 0x7f9eac106560 2026-03-09T15:09:04.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.342+0000 7f9ea2ffd700 1 -- 192.168.123.105:0/921902916 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 27, "format": "json"}]=0 dumped fsmap epoch 27 v35) v1 ==== 107+0+4406 (secure 0 0 0) 0x7f9e9c063ea0 con 0x7f9eac106560 2026-03-09T15:09:04.343 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:04.343 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":27,"btime":"2026-03-09T15:06:54:374118+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":23},{"gid":44239,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/2799240855","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2799240855},{"type":"v1","addr":"192.168.123.109:6825","nonce":2799240855}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":25}],"filesystems":[{"mdsmap":{"epoch":27,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:54.374099+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":91,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34272},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34272":{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":27,"state":"up:replay","state_seq":1,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:09:04.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.345+0000 7f9eb3b83700 1 -- 192.168.123.105:0/921902916 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9e980779e0 msgr2=0x7f9e98079e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:04.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.345+0000 7f9eb3b83700 1 --2- 192.168.123.105:0/921902916 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9e980779e0 0x7f9e98079e90 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f9ea8009cc0 tx=0x7f9ea8009400 comp rx=0 tx=0).stop 2026-03-09T15:09:04.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.345+0000 7f9eb3b83700 1 -- 192.168.123.105:0/921902916 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] 
conn(0x7f9eac106560 msgr2=0x7f9eac1988f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:04.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.345+0000 7f9eb3b83700 1 --2- 192.168.123.105:0/921902916 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac106560 0x7f9eac1988f0 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7f9e9c000c00 tx=0x7f9e9c00baa0 comp rx=0 tx=0).stop 2026-03-09T15:09:04.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.345+0000 7f9eb3b83700 1 -- 192.168.123.105:0/921902916 shutdown_connections 2026-03-09T15:09:04.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.345+0000 7f9eb3b83700 1 --2- 192.168.123.105:0/921902916 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f9e980779e0 0x7f9e98079e90 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:04.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.345+0000 7f9eb3b83700 1 --2- 192.168.123.105:0/921902916 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f9eac100540 0x7f9eac1983b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:04.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.345+0000 7f9eb3b83700 1 --2- 192.168.123.105:0/921902916 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f9eac106560 0x7f9eac1988f0 unknown :-1 s=CLOSED pgs=160 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:04.346 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.346+0000 7f9eb3b83700 1 -- 192.168.123.105:0/921902916 >> 192.168.123.105:0/921902916 conn(0x7f9eac0fbfc0 msgr2=0x7f9eac073230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:04.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.346+0000 7f9eb3b83700 1 -- 192.168.123.105:0/921902916 shutdown_connections 
2026-03-09T15:09:04.347 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.346+0000 7f9eb3b83700 1 -- 192.168.123.105:0/921902916 wait complete. 2026-03-09T15:09:04.347 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 27 2026-03-09T15:09:04.412 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 27 2026-03-09T15:09:04.412 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 28 2026-03-09T15:09:04.597 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:04.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.867+0000 7fcfccaf9700 1 -- 192.168.123.105:0/1883086372 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc8068490 msgr2=0x7fcfc8068900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:04.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.867+0000 7fcfccaf9700 1 --2- 192.168.123.105:0/1883086372 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc8068490 0x7fcfc8068900 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7fcfb8009b00 tx=0x7fcfb8009e10 comp rx=0 tx=0).stop 2026-03-09T15:09:04.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.868+0000 7fcfccaf9700 1 -- 192.168.123.105:0/1883086372 shutdown_connections 2026-03-09T15:09:04.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.868+0000 7fcfccaf9700 1 --2- 192.168.123.105:0/1883086372 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc8068490 0x7fcfc8068900 unknown :-1 s=CLOSED pgs=161 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:04.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.868+0000 7fcfccaf9700 1 --2- 192.168.123.105:0/1883086372 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcfc81013a0 0x7fcfc8101770 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:04.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.868+0000 7fcfccaf9700 1 -- 192.168.123.105:0/1883086372 >> 192.168.123.105:0/1883086372 conn(0x7fcfc80754a0 msgr2=0x7fcfc80758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:04.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.868+0000 7fcfccaf9700 1 -- 192.168.123.105:0/1883086372 shutdown_connections 2026-03-09T15:09:04.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.868+0000 7fcfccaf9700 1 -- 192.168.123.105:0/1883086372 wait complete. 2026-03-09T15:09:04.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.869+0000 7fcfccaf9700 1 Processor -- start 2026-03-09T15:09:04.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.869+0000 7fcfccaf9700 1 -- start start 2026-03-09T15:09:04.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.869+0000 7fcfccaf9700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcfc8068490 0x7fcfc81982f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.869+0000 7fcfccaf9700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc81013a0 0x7fcfc8198830 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.869+0000 7fcfccaf9700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fcfc8198f10 con 0x7fcfc81013a0 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.869+0000 7fcfccaf9700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap 
magic: 0 v1 -- 0x7fcfc819cca0 con 0x7fcfc8068490 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.870+0000 7fcfc5d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc81013a0 0x7fcfc8198830 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.870+0000 7fcfc5d9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc81013a0 0x7fcfc8198830 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:58304/0 (socket says 192.168.123.105:58304) 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.870+0000 7fcfc5d9b700 1 -- 192.168.123.105:0/697772315 learned_addr learned my addr 192.168.123.105:0/697772315 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.870+0000 7fcfc5d9b700 1 -- 192.168.123.105:0/697772315 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcfc8068490 msgr2=0x7fcfc81982f0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.870+0000 7fcfc5d9b700 1 --2- 192.168.123.105:0/697772315 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcfc8068490 0x7fcfc81982f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.870+0000 7fcfc5d9b700 1 -- 192.168.123.105:0/697772315 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fcfb80097e0 con 0x7fcfc81013a0 2026-03-09T15:09:04.871 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.870+0000 7fcfc5d9b700 1 --2- 192.168.123.105:0/697772315 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc81013a0 0x7fcfc8198830 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7fcfb80048c0 tx=0x7fcfb80048f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.870+0000 7fcfbf7fe700 1 -- 192.168.123.105:0/697772315 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcfb801d070 con 0x7fcfc81013a0 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.870+0000 7fcfbf7fe700 1 -- 192.168.123.105:0/697772315 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fcfb8004b80 con 0x7fcfc81013a0 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.870+0000 7fcfbf7fe700 1 -- 192.168.123.105:0/697772315 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fcfb800f670 con 0x7fcfc81013a0 2026-03-09T15:09:04.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.870+0000 7fcfccaf9700 1 -- 192.168.123.105:0/697772315 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fcfc819cf20 con 0x7fcfc81013a0 2026-03-09T15:09:04.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.871+0000 7fcfccaf9700 1 -- 192.168.123.105:0/697772315 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fcfc819d410 con 0x7fcfc81013a0 2026-03-09T15:09:04.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.872+0000 7fcfccaf9700 1 -- 192.168.123.105:0/697772315 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fcfc804ea50 
con 0x7fcfc81013a0 2026-03-09T15:09:04.876 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.873+0000 7fcfbf7fe700 1 -- 192.168.123.105:0/697772315 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fcfb800bc50 con 0x7fcfc81013a0 2026-03-09T15:09:04.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.873+0000 7fcfbf7fe700 1 --2- 192.168.123.105:0/697772315 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcfb40779e0 0x7fcfb4079e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:04.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.873+0000 7fcfbf7fe700 1 -- 192.168.123.105:0/697772315 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fcfb809b3f0 con 0x7fcfc81013a0 2026-03-09T15:09:04.877 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:04 vm05.local ceph-mon[116516]: pgmap v216: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:09:04.877 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:04 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/921902916' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 27, "format": "json"}]: dispatch 2026-03-09T15:09:04.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.876+0000 7fcfbf7fe700 1 -- 192.168.123.105:0/697772315 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fcfb8063a90 con 0x7fcfc81013a0 2026-03-09T15:09:04.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.877+0000 7fcfc659c700 1 --2- 192.168.123.105:0/697772315 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcfb40779e0 0x7fcfb4079e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:04.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:04.877+0000 7fcfc659c700 1 --2- 192.168.123.105:0/697772315 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcfb40779e0 0x7fcfb4079e90 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fcfb000ba60 tx=0x7fcfb000b3f0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:05.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.025+0000 7fcfccaf9700 1 -- 192.168.123.105:0/697772315 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 28, "format": "json"} v 0) v1 -- 0x7fcfc8199680 con 0x7fcfc81013a0 2026-03-09T15:09:05.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.026+0000 7fcfbf7fe700 1 -- 192.168.123.105:0/697772315 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 28, "format": "json"}]=0 dumped fsmap epoch 28 v35) v1 ==== 107+0+4409 (secure 0 0 0) 0x7fcfb80631e0 con 0x7fcfc81013a0 2026-03-09T15:09:05.028 INFO:teuthology.orchestra.run.vm05.stdout: 
2026-03-09T15:09:05.028 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":28,"btime":"2026-03-09T15:06:58:534461+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":23},{"gid":44239,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/2799240855","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2799240855},{"type":"v1","addr":"192.168.123.109:6825","nonce":2799240855}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate 
object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":25}],"filesystems":[{"mdsmap":{"epoch":28,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:58.088982+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":91,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34272},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34272":{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":27,"state":"up:reconnect","state_seq":8,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on 
dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:09:05.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.030+0000 7fcfccaf9700 1 -- 192.168.123.105:0/697772315 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcfb40779e0 msgr2=0x7fcfb4079e90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:05.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.030+0000 7fcfccaf9700 1 --2- 192.168.123.105:0/697772315 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcfb40779e0 0x7fcfb4079e90 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fcfb000ba60 tx=0x7fcfb000b3f0 comp rx=0 tx=0).stop 2026-03-09T15:09:05.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.030+0000 7fcfccaf9700 1 -- 192.168.123.105:0/697772315 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc81013a0 msgr2=0x7fcfc8198830 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:05.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.030+0000 7fcfccaf9700 1 --2- 192.168.123.105:0/697772315 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc81013a0 0x7fcfc8198830 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7fcfb80048c0 tx=0x7fcfb80048f0 comp rx=0 tx=0).stop 2026-03-09T15:09:05.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.030+0000 7fcfccaf9700 1 -- 192.168.123.105:0/697772315 shutdown_connections 2026-03-09T15:09:05.031 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.030+0000 7fcfccaf9700 1 --2- 192.168.123.105:0/697772315 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fcfb40779e0 0x7fcfb4079e90 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:05.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.030+0000 7fcfccaf9700 1 --2- 192.168.123.105:0/697772315 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fcfc8068490 0x7fcfc81982f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:05.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.030+0000 7fcfccaf9700 1 --2- 192.168.123.105:0/697772315 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fcfc81013a0 0x7fcfc8198830 secure :-1 s=CLOSED pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7fcfb80048c0 tx=0x7fcfb80048f0 comp rx=0 tx=0).stop 2026-03-09T15:09:05.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.030+0000 7fcfccaf9700 1 -- 192.168.123.105:0/697772315 >> 192.168.123.105:0/697772315 conn(0x7fcfc80754a0 msgr2=0x7fcfc80fdd70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:05.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.030+0000 7fcfccaf9700 1 -- 192.168.123.105:0/697772315 shutdown_connections 2026-03-09T15:09:05.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.030+0000 7fcfccaf9700 1 -- 192.168.123.105:0/697772315 wait complete. 2026-03-09T15:09:05.034 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 28 2026-03-09T15:09:05.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:04 vm09.local ceph-mon[98742]: pgmap v216: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:09:05.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:04 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/921902916' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 27, "format": "json"}]: dispatch 2026-03-09T15:09:05.118 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 28 2026-03-09T15:09:05.118 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 29 2026-03-09T15:09:05.271 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:05.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.602+0000 7f80142c1700 1 -- 192.168.123.105:0/368275512 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f800c101100 msgr2=0x7f800c101570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:05.603 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.602+0000 7f80142c1700 1 --2- 192.168.123.105:0/368275512 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f800c101100 0x7f800c101570 secure :-1 s=READY pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7f8008009b00 tx=0x7f8008009e10 comp rx=0 tx=0).stop 2026-03-09T15:09:05.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.603+0000 7f80142c1700 1 -- 192.168.123.105:0/368275512 shutdown_connections 2026-03-09T15:09:05.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.603+0000 7f80142c1700 1 --2- 192.168.123.105:0/368275512 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f800c101100 0x7f800c101570 unknown :-1 s=CLOSED pgs=163 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:05.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.603+0000 7f80142c1700 1 --2- 192.168.123.105:0/368275512 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f800c0ff480 0x7f800c100bc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:09:05.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.603+0000 7f80142c1700 1 -- 192.168.123.105:0/368275512 >> 192.168.123.105:0/368275512 conn(0x7f800c0747e0 msgr2=0x7f800c074be0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:05.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.603+0000 7f80142c1700 1 -- 192.168.123.105:0/368275512 shutdown_connections 2026-03-09T15:09:05.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.603+0000 7f80142c1700 1 -- 192.168.123.105:0/368275512 wait complete. 2026-03-09T15:09:05.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.603+0000 7f80142c1700 1 Processor -- start 2026-03-09T15:09:05.604 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.604+0000 7f80142c1700 1 -- start start 2026-03-09T15:09:05.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.604+0000 7f80142c1700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f800c0ff480 0x7f800c194080 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:05.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.604+0000 7f801205d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f800c0ff480 0x7f800c194080 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:05.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.604+0000 7f801205d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f800c0ff480 0x7f800c194080 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:58336/0 (socket says 192.168.123.105:58336) 2026-03-09T15:09:05.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.604+0000 7f80142c1700 1 --2- >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f800c101100 0x7f800c1945c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:05.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.605+0000 7f801185c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f800c101100 0x7f800c1945c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:05.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.605+0000 7f801185c700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f800c101100 0x7f800c1945c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:57336/0 (socket says 192.168.123.105:57336) 2026-03-09T15:09:05.605 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.605+0000 7f801185c700 1 -- 192.168.123.105:0/1314396513 learned_addr learned my addr 192.168.123.105:0/1314396513 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:05.606 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.605+0000 7f80142c1700 1 -- 192.168.123.105:0/1314396513 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f800c194ca0 con 0x7f800c0ff480 2026-03-09T15:09:05.606 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.605+0000 7f80142c1700 1 -- 192.168.123.105:0/1314396513 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f800c198a30 con 0x7f800c101100 2026-03-09T15:09:05.606 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.605+0000 7f801205d700 1 -- 192.168.123.105:0/1314396513 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f800c101100 msgr2=0x7f800c1945c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-09T15:09:05.606 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.605+0000 7f801205d700 1 --2- 192.168.123.105:0/1314396513 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f800c101100 0x7f800c1945c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:05.606 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.605+0000 7f801205d700 1 -- 192.168.123.105:0/1314396513 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f80080097e0 con 0x7f800c0ff480 2026-03-09T15:09:05.607 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.606+0000 7f801205d700 1 --2- 192.168.123.105:0/1314396513 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f800c0ff480 0x7f800c194080 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7f8000007ae0 tx=0x7f8000007df0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:05.608 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.606+0000 7f7fff7fe700 1 -- 192.168.123.105:0/1314396513 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f8000010040 con 0x7f800c0ff480 2026-03-09T15:09:05.608 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.606+0000 7f7fff7fe700 1 -- 192.168.123.105:0/1314396513 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f8000015470 con 0x7f800c0ff480 2026-03-09T15:09:05.608 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.606+0000 7f7fff7fe700 1 -- 192.168.123.105:0/1314396513 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f80000145c0 con 0x7f800c0ff480 2026-03-09T15:09:05.608 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.606+0000 7f80142c1700 1 -- 192.168.123.105:0/1314396513 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f800c198d10 con 0x7f800c0ff480 2026-03-09T15:09:05.608 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.606+0000 7f80142c1700 1 -- 192.168.123.105:0/1314396513 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f800c199260 con 0x7f800c0ff480 2026-03-09T15:09:05.612 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.608+0000 7f7fff7fe700 1 -- 192.168.123.105:0/1314396513 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f8000014830 con 0x7f800c0ff480 2026-03-09T15:09:05.612 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.608+0000 7f80142c1700 1 -- 192.168.123.105:0/1314396513 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f800c103630 con 0x7f800c0ff480 2026-03-09T15:09:05.612 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.608+0000 7f7fff7fe700 1 --2- 192.168.123.105:0/1314396513 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f7ff80778c0 0x7f7ff8079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:05.612 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.609+0000 7f7fff7fe700 1 -- 192.168.123.105:0/1314396513 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f800009a140 con 0x7f800c0ff480 2026-03-09T15:09:05.612 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.609+0000 7f801185c700 1 --2- 192.168.123.105:0/1314396513 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f7ff80778c0 0x7f7ff8079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:05.612 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.609+0000 7f801185c700 1 --2- 192.168.123.105:0/1314396513 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f7ff80778c0 0x7f7ff8079d70 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7f800c1956a0 tx=0x7f8008005fb0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:05.612 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.611+0000 7f7fff7fe700 1 -- 192.168.123.105:0/1314396513 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f8000063890 con 0x7f800c0ff480 2026-03-09T15:09:05.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.764+0000 7f80142c1700 1 -- 192.168.123.105:0/1314396513 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 29, "format": "json"} v 0) v1 -- 0x7f800c04ea50 con 0x7f800c0ff480 2026-03-09T15:09:05.766 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.765+0000 7f7fff7fe700 1 -- 192.168.123.105:0/1314396513 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 29, "format": "json"}]=0 dumped fsmap epoch 29 v35) v1 ==== 107+0+4406 (secure 0 0 0) 0x7f8000062fe0 con 0x7f800c0ff480 2026-03-09T15:09:05.766 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:05.766 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":29,"btime":"2026-03-09T15:06:59:542225+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":23},{"gid":44239,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/2799240855","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2799240855},{"type":"v1","addr":"192.168.123.109:6825","nonce":2799240855}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":25}],"filesystems":[{"mdsmap":{"epoch":29,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:06:58.545499+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":91,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34272},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34272":{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":27,"state":"up:rejoin","state_seq":9,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log 
segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":0,"qdb_cluster":[]},"id":1}]} 2026-03-09T15:09:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.768+0000 7f80142c1700 1 -- 192.168.123.105:0/1314396513 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f7ff80778c0 msgr2=0x7f7ff8079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.768+0000 7f80142c1700 1 --2- 192.168.123.105:0/1314396513 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f7ff80778c0 0x7f7ff8079d70 secure :-1 s=READY pgs=110 cs=0 l=1 rev1=1 crypto rx=0x7f800c1956a0 tx=0x7f8008005fb0 comp rx=0 tx=0).stop 2026-03-09T15:09:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.768+0000 7f80142c1700 1 -- 192.168.123.105:0/1314396513 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f800c0ff480 msgr2=0x7f800c194080 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.768+0000 7f80142c1700 1 --2- 192.168.123.105:0/1314396513 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f800c0ff480 0x7f800c194080 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7f8000007ae0 tx=0x7f8000007df0 comp rx=0 tx=0).stop 2026-03-09T15:09:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.768+0000 7f80142c1700 1 -- 192.168.123.105:0/1314396513 shutdown_connections 2026-03-09T15:09:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.768+0000 7f80142c1700 1 --2- 192.168.123.105:0/1314396513 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f7ff80778c0 0x7f7ff8079d70 unknown :-1 s=CLOSED pgs=110 cs=0 l=1 rev1=1 
crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.768+0000 7f80142c1700 1 --2- 192.168.123.105:0/1314396513 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f800c0ff480 0x7f800c194080 unknown :-1 s=CLOSED pgs=164 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.768+0000 7f80142c1700 1 --2- 192.168.123.105:0/1314396513 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f800c101100 0x7f800c1945c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.768+0000 7f80142c1700 1 -- 192.168.123.105:0/1314396513 >> 192.168.123.105:0/1314396513 conn(0x7f800c0747e0 msgr2=0x7f800c109170 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.768+0000 7f80142c1700 1 -- 192.168.123.105:0/1314396513 shutdown_connections 2026-03-09T15:09:05.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:05.769+0000 7f80142c1700 1 -- 192.168.123.105:0/1314396513 wait complete. 2026-03-09T15:09:05.770 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 29 2026-03-09T15:09:05.842 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 29 2026-03-09T15:09:05.842 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 30 2026-03-09T15:09:06.001 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:06.031 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:05 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/697772315' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 28, "format": "json"}]: dispatch 2026-03-09T15:09:06.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:05 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/697772315' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 28, "format": "json"}]: dispatch 2026-03-09T15:09:06.272 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.271+0000 7f84d57ff700 1 -- 192.168.123.105:0/631612558 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84d00ff4d0 msgr2=0x7f84d00ff8a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:06.272 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.271+0000 7f84d57ff700 1 --2- 192.168.123.105:0/631612558 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84d00ff4d0 0x7f84d00ff8a0 secure :-1 s=READY pgs=165 cs=0 l=1 rev1=1 crypto rx=0x7f84c0009b00 tx=0x7f84c0009e10 comp rx=0 tx=0).stop 2026-03-09T15:09:06.272 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.272+0000 7f84d57ff700 1 -- 192.168.123.105:0/631612558 shutdown_connections 2026-03-09T15:09:06.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.272+0000 7f84d57ff700 1 --2- 192.168.123.105:0/631612558 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f84d00ffde0 0x7f84d01042c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:06.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.272+0000 7f84d57ff700 1 --2- 192.168.123.105:0/631612558 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84d00ff4d0 0x7f84d00ff8a0 unknown :-1 s=CLOSED pgs=165 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:06.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.272+0000 7f84d57ff700 1 -- 192.168.123.105:0/631612558 >> 192.168.123.105:0/631612558 conn(0x7f84d00faf00 
msgr2=0x7f84d00fd310 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:06.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.272+0000 7f84d57ff700 1 -- 192.168.123.105:0/631612558 shutdown_connections 2026-03-09T15:09:06.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.272+0000 7f84d57ff700 1 -- 192.168.123.105:0/631612558 wait complete. 2026-03-09T15:09:06.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.273+0000 7f84d57ff700 1 Processor -- start 2026-03-09T15:09:06.273 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.273+0000 7f84d57ff700 1 -- start start 2026-03-09T15:09:06.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.273+0000 7f84d57ff700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84d00ff4d0 0x7f84d01a2540 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:06.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.273+0000 7f84d57ff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f84d00ffde0 0x7f84d01a2a80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:06.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.273+0000 7f84d57ff700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f84d01a3110 con 0x7f84d00ff4d0 2026-03-09T15:09:06.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.273+0000 7f84d57ff700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7f84d019c5c0 con 0x7f84d00ffde0 2026-03-09T15:09:06.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.273+0000 7f84ce7fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f84d00ffde0 0x7f84d01a2a80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:06.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.273+0000 7f84ce7fc700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f84d00ffde0 0x7f84d01a2a80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:57356/0 (socket says 192.168.123.105:57356) 2026-03-09T15:09:06.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.273+0000 7f84ce7fc700 1 -- 192.168.123.105:0/1266094547 learned_addr learned my addr 192.168.123.105:0/1266094547 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:06.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.273+0000 7f84ceffd700 1 --2- 192.168.123.105:0/1266094547 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84d00ff4d0 0x7f84d01a2540 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:06.274 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.274+0000 7f84ce7fc700 1 -- 192.168.123.105:0/1266094547 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84d00ff4d0 msgr2=0x7f84d01a2540 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:06.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.274+0000 7f84ce7fc700 1 --2- 192.168.123.105:0/1266094547 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84d00ff4d0 0x7f84d01a2540 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:06.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.274+0000 7f84ce7fc700 1 -- 192.168.123.105:0/1266094547 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7f84c00097e0 con 0x7f84d00ffde0 
2026-03-09T15:09:06.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.274+0000 7f84ceffd700 1 --2- 192.168.123.105:0/1266094547 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84d00ff4d0 0x7f84d01a2540 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-09T15:09:06.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.274+0000 7f84ce7fc700 1 --2- 192.168.123.105:0/1266094547 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f84d00ffde0 0x7f84d01a2a80 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f84c400b700 tx=0x7f84c400ba10 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:06.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.274+0000 7f84b7fff700 1 -- 192.168.123.105:0/1266094547 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f84c40107c0 con 0x7f84d00ffde0 2026-03-09T15:09:06.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.274+0000 7f84d57ff700 1 -- 192.168.123.105:0/1266094547 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7f84d019c8a0 con 0x7f84d00ffde0 2026-03-09T15:09:06.275 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.275+0000 7f84d57ff700 1 -- 192.168.123.105:0/1266094547 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7f84d019cdf0 con 0x7f84d00ffde0 2026-03-09T15:09:06.276 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.275+0000 7f84b7fff700 1 -- 192.168.123.105:0/1266094547 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7f84c4010e00 con 0x7f84d00ffde0 2026-03-09T15:09:06.276 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.275+0000 7f84b7fff700 1 -- 192.168.123.105:0/1266094547 <== mon.1 
v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7f84c400f360 con 0x7f84d00ffde0 2026-03-09T15:09:06.277 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.276+0000 7f84b7fff700 1 -- 192.168.123.105:0/1266094547 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7f84c4010920 con 0x7f84d00ffde0 2026-03-09T15:09:06.278 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.276+0000 7f84d57ff700 1 -- 192.168.123.105:0/1266094547 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7f84bc005320 con 0x7f84d00ffde0 2026-03-09T15:09:06.278 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.276+0000 7f84b7fff700 1 --2- 192.168.123.105:0/1266094547 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f84b8077870 0x7f84b8079d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:06.278 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.277+0000 7f84ceffd700 1 --2- 192.168.123.105:0/1266094547 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f84b8077870 0x7f84b8079d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:06.278 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.277+0000 7f84b7fff700 1 -- 192.168.123.105:0/1266094547 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7f84c4099180 con 0x7f84d00ffde0 2026-03-09T15:09:06.278 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.277+0000 7f84ceffd700 1 --2- 192.168.123.105:0/1266094547 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f84b8077870 0x7f84b8079d20 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto 
rx=0x7f84c0005850 tx=0x7f84c000b540 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:06.280 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.279+0000 7f84b7fff700 1 -- 192.168.123.105:0/1266094547 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7f84c40610d0 con 0x7f84d00ffde0 2026-03-09T15:09:06.434 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.433+0000 7f84d57ff700 1 -- 192.168.123.105:0/1266094547 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 30, "format": "json"} v 0) v1 -- 0x7f84bc005190 con 0x7f84d00ffde0 2026-03-09T15:09:06.435 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.434+0000 7f84b7fff700 1 -- 192.168.123.105:0/1266094547 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 30, "format": "json"}]=0 dumped fsmap epoch 30 v35) v1 ==== 107+0+4416 (secure 0 0 0) 0x7f84c4060ef0 con 0x7f84d00ffde0 2026-03-09T15:09:06.435 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:06.436 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":30,"btime":"2026-03-09T15:07:00:677497+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":23},{"gid":44239,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/2799240855","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2799240855},{"type":"v1","addr":"192.168.123.109:6825","nonce":2799240855}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":25}],"filesystems":[{"mdsmap":{"epoch":30,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:07:00.677493+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":91,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34272},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34272":{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":27,"state":"up:active","state_seq":10,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log 
segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34272,"qdb_cluster":[34272]},"id":1}]} 2026-03-09T15:09:06.437 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.437+0000 7f84d57ff700 1 -- 192.168.123.105:0/1266094547 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f84b8077870 msgr2=0x7f84b8079d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:06.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.437+0000 7f84d57ff700 1 --2- 192.168.123.105:0/1266094547 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f84b8077870 0x7f84b8079d20 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f84c0005850 tx=0x7f84c000b540 comp rx=0 tx=0).stop 2026-03-09T15:09:06.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.437+0000 7f84d57ff700 1 -- 192.168.123.105:0/1266094547 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f84d00ffde0 msgr2=0x7f84d01a2a80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:06.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.437+0000 7f84d57ff700 1 --2- 192.168.123.105:0/1266094547 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f84d00ffde0 0x7f84d01a2a80 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f84c400b700 tx=0x7f84c400ba10 comp rx=0 tx=0).stop 2026-03-09T15:09:06.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.437+0000 7f84d57ff700 1 -- 192.168.123.105:0/1266094547 shutdown_connections 2026-03-09T15:09:06.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.437+0000 7f84d57ff700 1 --2- 192.168.123.105:0/1266094547 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7f84b8077870 0x7f84b8079d20 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:06.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.437+0000 7f84d57ff700 1 --2- 192.168.123.105:0/1266094547 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7f84d00ff4d0 0x7f84d01a2540 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:06.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.437+0000 7f84d57ff700 1 --2- 192.168.123.105:0/1266094547 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7f84d00ffde0 0x7f84d01a2a80 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:06.438 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.437+0000 7f84d57ff700 1 -- 192.168.123.105:0/1266094547 >> 192.168.123.105:0/1266094547 conn(0x7f84d00faf00 msgr2=0x7f84d0110940 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:06.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.438+0000 7f84d57ff700 1 -- 192.168.123.105:0/1266094547 shutdown_connections 2026-03-09T15:09:06.439 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.438+0000 7f84d57ff700 1 -- 192.168.123.105:0/1266094547 wait complete. 
2026-03-09T15:09:06.439 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 30 2026-03-09T15:09:06.484 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 30 2026-03-09T15:09:06.484 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 31 2026-03-09T15:09:06.639 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:06.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.922+0000 7fa830b28700 1 -- 192.168.123.105:0/1707293345 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa82c0ff480 msgr2=0x7fa82c100bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:06.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.922+0000 7fa830b28700 1 --2- 192.168.123.105:0/1707293345 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa82c0ff480 0x7fa82c100bc0 secure :-1 s=READY pgs=166 cs=0 l=1 rev1=1 crypto rx=0x7fa81c009b50 tx=0x7fa81c009e60 comp rx=0 tx=0).stop 2026-03-09T15:09:06.925 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:06 vm05.local ceph-mon[116516]: pgmap v217: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:09:06.925 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:06 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/1314396513' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 29, "format": "json"}]: dispatch 2026-03-09T15:09:06.925 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:06 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/1266094547' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 30, "format": "json"}]: dispatch 2026-03-09T15:09:06.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.925+0000 7fa830b28700 1 -- 192.168.123.105:0/1707293345 shutdown_connections 2026-03-09T15:09:06.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.925+0000 7fa830b28700 1 --2- 192.168.123.105:0/1707293345 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa82c101100 0x7fa82c101570 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:06.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.925+0000 7fa830b28700 1 --2- 192.168.123.105:0/1707293345 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa82c0ff480 0x7fa82c100bc0 unknown :-1 s=CLOSED pgs=166 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:06.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.925+0000 7fa830b28700 1 -- 192.168.123.105:0/1707293345 >> 192.168.123.105:0/1707293345 conn(0x7fa82c0747e0 msgr2=0x7fa82c074be0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:06.926 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.925+0000 7fa830b28700 1 -- 192.168.123.105:0/1707293345 shutdown_connections 2026-03-09T15:09:06.926 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.925+0000 7fa830b28700 1 -- 192.168.123.105:0/1707293345 wait complete. 
2026-03-09T15:09:06.926 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.925+0000 7fa830b28700 1 Processor -- start 2026-03-09T15:09:06.926 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.926+0000 7fa830b28700 1 -- start start 2026-03-09T15:09:06.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.926+0000 7fa830b28700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa82c0ff480 0x7fa82c194090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:06.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.926+0000 7fa830b28700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa82c101100 0x7fa82c1945d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:06.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.926+0000 7fa830b28700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa82c194cb0 con 0x7fa82c0ff480 2026-03-09T15:09:06.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.926+0000 7fa830b28700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa82c198a40 con 0x7fa82c101100 2026-03-09T15:09:06.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.926+0000 7fa82a59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa82c0ff480 0x7fa82c194090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:06.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.926+0000 7fa82a59c700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa82c0ff480 0x7fa82c194090 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:58376/0 (socket says 192.168.123.105:58376) 2026-03-09T15:09:06.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.926+0000 7fa82a59c700 1 -- 192.168.123.105:0/3690455980 learned_addr learned my addr 192.168.123.105:0/3690455980 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:06.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.927+0000 7fa82a59c700 1 -- 192.168.123.105:0/3690455980 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa82c101100 msgr2=0x7fa82c1945d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-09T15:09:06.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.927+0000 7fa82a59c700 1 --2- 192.168.123.105:0/3690455980 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa82c101100 0x7fa82c1945d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:06.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.927+0000 7fa82a59c700 1 -- 192.168.123.105:0/3690455980 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa81c0097e0 con 0x7fa82c0ff480 2026-03-09T15:09:06.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.927+0000 7fa82a59c700 1 --2- 192.168.123.105:0/3690455980 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa82c0ff480 0x7fa82c194090 secure :-1 s=READY pgs=167 cs=0 l=1 rev1=1 crypto rx=0x7fa81c004960 tx=0x7fa81c004a40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:06.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.927+0000 7fa8237fe700 1 -- 192.168.123.105:0/3690455980 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa81c01d070 con 0x7fa82c0ff480 2026-03-09T15:09:06.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.927+0000 7fa8237fe700 1 -- 
192.168.123.105:0/3690455980 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fa81c00bb80 con 0x7fa82c0ff480 2026-03-09T15:09:06.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.927+0000 7fa8237fe700 1 -- 192.168.123.105:0/3690455980 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa81c00f7c0 con 0x7fa82c0ff480 2026-03-09T15:09:06.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.927+0000 7fa830b28700 1 -- 192.168.123.105:0/3690455980 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa82c198cc0 con 0x7fa82c0ff480 2026-03-09T15:09:06.928 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.927+0000 7fa830b28700 1 -- 192.168.123.105:0/3690455980 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa82c199210 con 0x7fa82c0ff480 2026-03-09T15:09:06.929 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.929+0000 7fa8237fe700 1 -- 192.168.123.105:0/3690455980 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fa81c00f920 con 0x7fa82c0ff480 2026-03-09T15:09:06.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.929+0000 7fa8237fe700 1 --2- 192.168.123.105:0/3690455980 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa81807bcd0 0x7fa81807e180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:06.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.929+0000 7fa8237fe700 1 -- 192.168.123.105:0/3690455980 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fa81c09c3e0 con 0x7fa82c0ff480 2026-03-09T15:09:06.930 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.930+0000 7fa823fff700 1 --2- 192.168.123.105:0/3690455980 >> 
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa81807bcd0 0x7fa81807e180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:06.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.930+0000 7fa823fff700 1 --2- 192.168.123.105:0/3690455980 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa81807bcd0 0x7fa81807e180 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7fa82c1956b0 tx=0x7fa814006c60 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:06.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.930+0000 7fa830b28700 1 -- 192.168.123.105:0/3690455980 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa82c103730 con 0x7fa82c0ff480 2026-03-09T15:09:06.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:06.933+0000 7fa8237fe700 1 -- 192.168.123.105:0/3690455980 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fa81c06b080 con 0x7fa82c0ff480 2026-03-09T15:09:07.077 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.076+0000 7fa830b28700 1 -- 192.168.123.105:0/3690455980 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 31, "format": "json"} v 0) v1 -- 0x7fa82c066e40 con 0x7fa82c0ff480 2026-03-09T15:09:07.077 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.077+0000 7fa8237fe700 1 -- 192.168.123.105:0/3690455980 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 31, "format": "json"}]=0 dumped fsmap epoch 31 v35) v1 ==== 107+0+5264 (secure 0 0 0) 0x7fa81c027070 con 0x7fa82c0ff480 2026-03-09T15:09:07.078 
INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:07.078 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":31,"btime":"2026-03-09T15:07:03:283836+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":23},{"gid":44239,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/2799240855","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2799240855},{"type":"v1","addr":"192.168.123.109:6825","nonce":2799240855}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on 
dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":25},{"gid":44243,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6827/632428118","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":632428118},{"type":"v1","addr":"192.168.123.109:6827","nonce":632428118}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":31}],"filesystems":[{"mdsmap":{"epoch":30,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:07:00.677493+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":91,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses 
versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34272},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34272":{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":27,"state":"up:active","state_seq":10,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34272,"qdb_cluster":[34272]},"id":1}]} 2026-03-09T15:09:07.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.079+0000 7fa830b28700 1 -- 192.168.123.105:0/3690455980 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa81807bcd0 msgr2=0x7fa81807e180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:07.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.080+0000 7fa830b28700 1 --2- 192.168.123.105:0/3690455980 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa81807bcd0 0x7fa81807e180 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto 
rx=0x7fa82c1956b0 tx=0x7fa814006c60 comp rx=0 tx=0).stop 2026-03-09T15:09:07.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.080+0000 7fa830b28700 1 -- 192.168.123.105:0/3690455980 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa82c0ff480 msgr2=0x7fa82c194090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:07.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.080+0000 7fa830b28700 1 --2- 192.168.123.105:0/3690455980 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa82c0ff480 0x7fa82c194090 secure :-1 s=READY pgs=167 cs=0 l=1 rev1=1 crypto rx=0x7fa81c004960 tx=0x7fa81c004a40 comp rx=0 tx=0).stop 2026-03-09T15:09:07.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.080+0000 7fa830b28700 1 -- 192.168.123.105:0/3690455980 shutdown_connections 2026-03-09T15:09:07.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.080+0000 7fa830b28700 1 --2- 192.168.123.105:0/3690455980 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa81807bcd0 0x7fa81807e180 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:07.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.080+0000 7fa830b28700 1 --2- 192.168.123.105:0/3690455980 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa82c0ff480 0x7fa82c194090 unknown :-1 s=CLOSED pgs=167 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:07.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.080+0000 7fa830b28700 1 --2- 192.168.123.105:0/3690455980 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa82c101100 0x7fa82c1945d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:07.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.080+0000 7fa830b28700 1 -- 192.168.123.105:0/3690455980 >> 
192.168.123.105:0/3690455980 conn(0x7fa82c0747e0 msgr2=0x7fa82c1091e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:07.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.080+0000 7fa830b28700 1 -- 192.168.123.105:0/3690455980 shutdown_connections 2026-03-09T15:09:07.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.081+0000 7fa830b28700 1 -- 192.168.123.105:0/3690455980 wait complete. 2026-03-09T15:09:07.082 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 31 2026-03-09T15:09:07.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:06 vm09.local ceph-mon[98742]: pgmap v217: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:09:07.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:06 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/1314396513' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 29, "format": "json"}]: dispatch 2026-03-09T15:09:07.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:06 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/1266094547' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 30, "format": "json"}]: dispatch 2026-03-09T15:09:07.145 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 31 2026-03-09T15:09:07.145 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 32 2026-03-09T15:09:07.295 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:07.572 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.570+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/1261955626 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa8f00686f0 msgr2=0x7fa8f0068ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:07.572 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.570+0000 7fa8f7cf2700 1 --2- 192.168.123.105:0/1261955626 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa8f00686f0 0x7fa8f0068ac0 secure :-1 s=READY pgs=168 cs=0 l=1 rev1=1 crypto rx=0x7fa8e0009b50 tx=0x7fa8e0009e60 comp rx=0 tx=0).stop 2026-03-09T15:09:07.572 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.571+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/1261955626 shutdown_connections 2026-03-09T15:09:07.572 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.571+0000 7fa8f7cf2700 1 --2- 192.168.123.105:0/1261955626 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa8f0069000 0x7fa8f01051e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:07.572 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.571+0000 7fa8f7cf2700 1 --2- 192.168.123.105:0/1261955626 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa8f00686f0 0x7fa8f0068ac0 unknown :-1 s=CLOSED pgs=168 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T15:09:07.572 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.571+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/1261955626 >> 192.168.123.105:0/1261955626 conn(0x7fa8f00754a0 msgr2=0x7fa8f00758a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:07.572 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.571+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/1261955626 shutdown_connections 2026-03-09T15:09:07.572 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.571+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/1261955626 wait complete. 2026-03-09T15:09:07.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.572+0000 7fa8f7cf2700 1 Processor -- start 2026-03-09T15:09:07.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.572+0000 7fa8f7cf2700 1 -- start start 2026-03-09T15:09:07.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.572+0000 7fa8f7cf2700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa8f00686f0 0x7fa8f0198470 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:07.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.572+0000 7fa8f7cf2700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa8f0069000 0x7fa8f01989b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:07.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.572+0000 7fa8f7cf2700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa8f0199090 con 0x7fa8f0069000 2026-03-09T15:09:07.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.572+0000 7fa8f7cf2700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa8f019ce20 con 0x7fa8f00686f0 2026-03-09T15:09:07.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.572+0000 7fa8f528d700 1 
--2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa8f0069000 0x7fa8f01989b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:07.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.573+0000 7fa8f5a8e700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa8f00686f0 0x7fa8f0198470 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:07.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.573+0000 7fa8f528d700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa8f0069000 0x7fa8f01989b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:58390/0 (socket says 192.168.123.105:58390) 2026-03-09T15:09:07.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.573+0000 7fa8f528d700 1 -- 192.168.123.105:0/97678332 learned_addr learned my addr 192.168.123.105:0/97678332 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:07.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.573+0000 7fa8f528d700 1 -- 192.168.123.105:0/97678332 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa8f00686f0 msgr2=0x7fa8f0198470 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:07.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.573+0000 7fa8f528d700 1 --2- 192.168.123.105:0/97678332 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa8f00686f0 0x7fa8f0198470 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:07.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.573+0000 7fa8f528d700 1 -- 192.168.123.105:0/97678332 --> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa8e00097e0 con 0x7fa8f0069000 2026-03-09T15:09:07.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.573+0000 7fa8f5a8e700 1 --2- 192.168.123.105:0/97678332 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa8f00686f0 0x7fa8f0198470 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-09T15:09:07.574 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.573+0000 7fa8f528d700 1 --2- 192.168.123.105:0/97678332 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa8f0069000 0x7fa8f01989b0 secure :-1 s=READY pgs=169 cs=0 l=1 rev1=1 crypto rx=0x7fa8ec00eb10 tx=0x7fa8ec00eed0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:07.575 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.574+0000 7fa8e6ffd700 1 -- 192.168.123.105:0/97678332 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa8ec00cca0 con 0x7fa8f0069000 2026-03-09T15:09:07.575 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.574+0000 7fa8e6ffd700 1 -- 192.168.123.105:0/97678332 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fa8ec00ce00 con 0x7fa8f0069000 2026-03-09T15:09:07.575 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.574+0000 7fa8e6ffd700 1 -- 192.168.123.105:0/97678332 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa8ec018910 con 0x7fa8f0069000 2026-03-09T15:09:07.575 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.574+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/97678332 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa8f019d100 con 0x7fa8f0069000 2026-03-09T15:09:07.575 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.574+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/97678332 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa8f019d570 con 0x7fa8f0069000 2026-03-09T15:09:07.576 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.575+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/97678332 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa8f0104960 con 0x7fa8f0069000 2026-03-09T15:09:07.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.579+0000 7fa8e6ffd700 1 -- 192.168.123.105:0/97678332 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fa8ec018a70 con 0x7fa8f0069000 2026-03-09T15:09:07.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.579+0000 7fa8e6ffd700 1 --2- 192.168.123.105:0/97678332 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa8dc077990 0x7fa8dc079e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:07.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.579+0000 7fa8e6ffd700 1 -- 192.168.123.105:0/97678332 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fa8ec014070 con 0x7fa8f0069000 2026-03-09T15:09:07.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.580+0000 7fa8e6ffd700 1 -- 192.168.123.105:0/97678332 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fa8ec0631b0 con 0x7fa8f0069000 2026-03-09T15:09:07.580 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.580+0000 7fa8f5a8e700 1 --2- 192.168.123.105:0/97678332 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa8dc077990 0x7fa8dc079e40 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:07.581 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.580+0000 7fa8f5a8e700 1 --2- 192.168.123.105:0/97678332 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa8dc077990 0x7fa8dc079e40 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7fa8e0006010 tx=0x7fa8e000b540 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:07.724 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.723+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/97678332 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 32, "format": "json"} v 0) v1 -- 0x7fa8f0199870 con 0x7fa8f0069000 2026-03-09T15:09:07.725 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.724+0000 7fa8e6ffd700 1 -- 192.168.123.105:0/97678332 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 32, "format": "json"}]=0 dumped fsmap epoch 32 v35) v1 ==== 107+0+5264 (secure 0 0 0) 0x7fa8ec062900 con 0x7fa8f0069000 2026-03-09T15:09:07.725 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:07.725 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":32,"btime":"2026-03-09T15:07:05:968594+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm 
v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":23},{"gid":44239,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/2799240855","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2799240855},{"type":"v1","addr":"192.168.123.109:6825","nonce":2799240855}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":25},{"gid":44243,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6827/632428118","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":632428118},{"type":"v1","addr":"192.168.123.109:6827","nonce":632428118}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":31}],"filesystems":[{"mdsmap":{"epoch":32,"flags":18,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":false,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:07:04.970950+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":91,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34272},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34272":{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":27,"state":"up:active","state_seq":10,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34272,"qdb_cluster":[34272]},"id":1}]} 2026-03-09T15:09:07.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.727+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/97678332 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa8dc077990 msgr2=0x7fa8dc079e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:07.728 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.727+0000 7fa8f7cf2700 1 --2- 192.168.123.105:0/97678332 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa8dc077990 0x7fa8dc079e40 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7fa8e0006010 tx=0x7fa8e000b540 comp rx=0 tx=0).stop 2026-03-09T15:09:07.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.728+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/97678332 >> 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa8f0069000 msgr2=0x7fa8f01989b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:07.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.728+0000 7fa8f7cf2700 1 --2- 192.168.123.105:0/97678332 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa8f0069000 0x7fa8f01989b0 secure :-1 s=READY pgs=169 cs=0 l=1 rev1=1 crypto rx=0x7fa8ec00eb10 tx=0x7fa8ec00eed0 comp rx=0 tx=0).stop 2026-03-09T15:09:07.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.728+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/97678332 shutdown_connections 2026-03-09T15:09:07.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.728+0000 7fa8f7cf2700 1 --2- 192.168.123.105:0/97678332 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa8dc077990 0x7fa8dc079e40 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:07.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.728+0000 7fa8f7cf2700 1 --2- 192.168.123.105:0/97678332 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa8f00686f0 0x7fa8f0198470 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:07.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.728+0000 7fa8f7cf2700 1 --2- 192.168.123.105:0/97678332 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa8f0069000 0x7fa8f01989b0 unknown :-1 s=CLOSED pgs=169 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:07.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.728+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/97678332 >> 192.168.123.105:0/97678332 conn(0x7fa8f00754a0 msgr2=0x7fa8f0101770 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:07.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.728+0000 7fa8f7cf2700 1 -- 
192.168.123.105:0/97678332 shutdown_connections 2026-03-09T15:09:07.729 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:07.728+0000 7fa8f7cf2700 1 -- 192.168.123.105:0/97678332 wait complete. 2026-03-09T15:09:07.730 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 32 2026-03-09T15:09:07.803 DEBUG:tasks.fs:allow_standby_replay disabled in epoch 32 2026-03-09T15:09:07.803 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 33 2026-03-09T15:09:07.829 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:07 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3690455980' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 31, "format": "json"}]: dispatch 2026-03-09T15:09:07.829 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:07 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/97678332' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 32, "format": "json"}]: dispatch 2026-03-09T15:09:07.968 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:08.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:07 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3690455980' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 31, "format": "json"}]: dispatch 2026-03-09T15:09:08.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:07 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/97678332' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 32, "format": "json"}]: dispatch 2026-03-09T15:09:08.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.255+0000 7fac4e520700 1 -- 192.168.123.105:0/3633921966 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fac48108780 msgr2=0x7fac48108b50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:08.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.255+0000 7fac4e520700 1 --2- 192.168.123.105:0/3633921966 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fac48108780 0x7fac48108b50 secure :-1 s=READY pgs=170 cs=0 l=1 rev1=1 crypto rx=0x7fac30009b50 tx=0x7fac30009e60 comp rx=0 tx=0).stop 2026-03-09T15:09:08.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.256+0000 7fac4e520700 1 -- 192.168.123.105:0/3633921966 shutdown_connections 2026-03-09T15:09:08.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.256+0000 7fac4e520700 1 --2- 192.168.123.105:0/3633921966 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fac48102780 0x7fac48102bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:08.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.256+0000 7fac4e520700 1 --2- 192.168.123.105:0/3633921966 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fac48108780 0x7fac48108b50 unknown :-1 s=CLOSED pgs=170 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:08.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.256+0000 7fac4e520700 1 -- 192.168.123.105:0/3633921966 >> 192.168.123.105:0/3633921966 conn(0x7fac480fe280 msgr2=0x7fac48100690 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:08.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.256+0000 7fac4e520700 1 -- 192.168.123.105:0/3633921966 shutdown_connections 2026-03-09T15:09:08.257 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.256+0000 7fac4e520700 1 -- 192.168.123.105:0/3633921966 wait complete. 2026-03-09T15:09:08.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.257+0000 7fac4e520700 1 Processor -- start 2026-03-09T15:09:08.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.257+0000 7fac4e520700 1 -- start start 2026-03-09T15:09:08.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.257+0000 7fac4e520700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fac48102780 0x7fac481982d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:08.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.257+0000 7fac4e520700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fac48108780 0x7fac48198810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:08.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.257+0000 7fac4e520700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fac48198ef0 con 0x7fac48108780 2026-03-09T15:09:08.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.257+0000 7fac4e520700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fac4819cc30 con 0x7fac48102780 2026-03-09T15:09:08.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.257+0000 7fac47fff700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fac48102780 0x7fac481982d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:08.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.258+0000 7fac477fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fac48108780 0x7fac48198810 unknown :-1 
s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:08.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.258+0000 7fac477fe700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fac48108780 0x7fac48198810 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:58400/0 (socket says 192.168.123.105:58400) 2026-03-09T15:09:08.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.258+0000 7fac477fe700 1 -- 192.168.123.105:0/1289811541 learned_addr learned my addr 192.168.123.105:0/1289811541 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:08.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.258+0000 7fac477fe700 1 -- 192.168.123.105:0/1289811541 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fac48102780 msgr2=0x7fac481982d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:08.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.258+0000 7fac477fe700 1 --2- 192.168.123.105:0/1289811541 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fac48102780 0x7fac481982d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:08.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.258+0000 7fac477fe700 1 -- 192.168.123.105:0/1289811541 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fac300097e0 con 0x7fac48108780 2026-03-09T15:09:08.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.258+0000 7fac477fe700 1 --2- 192.168.123.105:0/1289811541 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fac48108780 0x7fac48198810 secure :-1 s=READY pgs=171 cs=0 l=1 rev1=1 crypto rx=0x7fac3800eb10 
tx=0x7fac3800ee20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:08.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.258+0000 7fac457fa700 1 -- 192.168.123.105:0/1289811541 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fac3800cc40 con 0x7fac48108780 2026-03-09T15:09:08.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.259+0000 7fac4e520700 1 -- 192.168.123.105:0/1289811541 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fac4819cf10 con 0x7fac48108780 2026-03-09T15:09:08.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.259+0000 7fac4e520700 1 -- 192.168.123.105:0/1289811541 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fac4819d460 con 0x7fac48108780 2026-03-09T15:09:08.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.259+0000 7fac4e520700 1 -- 192.168.123.105:0/1289811541 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fac4804ea50 con 0x7fac48108780 2026-03-09T15:09:08.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.259+0000 7fac457fa700 1 -- 192.168.123.105:0/1289811541 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fac3800cda0 con 0x7fac48108780 2026-03-09T15:09:08.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.259+0000 7fac457fa700 1 -- 192.168.123.105:0/1289811541 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fac38018810 con 0x7fac48108780 2026-03-09T15:09:08.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.261+0000 7fac457fa700 1 -- 192.168.123.105:0/1289811541 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fac38018a80 con 
0x7fac48108780 2026-03-09T15:09:08.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.261+0000 7fac457fa700 1 --2- 192.168.123.105:0/1289811541 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fac34077910 0x7fac34079dc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:08.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.261+0000 7fac457fa700 1 -- 192.168.123.105:0/1289811541 <== mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fac38014070 con 0x7fac48108780 2026-03-09T15:09:08.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.263+0000 7fac47fff700 1 --2- 192.168.123.105:0/1289811541 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fac34077910 0x7fac34079dc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:08.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.264+0000 7fac457fa700 1 -- 192.168.123.105:0/1289811541 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fac38064720 con 0x7fac48108780 2026-03-09T15:09:08.265 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.264+0000 7fac47fff700 1 --2- 192.168.123.105:0/1289811541 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fac34077910 0x7fac34079dc0 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7fac30005850 tx=0x7fac3000b540 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:08.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.405+0000 7fac4e520700 1 -- 192.168.123.105:0/1289811541 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": 
"fs dump", "epoch": 33, "format": "json"} v 0) v1 -- 0x7fac48066e40 con 0x7fac48108780 2026-03-09T15:09:08.407 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.407+0000 7fac457fa700 1 -- 192.168.123.105:0/1289811541 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 33, "format": "json"}]=0 dumped fsmap epoch 33 v35) v1 ==== 107+0+5263 (secure 0 0 0) 0x7fac38063e70 con 0x7fac48108780 2026-03-09T15:09:08.409 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:08.409 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":33,"btime":"2026-03-09T15:07:06:994652+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":23},{"gid":44239,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/2799240855","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2799240855},{"type":"v1","addr":"192.168.123.109:6825","nonce":2799240855}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":25},{"gid":44243,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6827/632428118","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":632428118},{"type":"v1","addr":"192.168.123.109:6827","nonce":632428118}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}},"epoch":31}],"filesystems":[{"mdsmap":{"epoch":33,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:07:05.976380+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":91,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34272},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34272":{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":27,"state":"up:active","state_seq":10,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log 
segments","feature_12":"quiesce subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34272,"qdb_cluster":[34272]},"id":1}]} 2026-03-09T15:09:08.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.410+0000 7fac4e520700 1 -- 192.168.123.105:0/1289811541 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fac34077910 msgr2=0x7fac34079dc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:08.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.410+0000 7fac4e520700 1 --2- 192.168.123.105:0/1289811541 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fac34077910 0x7fac34079dc0 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7fac30005850 tx=0x7fac3000b540 comp rx=0 tx=0).stop 2026-03-09T15:09:08.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.411+0000 7fac4e520700 1 -- 192.168.123.105:0/1289811541 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fac48108780 msgr2=0x7fac48198810 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:08.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.411+0000 7fac4e520700 1 --2- 192.168.123.105:0/1289811541 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fac48108780 0x7fac48198810 secure :-1 s=READY pgs=171 cs=0 l=1 rev1=1 crypto rx=0x7fac3800eb10 tx=0x7fac3800ee20 comp rx=0 tx=0).stop 2026-03-09T15:09:08.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.411+0000 7fac4e520700 1 -- 192.168.123.105:0/1289811541 shutdown_connections 2026-03-09T15:09:08.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.411+0000 7fac4e520700 1 --2- 192.168.123.105:0/1289811541 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fac34077910 0x7fac34079dc0 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:08.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.411+0000 7fac4e520700 1 --2- 192.168.123.105:0/1289811541 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fac48102780 0x7fac481982d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:08.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.411+0000 7fac4e520700 1 --2- 192.168.123.105:0/1289811541 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fac48108780 0x7fac48198810 unknown :-1 s=CLOSED pgs=171 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:08.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.411+0000 7fac4e520700 1 -- 192.168.123.105:0/1289811541 >> 192.168.123.105:0/1289811541 conn(0x7fac480fe280 msgr2=0x7fac480ff9a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:08.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.412+0000 7fac4e520700 1 -- 192.168.123.105:0/1289811541 shutdown_connections 2026-03-09T15:09:08.412 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.412+0000 7fac4e520700 1 -- 192.168.123.105:0/1289811541 wait complete. 
2026-03-09T15:09:08.413 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 33 2026-03-09T15:09:08.477 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph fs dump --format=json 34 2026-03-09T15:09:08.678 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:08.973 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.971+0000 7fa154bb5700 1 -- 192.168.123.105:0/2999007810 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa150105df0 msgr2=0x7fa1501005c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:08.973 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.971+0000 7fa154bb5700 1 --2- 192.168.123.105:0/2999007810 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa150105df0 0x7fa1501005c0 secure :-1 s=READY pgs=172 cs=0 l=1 rev1=1 crypto rx=0x7fa140009b00 tx=0x7fa140009e10 comp rx=0 tx=0).stop 2026-03-09T15:09:08.973 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:08 vm05.local ceph-mon[116516]: pgmap v218: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:09:08.973 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:08 vm05.local ceph-mon[116516]: from='client.? 
192.168.123.105:0/1289811541' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 33, "format": "json"}]: dispatch 2026-03-09T15:09:08.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.972+0000 7fa154bb5700 1 -- 192.168.123.105:0/2999007810 shutdown_connections 2026-03-09T15:09:08.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.972+0000 7fa154bb5700 1 --2- 192.168.123.105:0/2999007810 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa150105df0 0x7fa1501005c0 unknown :-1 s=CLOSED pgs=172 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:08.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.972+0000 7fa154bb5700 1 --2- 192.168.123.105:0/2999007810 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa1501054e0 0x7fa1501058b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:08.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.972+0000 7fa154bb5700 1 -- 192.168.123.105:0/2999007810 >> 192.168.123.105:0/2999007810 conn(0x7fa150078580 msgr2=0x7fa150078980 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:08.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.973+0000 7fa154bb5700 1 -- 192.168.123.105:0/2999007810 shutdown_connections 2026-03-09T15:09:08.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.973+0000 7fa154bb5700 1 -- 192.168.123.105:0/2999007810 wait complete. 
2026-03-09T15:09:08.975 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.974+0000 7fa154bb5700 1 Processor -- start 2026-03-09T15:09:08.975 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.974+0000 7fa154bb5700 1 -- start start 2026-03-09T15:09:08.975 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.974+0000 7fa154bb5700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa1501054e0 0x7fa150196190 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:08.975 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.974+0000 7fa154bb5700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa1501966d0 0x7fa15019ab40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:08.975 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.974+0000 7fa154bb5700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa150196ce0 con 0x7fa1501966d0 2026-03-09T15:09:08.975 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.974+0000 7fa154bb5700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa150196e50 con 0x7fa1501054e0 2026-03-09T15:09:08.975 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.975+0000 7fa14dd9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa1501966d0 0x7fa15019ab40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:08.976 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.975+0000 7fa14dd9b700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa1501966d0 0x7fa15019ab40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am 
v2:192.168.123.105:50734/0 (socket says 192.168.123.105:50734) 2026-03-09T15:09:08.976 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.975+0000 7fa14dd9b700 1 -- 192.168.123.105:0/539520353 learned_addr learned my addr 192.168.123.105:0/539520353 (peer_addr_for_me v2:192.168.123.105:0/0) 2026-03-09T15:09:08.976 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.975+0000 7fa14dd9b700 1 -- 192.168.123.105:0/539520353 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa1501054e0 msgr2=0x7fa150196190 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:08.976 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.975+0000 7fa14dd9b700 1 --2- 192.168.123.105:0/539520353 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa1501054e0 0x7fa150196190 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:08.976 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.975+0000 7fa14dd9b700 1 -- 192.168.123.105:0/539520353 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa1400097e0 con 0x7fa1501966d0 2026-03-09T15:09:08.976 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.975+0000 7fa14dd9b700 1 --2- 192.168.123.105:0/539520353 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa1501966d0 0x7fa15019ab40 secure :-1 s=READY pgs=173 cs=0 l=1 rev1=1 crypto rx=0x7fa140004960 tx=0x7fa140004a40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:08.976 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.975+0000 7fa1477fe700 1 -- 192.168.123.105:0/539520353 <== mon.0 v2:192.168.123.105:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa14001d070 con 0x7fa1501966d0 2026-03-09T15:09:08.977 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.976+0000 7fa154bb5700 1 -- 
192.168.123.105:0/539520353 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa15019b0e0 con 0x7fa1501966d0 2026-03-09T15:09:08.977 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.976+0000 7fa154bb5700 1 -- 192.168.123.105:0/539520353 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fa15019b630 con 0x7fa1501966d0 2026-03-09T15:09:08.978 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.976+0000 7fa1477fe700 1 -- 192.168.123.105:0/539520353 <== mon.0 v2:192.168.123.105:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fa14000bc50 con 0x7fa1501966d0 2026-03-09T15:09:08.980 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.976+0000 7fa154bb5700 1 -- 192.168.123.105:0/539520353 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa15004ea50 con 0x7fa1501966d0 2026-03-09T15:09:08.980 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.978+0000 7fa1477fe700 1 -- 192.168.123.105:0/539520353 <== mon.0 v2:192.168.123.105:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa1400216b0 con 0x7fa1501966d0 2026-03-09T15:09:08.980 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.978+0000 7fa1477fe700 1 -- 192.168.123.105:0/539520353 <== mon.0 v2:192.168.123.105:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fa14002b430 con 0x7fa1501966d0 2026-03-09T15:09:08.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.979+0000 7fa1477fe700 1 --2- 192.168.123.105:0/539520353 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa13c0779a0 0x7fa13c079e50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-09T15:09:08.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.979+0000 7fa1477fe700 1 -- 192.168.123.105:0/539520353 <== 
mon.0 v2:192.168.123.105:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fa14009c220 con 0x7fa1501966d0 2026-03-09T15:09:08.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.981+0000 7fa1477fe700 1 -- 192.168.123.105:0/539520353 <== mon.0 v2:192.168.123.105:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fa1400648c0 con 0x7fa1501966d0 2026-03-09T15:09:08.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.981+0000 7fa14e59c700 1 --2- 192.168.123.105:0/539520353 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa13c0779a0 0x7fa13c079e50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-09T15:09:08.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:08.981+0000 7fa14e59c700 1 --2- 192.168.123.105:0/539520353 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa13c0779a0 0x7fa13c079e50 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fa1501017d0 tx=0x7fa138005ca0 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-09T15:09:09.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:08 vm09.local ceph-mon[98742]: pgmap v218: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:09:09.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:08 vm09.local ceph-mon[98742]: from='client.? 
192.168.123.105:0/1289811541' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 33, "format": "json"}]: dispatch 2026-03-09T15:09:09.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.126+0000 7fa154bb5700 1 -- 192.168.123.105:0/539520353 --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_command({"prefix": "fs dump", "epoch": 34, "format": "json"} v 0) v1 -- 0x7fa150066e40 con 0x7fa1501966d0 2026-03-09T15:09:09.128 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.127+0000 7fa1477fe700 1 -- 192.168.123.105:0/539520353 <== mon.0 v2:192.168.123.105:3300/0 7 ==== mon_command_ack([{"prefix": "fs dump", "epoch": 34, "format": "json"}]=0 dumped fsmap epoch 34 v35) v1 ==== 107+0+5270 (secure 0 0 0) 0x7fa140064010 con 0x7fa1501966d0 2026-03-09T15:09:09.128 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:09:09.128 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":34,"btime":"2026-03-09T15:07:07:008677+0000","default_fscid":1,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2"}},"feature_flags":{"enable_multiple":true,"ever_enabled_multiple":true},"standbys":[{"gid":44239,"name":"cephfs.vm09.ohmitn","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6825/2799240855","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":2799240855},{"type":"v1","addr":"192.168.123.109:6825","nonce":2799240855}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate 
object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":25},{"gid":44243,"name":"cephfs.vm09.jrhwzz","rank":-1,"incarnation":0,"state":"up:standby","state_seq":1,"addr":"192.168.123.109:6827/632428118","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":632428118},{"type":"v1","addr":"192.168.123.109:6827","nonce":632428118}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"epoch":31}],"filesystems":[{"mdsmap":{"epoch":34,"flags":50,"flags_state":{"joinable":true,"allow_snaps":true,"allow_multimds_snaps":true,"allow_standby_replay":true,"refuse_client_session":false,"refuse_standby_for_another_fs":false,"balance_automate":false},"ever_allowed_features":32,"explicitly_allowed_features":32,"created":"2026-03-09T14:58:23.182447+0000","modified":"2026-03-09T15:07:07.008672+0000","tableserver":0,"root":0,"session_timeout":60,"session_autoclose":300,"required_client_features":{},"max_file_size":1099511627776,"max_xattr_size":65536,"last_failure":0,"last_failure_osd_epoch":91,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag 
is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}},"max_mds":1,"in":[0],"up":{"mds_0":34272},"failed":[],"damaged":[],"stopped":[],"info":{"gid_34272":{"gid":34272,"name":"cephfs.vm05.nrocqt","rank":0,"incarnation":27,"state":"up:active","state_seq":10,"addr":"192.168.123.105:6827/3005307080","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3005307080},{"type":"v1","addr":"192.168.123.105:6827","nonce":3005307080}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce subvolumes"}}},"gid_34292":{"gid":34292,"name":"cephfs.vm05.rrcyql","rank":0,"incarnation":0,"state":"up:standby-replay","state_seq":1,"addr":"192.168.123.105:6829/3529134522","addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3529134522},{"type":"v1","addr":"192.168.123.105:6829","nonce":3529134522}]},"join_fscid":1,"export_targets":[],"features":4540701547738038271,"flags":0,"compat":{"compat":{},"ro_compat":{},"incompat":{"feature_1":"base v0.20","feature_2":"client writeable ranges","feature_3":"default file layouts on dirs","feature_4":"dir inode in separate object","feature_5":"mds uses versioned encoding","feature_6":"dirfrag is stored in omap","feature_7":"mds uses inline data","feature_8":"no anchor table","feature_9":"file layout v2","feature_10":"snaprealm v2","feature_11":"minor log segments","feature_12":"quiesce 
subvolumes"}}}},"data_pools":[3],"metadata_pool":2,"enabled":true,"fs_name":"cephfs","balancer":"","bal_rank_mask":"-1","standby_count_wanted":1,"qdb_leader":34272,"qdb_cluster":[34272]},"id":1}]} 2026-03-09T15:09:09.131 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.130+0000 7fa154bb5700 1 -- 192.168.123.105:0/539520353 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa13c0779a0 msgr2=0x7fa13c079e50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:09.131 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.130+0000 7fa154bb5700 1 --2- 192.168.123.105:0/539520353 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa13c0779a0 0x7fa13c079e50 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fa1501017d0 tx=0x7fa138005ca0 comp rx=0 tx=0).stop 2026-03-09T15:09:09.131 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.131+0000 7fa154bb5700 1 -- 192.168.123.105:0/539520353 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa1501966d0 msgr2=0x7fa15019ab40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:09.131 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.131+0000 7fa154bb5700 1 --2- 192.168.123.105:0/539520353 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa1501966d0 0x7fa15019ab40 secure :-1 s=READY pgs=173 cs=0 l=1 rev1=1 crypto rx=0x7fa140004960 tx=0x7fa140004a40 comp rx=0 tx=0).stop 2026-03-09T15:09:09.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.131+0000 7fa154bb5700 1 -- 192.168.123.105:0/539520353 shutdown_connections 2026-03-09T15:09:09.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.131+0000 7fa154bb5700 1 --2- 192.168.123.105:0/539520353 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa13c0779a0 0x7fa13c079e50 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-09T15:09:09.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.131+0000 7fa154bb5700 1 --2- 192.168.123.105:0/539520353 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa1501054e0 0x7fa150196190 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:09.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.131+0000 7fa154bb5700 1 --2- 192.168.123.105:0/539520353 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa1501966d0 0x7fa15019ab40 unknown :-1 s=CLOSED pgs=173 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:09.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.131+0000 7fa154bb5700 1 -- 192.168.123.105:0/539520353 >> 192.168.123.105:0/539520353 conn(0x7fa150078580 msgr2=0x7fa1500fe060 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-09T15:09:09.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.131+0000 7fa154bb5700 1 -- 192.168.123.105:0/539520353 shutdown_connections 2026-03-09T15:09:09.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.131+0000 7fa154bb5700 1 -- 192.168.123.105:0/539520353 wait complete. 2026-03-09T15:09:09.133 INFO:teuthology.orchestra.run.vm05.stderr:dumped fsmap epoch 34 2026-03-09T15:09:09.181 DEBUG:teuthology.run_tasks:Unwinding manager ceph-fuse 2026-03-09T15:09:09.185 INFO:tasks.ceph_fuse:Unmounting ceph-fuse clients... 
2026-03-09T15:09:09.185 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T15:09:09.185 DEBUG:teuthology.orchestra.run.vm05:> dd if=/proc/self/mounts of=/dev/stdout 2026-03-09T15:09:09.203 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-09T15:09:09.203 DEBUG:teuthology.orchestra.run.vm05:> dd if=/proc/self/mounts of=/dev/stdout 2026-03-09T15:09:09.258 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd blocklist ls 2026-03-09T15:09:09.464 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config 2026-03-09T15:09:09.745 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.743+0000 7fe424898700 1 -- 192.168.123.105:0/2718293802 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe41c102780 msgr2=0x7fe41c102bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:09.745 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.743+0000 7fe424898700 1 --2- 192.168.123.105:0/2718293802 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe41c102780 0x7fe41c102bf0 secure :-1 s=READY pgs=174 cs=0 l=1 rev1=1 crypto rx=0x7fe410009b00 tx=0x7fe410009e10 comp rx=0 tx=0).stop 2026-03-09T15:09:09.745 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.744+0000 7fe424898700 1 -- 192.168.123.105:0/2718293802 shutdown_connections 2026-03-09T15:09:09.745 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.744+0000 7fe424898700 1 --2- 192.168.123.105:0/2718293802 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe41c102780 0x7fe41c102bf0 unknown :-1 s=CLOSED pgs=174 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:09.745 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.744+0000 7fe424898700 1 --2- 192.168.123.105:0/2718293802 >> 
[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe41c108780 0x7fe41c108b50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:09:09.745 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.744+0000 7fe424898700 1 -- 192.168.123.105:0/2718293802 >> 192.168.123.105:0/2718293802 conn(0x7fe41c0fe280 msgr2=0x7fe41c100690 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:09:09.745 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.744+0000 7fe424898700 1 -- 192.168.123.105:0/2718293802 shutdown_connections
2026-03-09T15:09:09.745 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.744+0000 7fe424898700 1 -- 192.168.123.105:0/2718293802 wait complete.
2026-03-09T15:09:09.746 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.745+0000 7fe424898700 1 Processor -- start
2026-03-09T15:09:09.746 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.745+0000 7fe424898700 1 -- start start
2026-03-09T15:09:09.746 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.745+0000 7fe424898700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe41c102780 0x7fe41c1983c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:09:09.746 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.745+0000 7fe424898700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe41c108780 0x7fe41c198900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:09:09.746 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.745+0000 7fe424898700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fe41c198fe0 con 0x7fe41c102780
2026-03-09T15:09:09.746 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.745+0000 7fe424898700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap
magic: 0 v1 -- 0x7fe41c19cd70 con 0x7fe41c108780
2026-03-09T15:09:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.746+0000 7fe422634700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe41c102780 0x7fe41c1983c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:09:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.746+0000 7fe422634700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe41c102780 0x7fe41c1983c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.105:3300/0 says I am v2:192.168.123.105:50762/0 (socket says 192.168.123.105:50762)
2026-03-09T15:09:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.746+0000 7fe422634700 1 -- 192.168.123.105:0/3895870121 learned_addr learned my addr 192.168.123.105:0/3895870121 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T15:09:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.746+0000 7fe421e33700 1 --2- 192.168.123.105:0/3895870121 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe41c108780 0x7fe41c198900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:09:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.746+0000 7fe421e33700 1 -- 192.168.123.105:0/3895870121 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe41c102780 msgr2=0x7fe41c1983c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:09:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.747+0000 7fe421e33700 1 --2- 192.168.123.105:0/3895870121 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe41c102780 0x7fe41c1983c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:09:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.747+0000 7fe421e33700 1 -- 192.168.123.105:0/3895870121 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fe4100097e0 con 0x7fe41c108780
2026-03-09T15:09:09.747 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.747+0000 7fe422634700 1 --2- 192.168.123.105:0/3895870121 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe41c102780 0x7fe41c1983c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-09T15:09:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.747+0000 7fe421e33700 1 --2- 192.168.123.105:0/3895870121 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe41c108780 0x7fe41c198900 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7fe410009b00 tx=0x7fe4100048c0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:09:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.747+0000 7fe4177fe700 1 -- 192.168.123.105:0/3895870121 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe41001d070 con 0x7fe41c108780
2026-03-09T15:09:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.747+0000 7fe424898700 1 -- 192.168.123.105:0/3895870121 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fe41c19cff0 con 0x7fe41c108780
2026-03-09T15:09:09.748 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.747+0000 7fe424898700 1 -- 192.168.123.105:0/3895870121 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x7fe41c19d560 con 0x7fe41c108780
2026-03-09T15:09:09.749 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.748+0000 7fe4177fe700 1 --
192.168.123.105:0/3895870121 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fe410022470 con 0x7fe41c108780
2026-03-09T15:09:09.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.748+0000 7fe4177fe700 1 -- 192.168.123.105:0/3895870121 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fe41000f670 con 0x7fe41c108780
2026-03-09T15:09:09.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.748+0000 7fe424898700 1 -- 192.168.123.105:0/3895870121 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fe41c04ea50 con 0x7fe41c108780
2026-03-09T15:09:09.750 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.749+0000 7fe4177fe700 1 -- 192.168.123.105:0/3895870121 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fe41000f7d0 con 0x7fe41c108780
2026-03-09T15:09:09.751 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.750+0000 7fe4177fe700 1 --2- 192.168.123.105:0/3895870121 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe40c0778c0 0x7fe40c079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:09:09.751 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.750+0000 7fe422634700 1 --2- 192.168.123.105:0/3895870121 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe40c0778c0 0x7fe40c079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:09:09.751 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.751+0000 7fe422634700 1 --2- 192.168.123.105:0/3895870121 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe40c0778c0 0x7fe40c079d70 secure :-1 s=READY pgs=116 cs=0
l=1 rev1=1 crypto rx=0x7fe40800e330 tx=0x7fe408009430 comp rx=0 tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:09:09.752 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.751+0000 7fe4177fe700 1 -- 192.168.123.105:0/3895870121 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fe41009b8e0 con 0x7fe41c108780
2026-03-09T15:09:09.754 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.753+0000 7fe4177fe700 1 -- 192.168.123.105:0/3895870121 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fe410064030 con 0x7fe41c108780
2026-03-09T15:09:09.890 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:09 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/539520353' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 34, "format": "json"}]: dispatch
2026-03-09T15:09:09.890 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.887+0000 7fe424898700 1 -- 192.168.123.105:0/3895870121 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd blocklist ls"} v 0) v1 -- 0x7fe41c066e40 con 0x7fe41c108780
2026-03-09T15:09:09.890 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.888+0000 7fe4177fe700 1 -- 192.168.123.105:0/3895870121 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd blocklist ls"}]=0 listed 37 entries v93) v1 ==== 81+0+2269 (secure 0 0 0) 0x7fe4100278f0 con 0x7fe41c108780
2026-03-09T15:09:09.890 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6828/1321316558 2026-03-10T15:06:35.379877+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:6826/2393799497 2026-03-10T15:06:54.367540+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6826/2659122886 2026-03-10T15:06:24.405122+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3523930106 2026-03-10T14:55:24.676889+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/1763872844 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/391473931 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:6827/2393799497 2026-03-10T15:06:54.367540+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3749740799 2026-03-10T15:01:38.080422+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3215275472 2026-03-10T14:55:24.676889+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/883441699 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:6828/2887506718 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/4048863196 2026-03-10T14:55:24.676889+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6827/2659122886 2026-03-10T15:06:24.405122+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/116363292 2026-03-10T14:55:39.031351+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3014060932 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:0/487816467 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6801/2 2026-03-10T14:55:24.676889+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:0/3419397818 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/2507132558
2026-03-10T14:56:37.039546+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6800/2 2026-03-10T14:55:24.676889+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:0/1461298959 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:0/1474624475 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3660404579 2026-03-10T15:01:38.080422+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/2152893737 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/2983840049 2026-03-10T15:01:38.080422+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3823132518 2026-03-10T14:56:37.039546+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:6829/2887506718 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/2534607224 2026-03-10T15:01:38.080422+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/1030329164 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6801/456689610 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:0/1789491850 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/2546376349 2026-03-10T14:56:37.039546+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:0/1721752219 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6829/1321316558 2026-03-10T15:06:35.379877+0000
2026-03-09T15:09:09.891
INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6800/456689610 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/2312625867 2026-03-10T14:55:39.031351+0000
2026-03-09T15:09:09.891 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3432926156 2026-03-10T14:55:39.031351+0000
2026-03-09T15:09:09.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.893+0000 7fe424898700 1 -- 192.168.123.105:0/3895870121 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe40c0778c0 msgr2=0x7fe40c079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:09:09.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.893+0000 7fe424898700 1 --2- 192.168.123.105:0/3895870121 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe40c0778c0 0x7fe40c079d70 secure :-1 s=READY pgs=116 cs=0 l=1 rev1=1 crypto rx=0x7fe40800e330 tx=0x7fe408009430 comp rx=0 tx=0).stop
2026-03-09T15:09:09.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.893+0000 7fe424898700 1 -- 192.168.123.105:0/3895870121 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe41c108780 msgr2=0x7fe41c198900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:09:09.894 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.894+0000 7fe424898700 1 --2- 192.168.123.105:0/3895870121 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe41c108780 0x7fe41c198900 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7fe410009b00 tx=0x7fe4100048c0 comp rx=0 tx=0).stop
2026-03-09T15:09:09.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.894+0000 7fe424898700 1 -- 192.168.123.105:0/3895870121 shutdown_connections
2026-03-09T15:09:09.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.894+0000 7fe424898700 1 --2- 192.168.123.105:0/3895870121 >>
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fe40c0778c0 0x7fe40c079d70 unknown :-1 s=CLOSED pgs=116 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:09:09.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.894+0000 7fe424898700 1 --2- 192.168.123.105:0/3895870121 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fe41c102780 0x7fe41c1983c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:09:09.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.894+0000 7fe424898700 1 --2- 192.168.123.105:0/3895870121 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fe41c108780 0x7fe41c198900 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:09:09.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.894+0000 7fe424898700 1 -- 192.168.123.105:0/3895870121 >> 192.168.123.105:0/3895870121 conn(0x7fe41c0fe280 msgr2=0x7fe41c0ffa10 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:09:09.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.894+0000 7fe424898700 1 -- 192.168.123.105:0/3895870121 shutdown_connections
2026-03-09T15:09:09.895 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:09.895+0000 7fe424898700 1 -- 192.168.123.105:0/3895870121 wait complete.
2026-03-09T15:09:09.897 INFO:teuthology.orchestra.run.vm05.stderr:listed 37 entries
2026-03-09T15:09:09.967 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-09T15:09:09.967 DEBUG:teuthology.orchestra.run.vm05:> dd if=/proc/self/mounts of=/dev/stdout
2026-03-09T15:09:09.984 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph osd blocklist ls
2026-03-09T15:09:10.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:09 vm09.local ceph-mon[98742]: from='client.?
192.168.123.105:0/539520353' entity='client.admin' cmd=[{"prefix": "fs dump", "epoch": 34, "format": "json"}]: dispatch
2026-03-09T15:09:10.207 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config
2026-03-09T15:09:10.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.512+0000 7fa2434d0700 1 -- 192.168.123.105:0/2783285525 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa23c073a00 msgr2=0x7fa23c110ff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:09:10.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.512+0000 7fa2434d0700 1 --2- 192.168.123.105:0/2783285525 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa23c073a00 0x7fa23c110ff0 secure :-1 s=READY pgs=175 cs=0 l=1 rev1=1 crypto rx=0x7fa238009b50 tx=0x7fa238009e60 comp rx=0 tx=0).stop
2026-03-09T15:09:10.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.513+0000 7fa2434d0700 1 -- 192.168.123.105:0/2783285525 shutdown_connections
2026-03-09T15:09:10.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.513+0000 7fa2434d0700 1 --2- 192.168.123.105:0/2783285525 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa23c073a00 0x7fa23c110ff0 unknown :-1 s=CLOSED pgs=175 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:09:10.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.513+0000 7fa2434d0700 1 --2- 192.168.123.105:0/2783285525 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa23c0730f0 0x7fa23c0734c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:09:10.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.513+0000 7fa2434d0700 1 -- 192.168.123.105:0/2783285525 >> 192.168.123.105:0/2783285525 conn(0x7fa23c0fc000 msgr2=0x7fa23c0fe410 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:09:10.514
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.513+0000 7fa2434d0700 1 -- 192.168.123.105:0/2783285525 shutdown_connections
2026-03-09T15:09:10.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.513+0000 7fa2434d0700 1 -- 192.168.123.105:0/2783285525 wait complete.
2026-03-09T15:09:10.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.514+0000 7fa2434d0700 1 Processor -- start
2026-03-09T15:09:10.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.514+0000 7fa2434d0700 1 -- start start
2026-03-09T15:09:10.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.514+0000 7fa2434d0700 1 --2- >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa23c0730f0 0x7fa23c1a2550 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:09:10.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.515+0000 7fa2434d0700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa23c073a00 0x7fa23c1a2a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:09:10.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.515+0000 7fa2434d0700 1 -- --> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa23c1a3120 con 0x7fa23c0730f0
2026-03-09T15:09:10.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.515+0000 7fa2434d0700 1 -- --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_getmap magic: 0 v1 -- 0x7fa23c19c5d0 con 0x7fa23c073a00
2026-03-09T15:09:10.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.515+0000 7fa240a6b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa23c073a00 0x7fa23c1a2a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:09:10.516
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.515+0000 7fa240a6b700 1 --2- >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa23c073a00 0x7fa23c1a2a90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.109:3300/0 says I am v2:192.168.123.105:43392/0 (socket says 192.168.123.105:43392)
2026-03-09T15:09:10.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.515+0000 7fa240a6b700 1 -- 192.168.123.105:0/1623189866 learned_addr learned my addr 192.168.123.105:0/1623189866 (peer_addr_for_me v2:192.168.123.105:0/0)
2026-03-09T15:09:10.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.515+0000 7fa24126c700 1 --2- 192.168.123.105:0/1623189866 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa23c0730f0 0x7fa23c1a2550 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:09:10.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.515+0000 7fa240a6b700 1 -- 192.168.123.105:0/1623189866 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa23c0730f0 msgr2=0x7fa23c1a2550 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-09T15:09:10.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.515+0000 7fa240a6b700 1 --2- 192.168.123.105:0/1623189866 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa23c0730f0 0x7fa23c1a2550 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:09:10.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.515+0000 7fa240a6b700 1 -- 192.168.123.105:0/1623189866 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x7fa2380097e0 con 0x7fa23c073a00
2026-03-09T15:09:10.516
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.515+0000 7fa24126c700 1 --2- 192.168.123.105:0/1623189866 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa23c0730f0 0x7fa23c1a2550 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-09T15:09:10.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.515+0000 7fa240a6b700 1 --2- 192.168.123.105:0/1623189866 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa23c073a00 0x7fa23c1a2a90 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7fa2380094d0 tx=0x7fa238004970 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:09:10.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.516+0000 7fa2327fc700 1 -- 192.168.123.105:0/1623189866 <== mon.1 v2:192.168.123.109:3300/0 1 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa23801d070 con 0x7fa23c073a00
2026-03-09T15:09:10.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.516+0000 7fa2434d0700 1 -- 192.168.123.105:0/1623189866 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x7fa23c19c8b0 con 0x7fa23c073a00
2026-03-09T15:09:10.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.516+0000 7fa2327fc700 1 -- 192.168.123.105:0/1623189866 <== mon.1 v2:192.168.123.109:3300/0 2 ==== config(28 keys) v1 ==== 1147+0+0 (secure 0 0 0) 0x7fa23800bc50 con 0x7fa23c073a00
2026-03-09T15:09:10.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.516+0000 7fa2327fc700 1 -- 192.168.123.105:0/1623189866 <== mon.1 v2:192.168.123.109:3300/0 3 ==== mon_map magic: 0 v1 ==== 327+0+0 (secure 0 0 0) 0x7fa23800f670 con 0x7fa23c073a00
2026-03-09T15:09:10.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.516+0000 7fa2434d0700 1 -- 192.168.123.105:0/1623189866 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0]
-- mon_subscribe({osdmap=0}) v3 -- 0x7fa23c19ce00 con 0x7fa23c073a00
2026-03-09T15:09:10.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.517+0000 7fa2434d0700 1 -- 192.168.123.105:0/1623189866 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fa23c10e770 con 0x7fa23c073a00
2026-03-09T15:09:10.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.518+0000 7fa2327fc700 1 -- 192.168.123.105:0/1623189866 <== mon.1 v2:192.168.123.109:3300/0 4 ==== mgrmap(e 38) v1 ==== 100115+0+0 (secure 0 0 0) 0x7fa238044b60 con 0x7fa23c073a00
2026-03-09T15:09:10.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.518+0000 7fa2327fc700 1 --2- 192.168.123.105:0/1623189866 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa2280778c0 0x7fa228079d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-09T15:09:10.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.518+0000 7fa2327fc700 1 -- 192.168.123.105:0/1623189866 <== mon.1 v2:192.168.123.109:3300/0 5 ==== osd_map(93..93 src has 1..93) v4 ==== 6663+0+0 (secure 0 0 0) 0x7fa23809b1c0 con 0x7fa23c073a00
2026-03-09T15:09:10.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.518+0000 7fa24126c700 1 --2- 192.168.123.105:0/1623189866 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa2280778c0 0x7fa228079d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-09T15:09:10.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.519+0000 7fa24126c700 1 --2- 192.168.123.105:0/1623189866 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa2280778c0 0x7fa228079d70 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7fa22c009dd0 tx=0x7fa22c009450 comp rx=0
tx=0).ready entity=mgr.34104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-09T15:09:10.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.521+0000 7fa2327fc700 1 -- 192.168.123.105:0/1623189866 <== mon.1 v2:192.168.123.109:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+195034 (secure 0 0 0) 0x7fa2380637e0 con 0x7fa23c073a00
2026-03-09T15:09:10.651 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.650+0000 7fa2434d0700 1 -- 192.168.123.105:0/1623189866 --> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] -- mon_command({"prefix": "osd blocklist ls"} v 0) v1 -- 0x7fa23c19db70 con 0x7fa23c073a00
2026-03-09T15:09:10.652 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.651+0000 7fa2327fc700 1 -- 192.168.123.105:0/1623189866 <== mon.1 v2:192.168.123.109:3300/0 7 ==== mon_command_ack([{"prefix": "osd blocklist ls"}]=0 listed 37 entries v93) v1 ==== 81+0+2269 (secure 0 0 0) 0x7fa238062f30 con 0x7fa23c073a00
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6828/1321316558 2026-03-10T15:06:35.379877+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:6826/2393799497 2026-03-10T15:06:54.367540+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6826/2659122886 2026-03-10T15:06:24.405122+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3523930106 2026-03-10T14:55:24.676889+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/1763872844 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/391473931 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:6827/2393799497 2026-03-10T15:06:54.367540+0000
2026-03-09T15:09:10.654
INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3749740799 2026-03-10T15:01:38.080422+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3215275472 2026-03-10T14:55:24.676889+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/883441699 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:6828/2887506718 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/4048863196 2026-03-10T14:55:24.676889+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6827/2659122886 2026-03-10T15:06:24.405122+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/116363292 2026-03-10T14:55:39.031351+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3014060932 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:0/487816467 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6801/2 2026-03-10T14:55:24.676889+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:0/3419397818 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/2507132558 2026-03-10T14:56:37.039546+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6800/2 2026-03-10T14:55:24.676889+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:0/1461298959 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:0/1474624475 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3660404579 2026-03-10T15:01:38.080422+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/2152893737 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/2983840049 2026-03-10T15:01:38.080422+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3823132518 2026-03-10T14:56:37.039546+0000
2026-03-09T15:09:10.654 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:6829/2887506718 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:10.655 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/2534607224 2026-03-10T15:01:38.080422+0000
2026-03-09T15:09:10.655 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/1030329164 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:10.655 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6801/456689610 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:10.655 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:0/1789491850 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:10.655 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/2546376349 2026-03-10T14:56:37.039546+0000
2026-03-09T15:09:10.655 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.109:0/1721752219 2026-03-10T15:02:08.977106+0000
2026-03-09T15:09:10.655 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6829/1321316558 2026-03-10T15:06:35.379877+0000
2026-03-09T15:09:10.655 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:6800/456689610 2026-03-10T15:03:41.295887+0000
2026-03-09T15:09:10.655 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/2312625867 2026-03-10T14:55:39.031351+0000
2026-03-09T15:09:10.655 INFO:teuthology.orchestra.run.vm05.stdout:192.168.123.105:0/3432926156 2026-03-10T14:55:39.031351+0000
2026-03-09T15:09:10.655 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.655+0000 7fa2434d0700 1 -- 192.168.123.105:0/1623189866 >>
[v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa2280778c0 msgr2=0x7fa228079d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.655+0000 7fa2434d0700 1 --2- 192.168.123.105:0/1623189866 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa2280778c0 0x7fa228079d70 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7fa22c009dd0 tx=0x7fa22c009450 comp rx=0 tx=0).stop 2026-03-09T15:09:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.655+0000 7fa2434d0700 1 -- 192.168.123.105:0/1623189866 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa23c073a00 msgr2=0x7fa23c1a2a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-09T15:09:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.655+0000 7fa2434d0700 1 --2- 192.168.123.105:0/1623189866 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa23c073a00 0x7fa23c1a2a90 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7fa2380094d0 tx=0x7fa238004970 comp rx=0 tx=0).stop 2026-03-09T15:09:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.655+0000 7fa2434d0700 1 -- 192.168.123.105:0/1623189866 shutdown_connections 2026-03-09T15:09:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.656+0000 7fa2434d0700 1 --2- 192.168.123.105:0/1623189866 >> [v2:192.168.123.105:6800/3615781754,v1:192.168.123.105:6801/3615781754] conn(0x7fa2280778c0 0x7fa228079d70 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-09T15:09:10.656 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.656+0000 7fa2434d0700 1 --2- 192.168.123.105:0/1623189866 >> [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] conn(0x7fa23c0730f0 0x7fa23c1a2550 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-09T15:09:10.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.656+0000 7fa2434d0700  1 --2- 192.168.123.105:0/1623189866 >> [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] conn(0x7fa23c073a00 0x7fa23c1a2a90 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-09T15:09:10.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.656+0000 7fa2434d0700  1 -- 192.168.123.105:0/1623189866 >> 192.168.123.105:0/1623189866 conn(0x7fa23c0fc000 msgr2=0x7fa23c102b00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-09T15:09:10.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.656+0000 7fa2434d0700  1 -- 192.168.123.105:0/1623189866 shutdown_connections
2026-03-09T15:09:10.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-09T15:09:10.656+0000 7fa2434d0700  1 -- 192.168.123.105:0/1623189866 wait complete.
2026-03-09T15:09:10.658 INFO:teuthology.orchestra.run.vm05.stderr:listed 37 entries
2026-03-09T15:09:10.754 INFO:tasks.cephfs.fuse_mount:Running fusermount -u on ubuntu@vm05.local...
2026-03-09T15:09:10.754 INFO:teuthology.orchestra.run:Running command with timeout 300
2026-03-09T15:09:10.754 DEBUG:teuthology.orchestra.run.vm05:> sudo fusermount -u /home/ubuntu/cephtest/mnt.0
2026-03-09T15:09:10.789 INFO:teuthology.orchestra.run:waiting for 300
2026-03-09T15:09:11.017 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:10 vm09.local ceph-mon[98742]: pgmap v219: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:09:11.017 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:10 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/3895870121' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T15:09:11.017 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:10 vm09.local ceph-mon[98742]: from='client.? 192.168.123.105:0/1623189866' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T15:09:11.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:10 vm05.local ceph-mon[116516]: pgmap v219: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:09:11.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:10 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/3895870121' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T15:09:11.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:10 vm05.local ceph-mon[116516]: from='client.? 192.168.123.105:0/1623189866' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-09T15:09:12.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:09:12.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:09:13.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:12 vm05.local ceph-mon[116516]: pgmap v220: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:13.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:12 vm09.local ceph-mon[98742]: pgmap v220: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:15.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:14 vm05.local ceph-mon[116516]: pgmap v221: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:15.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:14 vm09.local ceph-mon[98742]: pgmap v221: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:16.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:15 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:09:16.067 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:15 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:09:17.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:16 vm05.local ceph-mon[116516]: pgmap v222: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:17.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:09:17.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:09:17.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:09:17.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:16 vm09.local ceph-mon[98742]: pgmap v222: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:17.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:09:17.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:09:17.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:09:19.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:18 vm05.local ceph-mon[116516]: pgmap v223: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:19.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:18 vm09.local ceph-mon[98742]: pgmap v223: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:21.115 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:20 vm09.local ceph-mon[98742]: pgmap v224: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:09:21.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:20 vm05.local ceph-mon[116516]: pgmap v224: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:09:23.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:22 vm05.local ceph-mon[116516]: pgmap v225: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:23.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:22 vm09.local ceph-mon[98742]: pgmap v225: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:25.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:24 vm09.local ceph-mon[98742]: pgmap v226: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:25.277 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:24 vm05.local ceph-mon[116516]: pgmap v226: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:27.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:26 vm09.local ceph-mon[98742]: pgmap v227: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:27.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:26 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:09:27.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:26 vm05.local ceph-mon[116516]: pgmap v227: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:27.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:26 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:09:29.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:28 vm09.local ceph-mon[98742]: pgmap v228: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:29.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:28 vm05.local ceph-mon[116516]: pgmap v228: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:30 vm09.local ceph-mon[98742]: pgmap v229: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:09:31.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:30 vm05.local ceph-mon[116516]: pgmap v229: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:09:33.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:32 vm09.local ceph-mon[98742]: pgmap v230: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:33.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:32 vm05.local ceph-mon[116516]: pgmap v230: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:34 vm09.local ceph-mon[98742]: pgmap v231: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:35.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:34 vm05.local ceph-mon[116516]: pgmap v231: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:37.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:36 vm09.local ceph-mon[98742]: pgmap v232: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:37.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:36 vm05.local ceph-mon[116516]: pgmap v232: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:39.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:38 vm09.local ceph-mon[98742]: pgmap v233: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:38 vm05.local ceph-mon[116516]: pgmap v233: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:41.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:40 vm09.local ceph-mon[98742]: pgmap v234: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:09:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:40 vm05.local ceph-mon[116516]: pgmap v234: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:09:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:09:42.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:09:43.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:42 vm05.local ceph-mon[116516]: pgmap v235: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:42 vm09.local ceph-mon[98742]: pgmap v235: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:45.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:44 vm05.local ceph-mon[116516]: pgmap v236: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:45.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:44 vm09.local ceph-mon[98742]: pgmap v236: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:46 vm05.local ceph-mon[116516]: pgmap v237: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:47.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:46 vm09.local ceph-mon[98742]: pgmap v237: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:48 vm05.local ceph-mon[116516]: pgmap v238: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:49.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:48 vm09.local ceph-mon[98742]: pgmap v238: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:51.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:50 vm05.local ceph-mon[116516]: pgmap v239: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:09:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:50 vm09.local ceph-mon[98742]: pgmap v239: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:09:53.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:52 vm05.local ceph-mon[116516]: pgmap v240: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:53.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:52 vm09.local ceph-mon[98742]: pgmap v240: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:55.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:54 vm05.local ceph-mon[116516]: pgmap v241: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:54 vm09.local ceph-mon[98742]: pgmap v241: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:57.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:56 vm05.local ceph-mon[116516]: pgmap v242: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:57.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:09:57.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:56 vm09.local ceph-mon[98742]: pgmap v242: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:09:57.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:09:59.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:09:58 vm05.local ceph-mon[116516]: pgmap v243: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:09:59.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:09:58 vm09.local ceph-mon[98742]: pgmap v243: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:01.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:00 vm05.local ceph-mon[116516]: pgmap v244: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:10:01.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:00 vm05.local ceph-mon[116516]: overall HEALTH_WARN 1 filesystem with deprecated feature inline_data
2026-03-09T15:10:01.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:00 vm09.local ceph-mon[98742]: pgmap v244: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:10:01.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:00 vm09.local ceph-mon[98742]: overall HEALTH_WARN 1 filesystem with deprecated feature inline_data
2026-03-09T15:10:03.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:02 vm05.local ceph-mon[116516]: pgmap v245: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:03.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:02 vm09.local ceph-mon[98742]: pgmap v245: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:05.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:04 vm05.local ceph-mon[116516]: pgmap v246: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:05.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:04 vm09.local ceph-mon[98742]: pgmap v246: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:07.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:06 vm05.local ceph-mon[116516]: pgmap v247: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:07.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:06 vm09.local ceph-mon[98742]: pgmap v247: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:09.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:08 vm05.local ceph-mon[116516]: pgmap v248: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:09.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:08 vm09.local ceph-mon[98742]: pgmap v248: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:11.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:10 vm05.local ceph-mon[116516]: pgmap v249: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:10:11.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:10 vm09.local ceph-mon[98742]: pgmap v249: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:10:12.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:10:12.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:10:13.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:12 vm05.local ceph-mon[116516]: pgmap v250: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:13.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:12 vm09.local ceph-mon[98742]: pgmap v250: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:15.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:14 vm05.local ceph-mon[116516]: pgmap v251: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:15.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:14 vm09.local ceph-mon[98742]: pgmap v251: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:17.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:16 vm05.local ceph-mon[116516]: pgmap v252: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:17.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:10:17.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:10:17.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:10:17.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:16 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:10:17.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:16 vm09.local ceph-mon[98742]: pgmap v252: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:17.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T15:10:17.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T15:10:17.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T15:10:17.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:16 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd'
2026-03-09T15:10:19.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:18 vm05.local ceph-mon[116516]: pgmap v253: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:19.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:18 vm09.local ceph-mon[98742]: pgmap v253: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:21.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:20 vm05.local ceph-mon[116516]: pgmap v254: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:10:21.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:20 vm09.local ceph-mon[98742]: pgmap v254: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:10:23.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:22 vm05.local ceph-mon[116516]: pgmap v255: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:23.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:22 vm09.local ceph-mon[98742]: pgmap v255: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:25.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:24 vm05.local ceph-mon[116516]: pgmap v256: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:25.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:24 vm09.local ceph-mon[98742]: pgmap v256: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:27.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:27 vm05.local ceph-mon[116516]: pgmap v257: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:27.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:27 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:10:27.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:27 vm09.local ceph-mon[98742]: pgmap v257: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:27.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:27 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:10:29.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:29 vm05.local ceph-mon[116516]: pgmap v258: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:29.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:29 vm09.local ceph-mon[98742]: pgmap v258: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:31.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:31 vm05.local ceph-mon[116516]: pgmap v259: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:10:31.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:31 vm09.local ceph-mon[98742]: pgmap v259: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:10:33.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:33 vm05.local ceph-mon[116516]: pgmap v260: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:33.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:33 vm09.local ceph-mon[98742]: pgmap v260: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:35.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:35 vm05.local ceph-mon[116516]: pgmap v261: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:35.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:35 vm09.local ceph-mon[98742]: pgmap v261: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:37.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:37 vm09.local ceph-mon[98742]: pgmap v262: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:37.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:37 vm05.local ceph-mon[116516]: pgmap v262: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:39.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:39 vm05.local ceph-mon[116516]: pgmap v263: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:39.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:39 vm09.local ceph-mon[98742]: pgmap v263: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:41.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:41 vm05.local ceph-mon[116516]: pgmap v264: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:10:41.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:41 vm09.local ceph-mon[98742]: pgmap v264: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:10:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:10:42.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:10:43.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:43 vm05.local ceph-mon[116516]: pgmap v265: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:43.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:43 vm09.local ceph-mon[98742]: pgmap v265: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:45.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:45 vm05.local ceph-mon[116516]: pgmap v266: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:45.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:45 vm09.local ceph-mon[98742]: pgmap v266: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:47.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:47 vm09.local ceph-mon[98742]: pgmap v267: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:47.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:47 vm05.local ceph-mon[116516]: pgmap v267: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:49.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:49 vm09.local ceph-mon[98742]: pgmap v268: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:49.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:49 vm05.local ceph-mon[116516]: pgmap v268: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:51 vm09.local ceph-mon[98742]: pgmap v269: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:10:51.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:51 vm05.local ceph-mon[116516]: pgmap v269: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:10:53.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:53 vm09.local ceph-mon[98742]: pgmap v270: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:53.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:53 vm05.local ceph-mon[116516]: pgmap v270: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:55 vm09.local ceph-mon[98742]: pgmap v271: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:55.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:55 vm05.local ceph-mon[116516]: pgmap v271: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:57.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:57 vm09.local ceph-mon[98742]: pgmap v272: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:57.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:10:57.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:57 vm05.local ceph-mon[116516]: pgmap v272: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:10:57.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T15:10:59.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:10:59 vm09.local ceph-mon[98742]: pgmap v273: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:10:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:10:59 vm05.local ceph-mon[116516]: pgmap v273: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:11:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:01 vm05.local ceph-mon[116516]: pgmap v274: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:11:01.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:01 vm09.local ceph-mon[98742]: pgmap v274: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s
2026-03-09T15:11:03.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:03 vm05.local ceph-mon[116516]: pgmap v275: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:11:03.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:03 vm09.local ceph-mon[98742]: pgmap v275: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s
2026-03-09T15:11:05.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:05 vm05.local ceph-mon[116516]: pgmap v276: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:11:05.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:05 vm09.local ceph-mon[98742]: pgmap v276: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:11:07.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:07
vm09.local ceph-mon[98742]: pgmap v277: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:07.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:07 vm05.local ceph-mon[116516]: pgmap v277: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:09.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:09 vm05.local ceph-mon[116516]: pgmap v278: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:09 vm09.local ceph-mon[98742]: pgmap v278: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:11.418 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:11 vm09.local ceph-mon[98742]: pgmap v279: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:11:11.483 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:11 vm05.local ceph-mon[116516]: pgmap v279: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:11:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:12 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:11:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:12 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:11:13.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:13 vm05.local ceph-mon[116516]: pgmap v280: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:13.616 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:13 vm09.local ceph-mon[98742]: pgmap v280: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:15.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:15 vm05.local ceph-mon[116516]: pgmap v281: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:15.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:15 vm09.local ceph-mon[98742]: pgmap v281: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:17 vm05.local ceph-mon[116516]: pgmap v282: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:11:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:11:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:11:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:11:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:17 vm09.local ceph-mon[98742]: pgmap v282: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 
2026-03-09T15:11:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:11:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:11:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:11:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:11:18.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:18 vm05.local ceph-mon[116516]: pgmap v283: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:18.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:18 vm09.local ceph-mon[98742]: pgmap v283: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:20.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:20 vm05.local ceph-mon[116516]: pgmap v284: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:11:20.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:20 vm09.local ceph-mon[98742]: pgmap v284: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:11:22.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:22 vm05.local ceph-mon[116516]: pgmap v285: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s 
rd, 2 op/s 2026-03-09T15:11:22.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:22 vm09.local ceph-mon[98742]: pgmap v285: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:25.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:24 vm05.local ceph-mon[116516]: pgmap v286: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:25.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:24 vm09.local ceph-mon[98742]: pgmap v286: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:27.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:26 vm05.local ceph-mon[116516]: pgmap v287: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:27.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:26 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:11:27.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:26 vm09.local ceph-mon[98742]: pgmap v287: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:27.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:26 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:11:29.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:28 vm05.local ceph-mon[116516]: pgmap v288: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:29.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:28 vm09.local ceph-mon[98742]: pgmap v288: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB 
used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:31.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:30 vm05.local ceph-mon[116516]: pgmap v289: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:11:31.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:30 vm09.local ceph-mon[98742]: pgmap v289: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:11:33.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:32 vm05.local ceph-mon[116516]: pgmap v290: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:33.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:32 vm09.local ceph-mon[98742]: pgmap v290: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:34.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:34 vm09.local ceph-mon[98742]: pgmap v291: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:34 vm05.local ceph-mon[116516]: pgmap v291: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:36.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:36 vm09.local ceph-mon[98742]: pgmap v292: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:37.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:36 vm05.local ceph-mon[116516]: pgmap v292: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:38.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:38 vm09.local ceph-mon[98742]: pgmap v293: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 
120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:39.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:38 vm05.local ceph-mon[116516]: pgmap v293: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:41.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:40 vm05.local ceph-mon[116516]: pgmap v294: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:11:41.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:40 vm09.local ceph-mon[98742]: pgmap v294: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:11:42.055 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:11:42.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:11:43.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:42 vm05.local ceph-mon[116516]: pgmap v295: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:43.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:42 vm09.local ceph-mon[98742]: pgmap v295: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:45.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:44 vm05.local ceph-mon[116516]: pgmap v296: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:45.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:44 vm09.local ceph-mon[98742]: pgmap v296: 65 pgs: 65 active+clean; 
257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:47.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:46 vm05.local ceph-mon[116516]: pgmap v297: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:47.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:46 vm09.local ceph-mon[98742]: pgmap v297: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:49.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:48 vm09.local ceph-mon[98742]: pgmap v298: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:49.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:48 vm05.local ceph-mon[116516]: pgmap v298: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:51.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:51 vm05.local ceph-mon[116516]: pgmap v299: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:11:51.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:51 vm09.local ceph-mon[98742]: pgmap v299: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:11:53.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:53 vm09.local ceph-mon[98742]: pgmap v300: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:53.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:53 vm05.local ceph-mon[116516]: pgmap v300: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:55.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:55 vm09.local ceph-mon[98742]: pgmap v301: 65 pgs: 65 active+clean; 257 MiB data, 992 
MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:55.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:55 vm05.local ceph-mon[116516]: pgmap v301: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:57.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:57 vm05.local ceph-mon[116516]: pgmap v302: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:57.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:57 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:11:57.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:57 vm09.local ceph-mon[98742]: pgmap v302: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:11:57.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:57 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:11:59.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:11:59 vm09.local ceph-mon[98742]: pgmap v303: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:11:59.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:11:59 vm05.local ceph-mon[116516]: pgmap v303: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:01.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:01 vm05.local ceph-mon[116516]: pgmap v304: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:12:01.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:01 vm09.local ceph-mon[98742]: pgmap v304: 65 
pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:12:03.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:03 vm05.local ceph-mon[116516]: pgmap v305: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:03.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:03 vm09.local ceph-mon[98742]: pgmap v305: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:05.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:05 vm05.local ceph-mon[116516]: pgmap v306: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:05.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:05 vm09.local ceph-mon[98742]: pgmap v306: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:07.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:07 vm05.local ceph-mon[116516]: pgmap v307: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:07.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:07 vm09.local ceph-mon[98742]: pgmap v307: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:09.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:09 vm05.local ceph-mon[116516]: pgmap v308: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:09.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:09 vm09.local ceph-mon[98742]: pgmap v308: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:11.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:11 vm05.local ceph-mon[116516]: pgmap v309: 65 pgs: 65 
active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:12:11.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:11 vm09.local ceph-mon[98742]: pgmap v309: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:12:12.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:12 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:12:12.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:12 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:12:13.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:13 vm05.local ceph-mon[116516]: pgmap v310: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:13.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:13 vm09.local ceph-mon[98742]: pgmap v310: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:15.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:15 vm05.local ceph-mon[116516]: pgmap v311: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:15.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:15 vm09.local ceph-mon[98742]: pgmap v311: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:17 vm05.local ceph-mon[116516]: pgmap v312: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:17 vm05.local 
ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:12:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:12:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:12:17.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:12:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:17 vm09.local ceph-mon[98742]: pgmap v312: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:12:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:12:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:12:17.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:12:19.554 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:19 vm05.local ceph-mon[116516]: pgmap v313: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:19.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:19 vm09.local ceph-mon[98742]: pgmap v313: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:20.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:20 vm05.local ceph-mon[116516]: pgmap v314: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:12:20.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:20 vm09.local ceph-mon[98742]: pgmap v314: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:12:22.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:22 vm05.local ceph-mon[116516]: pgmap v315: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:22.777 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:22 vm09.local ceph-mon[98742]: pgmap v315: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:24.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:24 vm05.local ceph-mon[116516]: pgmap v316: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:24.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:24 vm09.local ceph-mon[98742]: pgmap v316: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:26.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:26 vm05.local ceph-mon[116516]: pgmap v317: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:26.804 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:26 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:12:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:26 vm09.local ceph-mon[98742]: pgmap v317: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:26.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:26 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:12:28.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:28 vm05.local ceph-mon[116516]: pgmap v318: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:28.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:28 vm09.local ceph-mon[98742]: pgmap v318: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:30.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:30 vm05.local ceph-mon[116516]: pgmap v319: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:12:30.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:30 vm09.local ceph-mon[98742]: pgmap v319: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:12:33.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:32 vm05.local ceph-mon[116516]: pgmap v320: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:33.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:32 vm09.local ceph-mon[98742]: pgmap v320: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s 
rd, 2 op/s 2026-03-09T15:12:35.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:34 vm05.local ceph-mon[116516]: pgmap v321: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:35.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:34 vm09.local ceph-mon[98742]: pgmap v321: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:37.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:37 vm05.local ceph-mon[116516]: pgmap v322: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:37.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:37 vm09.local ceph-mon[98742]: pgmap v322: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:38.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:38 vm05.local ceph-mon[116516]: pgmap v323: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:38.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:38 vm09.local ceph-mon[98742]: pgmap v323: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:40.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:40 vm05.local ceph-mon[116516]: pgmap v324: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:12:40.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:40 vm09.local ceph-mon[98742]: pgmap v324: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:12:41.767 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:41 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-09T15:12:41.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:41 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:12:42.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:42 vm05.local ceph-mon[116516]: pgmap v325: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:42.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:42 vm09.local ceph-mon[98742]: pgmap v325: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:44.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:44 vm05.local ceph-mon[116516]: pgmap v326: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:44.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:44 vm09.local ceph-mon[98742]: pgmap v326: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:46.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:46 vm05.local ceph-mon[116516]: pgmap v327: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:46.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:46 vm09.local ceph-mon[98742]: pgmap v327: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:48.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:48 vm05.local ceph-mon[116516]: pgmap v328: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:48.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:48 vm09.local ceph-mon[98742]: pgmap v328: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 
GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:50.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:50 vm05.local ceph-mon[116516]: pgmap v329: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:12:50.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:50 vm09.local ceph-mon[98742]: pgmap v329: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:12:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:52 vm05.local ceph-mon[116516]: pgmap v330: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:52.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:52 vm09.local ceph-mon[98742]: pgmap v330: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:54 vm09.local ceph-mon[98742]: pgmap v331: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:55.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:54 vm05.local ceph-mon[116516]: pgmap v331: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:56 vm09.local ceph-mon[98742]: pgmap v332: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:12:57.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:56 vm05.local ceph-mon[116516]: pgmap v332: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB 
used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:12:57.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:12:58.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:12:58 vm09.local ceph-mon[98742]: pgmap v333: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:12:59.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:12:58 vm05.local ceph-mon[116516]: pgmap v333: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:00.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:00 vm09.local ceph-mon[98742]: pgmap v334: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:13:01.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:00 vm05.local ceph-mon[116516]: pgmap v334: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:13:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:02 vm09.local ceph-mon[98742]: pgmap v335: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:03.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:02 vm05.local ceph-mon[116516]: pgmap v335: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:04.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:04 vm09.local ceph-mon[98742]: pgmap v336: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:05.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:04 vm05.local ceph-mon[116516]: pgmap v336: 65 pgs: 65 active+clean; 257 MiB 
data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:06.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:06 vm09.local ceph-mon[98742]: pgmap v337: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:07.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:06 vm05.local ceph-mon[116516]: pgmap v337: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:09.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:08 vm05.local ceph-mon[116516]: pgmap v338: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:09.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:08 vm09.local ceph-mon[98742]: pgmap v338: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:10.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:10 vm05.local ceph-mon[116516]: pgmap v339: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:13:11.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:10 vm09.local ceph-mon[98742]: pgmap v339: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:13:12.017 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:11 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:13:12.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:11 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:13:13.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:12 vm05.local ceph-mon[116516]: pgmap 
v340: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:13.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:12 vm09.local ceph-mon[98742]: pgmap v340: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:15.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:14 vm05.local ceph-mon[116516]: pgmap v341: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:15.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:14 vm09.local ceph-mon[98742]: pgmap v341: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:17.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:16 vm05.local ceph-mon[116516]: pgmap v342: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:17.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:16 vm09.local ceph-mon[98742]: pgmap v342: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:18.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:13:18.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:13:18.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:13:18.054 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:17 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:13:18.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T15:13:18.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T15:13:18.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T15:13:18.116 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:17 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' 2026-03-09T15:13:19.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:19 vm05.local ceph-mon[116516]: pgmap v343: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:19.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:19 vm09.local ceph-mon[98742]: pgmap v343: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:21.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:21 vm05.local ceph-mon[116516]: pgmap v344: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:13:21.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:21 vm09.local ceph-mon[98742]: pgmap v344: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:13:23.554 
INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:23 vm05.local ceph-mon[116516]: pgmap v345: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:23.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:23 vm09.local ceph-mon[98742]: pgmap v345: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:25.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:25 vm05.local ceph-mon[116516]: pgmap v346: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:25.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:25 vm09.local ceph-mon[98742]: pgmap v346: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:27.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:27 vm05.local ceph-mon[116516]: pgmap v347: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:27.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:27 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:13:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:27 vm09.local ceph-mon[98742]: pgmap v347: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:27.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:27 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:13:29.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:29 vm05.local ceph-mon[116516]: pgmap v348: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 
KiB/s rd, 2 op/s 2026-03-09T15:13:29.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:29 vm09.local ceph-mon[98742]: pgmap v348: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:31.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:31 vm05.local ceph-mon[116516]: pgmap v349: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:13:31.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:31 vm09.local ceph-mon[98742]: pgmap v349: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:13:33.304 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:33 vm05.local ceph-mon[116516]: pgmap v350: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:33.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:33 vm09.local ceph-mon[98742]: pgmap v350: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:35.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:35 vm05.local ceph-mon[116516]: pgmap v351: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:35.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:35 vm09.local ceph-mon[98742]: pgmap v351: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:37.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:37 vm05.local ceph-mon[116516]: pgmap v352: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:37.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:37 vm09.local ceph-mon[98742]: pgmap v352: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 
2026-03-09T15:13:39.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:39 vm05.local ceph-mon[116516]: pgmap v353: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:39.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:39 vm09.local ceph-mon[98742]: pgmap v353: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:41.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:41 vm05.local ceph-mon[116516]: pgmap v354: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:13:41.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:41 vm09.local ceph-mon[98742]: pgmap v354: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:13:42.518 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:42 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:13:42.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:42 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:13:43.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:43 vm05.local ceph-mon[116516]: pgmap v355: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:43.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:43 vm09.local ceph-mon[98742]: pgmap v355: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:45.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:45 vm05.local ceph-mon[116516]: pgmap v356: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 
120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:45.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:45 vm09.local ceph-mon[98742]: pgmap v356: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:47.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:47 vm05.local ceph-mon[116516]: pgmap v357: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:47.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:47 vm09.local ceph-mon[98742]: pgmap v357: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:48.554 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:48 vm05.local ceph-mon[116516]: pgmap v358: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:48.616 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:48 vm09.local ceph-mon[98742]: pgmap v358: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:50.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:50 vm05.local ceph-mon[116516]: pgmap v359: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:13:50.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:50 vm09.local ceph-mon[98742]: pgmap v359: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:13:52.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:52 vm05.local ceph-mon[116516]: pgmap v360: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:52.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:52 vm09.local ceph-mon[98742]: pgmap v360: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 
853 B/s rd, 2 op/s 2026-03-09T15:13:54.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:54 vm05.local ceph-mon[116516]: pgmap v361: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:54.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:54 vm09.local ceph-mon[98742]: pgmap v361: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:56.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:56 vm05.local ceph-mon[116516]: pgmap v362: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:56.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:56 vm05.local ceph-mon[116516]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:13:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:56 vm09.local ceph-mon[98742]: pgmap v362: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:13:56.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:56 vm09.local ceph-mon[98742]: from='mgr.34104 192.168.123.105:0/1675357238' entity='mgr.vm05.lhsexd' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T15:13:58.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:13:58 vm05.local ceph-mon[116516]: pgmap v363: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:13:58.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:13:58 vm09.local ceph-mon[98742]: pgmap v363: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:14:00.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:14:00 vm05.local ceph-mon[116516]: pgmap v364: 65 pgs: 65 active+clean; 257 MiB data, 
992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:14:00.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:14:00 vm09.local ceph-mon[98742]: pgmap v364: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 1 op/s 2026-03-09T15:14:02.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:14:02 vm09.local ceph-mon[98742]: pgmap v365: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:14:03.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:14:02 vm05.local ceph-mon[116516]: pgmap v365: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:14:04.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:14:04 vm09.local ceph-mon[98742]: pgmap v366: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:14:05.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:14:04 vm05.local ceph-mon[116516]: pgmap v366: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:14:06.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:14:06 vm09.local ceph-mon[98742]: pgmap v367: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:14:07.054 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:14:06 vm05.local ceph-mon[116516]: pgmap v367: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 853 B/s rd, 2 op/s 2026-03-09T15:14:08.804 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:14:08 vm05.local ceph-mon[116516]: pgmap v368: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s 2026-03-09T15:14:08.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:14:08 vm09.local ceph-mon[98742]: pgmap v368: 65 pgs: 65 active+clean; 257 MiB data, 992 MiB used, 119 
GiB / 120 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-09T15:14:09.836 ERROR:tasks.cephfs.fuse_mount:process failed to terminate after unmount. This probably indicates a bug within ceph-fuse.
2026-03-09T15:14:09.836 ERROR:teuthology.run_tasks:Manager failed: ceph-fuse
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/ceph_fuse.py", line 248, in task
    mount.umount_wait()
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephfs/fuse_mount.py", line 403, in umount_wait
    run.wait([self.fuse_daemon], timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 479, in wait
    check_time()
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2026-03-09T15:14:09.837 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-09T15:14:09.839 INFO:tasks.cephadm:Teardown begin
2026-03-09T15:14:09.839 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 2252, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/ceph_fuse.py", line 248, in task
    mount.umount_wait()
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephfs/fuse_mount.py", line 403, in umount_wait
    run.wait([self.fuse_daemon], timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 479, in wait
    check_time()
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2026-03-09T15:14:09.840 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T15:14:09.866 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T15:14:09.891 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-09T15:14:09.891 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 -- ceph mgr module disable cephadm
2026-03-09T15:14:10.053 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/mon.vm05/config
2026-03-09T15:14:10.206 INFO:teuthology.orchestra.run.vm05.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory
2026-03-09T15:14:10.221 DEBUG:teuthology.orchestra.run:got remote process result: 125
2026-03-09T15:14:10.221 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-09T15:14:10.221 DEBUG:teuthology.orchestra.run.vm05:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-09T15:14:10.235 DEBUG:teuthology.orchestra.run.vm09:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-09T15:14:10.250 INFO:tasks.cephadm:Stopping all daemons...
2026-03-09T15:14:10.250 INFO:tasks.cephadm.mon.vm05:Stopping mon.vm05...
2026-03-09T15:14:10.250 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm05
2026-03-09T15:14:10.389 INFO:journalctl@ceph.mon.vm05.vm05.stdout:Mar 09 15:14:10 vm05.local systemd[1]: Stopping Ceph mon.vm05 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000...
2026-03-09T15:14:10.533 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm05.service'
2026-03-09T15:14:10.567 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T15:14:10.567 INFO:tasks.cephadm.mon.vm05:Stopped mon.vm05
2026-03-09T15:14:10.567 INFO:tasks.cephadm.mon.vm09:Stopping mon.vm09...
2026-03-09T15:14:10.567 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm09
2026-03-09T15:14:10.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:14:10 vm09.local systemd[1]: Stopping Ceph mon.vm09 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000...
2026-03-09T15:14:10.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:14:10 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm09[98738]: 2026-03-09T15:14:10.664+0000 7f9748064640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm09 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T15:14:10.866 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 09 15:14:10 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-mon-vm09[98738]: 2026-03-09T15:14:10.664+0000 7f9748064640 -1 mon.vm09@1(peon) e3 *** Got Signal Terminated ***
2026-03-09T15:14:10.945 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@mon.vm09.service'
2026-03-09T15:14:10.981 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T15:14:10.981 INFO:tasks.cephadm.mon.vm09:Stopped mon.vm09
2026-03-09T15:14:10.981 INFO:tasks.cephadm.osd.0:Stopping osd.0...
2026-03-09T15:14:10.981 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.0
2026-03-09T15:14:11.073 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:14:11 vm05.local systemd[1]: Stopping Ceph osd.0 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000...
2026-03-09T15:14:11.554 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:14:11 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[125688]: 2026-03-09T15:14:11.072+0000 7f40572a5640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T15:14:11.554 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:14:11 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[125688]: 2026-03-09T15:14:11.072+0000 7f40572a5640 -1 osd.0 93 *** Got signal Terminated *** 2026-03-09T15:14:11.554 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:14:11 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0[125688]: 2026-03-09T15:14:11.072+0000 7f40572a5640 -1 osd.0 93 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T15:14:16.438 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:14:16 vm05.local podman[156475]: 2026-03-09 15:14:16.114527531 +0000 UTC m=+5.054357401 container died f2883abca2d23322474e24e6f3effa5ec059a26f9c2ae1c66fe51430c364ec7b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T15:14:16.438 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 
15:14:16 vm05.local podman[156475]: 2026-03-09 15:14:16.132164347 +0000 UTC m=+5.071994217 container remove f2883abca2d23322474e24e6f3effa5ec059a26f9c2ae1c66fe51430c364ec7b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T15:14:16.438 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:14:16 vm05.local bash[156475]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0 2026-03-09T15:14:16.438 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:14:16 vm05.local podman[156553]: 2026-03-09 15:14:16.258566492 +0000 UTC m=+0.016237606 container create 5ca5d32ec73208c6501f2921f8a50d55e9541d4d47c17ec0bba6a68484047dad (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-deactivate, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , ceph=True) 2026-03-09T15:14:16.438 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:14:16 vm05.local podman[156553]: 2026-03-09 15:14:16.301875033 +0000 UTC m=+0.059546147 container init 5ca5d32ec73208c6501f2921f8a50d55e9541d4d47c17ec0bba6a68484047dad (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-deactivate, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T15:14:16.438 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:14:16 vm05.local podman[156553]: 2026-03-09 15:14:16.304506149 +0000 UTC m=+0.062177263 container start 5ca5d32ec73208c6501f2921f8a50d55e9541d4d47c17ec0bba6a68484047dad (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-deactivate, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T15:14:16.438 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:14:16 vm05.local podman[156553]: 2026-03-09 15:14:16.30834232 +0000 UTC m=+0.066013443 container attach 5ca5d32ec73208c6501f2921f8a50d55e9541d4d47c17ec0bba6a68484047dad (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-0-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0) 2026-03-09T15:14:16.438 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 09 15:14:16 vm05.local podman[156553]: 2026-03-09 15:14:16.251787253 +0000 UTC m=+0.009458376 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:14:16.486 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.0.service' 2026-03-09T15:14:16.522 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T15:14:16.522 
INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-09T15:14:16.522 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-09T15:14:16.522 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.1 2026-03-09T15:14:16.804 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:14:16 vm05.local systemd[1]: Stopping Ceph osd.1 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 2026-03-09T15:14:16.804 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:14:16 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[130289]: 2026-03-09T15:14:16.659+0000 7efff9654640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T15:14:16.804 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:14:16 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[130289]: 2026-03-09T15:14:16.659+0000 7efff9654640 -1 osd.1 93 *** Got signal Terminated *** 2026-03-09T15:14:16.804 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:14:16 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1[130289]: 2026-03-09T15:14:16.659+0000 7efff9654640 -1 osd.1 93 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T15:14:22.028 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:14:21 vm05.local podman[156651]: 2026-03-09 15:14:21.69245503 +0000 UTC m=+5.045428737 container died b830d7f764983fbff5a713df464dd8eade826327ce723fc348710c49b5cf2735 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2) 2026-03-09T15:14:22.028 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:14:21 vm05.local podman[156651]: 2026-03-09 15:14:21.717369109 +0000 UTC m=+5.070342816 container remove b830d7f764983fbff5a713df464dd8eade826327ce723fc348710c49b5cf2735 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T15:14:22.028 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:14:21 vm05.local bash[156651]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1 2026-03-09T15:14:22.028 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:14:21 vm05.local podman[156718]: 2026-03-09 15:14:21.85250822 +0000 UTC m=+0.017836209 container create 6b413c3341ecc7de16a395c87fdda132fcf63b2ddb4e6ab7f2565689fea9d5b5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-deactivate, io.buildah.version=1.41.3, OSD_FLAVOR=default, 
FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2) 2026-03-09T15:14:22.028 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:14:21 vm05.local podman[156718]: 2026-03-09 15:14:21.897848263 +0000 UTC m=+0.063176273 container init 6b413c3341ecc7de16a395c87fdda132fcf63b2ddb4e6ab7f2565689fea9d5b5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, CEPH_REF=squid) 2026-03-09T15:14:22.028 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:14:21 vm05.local podman[156718]: 2026-03-09 15:14:21.901117562 +0000 UTC m=+0.066445562 container start 6b413c3341ecc7de16a395c87fdda132fcf63b2ddb4e6ab7f2565689fea9d5b5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS) 2026-03-09T15:14:22.028 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:14:21 vm05.local podman[156718]: 2026-03-09 15:14:21.907337666 +0000 UTC m=+0.072665667 container attach 6b413c3341ecc7de16a395c87fdda132fcf63b2ddb4e6ab7f2565689fea9d5b5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-1-deactivate, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T15:14:22.028 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 09 15:14:21 vm05.local podman[156718]: 2026-03-09 15:14:21.843698829 +0000 UTC m=+0.009026839 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:14:22.058 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.1.service' 2026-03-09T15:14:22.092 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T15:14:22.092 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-09T15:14:22.092 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-09T15:14:22.092 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.2 2026-03-09T15:14:22.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:14:22 vm05.local systemd[1]: Stopping Ceph osd.2 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 2026-03-09T15:14:22.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:14:22 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[134989]: 2026-03-09T15:14:22.231+0000 7fc3307dc640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T15:14:22.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:14:22 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[134989]: 2026-03-09T15:14:22.231+0000 7fc3307dc640 -1 osd.2 93 *** Got signal Terminated *** 2026-03-09T15:14:22.304 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:14:22 vm05.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2[134989]: 2026-03-09T15:14:22.231+0000 7fc3307dc640 -1 osd.2 93 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T15:14:27.514 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:14:27 vm05.local podman[156813]: 2026-03-09 15:14:27.255425449 +0000 UTC m=+5.034904204 container died 01cf87b8bc05621c4b373948476c7352d7858eabed22c53c7a4ff62e6d8bd9eb 
(image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid) 2026-03-09T15:14:27.515 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:14:27 vm05.local podman[156813]: 2026-03-09 15:14:27.281335512 +0000 UTC m=+5.060814267 container remove 01cf87b8bc05621c4b373948476c7352d7858eabed22c53c7a4ff62e6d8bd9eb (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3) 2026-03-09T15:14:27.515 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:14:27 vm05.local bash[156813]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2 
2026-03-09T15:14:27.515 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:14:27 vm05.local podman[156880]: 2026-03-09 15:14:27.425467577 +0000 UTC m=+0.020982369 container create eaf87d2744b2cc9a595fc593089cb20d829ff7ee9c1173fe606a087c25344031 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default) 2026-03-09T15:14:27.515 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:14:27 vm05.local podman[156880]: 2026-03-09 15:14:27.470890356 +0000 UTC m=+0.066405148 container init eaf87d2744b2cc9a595fc593089cb20d829ff7ee9c1173fe606a087c25344031 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-deactivate, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, 
org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T15:14:27.515 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:14:27 vm05.local podman[156880]: 2026-03-09 15:14:27.474486487 +0000 UTC m=+0.070001279 container start eaf87d2744b2cc9a595fc593089cb20d829ff7ee9c1173fe606a087c25344031 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-deactivate, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T15:14:27.515 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:14:27 vm05.local podman[156880]: 2026-03-09 15:14:27.478289687 +0000 UTC m=+0.073804489 container attach eaf87d2744b2cc9a595fc593089cb20d829ff7ee9c1173fe606a087c25344031 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-2-deactivate, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default) 2026-03-09T15:14:27.515 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 09 15:14:27 vm05.local podman[156880]: 2026-03-09 15:14:27.416029249 +0000 UTC m=+0.011544050 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T15:14:27.635 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.2.service' 2026-03-09T15:14:27.667 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T15:14:27.667 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-09T15:14:27.667 INFO:tasks.cephadm.osd.3:Stopping osd.3... 2026-03-09T15:14:27.667 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.3 2026-03-09T15:14:28.116 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:14:27 vm09.local systemd[1]: Stopping Ceph osd.3 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000... 
2026-03-09T15:14:28.116 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:14:27 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[106435]: 2026-03-09T15:14:27.775+0000 7faebdd00640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T15:14:28.116 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:14:27 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[106435]: 2026-03-09T15:14:27.775+0000 7faebdd00640 -1 osd.3 93 *** Got signal Terminated *** 2026-03-09T15:14:28.116 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:14:27 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3[106435]: 2026-03-09T15:14:27.775+0000 7faebdd00640 -1 osd.3 93 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T15:14:33.075 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:14:32 vm09.local podman[129303]: 2026-03-09 15:14:32.813560048 +0000 UTC m=+5.052628709 container died 9359c3ced4d3cc7e6d01ec4dfa16c7490d477e6917eb557a8e1c78a1995686f6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223) 2026-03-09T15:14:33.075 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 
15:14:32 vm09.local podman[129303]: 2026-03-09 15:14:32.836047705 +0000 UTC m=+5.075116356 container remove 9359c3ced4d3cc7e6d01ec4dfa16c7490d477e6917eb557a8e1c78a1995686f6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T15:14:33.075 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:14:32 vm09.local bash[129303]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3 2026-03-09T15:14:33.075 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:14:32 vm09.local podman[129368]: 2026-03-09 15:14:32.982656637 +0000 UTC m=+0.019627414 container create 7c76dcb3bdf8f7b18cdedbbc08590af7632495a087860b8c0e9c303b2f9bcf7c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-deactivate, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid) 2026-03-09T15:14:33.075 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:14:33 vm09.local podman[129368]: 2026-03-09 15:14:33.025738767 +0000 UTC m=+0.062709565 container init 7c76dcb3bdf8f7b18cdedbbc08590af7632495a087860b8c0e9c303b2f9bcf7c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True) 2026-03-09T15:14:33.075 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:14:33 vm09.local podman[129368]: 2026-03-09 15:14:33.029607658 +0000 UTC m=+0.066578445 container start 7c76dcb3bdf8f7b18cdedbbc08590af7632495a087860b8c0e9c303b2f9bcf7c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-deactivate, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T15:14:33.075 INFO:journalctl@ceph.osd.3.vm09.stdout:Mar 09 15:14:33 vm09.local podman[129368]: 2026-03-09 15:14:33.035680083 +0000 UTC m=+0.072650859 container attach 7c76dcb3bdf8f7b18cdedbbc08590af7632495a087860b8c0e9c303b2f9bcf7c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-3-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T15:14:33.188 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.3.service' 2026-03-09T15:14:33.220 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T15:14:33.220 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-09T15:14:33.220 INFO:tasks.cephadm.osd.4:Stopping osd.4... 
2026-03-09T15:14:33.220 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.4
2026-03-09T15:14:33.357 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:14:33 vm09.local systemd[1]: Stopping Ceph osd.4 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000...
2026-03-09T15:14:33.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:14:33 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[110666]: 2026-03-09T15:14:33.356+0000 7f8f4fd97640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T15:14:33.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:14:33 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[110666]: 2026-03-09T15:14:33.356+0000 7f8f4fd97640 -1 osd.4 93 *** Got signal Terminated ***
2026-03-09T15:14:33.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:14:33 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4[110666]: 2026-03-09T15:14:33.356+0000 7f8f4fd97640 -1 osd.4 93 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T15:14:37.866 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:37 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:14:37.379+0000 7f1d22a00640 -1 osd.5 93 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-09T15:14:13.861118+0000 front 2026-03-09T15:14:13.861207+0000 (oldest deadline 2026-03-09T15:14:37.360977+0000)
2026-03-09T15:14:38.616 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:38 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:14:38.352+0000 7f1d22a00640 -1 osd.5 93 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-09T15:14:13.861118+0000 front 2026-03-09T15:14:13.861207+0000 (oldest deadline 2026-03-09T15:14:37.360977+0000)
2026-03-09T15:14:38.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:14:38 vm09.local podman[129464]: 2026-03-09 15:14:38.377472339 +0000 UTC m=+5.033032084 container died 985038f550f842efe94371992de8ca39429755f3866dc7cc801e057986c2e207 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-09T15:14:38.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:14:38 vm09.local podman[129464]: 2026-03-09 15:14:38.404662326 +0000 UTC m=+5.060222081 container remove 985038f550f842efe94371992de8ca39429755f3866dc7cc801e057986c2e207 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-09T15:14:38.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:14:38 vm09.local bash[129464]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4
2026-03-09T15:14:38.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:14:38 vm09.local podman[129542]: 2026-03-09 15:14:38.54772888 +0000 UTC m=+0.016821513 container create b8a36809d7e4af9342cb8ee3f9fcf9687cb54030d9b3e5e866d1be0ab1444964 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
2026-03-09T15:14:38.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:14:38 vm09.local podman[129542]: 2026-03-09 15:14:38.584518653 +0000 UTC m=+0.053611296 container init b8a36809d7e4af9342cb8ee3f9fcf9687cb54030d9b3e5e866d1be0ab1444964 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-deactivate, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.build-date=20260223, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2)
2026-03-09T15:14:38.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:14:38 vm09.local podman[129542]: 2026-03-09 15:14:38.587325486 +0000 UTC m=+0.056418119 container start b8a36809d7e4af9342cb8ee3f9fcf9687cb54030d9b3e5e866d1be0ab1444964 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-deactivate, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-09T15:14:38.616 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 09 15:14:38 vm09.local podman[129542]: 2026-03-09 15:14:38.594591304 +0000 UTC m=+0.063683927 container attach b8a36809d7e4af9342cb8ee3f9fcf9687cb54030d9b3e5e866d1be0ab1444964 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-4-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20260223, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
2026-03-09T15:14:38.746 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.4.service'
2026-03-09T15:14:38.782 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T15:14:38.782 INFO:tasks.cephadm.osd.4:Stopped osd.4
2026-03-09T15:14:38.782 INFO:tasks.cephadm.osd.5:Stopping osd.5...
2026-03-09T15:14:38.782 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.5
2026-03-09T15:14:38.923 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:38 vm09.local systemd[1]: Stopping Ceph osd.5 for d952ca1a-1bc7-11f1-a184-f9dcb7ee7000...
2026-03-09T15:14:39.366 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:38 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:14:38.921+0000 7f1d263f8640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T15:14:39.366 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:38 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:14:38.921+0000 7f1d263f8640 -1 osd.5 93 *** Got signal Terminated ***
2026-03-09T15:14:39.366 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:38 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:14:38.921+0000 7f1d263f8640 -1 osd.5 93 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T15:14:39.866 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:39 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:14:39.379+0000 7f1d22a00640 -1 osd.5 93 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-09T15:14:13.861118+0000 front 2026-03-09T15:14:13.861207+0000 (oldest deadline 2026-03-09T15:14:37.360977+0000)
2026-03-09T15:14:40.866 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:40 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:14:40.374+0000 7f1d22a00640 -1 osd.5 93 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-09T15:14:13.861118+0000 front 2026-03-09T15:14:13.861207+0000 (oldest deadline 2026-03-09T15:14:37.360977+0000)
2026-03-09T15:14:41.866 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:41 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:14:41.386+0000 7f1d22a00640 -1 osd.5 93 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-09T15:14:13.861118+0000 front 2026-03-09T15:14:13.861207+0000 (oldest deadline 2026-03-09T15:14:37.360977+0000)
2026-03-09T15:14:42.866 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:42 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:14:42.393+0000 7f1d22a00640 -1 osd.5 93 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-09T15:14:13.861118+0000 front 2026-03-09T15:14:13.861207+0000 (oldest deadline 2026-03-09T15:14:37.360977+0000)
2026-03-09T15:14:42.866 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:42 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:14:42.393+0000 7f1d22a00640 -1 osd.5 93 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-09T15:14:17.861692+0000 front 2026-03-09T15:14:17.861451+0000 (oldest deadline 2026-03-09T15:14:41.961386+0000)
2026-03-09T15:14:43.866 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:43 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:14:43.430+0000 7f1d22a00640 -1 osd.5 93 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-09T15:14:13.861118+0000 front 2026-03-09T15:14:13.861207+0000 (oldest deadline 2026-03-09T15:14:37.360977+0000)
2026-03-09T15:14:43.866 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:43 vm09.local ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5[115234]: 2026-03-09T15:14:43.430+0000 7f1d22a00640 -1 osd.5 93 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-09T15:14:17.861692+0000 front 2026-03-09T15:14:17.861451+0000 (oldest deadline 2026-03-09T15:14:41.961386+0000)
2026-03-09T15:14:44.278 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:43 vm09.local podman[129638]: 2026-03-09 15:14:43.949481103 +0000 UTC m=+5.041534807 container died 15ec92bc2880e82bec54f28b42ab92cf5fdd36cd3ff169a2978afa8045e00427 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223)
2026-03-09T15:14:44.278 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:43 vm09.local podman[129638]: 2026-03-09 15:14:43.981451206 +0000 UTC m=+5.073504910 container remove 15ec92bc2880e82bec54f28b42ab92cf5fdd36cd3ff169a2978afa8045e00427 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20260223)
2026-03-09T15:14:44.278 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:43 vm09.local bash[129638]: ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5
2026-03-09T15:14:44.278 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:44 vm09.local podman[129703]: 2026-03-09 15:14:44.107728147 +0000 UTC m=+0.015498858 container create 48f7af2ed5ac1af43b9b9c5b645beca4b42068b7d12cfccf6e86eaf3d55689ae (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-deactivate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223)
2026-03-09T15:14:44.278 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:44 vm09.local podman[129703]: 2026-03-09 15:14:44.152631445 +0000 UTC m=+0.060402156 container init 48f7af2ed5ac1af43b9b9c5b645beca4b42068b7d12cfccf6e86eaf3d55689ae (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-deactivate, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-09T15:14:44.278 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:44 vm09.local podman[129703]: 2026-03-09 15:14:44.15561029 +0000 UTC m=+0.063381021 container start 48f7af2ed5ac1af43b9b9c5b645beca4b42068b7d12cfccf6e86eaf3d55689ae (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-deactivate, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-09T15:14:44.278 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:44 vm09.local podman[129703]: 2026-03-09 15:14:44.160610898 +0000 UTC m=+0.068381619 container attach 48f7af2ed5ac1af43b9b9c5b645beca4b42068b7d12cfccf6e86eaf3d55689ae (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000-osd-5-deactivate, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-09T15:14:44.278 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 09 15:14:44 vm09.local podman[129703]: 2026-03-09 15:14:44.101618653 +0000 UTC m=+0.009389374 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-09T15:14:44.304 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-d952ca1a-1bc7-11f1-a184-f9dcb7ee7000@osd.5.service'
2026-03-09T15:14:44.336 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T15:14:44.336 INFO:tasks.cephadm.osd.5:Stopped osd.5
2026-03-09T15:14:44.336 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 --force --keep-logs
2026-03-09T15:14:44.432 INFO:teuthology.orchestra.run.vm05.stdout:Deleting cluster with fsid: d952ca1a-1bc7-11f1-a184-f9dcb7ee7000
2026-03-09T15:14:45.739 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vm05.stderr:ceph-fuse[92220]: fuse finished with error 0 and tester_r 0
2026-03-09T15:14:52.840 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 --force --keep-logs
2026-03-09T15:14:52.932 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: d952ca1a-1bc7-11f1-a184-f9dcb7ee7000
2026-03-09T15:14:57.656 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T15:14:57.683 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-09T15:14:57.709 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-09T15:14:57.709 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/514/remote/vm05/crash
2026-03-09T15:14:57.709 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/crash -- .
2026-03-09T15:14:57.745 INFO:teuthology.orchestra.run.vm05.stderr:tar: /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/crash: Cannot open: No such file or directory
2026-03-09T15:14:57.745 INFO:teuthology.orchestra.run.vm05.stderr:tar: Error is not recoverable: exiting now
2026-03-09T15:14:57.746 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/514/remote/vm09/crash
2026-03-09T15:14:57.746 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/crash -- .
2026-03-09T15:14:57.772 INFO:teuthology.orchestra.run.vm09.stderr:tar: /var/lib/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/crash: Cannot open: No such file or directory
2026-03-09T15:14:57.772 INFO:teuthology.orchestra.run.vm09.stderr:tar: Error is not recoverable: exiting now
2026-03-09T15:14:57.773 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-09T15:14:57.773 DEBUG:teuthology.orchestra.run.vm05:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.log | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v FS_DEGRADED | egrep -v 'filesystem is degraded' | egrep -v FS_INLINE_DATA_DEPRECATED | egrep -v FS_WITH_FAILED_MDS | egrep -v MDS_ALL_DOWN | egrep -v 'filesystem is offline' | egrep -v 'is offline because no MDS' | egrep -v MDS_DAMAGE | egrep -v MDS_DEGRADED | egrep -v MDS_FAILED | egrep -v MDS_INSUFFICIENT_STANDBY | egrep -v MDS_UP_LESS_THAN_MAX | egrep -v 'online, but wants' | egrep -v 'filesystem is online with fewer MDS than max_mds' | egrep -v POOL_APP_NOT_ENABLED | egrep -v 'do not have an application enabled' | egrep -v 'overall HEALTH_' | egrep -v 'Replacing daemon' | egrep -v 'deprecated feature inline_data' | egrep -v MGR_MODULE_ERROR | egrep -v OSD_DOWN | egrep -v 'osds down' | egrep -v 'overall HEALTH_' | egrep -v '\(OSD_DOWN\)' | egrep -v '\(OSD_' | egrep -v 'but it is still running' | egrep -v 'is not responding' | egrep -v MON_DOWN | egrep -v PG_AVAILABILITY | egrep -v PG_DEGRADED | egrep -v 'Reduced data availability' | egrep -v 'Degraded data redundancy' | egrep -v 'pg .* is stuck inactive' | egrep -v 'pg .* is .*degraded' | egrep -v 'pg .* is stuck peering' | head -n 1
2026-03-09T15:14:57.839 INFO:tasks.cephadm:Compressing logs...
2026-03-09T15:14:57.839 DEBUG:teuthology.orchestra.run.vm05:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T15:14:57.840 DEBUG:teuthology.orchestra.run.vm09:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T15:14:57.861 INFO:teuthology.orchestra.run.vm05.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-09T15:14:57.861 INFO:teuthology.orchestra.run.vm05.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-09T15:14:57.862 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mon.vm05.log
2026-03-09T15:14:57.863 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.log
2026-03-09T15:14:57.863 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mon.vm05.log: gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mgr.vm05.lhsexd.log
2026-03-09T15:14:57.863 INFO:teuthology.orchestra.run.vm09.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-09T15:14:57.865 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-09T15:14:57.865 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-volume.log
2026-03-09T15:14:57.867 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/cephadm.log: 92.9% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-09T15:14:57.867 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-client.ceph-exporter.vm09.log
2026-03-09T15:14:57.868 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mgr.vm09.cfuwdz.log
2026-03-09T15:14:57.869 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mon.vm09.log
2026-03-09T15:14:57.872 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.log: 87.7% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.log.gz
2026-03-09T15:14:57.874 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mgr.vm09.cfuwdz.log: 94.2% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-volume.log.gz
2026-03-09T15:14:57.875 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-client.ceph-exporter.vm09.log: 94.0% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-client.ceph-exporter.vm09.log.gz
2026-03-09T15:14:57.875 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.audit.log
2026-03-09T15:14:57.875 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.audit.log
2026-03-09T15:14:57.878 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mgr.vm05.lhsexd.log: 91.9% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-09T15:14:57.880 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mon.vm09.log: gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.log
2026-03-09T15:14:57.882 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.cephadm.log
2026-03-09T15:14:57.886 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.audit.log: 91.4% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.audit.log.gz
2026-03-09T15:14:57.887 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.audit.log: 91.2% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.audit.log.gz
2026-03-09T15:14:57.888 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.cephadm.log
2026-03-09T15:14:57.889 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.log: 89.4% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mgr.vm09.cfuwdz.log.gz
2026-03-09T15:14:57.889 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-volume.log
2026-03-09T15:14:57.890 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.3.log
2026-03-09T15:14:57.890 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.cephadm.log: 84.9% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.cephadm.log.gz
2026-03-09T15:14:57.890 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.4.log
2026-03-09T15:14:57.891 INFO:teuthology.orchestra.run.vm09.stderr: 87.8%/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.5.log
2026-03-09T15:14:57.891 INFO:teuthology.orchestra.run.vm09.stderr: -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.log.gz
2026-03-09T15:14:57.892 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.cephadm.log: 85.2% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph.cephadm.log.gz
2026-03-09T15:14:57.895 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-client.ceph-exporter.vm05.log
2026-03-09T15:14:57.901 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.4.log: gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mds.cephfs.vm09.ohmitn.log
2026-03-09T15:14:57.902 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mds.cephfs.vm09.jrhwzz.log
2026-03-09T15:14:57.904 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.0.log
2026-03-09T15:14:57.908 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-client.ceph-exporter.vm05.log: 94.0% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-client.ceph-exporter.vm05.log.gz
2026-03-09T15:14:57.908 INFO:teuthology.orchestra.run.vm05.stderr: 94.2% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-volume.log.gz
2026-03-09T15:14:57.911 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.1.log
2026-03-09T15:14:57.916 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mds.cephfs.vm09.ohmitn.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.1.log
2026-03-09T15:14:57.919 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.2.log
2026-03-09T15:14:57.930 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mds.cephfs.vm05.nrocqt.log
2026-03-09T15:14:57.931 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mds.cephfs.vm05.rrcyql.log
2026-03-09T15:14:57.941 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mds.cephfs.vm05.nrocqt.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.0.log
2026-03-09T15:14:58.404 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mds.cephfs.vm09.jrhwzz.log: /var/log/ceph/ceph-client.1.log: 92.3% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mon.vm09.log.gz
2026-03-09T15:14:58.471 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mds.cephfs.vm05.rrcyql.log: /var/log/ceph/ceph-client.0.log: 89.4% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mgr.vm05.lhsexd.log.gz
2026-03-09T15:14:59.396 INFO:teuthology.orchestra.run.vm05.stderr: 90.6% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mon.vm05.log.gz
2026-03-09T15:15:04.312 INFO:teuthology.orchestra.run.vm09.stderr: 93.8% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.4.log.gz
2026-03-09T15:15:05.710 INFO:teuthology.orchestra.run.vm05.stderr: 93.8% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.2.log.gz
2026-03-09T15:15:06.525 INFO:teuthology.orchestra.run.vm05.stderr: 94.0% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.0.log.gz
2026-03-09T15:15:06.579 INFO:teuthology.orchestra.run.vm05.stderr: 93.9% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.1.log.gz
2026-03-09T15:15:06.844 INFO:teuthology.orchestra.run.vm09.stderr: 94.2% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.5.log.gz
2026-03-09T15:15:07.711 INFO:teuthology.orchestra.run.vm09.stderr: 94.1% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-osd.3.log.gz
2026-03-09T15:15:08.078 INFO:teuthology.orchestra.run.vm09.stderr: 94.9% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mds.cephfs.vm09.jrhwzz.log.gz
2026-03-09T15:15:08.840 INFO:teuthology.orchestra.run.vm09.stderr: 95.0% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mds.cephfs.vm09.ohmitn.log.gz
2026-03-09T15:15:12.297 INFO:teuthology.orchestra.run.vm09.stderr:gzip: /var/log/ceph/ceph-client.1.log: file size changed while zipping
2026-03-09T15:15:12.297 INFO:teuthology.orchestra.run.vm09.stderr: 93.6% -- replaced with /var/log/ceph/ceph-client.1.log.gz
2026-03-09T15:15:12.299 INFO:teuthology.orchestra.run.vm09.stderr:
2026-03-09T15:15:12.299 INFO:teuthology.orchestra.run.vm09.stderr:real 0m14.444s
2026-03-09T15:15:12.299 INFO:teuthology.orchestra.run.vm09.stderr:user 0m24.082s
2026-03-09T15:15:12.299 INFO:teuthology.orchestra.run.vm09.stderr:sys 0m1.159s
2026-03-09T15:15:14.148 INFO:teuthology.orchestra.run.vm05.stderr: 94.9% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mds.cephfs.vm05.rrcyql.log.gz
2026-03-09T15:15:14.759 INFO:teuthology.orchestra.run.vm05.stderr:gzip: /var/log/ceph/ceph-client.0.log: file size changed while zipping
2026-03-09T15:15:14.759 INFO:teuthology.orchestra.run.vm05.stderr: 93.5% -- replaced with /var/log/ceph/ceph-client.0.log.gz
2026-03-09T15:16:08.090 INFO:teuthology.orchestra.run.vm05.stderr: 93.1% -- replaced with /var/log/ceph/d952ca1a-1bc7-11f1-a184-f9dcb7ee7000/ceph-mds.cephfs.vm05.nrocqt.log.gz
2026-03-09T15:16:08.093 INFO:teuthology.orchestra.run.vm05.stderr:
2026-03-09T15:16:08.093 INFO:teuthology.orchestra.run.vm05.stderr:real 1m10.241s
2026-03-09T15:16:08.093 INFO:teuthology.orchestra.run.vm05.stderr:user 1m20.797s
2026-03-09T15:16:08.093 INFO:teuthology.orchestra.run.vm05.stderr:sys 0m5.219s
2026-03-09T15:16:08.093 INFO:tasks.cephadm:Archiving logs...
2026-03-09T15:16:08.093 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/514/remote/vm05/log
2026-03-09T15:16:08.093 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-09T15:16:12.060 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/514/remote/vm09/log
2026-03-09T15:16:12.061 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-09T15:16:13.207 INFO:tasks.cephadm:Removing cluster...
2026-03-09T15:16:13.207 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 --force
2026-03-09T15:16:13.320 INFO:teuthology.orchestra.run.vm05.stdout:Deleting cluster with fsid: d952ca1a-1bc7-11f1-a184-f9dcb7ee7000
2026-03-09T15:16:13.612 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid d952ca1a-1bc7-11f1-a184-f9dcb7ee7000 --force
2026-03-09T15:16:13.703 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: d952ca1a-1bc7-11f1-a184-f9dcb7ee7000
2026-03-09T15:16:13.923 INFO:tasks.cephadm:Removing cephadm ...
2026-03-09T15:16:13.923 DEBUG:teuthology.orchestra.run.vm05:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-09T15:16:13.940 DEBUG:teuthology.orchestra.run.vm09:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-09T15:16:13.954 INFO:tasks.cephadm:Teardown complete
2026-03-09T15:16:13.954 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-09T15:16:13.956 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/teuthology/teuthology/task/install/__init__.py", line 644, in task
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/ceph_fuse.py", line 248, in task
    mount.umount_wait()
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephfs/fuse_mount.py", line 403, in umount_wait
    run.wait([self.fuse_daemon], timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 479, in wait
    check_time()
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2026-03-09T15:16:13.957 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-09T15:16:13.957 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-09T15:16:13.982 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-09T15:16:14.023 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-09T15:16:14.023 DEBUG:teuthology.orchestra.run.vm05:>
2026-03-09T15:16:14.023 DEBUG:teuthology.orchestra.run.vm05:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-09T15:16:14.023 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y remove $d || true
2026-03-09T15:16:14.023 DEBUG:teuthology.orchestra.run.vm05:> done
2026-03-09T15:16:14.028 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-09T15:16:14.028 DEBUG:teuthology.orchestra.run.vm09:>
2026-03-09T15:16:14.028 DEBUG:teuthology.orchestra.run.vm09:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-09T15:16:14.029 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y remove $d || true
2026-03-09T15:16:14.029 DEBUG:teuthology.orchestra.run.vm09:> done
2026-03-09T15:16:14.268 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:14.268 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:14.268 INFO:teuthology.orchestra.run.vm09.stdout: Package Architecture Version Repository Size
2026-03-09T15:16:14.268 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:14.268 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-09T15:16:14.268 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:18.2.0-0.el9 @ceph 31 M
2026-03-09T15:16:14.268 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-09T15:16:14.268 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-09T15:16:14.268 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:14.268 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-09T15:16:14.268 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:14.268 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages
2026-03-09T15:16:14.269 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:14.269 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 31 M
2026-03-09T15:16:14.269 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-09T15:16:14.273 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-09T15:16:14.273 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-09T15:16:14.287 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-09T15:16:14.287 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-09T15:16:14.318 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-09T15:16:14.342 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:18.2.0-0.el9.x86_64 1/2
2026-03-09T15:16:14.342 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T15:16:14.342 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-09T15:16:14.342 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-09T15:16:14.342 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-09T15:16:14.342 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:14.343 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-radosgw-2:18.2.0-0.el9.x86_64 1/2
2026-03-09T15:16:14.353 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:18.2.0-0.el9.x86_64 1/2
2026-03-09T15:16:14.359 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:14.359 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout: Package Architecture Version Repository Size
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw x86_64 2:18.2.0-0.el9 @ceph 31 M
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout:Remove 2 Packages
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 31 M
2026-03-09T15:16:14.360 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-09T15:16:14.364 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-09T15:16:14.364 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-09T15:16:14.367 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T15:16:14.378 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-09T15:16:14.379 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-09T15:16:14.410 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-09T15:16:14.432 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:18.2.0-0.el9.x86_64 1/2
2026-03-09T15:16:14.432 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T15:16:14.432 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-09T15:16:14.432 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-09T15:16:14.432 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-09T15:16:14.432 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:14.433 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-radosgw-2:18.2.0-0.el9.x86_64 1/2
2026-03-09T15:16:14.435 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T15:16:14.435 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:18.2.0-0.el9.x86_64 1/2
2026-03-09T15:16:14.443 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:18.2.0-0.el9.x86_64 1/2
2026-03-09T15:16:14.459 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T15:16:14.478 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T15:16:14.478 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:14.478 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-09T15:16:14.478 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:18.2.0-0.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-09T15:16:14.478 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:14.478 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:14.535 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T15:16:14.535 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-radosgw-2:18.2.0-0.el9.x86_64 1/2
2026-03-09T15:16:14.579 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T15:16:14.579 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:14.579 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-09T15:16:14.579 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw-2:18.2.0-0.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-09T15:16:14.579 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:14.579 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:14.675 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:14.675 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:14.675 INFO:teuthology.orchestra.run.vm09.stdout: Package Architecture Version Repository Size
2026-03-09T15:16:14.675 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:14.675 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-09T15:16:14.675 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:18.2.0-0.el9 @ceph 164 M
2026-03-09T15:16:14.675 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-09T15:16:14.676 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-09T15:16:14.676 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-09T15:16:14.676 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-09T15:16:14.676 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:14.676 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-09T15:16:14.676 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:14.676 INFO:teuthology.orchestra.run.vm09.stdout:Remove 4 Packages
2026-03-09T15:16:14.676 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:14.676 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 166 M
2026-03-09T15:16:14.676 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-09T15:16:14.678 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-09T15:16:14.678 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-09T15:16:14.703 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-09T15:16:14.703 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-09T15:16:14.752 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-09T15:16:14.758 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-test-2:18.2.0-0.el9.x86_64 1/4
2026-03-09T15:16:14.760 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-09T15:16:14.764 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-09T15:16:14.769 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:14.769 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:14.769 INFO:teuthology.orchestra.run.vm05.stdout: Package Architecture Version Repository Size
2026-03-09T15:16:14.769 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test x86_64 2:18.2.0-0.el9 @ceph 164 M
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout:Remove 4 Packages
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 166 M
2026-03-09T15:16:14.770 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-09T15:16:14.772 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-09T15:16:14.772 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-09T15:16:14.780 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-09T15:16:14.796 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-09T15:16:14.796 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-09T15:16:14.841 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-09T15:16:14.842 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:18.2.0-0.el9.x86_64 1/4
2026-03-09T15:16:14.842 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-09T15:16:14.842 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-09T15:16:14.849 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-09T15:16:14.854 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-test-2:18.2.0-0.el9.x86_64 1/4
2026-03-09T15:16:14.856 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-09T15:16:14.860 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-09T15:16:14.875 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-09T15:16:14.888 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-09T15:16:14.888 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:14.888 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-09T15:16:14.888 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:18.2.0-0.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-09T15:16:14.888 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-09T15:16:14.888 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:14.888 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:14.938 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-09T15:16:14.938 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-test-2:18.2.0-0.el9.x86_64 1/4
2026-03-09T15:16:14.938 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-09T15:16:14.938 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-09T15:16:14.986 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-09T15:16:14.986 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:14.986 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-09T15:16:14.986 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test-2:18.2.0-0.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-09T15:16:14.986 INFO:teuthology.orchestra.run.vm05.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-09T15:16:14.986 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:14.986 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:15.079 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:18.2.0-0.el9 @ceph 0
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:18.2.0-0.el9 @ceph 6.4 M
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:18.2.0-0.el9 @ceph 20 M
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:18.2.0-0.el9 @ceph 61 M
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout:Remove 8 Packages
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 89 M
2026-03-09T15:16:15.080 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-09T15:16:15.083 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-09T15:16:15.083 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-09T15:16:15.104 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-09T15:16:15.104 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-09T15:16:15.140 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-09T15:16:15.142 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-2:18.2.0-0.el9.x86_64 1/8
2026-03-09T15:16:15.160 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:18.2.0-0.el9.x86_64 2/8
2026-03-09T15:16:15.161 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T15:16:15.161 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-09T15:16:15.161 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-09T15:16:15.161 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-09T15:16:15.161 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:15.163 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-osd-2:18.2.0-0.el9.x86_64 2/8
2026-03-09T15:16:15.170 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:18.2.0-0.el9.x86_64 2/8
2026-03-09T15:16:15.182 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 3/8
2026-03-09T15:16:15.182 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-09T15:16:15.182 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:15.182 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 3/8
2026-03-09T15:16:15.200 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 3/8
2026-03-09T15:16:15.203 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 4/8
2026-03-09T15:16:15.205 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 5/8
2026-03-09T15:16:15.206 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 6/8
2026-03-09T15:16:15.227 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:18.2.0-0.el9.x86_64 7/8
2026-03-09T15:16:15.227 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T15:16:15.227 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-09T15:16:15.227 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-09T15:16:15.227 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-09T15:16:15.228 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:15.228 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mds-2:18.2.0-0.el9.x86_64 7/8
2026-03-09T15:16:15.235 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:18.2.0-0.el9.x86_64 7/8
2026-03-09T15:16:15.243 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout: ceph x86_64 2:18.2.0-0.el9 @ceph 0
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds x86_64 2:18.2.0-0.el9 @ceph 6.4 M
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon x86_64 2:18.2.0-0.el9 @ceph 20 M
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd x86_64 2:18.2.0-0.el9 @ceph 61 M
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout:Remove 8 Packages
2026-03-09T15:16:15.244 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:15.245 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 89 M
2026-03-09T15:16:15.245 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-09T15:16:15.247 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-09T15:16:15.247 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-09T15:16:15.253 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:18.2.0-0.el9.x86_64 8/8
2026-03-09T15:16:15.253 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T15:16:15.253 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-09T15:16:15.254 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-09T15:16:15.254 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-09T15:16:15.254 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:15.254 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mon-2:18.2.0-0.el9.x86_64 8/8
2026-03-09T15:16:15.269 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-09T15:16:15.269 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-09T15:16:15.305 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-09T15:16:15.306 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-2:18.2.0-0.el9.x86_64 1/8
2026-03-09T15:16:15.325 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:18.2.0-0.el9.x86_64 2/8
2026-03-09T15:16:15.325 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T15:16:15.325 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-09T15:16:15.325 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target".
2026-03-09T15:16:15.325 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target".
2026-03-09T15:16:15.325 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:15.327 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-osd-2:18.2.0-0.el9.x86_64 2/8
2026-03-09T15:16:15.332 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:18.2.0-0.el9.x86_64 8/8
2026-03-09T15:16:15.332 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:18.2.0-0.el9.x86_64 1/8
2026-03-09T15:16:15.332 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:18.2.0-0.el9.x86_64 2/8
2026-03-09T15:16:15.332 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:18.2.0-0.el9.x86_64 3/8
2026-03-09T15:16:15.332 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-osd-2:18.2.0-0.el9.x86_64 4/8
2026-03-09T15:16:15.332 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 5/8
2026-03-09T15:16:15.332 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 6/8
2026-03-09T15:16:15.332 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 7/8
2026-03-09T15:16:15.335 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:18.2.0-0.el9.x86_64 2/8
2026-03-09T15:16:15.348 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 3/8
2026-03-09T15:16:15.348 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-09T15:16:15.348 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:15.349 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 3/8
2026-03-09T15:16:15.370 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 3/8
2026-03-09T15:16:15.373 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 4/8
2026-03-09T15:16:15.375 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 5/8
2026-03-09T15:16:15.377 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 6/8
2026-03-09T15:16:15.386 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 8/8
2026-03-09T15:16:15.386 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:15.386 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-09T15:16:15.386 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:18.2.0-0.el9.x86_64 ceph-mds-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:15.386 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:18.2.0-0.el9.x86_64 ceph-osd-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:15.386 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64 libconfig-1.7.2-9.el9.x86_64
2026-03-09T15:16:15.386 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-09T15:16:15.386 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:15.386 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:15.397 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:18.2.0-0.el9.x86_64 7/8
2026-03-09T15:16:15.397 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T15:16:15.397 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-09T15:16:15.397 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-09T15:16:15.397 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-09T15:16:15.397 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:15.398 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mds-2:18.2.0-0.el9.x86_64 7/8
2026-03-09T15:16:15.405 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:18.2.0-0.el9.x86_64 7/8
2026-03-09T15:16:15.425 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:18.2.0-0.el9.x86_64 8/8
2026-03-09T15:16:15.425 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T15:16:15.425 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-09T15:16:15.425 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-09T15:16:15.425 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-09T15:16:15.425 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:15.426 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mon-2:18.2.0-0.el9.x86_64 8/8
2026-03-09T15:16:15.506 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:18.2.0-0.el9.x86_64 8/8
2026-03-09T15:16:15.506 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-2:18.2.0-0.el9.x86_64 1/8
2026-03-09T15:16:15.506 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mds-2:18.2.0-0.el9.x86_64 2/8
2026-03-09T15:16:15.506 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mon-2:18.2.0-0.el9.x86_64 3/8
2026-03-09T15:16:15.506 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-osd-2:18.2.0-0.el9.x86_64 4/8
2026-03-09T15:16:15.506 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 5/8
2026-03-09T15:16:15.506 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 6/8
2026-03-09T15:16:15.506 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 7/8
2026-03-09T15:16:15.560 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 8/8
2026-03-09T15:16:15.560 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:15.560 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-09T15:16:15.560 INFO:teuthology.orchestra.run.vm05.stdout: ceph-2:18.2.0-0.el9.x86_64 ceph-mds-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:15.560 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon-2:18.2.0-0.el9.x86_64 ceph-osd-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:15.560 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs-1.1.0-3.el9.x86_64 libconfig-1.7.2-9.el9.x86_64
2026-03-09T15:16:15.560 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-09T15:16:15.560 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:15.560 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:15.586 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:18.2.0-0.el9 @ceph 22 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:18.2.0-0.el9 @ceph 399 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:18.2.0-0.el9 @ceph 4.5 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:18.2.0-0.el9 @ceph-noarch 654 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:18.2.0-0.el9 @ceph-noarch 7.1 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:18.2.0-0.el9 @ceph-noarch 66 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:18.2.0-0.el9 @ceph-noarch 567 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:18.2.0-0.el9 @ceph 12 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:18.2.0-0.el9 @ceph 70 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:18.2.0-0.el9 @ceph-noarch 319 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:18.2.0-0.el9 @ceph-noarch 1.4 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:18.2.0-0.el9 @ceph-noarch 40 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 2:18.2.0-0.el9 @ceph 138 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: fmt x86_64 8.1.1-5.el9 @epel 337 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:18.2.0-0.el9 @ceph 426 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:18.2.0-0.el9 @ceph 1.5 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:18.2.0-0.el9 @ceph 570 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-09T15:16:15.591 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-jwt noarch 2.4.0-1.el9 @epel 103 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-jwt+crypto noarch 2.4.0-1.el9 @epel 5.4 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout:Remove 84 Packages
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 433 M
2026-03-09T15:16:15.592 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-09T15:16:15.615 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-09T15:16:15.615 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-09T15:16:15.715 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-09T15:16:15.715 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-09T15:16:15.757 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base x86_64 2:18.2.0-0.el9 @ceph 22 M
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages:
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache x86_64 2:18.2.0-0.el9 @ceph 399 k
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr x86_64 2:18.2.0-0.el9 @ceph 4.5 M
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm noarch 2:18.2.0-0.el9 @ceph-noarch 654 k
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard noarch 2:18.2.0-0.el9 @ceph-noarch 7.1 M
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local noarch 2:18.2.0-0.el9 @ceph-noarch 66 M
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook noarch 2:18.2.0-0.el9 @ceph-noarch 567 k
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror x86_64 2:18.2.0-0.el9 @ceph 12 M
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common x86_64 2:18.2.0-0.el9 @ceph 70 M
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards noarch 2:18.2.0-0.el9 @ceph-noarch 319 k
2026-03-09T15:16:15.761 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core noarch 2:18.2.0-0.el9 @ceph-noarch 1.4 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts noarch 2:18.2.0-0.el9 @ceph-noarch 40 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux x86_64 2:18.2.0-0.el9 @ceph 138 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: fmt x86_64 8.1.1-5.el9 @epel 337 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite x86_64 2:18.2.0-0.el9 @ceph 426 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 x86_64 2:18.2.0-0.el9 @ceph 1.5 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common x86_64 2:18.2.0-0.el9 @ceph 570 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt noarch 2.4.0-1.el9 @epel 103 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt+crypto noarch 2.4.0-1.el9 @epel 5.4 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k
2026-03-09T15:16:15.762 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout:Remove 84 Packages
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 433 M
2026-03-09T15:16:15.763 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-09T15:16:15.784 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-09T15:16:15.784 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-09T15:16:15.840 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-09T15:16:15.840 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-rook-2:18.2.0-0.el9.noarch 1/84
2026-03-09T15:16:15.846 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:18.2.0-0.el9.noarch 1/84
2026-03-09T15:16:15.863 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:18.2.0-0.el9.x86_64 2/84
2026-03-09T15:16:15.863 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T15:16:15.863 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-09T15:16:15.863 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-09T15:16:15.863 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-09T15:16:15.863 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:15.863 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-2:18.2.0-0.el9.x86_64 2/84
2026-03-09T15:16:15.876 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:18.2.0-0.el9.x86_64 2/84
2026-03-09T15:16:15.883 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-modules-core-2:18.2.0-0.el9.noarch 3/84
2026-03-09T15:16:15.884 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 4/84
2026-03-09T15:16:15.884 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-09T15:16:15.884 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-09T15:16:15.941 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 4/84
2026-03-09T15:16:15.949 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/84
2026-03-09T15:16:15.953 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/84
2026-03-09T15:16:15.953 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 7/84
2026-03-09T15:16:15.963 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 7/84
2026-03-09T15:16:15.970 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/84
2026-03-09T15:16:15.973 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/84
2026-03-09T15:16:15.975 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 10/84
2026-03-09T15:16:15.980 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 11/84
2026-03-09T15:16:15.984 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 12/84
2026-03-09T15:16:15.993 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 13/84
2026-03-09T15:16:16.006 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 14/84
2026-03-09T15:16:16.012 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 15/84
2026-03-09T15:16:16.012 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-09T15:16:16.012 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-rook-2:18.2.0-0.el9.noarch 1/84
2026-03-09T15:16:16.019 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-rook-2:18.2.0-0.el9.noarch 1/84
2026-03-09T15:16:16.022 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 16/84
2026-03-09T15:16:16.028 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 17/84
2026-03-09T15:16:16.035 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:18.2.0-0.el9.x86_64 2/84
2026-03-09T15:16:16.035 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T15:16:16.035 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-09T15:16:16.035 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target".
2026-03-09T15:16:16.035 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target".
2026-03-09T15:16:16.035 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:16.036 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-2:18.2.0-0.el9.x86_64 2/84
2026-03-09T15:16:16.048 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:18.2.0-0.el9.x86_64 2/84
2026-03-09T15:16:16.055 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-modules-core-2:18.2.0-0.el9.noarch 3/84
2026-03-09T15:16:16.055 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 4/84
2026-03-09T15:16:16.057 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 18/84
2026-03-09T15:16:16.065 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 19/84
2026-03-09T15:16:16.068 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 20/84
2026-03-09T15:16:16.077 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 21/84
2026-03-09T15:16:16.084 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 22/84
2026-03-09T15:16:16.085 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noarc 23/84
2026-03-09T15:16:16.091 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noarc 23/84
2026-03-09T15:16:16.110 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 4/84
2026-03-09T15:16:16.119 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/84
2026-03-09T15:16:16.123 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/84
2026-03-09T15:16:16.123 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 7/84
2026-03-09T15:16:16.133 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 7/84
2026-03-09T15:16:16.139 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/84
2026-03-09T15:16:16.142 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/84
2026-03-09T15:16:16.145 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 10/84
2026-03-09T15:16:16.150 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 11/84
2026-03-09T15:16:16.154 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 12/84
2026-03-09T15:16:16.163 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 13/84
2026-03-09T15:16:16.175 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 14/84
2026-03-09T15:16:16.180 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 15/84
2026-03-09T15:16:16.181
INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 24/84 2026-03-09T15:16:16.190 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 16/84 2026-03-09T15:16:16.196 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 17/84 2026-03-09T15:16:16.209 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 25/84 2026-03-09T15:16:16.214 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 26/84 2026-03-09T15:16:16.220 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 27/84 2026-03-09T15:16:16.225 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 28/84 2026-03-09T15:16:16.226 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 18/84 2026-03-09T15:16:16.229 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 29/84 2026-03-09T15:16:16.233 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 19/84 2026-03-09T15:16:16.233 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 30/84 2026-03-09T15:16:16.235 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 20/84 2026-03-09T15:16:16.236 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 31/84 2026-03-09T15:16:16.239 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 32/84 2026-03-09T15:16:16.242 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jwt-2.4.0-1.el9.noarch 33/84 2026-03-09T15:16:16.244 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 21/84 2026-03-09T15:16:16.245 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : 
python3-jwt+crypto-2.4.0-1.el9.noarch 34/84 2026-03-09T15:16:16.252 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 22/84 2026-03-09T15:16:16.252 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noarc 23/84 2026-03-09T15:16:16.259 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noarc 23/84 2026-03-09T15:16:16.259 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 35/84 2026-03-09T15:16:16.266 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 36/84 2026-03-09T15:16:16.271 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 37/84 2026-03-09T15:16:16.319 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 38/84 2026-03-09T15:16:16.332 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 39/84 2026-03-09T15:16:16.335 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 40/84 2026-03-09T15:16:16.339 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 41/84 2026-03-09T15:16:16.341 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 42/84 2026-03-09T15:16:16.343 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 43/84 2026-03-09T15:16:16.353 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 24/84 2026-03-09T15:16:16.363 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:18.2.0-0.el9.x86_64 44/84 2026-03-09T15:16:16.363 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-09T15:16:16.363 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-09T15:16:16.363 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-09T15:16:16.363 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-03-09T15:16:16.363 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:16:16.364 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-mirror-2:18.2.0-0.el9.x86_64 44/84 2026-03-09T15:16:16.372 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:18.2.0-0.el9.x86_64 44/84 2026-03-09T15:16:16.379 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 25/84 2026-03-09T15:16:16.384 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 26/84 2026-03-09T15:16:16.390 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 27/84 2026-03-09T15:16:16.392 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.0-0.el9.x86_64 45/84 2026-03-09T15:16:16.392 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T15:16:16.392 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-03-09T15:16:16.392 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:16:16.392 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-immutable-object-cache-2:18.2.0-0.el9.x86_64 45/84 2026-03-09T15:16:16.394 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 28/84 2026-03-09T15:16:16.396 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 29/84 2026-03-09T15:16:16.399 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 30/84 2026-03-09T15:16:16.400 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.0-0.el9.x86_64 45/84 2026-03-09T15:16:16.402 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 31/84 2026-03-09T15:16:16.402 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 46/84 2026-03-09T15:16:16.404 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 47/84 2026-03-09T15:16:16.405 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 32/84 2026-03-09T15:16:16.407 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jwt-2.4.0-1.el9.noarch 33/84 2026-03-09T15:16:16.408 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ply-3.11-14.el9.noarch 48/84 2026-03-09T15:16:16.410 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jwt+crypto-2.4.0-1.el9.noarch 34/84 2026-03-09T15:16:16.410 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 49/84 2026-03-09T15:16:16.413 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 50/84 2026-03-09T15:16:16.416 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 51/84 2026-03-09T15:16:16.419 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 52/84 
2026-03-09T15:16:16.422 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 53/84 2026-03-09T15:16:16.423 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 35/84 2026-03-09T15:16:16.429 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 36/84 2026-03-09T15:16:16.430 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 54/84 2026-03-09T15:16:16.434 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 37/84 2026-03-09T15:16:16.435 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 55/84 2026-03-09T15:16:16.437 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 56/84 2026-03-09T15:16:16.440 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 57/84 2026-03-09T15:16:16.443 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 58/84 2026-03-09T15:16:16.448 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 59/84 2026-03-09T15:16:16.453 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 60/84 2026-03-09T15:16:16.458 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 61/84 2026-03-09T15:16:16.462 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 62/84 2026-03-09T15:16:16.468 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 63/84 2026-03-09T15:16:16.472 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 64/84 2026-03-09T15:16:16.475 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 65/84 2026-03-09T15:16:16.479 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : 
python3-zc-lockfile-2.0-10.el9.noarch 66/84 2026-03-09T15:16:16.481 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 38/84 2026-03-09T15:16:16.487 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 67/84 2026-03-09T15:16:16.492 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 68/84 2026-03-09T15:16:16.493 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 39/84 2026-03-09T15:16:16.496 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 40/84 2026-03-09T15:16:16.496 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 69/84 2026-03-09T15:16:16.499 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 70/84 2026-03-09T15:16:16.499 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 41/84 2026-03-09T15:16:16.500 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-grafana-dashboards-2:18.2.0-0.el9.noarch 71/84 2026-03-09T15:16:16.501 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 42/84 2026-03-09T15:16:16.503 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 43/84 2026-03-09T15:16:16.507 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-prometheus-alerts-2:18.2.0-0.el9.noarch 72/84 2026-03-09T15:16:16.511 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 73/84 2026-03-09T15:16:16.521 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:18.2.0-0.el9.x86_64 44/84 2026-03-09T15:16:16.521 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-09T15:16:16.521 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-09T15:16:16.521 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-09T15:16:16.521 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-03-09T15:16:16.521 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:16:16.522 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-mirror-2:18.2.0-0.el9.x86_64 44/84 2026-03-09T15:16:16.529 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:18.2.0-0.el9.x86_64 44/84 2026-03-09T15:16:16.530 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:18.2.0-0.el9.x86_64 74/84 2026-03-09T15:16:16.530 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 2026-03-09T15:16:16.530 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:16:16.537 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-base-2:18.2.0-0.el9.x86_64 74/84 2026-03-09T15:16:16.549 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.0-0.el9.x86_64 45/84 2026-03-09T15:16:16.549 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T15:16:16.549 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-03-09T15:16:16.549 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:16:16.550 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-immutable-object-cache-2:18.2.0-0.el9.x86_64 45/84 2026-03-09T15:16:16.554 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:18.2.0-0.el9.x86_64 74/84 2026-03-09T15:16:16.554 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-selinux-2:18.2.0-0.el9.x86_64 75/84 2026-03-09T15:16:16.557 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:18.2.0-0.el9.x86_64 45/84 2026-03-09T15:16:16.559 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 46/84 2026-03-09T15:16:16.561 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 47/84 2026-03-09T15:16:16.564 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ply-3.11-14.el9.noarch 48/84 2026-03-09T15:16:16.567 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 49/84 2026-03-09T15:16:16.569 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 50/84 2026-03-09T15:16:16.571 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 51/84 2026-03-09T15:16:16.574 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 52/84 2026-03-09T15:16:16.577 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 53/84 2026-03-09T15:16:16.585 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 54/84 2026-03-09T15:16:16.589 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 55/84 2026-03-09T15:16:16.591 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 56/84 2026-03-09T15:16:16.593 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : 
python3-typing-extensions-4.15.0-1.el9.noarch 57/84 2026-03-09T15:16:16.596 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 58/84 2026-03-09T15:16:16.601 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 59/84 2026-03-09T15:16:16.605 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 60/84 2026-03-09T15:16:16.610 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 61/84 2026-03-09T15:16:16.614 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 62/84 2026-03-09T15:16:16.619 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 63/84 2026-03-09T15:16:16.622 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 64/84 2026-03-09T15:16:16.625 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 65/84 2026-03-09T15:16:16.629 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 66/84 2026-03-09T15:16:16.637 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 67/84 2026-03-09T15:16:16.642 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 68/84 2026-03-09T15:16:16.644 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 69/84 2026-03-09T15:16:16.647 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 70/84 2026-03-09T15:16:16.648 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-grafana-dashboards-2:18.2.0-0.el9.noarch 71/84 2026-03-09T15:16:16.654 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-prometheus-alerts-2:18.2.0-0.el9.noarch 72/84 2026-03-09T15:16:16.657 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 73/84 
2026-03-09T15:16:16.675 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:18.2.0-0.el9.x86_64 74/84 2026-03-09T15:16:16.675 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 2026-03-09T15:16:16.675 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:16:16.681 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-base-2:18.2.0-0.el9.x86_64 74/84 2026-03-09T15:16:16.699 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:18.2.0-0.el9.x86_64 74/84 2026-03-09T15:16:16.699 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-selinux-2:18.2.0-0.el9.x86_64 75/84 2026-03-09T15:16:21.796 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:18.2.0-0.el9.x86_64 75/84 2026-03-09T15:16:21.796 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys 2026-03-09T15:16:21.796 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc 2026-03-09T15:16:21.796 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt 2026-03-09T15:16:21.796 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp 2026-03-09T15:16:21.796 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home 2026-03-09T15:16:21.796 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root 2026-03-09T15:16:21.796 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp 2026-03-09T15:16:21.796 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:16:21.809 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-common-2:18.2.0-0.el9.x86_64 76/84 2026-03-09T15:16:21.831 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:18.2.0-0.el9.x86_64 76/84 2026-03-09T15:16:21.835 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ceph-common-2:18.2.0-0.el9.x86_64 77/84 2026-03-09T15:16:21.837 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : 
python3-prettytable-0.7.2-27.el9.noarch 78/84 2026-03-09T15:16:21.839 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 79/84 2026-03-09T15:16:21.840 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libradosstriper1-2:18.2.0-0.el9.x86_64 80/84 2026-03-09T15:16:21.853 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:18.2.0-0.el9.x86_64 80/84 2026-03-09T15:16:21.856 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : fmt-8.1.1-5.el9.x86_64 81/84 2026-03-09T15:16:21.859 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 82/84 2026-03-09T15:16:21.861 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 83/84 2026-03-09T15:16:21.861 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephsqlite-2:18.2.0-0.el9.x86_64 84/84 2026-03-09T15:16:21.916 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-selinux-2:18.2.0-0.el9.x86_64 75/84 2026-03-09T15:16:21.916 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /sys 2026-03-09T15:16:21.916 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /proc 2026-03-09T15:16:21.916 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /mnt 2026-03-09T15:16:21.916 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /var/tmp 2026-03-09T15:16:21.916 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /home 2026-03-09T15:16:21.916 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /root 2026-03-09T15:16:21.916 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /tmp 2026-03-09T15:16:21.916 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:16:21.927 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-common-2:18.2.0-0.el9.x86_64 76/84 2026-03-09T15:16:21.952 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:18.2.0-0.el9.x86_64 76/84 2026-03-09T15:16:21.955 
INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ceph-common-2:18.2.0-0.el9.x86_64 77/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:18.2.0-0.el9.x86_64 84/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-base-2:18.2.0-0.el9.x86_64 1/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:18.2.0-0.el9.x86_64 2/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:18.2.0-0.el9.noarch 3/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:18.2.0-0.el9.x86_64 4/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:18.2.0-0.el9.x86_64 5/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 6/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 7/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noarc 8/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:18.2.0-0.el9.noarch 9/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:18.2.0-0.el9.noarch 10/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:18.2.0-0.el9.noarch 11/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:18.2.0-0.el9.x86_64 12/84 2026-03-09T15:16:21.956 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 13/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 14/84 2026-03-09T15:16:21.958 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 15/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : fmt-8.1.1-5.el9.x86_64 16/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:18.2.0-0.el9.x86_64 17/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 18/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 19/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 20/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libradosstriper1-2:18.2.0-0.el9.x86_64 21/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 22/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 23/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 24/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 25/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 26/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 27/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 28/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 29/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-common-2:18.2.0-0.el9.x86_64 30/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
python3-certifi-2023.05.07-4.el9.noarch 31/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 32/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 33/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 34/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 35/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 36/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 37/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 38/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 39/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 40/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 41/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 42/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 43/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 44/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 45/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 46/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : 
python3-jsonpatch-1.21-16.el9.noarch 47/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 48/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jwt-2.4.0-1.el9.noarch 49/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jwt+crypto-2.4.0-1.el9.noarch 50/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 51/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 52/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 53/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 54/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 55/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 56/84 2026-03-09T15:16:21.958 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 57/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 58/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 59/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 60/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 61/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 62/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 63/84 
2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 64/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 65/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 66/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 67/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 68/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 69/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 70/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 71/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 72/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 73/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 74/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 75/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 76/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 77/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 78/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 79/84 2026-03-09T15:16:21.959 
INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 80/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 81/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 82/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 83/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 78/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 79/84 2026-03-09T15:16:21.959 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libradosstriper1-2:18.2.0-0.el9.x86_64 80/84 2026-03-09T15:16:21.971 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libradosstriper1-2:18.2.0-0.el9.x86_64 80/84 2026-03-09T15:16:21.974 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : fmt-8.1.1-5.el9.x86_64 81/84 2026-03-09T15:16:21.977 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 82/84 2026-03-09T15:16:21.979 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 83/84 2026-03-09T15:16:21.980 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libcephsqlite-2:18.2.0-0.el9.x86_64 84/84 2026-03-09T15:16:22.021 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:18.2.0-0.el9.x86_64 84/84 2026-03-09T15:16:22.021 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:16:22.021 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T15:16:22.021 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.021 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.021 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:18.2.0-0.el9.noarch 
2026-03-09T15:16:22.021 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.021 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.021 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.021 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: fmt-8.1.1-5.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64 
2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 
2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-jwt-2.4.0-1.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-jwt+crypto-2.4.0-1.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan-1.4.2-3.el9.noarch 
2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-09T15:16:22.022 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T15:16:22.023 
INFO:teuthology.orchestra.run.vm09.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:16:22.023 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T15:16:22.073 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephsqlite-2:18.2.0-0.el9.x86_64 84/84 2026-03-09T15:16:22.073 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-base-2:18.2.0-0.el9.x86_64 1/84 2026-03-09T15:16:22.073 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-common-2:18.2.0-0.el9.x86_64 2/84 2026-03-09T15:16:22.073 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-grafana-dashboards-2:18.2.0-0.el9.noarch 3/84 2026-03-09T15:16:22.073 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-immutable-object-cache-2:18.2.0-0.el9.x86_64 4/84 2026-03-09T15:16:22.073 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-2:18.2.0-0.el9.x86_64 5/84 2026-03-09T15:16:22.073 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 6/84 2026-03-09T15:16:22.073 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 7/84 2026-03-09T15:16:22.073 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noarc 8/84 2026-03-09T15:16:22.073 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-modules-core-2:18.2.0-0.el9.noarch 9/84 2026-03-09T15:16:22.073 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
ceph-mgr-rook-2:18.2.0-0.el9.noarch 10/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-prometheus-alerts-2:18.2.0-0.el9.noarch 11/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-selinux-2:18.2.0-0.el9.x86_64 12/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 13/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 14/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 15/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : fmt-8.1.1-5.el9.x86_64 16/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephsqlite-2:18.2.0-0.el9.x86_64 17/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 18/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 19/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 20/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libradosstriper1-2:18.2.0-0.el9.x86_64 21/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 22/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 23/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 24/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 25/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 26/84 2026-03-09T15:16:22.074 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 27/84 2026-03-09T15:16:22.074 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 28/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 29/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-common-2:18.2.0-0.el9.x86_64 30/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 31/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 32/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 33/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 34/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 35/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 36/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 37/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 38/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 39/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 40/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 41/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 42/84 2026-03-09T15:16:22.075 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 43/84 2026-03-09T15:16:22.075 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 44/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 45/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 46/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 47/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 48/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jwt-2.4.0-1.el9.noarch 49/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jwt+crypto-2.4.0-1.el9.noarch 50/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 51/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 52/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 53/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 54/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 55/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 56/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 57/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 58/84 2026-03-09T15:16:22.076 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 59/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 60/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ply-3.11-14.el9.noarch 61/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 62/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 63/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 64/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 65/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 66/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 67/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 68/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 69/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 70/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 71/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 72/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 73/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 74/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-scipy-1.9.3-2.el9.x86_64 75/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 76/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 77/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 78/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 79/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 80/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 81/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 82/84 2026-03-09T15:16:22.076 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 83/84 2026-03-09T15:16:22.145 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-mirror-2:18.2.0-0.el9.x86_64 84/84 2026-03-09T15:16:22.145 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:16:22.145 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-09T15:16:22.145 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.145 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.145 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.145 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.145 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.145 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.145 INFO:teuthology.orchestra.run.vm05.stdout: 
ceph-mgr-dashboard-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.145 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts-2:18.2.0-0.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: fmt-8.1.1-5.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: 
python3-babel-2.9.1-2.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T15:16:22.146 
INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt-2.4.0-1.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-jwt+crypto-2.4.0-1.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-09T15:16:22.146 
INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T15:16:22.146 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T15:16:22.147 
INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:16:22.147 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T15:16:22.221 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T15:16:22.221 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T15:16:22.221 INFO:teuthology.orchestra.run.vm09.stdout: Package Architecture Version Repository Size 2026-03-09T15:16:22.221 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T15:16:22.221 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-09T15:16:22.221 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:18.2.0-0.el9 @ceph-noarch 200 k 2026-03-09T15:16:22.221 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:16:22.221 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-09T15:16:22.221 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-09T15:16:22.221 INFO:teuthology.orchestra.run.vm09.stdout:Remove 1 Package 2026-03-09T15:16:22.221 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:16:22.222 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 200 k 2026-03-09T15:16:22.222 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-09T15:16:22.223 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-09T15:16:22.223 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-09T15:16:22.224 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-09T15:16:22.225 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-09T15:16:22.239 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-09T15:16:22.240 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : cephadm-2:18.2.0-0.el9.noarch 1/1 2026-03-09T15:16:22.333 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T15:16:22.334 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T15:16:22.334 INFO:teuthology.orchestra.run.vm05.stdout: Package Architecture Version Repository Size 2026-03-09T15:16:22.334 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T15:16:22.334 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-09T15:16:22.334 INFO:teuthology.orchestra.run.vm05.stdout: cephadm noarch 2:18.2.0-0.el9 @ceph-noarch 200 k 2026-03-09T15:16:22.334 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:16:22.334 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-09T15:16:22.334 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-09T15:16:22.334 INFO:teuthology.orchestra.run.vm05.stdout:Remove 1 Package 2026-03-09T15:16:22.334 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:16:22.334 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 200 k 2026-03-09T15:16:22.334 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-09T15:16:22.335 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-09T15:16:22.335 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-09T15:16:22.336 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 
2026-03-09T15:16:22.336 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-09T15:16:22.342 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:18.2.0-0.el9.noarch 1/1
2026-03-09T15:16:22.352 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-09T15:16:22.352 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : cephadm-2:18.2.0-0.el9.noarch 1/1
2026-03-09T15:16:22.378 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:18.2.0-0.el9.noarch 1/1
2026-03-09T15:16:22.378 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:22.378 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-09T15:16:22.379 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:18.2.0-0.el9.noarch
2026-03-09T15:16:22.379 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:22.379 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:22.454 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: cephadm-2:18.2.0-0.el9.noarch 1/1
2026-03-09T15:16:22.494 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cephadm-2:18.2.0-0.el9.noarch 1/1
2026-03-09T15:16:22.494 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:22.494 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-09T15:16:22.494 INFO:teuthology.orchestra.run.vm05.stdout: cephadm-2:18.2.0-0.el9.noarch
2026-03-09T15:16:22.494 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:22.494 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:22.547 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-immutable-object-cache
2026-03-09T15:16:22.548 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T15:16:22.550 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:22.551 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T15:16:22.551 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:22.689 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-immutable-object-cache
2026-03-09T15:16:22.689 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-09T15:16:22.691 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:22.692 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-09T15:16:22.692 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:22.710 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr
2026-03-09T15:16:22.710 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T15:16:22.712 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:22.713 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T15:16:22.713 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:22.847 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr
2026-03-09T15:16:22.847 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-09T15:16:22.850 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:22.850 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-09T15:16:22.850 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:22.870 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-dashboard
2026-03-09T15:16:22.870 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T15:16:22.873 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:22.873 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T15:16:22.873 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:23.005 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-dashboard
2026-03-09T15:16:23.005 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-09T15:16:23.008 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:23.008 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-09T15:16:23.009 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:23.026 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-09T15:16:23.026 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T15:16:23.029 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:23.029 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T15:16:23.029 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:23.163 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-09T15:16:23.164 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-09T15:16:23.166 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:23.167 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-09T15:16:23.167 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:23.182 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-rook
2026-03-09T15:16:23.182 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T15:16:23.184 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:23.185 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T15:16:23.185 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:23.320 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-rook
2026-03-09T15:16:23.320 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-09T15:16:23.323 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:23.323 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-09T15:16:23.323 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:23.338 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-cephadm
2026-03-09T15:16:23.338 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T15:16:23.341 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:23.341 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T15:16:23.341 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:23.482 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-cephadm
2026-03-09T15:16:23.482 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-09T15:16:23.484 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:23.485 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-09T15:16:23.485 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout: Package Architecture Version Repository Size
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:18.2.0-0.el9 @ceph 2.4 M
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Remove 1 Package
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 2.4 M
2026-03-09T15:16:23.508 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-09T15:16:23.510 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-09T15:16:23.510 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-09T15:16:23.523 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-09T15:16:23.523 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-09T15:16:23.550 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-09T15:16:23.563 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-fuse-2:18.2.0-0.el9.x86_64 1/1
2026-03-09T15:16:23.628 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-fuse-2:18.2.0-0.el9.x86_64 1/1
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout: Package Architecture Version Repository Size
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse x86_64 2:18.2.0-0.el9 @ceph 2.4 M
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout:Remove 1 Package
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 2.4 M
2026-03-09T15:16:23.653 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-09T15:16:23.655 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-09T15:16:23.655 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-09T15:16:23.666 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:18.2.0-0.el9.x86_64 1/1
2026-03-09T15:16:23.667 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:23.667 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-09T15:16:23.667 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:23.667 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:23.667 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:23.668 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-09T15:16:23.668 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-09T15:16:23.696 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-09T15:16:23.711 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-fuse-2:18.2.0-0.el9.x86_64 1/1
2026-03-09T15:16:23.768 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-fuse-2:18.2.0-0.el9.x86_64 1/1
2026-03-09T15:16:23.806 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-fuse-2:18.2.0-0.el9.x86_64 1/1
2026-03-09T15:16:23.806 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:23.806 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-09T15:16:23.806 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:23.806 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:23.806 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:23.860 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:23.860 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:23.860 INFO:teuthology.orchestra.run.vm09.stdout: Package Architecture Version Repository Size
2026-03-09T15:16:23.860 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:23.860 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-09T15:16:23.860 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:18.2.0-0.el9 @ceph 456 k
2026-03-09T15:16:23.861 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-09T15:16:23.861 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:18.2.0-0.el9 @ceph 137 k
2026-03-09T15:16:23.861 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:23.861 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-09T15:16:23.861 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:23.861 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages
2026-03-09T15:16:23.861 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:23.861 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 593 k
2026-03-09T15:16:23.861 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-09T15:16:23.862 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-09T15:16:23.863 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-09T15:16:23.873 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-09T15:16:23.873 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-09T15:16:23.897 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-09T15:16:23.899 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephfs-devel-2:18.2.0-0.el9.x86_64 1/2
2026-03-09T15:16:23.912 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librados-devel-2:18.2.0-0.el9.x86_64 2/2
2026-03-09T15:16:23.967 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados-devel-2:18.2.0-0.el9.x86_64 2/2
2026-03-09T15:16:23.967 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:18.2.0-0.el9.x86_64 1/2
2026-03-09T15:16:23.992 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout: Package Architecture Version Repository Size
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel x86_64 2:18.2.0-0.el9 @ceph 456 k
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages:
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel x86_64 2:18.2.0-0.el9 @ceph 137 k
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout:Remove 2 Packages
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 593 k
2026-03-09T15:16:23.993 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-09T15:16:23.995 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-09T15:16:23.995 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-09T15:16:24.005 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-09T15:16:24.005 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-09T15:16:24.011 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados-devel-2:18.2.0-0.el9.x86_64 2/2
2026-03-09T15:16:24.011 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:24.011 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-09T15:16:24.011 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:18.2.0-0.el9.x86_64 librados-devel-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:24.011 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:24.011 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:24.030 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-09T15:16:24.033 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libcephfs-devel-2:18.2.0-0.el9.x86_64 1/2
2026-03-09T15:16:24.046 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librados-devel-2:18.2.0-0.el9.x86_64 2/2
2026-03-09T15:16:24.104 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados-devel-2:18.2.0-0.el9.x86_64 2/2
2026-03-09T15:16:24.104 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs-devel-2:18.2.0-0.el9.x86_64 1/2
2026-03-09T15:16:24.141 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados-devel-2:18.2.0-0.el9.x86_64 2/2
2026-03-09T15:16:24.141 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:24.141 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-09T15:16:24.141 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel-2:18.2.0-0.el9.x86_64 librados-devel-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:24.141 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:24.141 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:24.204 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:18.2.0-0.el9 @ceph 1.8 M
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:18.2.0-0.el9 @ceph 480 k
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:18.2.0-0.el9 @ceph 186 k
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout:Remove 3 Packages
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 2.5 M
2026-03-09T15:16:24.205 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-09T15:16:24.207 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-09T15:16:24.207 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-09T15:16:24.218 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-09T15:16:24.218 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-09T15:16:24.244 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-09T15:16:24.246 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cephfs-2:18.2.0-0.el9.x86_64 1/3
2026-03-09T15:16:24.248 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ceph-argparse-2:18.2.0-0.el9.x86_64 2/3
2026-03-09T15:16:24.248 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephfs2-2:18.2.0-0.el9.x86_64 3/3
2026-03-09T15:16:24.306 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs2-2:18.2.0-0.el9.x86_64 3/3
2026-03-09T15:16:24.306 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:18.2.0-0.el9.x86_64 1/3
2026-03-09T15:16:24.306 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:18.2.0-0.el9.x86_64 2/3
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2 x86_64 2:18.2.0-0.el9 @ceph 1.8 M
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages:
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs x86_64 2:18.2.0-0.el9 @ceph 480 k
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse x86_64 2:18.2.0-0.el9 @ceph 186 k
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:Remove 3 Packages
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 2.5 M
2026-03-09T15:16:24.323 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-09T15:16:24.325 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-09T15:16:24.325 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-09T15:16:24.338 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-09T15:16:24.338 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-09T15:16:24.343 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:18.2.0-0.el9.x86_64 3/3
2026-03-09T15:16:24.343 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:24.343 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-09T15:16:24.343 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:24.343 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:24.343 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:24.343 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:24.343 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:24.363 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-09T15:16:24.365 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cephfs-2:18.2.0-0.el9.x86_64 1/3
2026-03-09T15:16:24.366 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ceph-argparse-2:18.2.0-0.el9.x86_64 2/3
2026-03-09T15:16:24.367 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libcephfs2-2:18.2.0-0.el9.x86_64 3/3
2026-03-09T15:16:24.423 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephfs2-2:18.2.0-0.el9.x86_64 3/3
2026-03-09T15:16:24.423 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs2-2:18.2.0-0.el9.x86_64 1/3
2026-03-09T15:16:24.423 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-argparse-2:18.2.0-0.el9.x86_64 2/3
2026-03-09T15:16:24.458 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cephfs-2:18.2.0-0.el9.x86_64 3/3
2026-03-09T15:16:24.458 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:24.458 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-09T15:16:24.458 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:24.458 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:24.458 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs-2:18.2.0-0.el9.x86_64
2026-03-09T15:16:24.458 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:24.458 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:24.506 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: libcephfs-devel
2026-03-09T15:16:24.507 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-09T15:16:24.509 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:24.510 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-09T15:16:24.510 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-09T15:16:24.623 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: libcephfs-devel
2026-03-09T15:16:24.623 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-09T15:16:24.625 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:24.626 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-09T15:16:24.626 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-09T15:16:24.683 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:18.2.0-0.el9 @ceph 12 M
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:18.2.0-0.el9 @ceph 1.1 M
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:18.2.0-0.el9 @ceph 1.1 M
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:18.2.0-0.el9 @ceph 269 k
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:18.2.0-0.el9 @ceph 230 k
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:18.2.0-0.el9 @ceph 490 k
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:18.2.0-0.el9 @ceph 12 M
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:18.2.0-0.el9 @ceph 15 M
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout:Remove 21 Packages
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 74 M
2026-03-09T15:16:24.685 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-09T15:16:24.689 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-09T15:16:24.689 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-09T15:16:24.710 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-09T15:16:24.711 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-09T15:16:24.750 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-09T15:16:24.753 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-nbd-2:18.2.0-0.el9.x86_64 1/21
2026-03-09T15:16:24.755 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-fuse-2:18.2.0-0.el9.x86_64 2/21
2026-03-09T15:16:24.757 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rgw-2:18.2.0-0.el9.x86_64 3/21
2026-03-09T15:16:24.758 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librgw2-2:18.2.0-0.el9.x86_64 4/21
2026-03-09T15:16:24.770 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:18.2.0-0.el9.x86_64 4/21
2026-03-09T15:16:24.773 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/21
2026-03-09T15:16:24.774 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rbd-2:18.2.0-0.el9.x86_64 6/21
2026-03-09T15:16:24.776 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rados-2:18.2.0-0.el9.x86_64 7/21
2026-03-09T15:16:24.778 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/21
2026-03-09T15:16:24.778 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librbd1-2:18.2.0-0.el9.x86_64 9/21
2026-03-09T15:16:24.791 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:18.2.0-0.el9.x86_64 9/21
2026-03-09T15:16:24.791 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librados2-2:18.2.0-0.el9.x86_64 10/21
2026-03-09T15:16:24.791 INFO:teuthology.orchestra.run.vm09.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-09T15:16:24.791 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T15:16:24.795 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout: librados2 x86_64 2:18.2.0-0.el9 @ceph 12 M
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages:
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados x86_64 2:18.2.0-0.el9 @ceph 1.1 M
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd x86_64 2:18.2.0-0.el9 @ceph 1.1 M
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw x86_64 2:18.2.0-0.el9 @ceph 269 k
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse x86_64 2:18.2.0-0.el9 @ceph 230 k
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd x86_64 2:18.2.0-0.el9 @ceph 490 k
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-09T15:16:24.796 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout: librbd1 x86_64 2:18.2.0-0.el9 @ceph 12 M
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout: librgw2 x86_64 2:18.2.0-0.el9 @ceph 15 M
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout:Remove 21 Packages
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 74 M
2026-03-09T15:16:24.797 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-09T15:16:24.800 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-09T15:16:24.800 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-09T15:16:24.802 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:18.2.0-0.el9.x86_64 10/21
2026-03-09T15:16:24.805 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 11/21
2026-03-09T15:16:24.807 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 12/21
2026-03-09T15:16:24.809 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 13/21
2026-03-09T15:16:24.811 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 14/21
2026-03-09T15:16:24.815 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : re2-1:20211101-20.el9.x86_64 15/21
2026-03-09T15:16:24.818 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 16/21
2026-03-09T15:16:24.821 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 17/21
2026-03-09T15:16:24.822 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-09T15:16:24.823 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-09T15:16:24.823 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 18/21 2026-03-09T15:16:24.825 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 19/21 2026-03-09T15:16:24.827 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 20/21 2026-03-09T15:16:24.842 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 21/21 2026-03-09T15:16:24.863 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-09T15:16:24.865 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-nbd-2:18.2.0-0.el9.x86_64 1/21 2026-03-09T15:16:24.868 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-fuse-2:18.2.0-0.el9.x86_64 2/21 2026-03-09T15:16:24.870 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rgw-2:18.2.0-0.el9.x86_64 3/21 2026-03-09T15:16:24.870 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librgw2-2:18.2.0-0.el9.x86_64 4/21 2026-03-09T15:16:24.884 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librgw2-2:18.2.0-0.el9.x86_64 4/21 2026-03-09T15:16:24.886 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/21 2026-03-09T15:16:24.888 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rbd-2:18.2.0-0.el9.x86_64 6/21 2026-03-09T15:16:24.889 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rados-2:18.2.0-0.el9.x86_64 7/21 2026-03-09T15:16:24.891 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/21 2026-03-09T15:16:24.891 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librbd1-2:18.2.0-0.el9.x86_64 9/21 2026-03-09T15:16:24.899 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 21/21 2026-03-09T15:16:24.899 INFO:teuthology.orchestra.run.vm09.stdout: Verifying 
: boost-program-options-1.75.0-13.el9.x86_64 1/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 2/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 3/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 4/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:18.2.0-0.el9.x86_64 7/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:18.2.0-0.el9.x86_64 8/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:18.2.0-0.el9.x86_64 10/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 11/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 12/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 13/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:18.2.0-0.el9.x86_64 14/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:18.2.0-0.el9.x86_64 15/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:18.2.0-0.el9.x86_64 16/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 17/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: 
Verifying : rbd-fuse-2:18.2.0-0.el9.x86_64 18/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:18.2.0-0.el9.x86_64 19/21 2026-03-09T15:16:24.900 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 20/21 2026-03-09T15:16:24.905 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:18.2.0-0.el9.x86_64 9/21 2026-03-09T15:16:24.905 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librados2-2:18.2.0-0.el9.x86_64 10/21 2026-03-09T15:16:24.905 INFO:teuthology.orchestra.run.vm05.stdout:warning: file /etc/ceph: remove failed: No such file or directory 2026-03-09T15:16:24.905 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:16:24.918 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:18.2.0-0.el9.x86_64 10/21 2026-03-09T15:16:24.920 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 11/21 2026-03-09T15:16:24.923 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 12/21 2026-03-09T15:16:24.924 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 13/21 2026-03-09T15:16:24.927 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 14/21 2026-03-09T15:16:24.931 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : re2-1:20211101-20.el9.x86_64 15/21 2026-03-09T15:16:24.934 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 16/21 2026-03-09T15:16:24.936 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 17/21 2026-03-09T15:16:24.939 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 18/21 2026-03-09T15:16:24.940 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 19/21 2026-03-09T15:16:24.942 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 20/21 
2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 21/21 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: librbd1-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 2026-03-09T15:16:24.943 
INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T15:16:24.943 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T15:16:24.956 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 21/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 21/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 2/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 3/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 4/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:18.2.0-0.el9.x86_64 7/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:18.2.0-0.el9.x86_64 8/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librgw2-2:18.2.0-0.el9.x86_64 10/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
libunwind-1.6.2-1.el9.x86_64 11/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 12/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 13/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rados-2:18.2.0-0.el9.x86_64 14/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rbd-2:18.2.0-0.el9.x86_64 15/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rgw-2:18.2.0-0.el9.x86_64 16/21 2026-03-09T15:16:25.017 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 17/21 2026-03-09T15:16:25.018 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-fuse-2:18.2.0-0.el9.x86_64 18/21 2026-03-09T15:16:25.018 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-nbd-2:18.2.0-0.el9.x86_64 19/21 2026-03-09T15:16:25.018 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : re2-1:20211101-20.el9.x86_64 20/21 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 21/21 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T15:16:25.063 
INFO:teuthology.orchestra.run.vm05.stdout: librados2-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: librbd1-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: librgw2-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd-2:18.2.0-0.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-09T15:16:25.063 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T15:16:25.147 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: librbd1 2026-03-09T15:16:25.148 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T15:16:25.151 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T15:16:25.152 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 
2026-03-09T15:16:25.152 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T15:16:25.268 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: librbd1 2026-03-09T15:16:25.269 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T15:16:25.272 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T15:16:25.273 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T15:16:25.273 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T15:16:25.332 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rados 2026-03-09T15:16:25.332 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T15:16:25.336 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T15:16:25.336 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T15:16:25.337 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T15:16:25.456 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-rados 2026-03-09T15:16:25.456 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T15:16:25.458 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T15:16:25.459 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T15:16:25.459 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T15:16:25.511 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rgw 2026-03-09T15:16:25.512 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T15:16:25.515 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T15:16:25.515 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T15:16:25.515 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-09T15:16:25.619 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-rgw 2026-03-09T15:16:25.619 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T15:16:25.622 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T15:16:25.622 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T15:16:25.622 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T15:16:25.685 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-cephfs 2026-03-09T15:16:25.685 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T15:16:25.688 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T15:16:25.688 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T15:16:25.688 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T15:16:25.783 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-cephfs 2026-03-09T15:16:25.783 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T15:16:25.786 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T15:16:25.787 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T15:16:25.787 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T15:16:25.855 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rbd 2026-03-09T15:16:25.855 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T15:16:25.858 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T15:16:25.859 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T15:16:25.859 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T15:16:25.946 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-rbd 2026-03-09T15:16:25.946 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 
2026-03-09T15:16:25.949 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T15:16:25.950 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T15:16:25.950 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T15:16:26.027 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-fuse 2026-03-09T15:16:26.027 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T15:16:26.030 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T15:16:26.030 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T15:16:26.030 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T15:16:26.106 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: rbd-fuse 2026-03-09T15:16:26.106 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T15:16:26.109 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T15:16:26.110 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T15:16:26.110 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T15:16:26.198 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-mirror 2026-03-09T15:16:26.198 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T15:16:26.201 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T15:16:26.202 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T15:16:26.202 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T15:16:26.266 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: rbd-mirror 2026-03-09T15:16:26.266 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T15:16:26.269 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T15:16:26.269 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 
2026-03-09T15:16:26.269 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-09T15:16:26.364 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-nbd 2026-03-09T15:16:26.364 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-09T15:16:26.367 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-09T15:16:26.368 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-09T15:16:26.368 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-09T15:16:26.387 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all 2026-03-09T15:16:26.426 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: rbd-nbd 2026-03-09T15:16:26.426 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-09T15:16:26.429 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-09T15:16:26.429 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-09T15:16:26.429 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 
2026-03-09T15:16:26.448 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean all 2026-03-09T15:16:26.510 INFO:teuthology.orchestra.run.vm09.stdout:56 files removed 2026-03-09T15:16:26.530 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-09T15:16:26.553 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean expire-cache 2026-03-09T15:16:26.568 INFO:teuthology.orchestra.run.vm05.stdout:56 files removed 2026-03-09T15:16:26.585 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-09T15:16:26.609 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean expire-cache 2026-03-09T15:16:26.703 INFO:teuthology.orchestra.run.vm09.stdout:Cache was expired 2026-03-09T15:16:26.703 INFO:teuthology.orchestra.run.vm09.stdout:0 files removed 2026-03-09T15:16:26.720 DEBUG:teuthology.parallel:result is None 2026-03-09T15:16:26.753 INFO:teuthology.orchestra.run.vm05.stdout:Cache was expired 2026-03-09T15:16:26.753 INFO:teuthology.orchestra.run.vm05.stdout:0 files removed 2026-03-09T15:16:26.769 DEBUG:teuthology.parallel:result is None 2026-03-09T15:16:26.769 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm05.local 2026-03-09T15:16:26.769 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm09.local 2026-03-09T15:16:26.769 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-09T15:16:26.769 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-09T15:16:26.792 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf 2026-03-09T15:16:26.796 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf 2026-03-09T15:16:26.856 DEBUG:teuthology.parallel:result is None 2026-03-09T15:16:26.860 DEBUG:teuthology.parallel:result is None 2026-03-09T15:16:26.860 DEBUG:teuthology.run_tasks:Unwinding manager 
clock 2026-03-09T15:16:26.863 INFO:teuthology.task.clock:Checking final clock skew... 2026-03-09T15:16:26.863 DEBUG:teuthology.orchestra.run.vm05:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T15:16:26.898 DEBUG:teuthology.orchestra.run.vm09:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T15:16:26.911 INFO:teuthology.orchestra.run.vm05.stderr:bash: line 1: ntpq: command not found 2026-03-09T15:16:26.914 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found 2026-03-09T15:16:26.995 INFO:teuthology.orchestra.run.vm05.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-09T15:16:26.995 INFO:teuthology.orchestra.run.vm05.stdout:=============================================================================== 2026-03-09T15:16:26.995 INFO:teuthology.orchestra.run.vm05.stdout:^+ static.179.181.75.5.clie> 3 7 377 21 -669us[ -669us] +/- 28ms 2026-03-09T15:16:26.995 INFO:teuthology.orchestra.run.vm05.stdout:^+ 172-104-149-161.ip.linod> 2 6 377 28 +5384us[+5370us] +/- 31ms 2026-03-09T15:16:26.995 INFO:teuthology.orchestra.run.vm05.stdout:^+ time.cloudflare.com 3 7 377 27 -1203us[-1218us] +/- 14ms 2026-03-09T15:16:26.995 INFO:teuthology.orchestra.run.vm05.stdout:^* time2.sebhosting.de 2 6 377 25 -932us[ -947us] +/- 16ms 2026-03-09T15:16:26.995 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-09T15:16:26.995 INFO:teuthology.orchestra.run.vm09.stdout:=============================================================================== 2026-03-09T15:16:26.995 INFO:teuthology.orchestra.run.vm09.stdout:^+ static.179.181.75.5.clie> 3 6 377 30 -743us[ -743us] +/- 28ms 2026-03-09T15:16:26.995 INFO:teuthology.orchestra.run.vm09.stdout:^+ 172-104-149-161.ip.linod> 2 6 377 25 +5184us[+5184us] +/- 31ms 2026-03-09T15:16:26.995 INFO:teuthology.orchestra.run.vm09.stdout:^* time.cloudflare.com 3 7 377 94 
-1272us[-1388us] +/- 15ms 2026-03-09T15:16:26.995 INFO:teuthology.orchestra.run.vm09.stdout:^+ time2.sebhosting.de 2 6 377 28 -976us[ -976us] +/- 16ms 2026-03-09T15:16:26.998 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-09T15:16:27.000 INFO:teuthology.task.ansible:Skipping ansible cleanup... 2026-03-09T15:16:27.000 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-09T15:16:27.003 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-09T15:16:27.006 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-09T15:16:27.009 INFO:teuthology.task.internal:Duration was 1520.545951 seconds 2026-03-09T15:16:27.009 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-09T15:16:27.012 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 2026-03-09T15:16:27.012 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-09T15:16:27.039 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-09T15:16:27.076 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-09T15:16:27.079 INFO:teuthology.orchestra.run.vm05.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-09T15:16:27.395 INFO:teuthology.task.internal.syslog:Checking logs for errors... 
2026-03-09T15:16:27.395 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm05.local
2026-03-09T15:16:27.395 DEBUG:teuthology.orchestra.run.vm05:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T15:16:27.417 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm09.local
2026-03-09T15:16:27.417 DEBUG:teuthology.orchestra.run.vm09:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T15:16:27.462 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-09T15:16:27.462 DEBUG:teuthology.orchestra.run.vm05:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T15:16:27.464 DEBUG:teuthology.orchestra.run.vm09:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T15:16:28.176 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-09T15:16:28.176 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T15:16:28.177 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T15:16:28.197 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T15:16:28.198 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T15:16:28.198 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T15:16:28.198 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T15:16:28.198 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T15:16:28.199 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T15:16:28.199 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T15:16:28.199 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T15:16:28.199 INFO:teuthology.orchestra.run.vm09.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T15:16:28.200 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T15:16:28.346 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.8% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T15:16:28.377 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T15:16:28.379 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-09T15:16:28.382 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-09T15:16:28.382 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T15:16:28.441 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T15:16:28.463 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-09T15:16:28.466 DEBUG:teuthology.orchestra.run.vm05:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T15:16:28.482 DEBUG:teuthology.orchestra.run.vm09:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T15:16:28.505 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = core
2026-03-09T15:16:28.527 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = core
2026-03-09T15:16:28.535 DEBUG:teuthology.orchestra.run.vm05:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T15:16:28.571 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T15:16:28.571 DEBUG:teuthology.orchestra.run.vm09:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T15:16:28.592 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T15:16:28.592 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-09T15:16:28.595 INFO:teuthology.task.internal:Transferring archived files...
2026-03-09T15:16:28.595 DEBUG:teuthology.misc:Transferring archived files from vm05:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/514/remote/vm05
2026-03-09T15:16:28.595 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T15:16:28.641 DEBUG:teuthology.misc:Transferring archived files from vm09:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/514/remote/vm09
2026-03-09T15:16:28.641 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T15:16:28.669 INFO:teuthology.task.internal:Removing archive directory...
2026-03-09T15:16:28.669 DEBUG:teuthology.orchestra.run.vm05:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T15:16:28.680 DEBUG:teuthology.orchestra.run.vm09:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T15:16:28.722 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-09T15:16:28.725 INFO:teuthology.task.internal:Not uploading archives.
2026-03-09T15:16:28.725 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-09T15:16:28.728 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-09T15:16:28.728 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T15:16:28.735 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T15:16:28.749 INFO:teuthology.orchestra.run.vm05.stdout:  8532145      0 drwxr-xr-x   3 ubuntu   ubuntu         19 Mar  9 15:16 /home/ubuntu/cephtest
2026-03-09T15:16:28.749 INFO:teuthology.orchestra.run.vm05.stdout: 84221903      0 d---------   2 ubuntu   ubuntu          6 Mar  9 14:58 /home/ubuntu/cephtest/mnt.0
2026-03-09T15:16:28.750 INFO:teuthology.orchestra.run.vm05.stderr:find: ‘/home/ubuntu/cephtest/mnt.0’: Permission denied
2026-03-09T15:16:28.750 INFO:teuthology.orchestra.run.vm05.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-09T15:16:28.763 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T15:16:28.764 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 48, in base
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/ceph_fuse.py", line 248, in task
    mount.umount_wait()
  File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephfs/fuse_mount.py", line 403, in umount_wait
    run.wait([self.fuse_daemon], timeout)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 479, in wait
    check_time()
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
teuthology.exceptions.MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm05 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-09T15:16:28.764 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T15:16:28.767 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2026-03-09T15:16:28.768 INFO:teuthology.run:Summary data:
description: orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes kernel overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}}
duration: 1520.545951128006
failure_reason: reached maximum tries (50) after waiting for 300 seconds
flavor: default
owner: kyr
status: fail
success: false

2026-03-09T15:16:28.768 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T15:16:28.793 INFO:teuthology.run:FAIL